Dataset columns (string lengths): id (10-10), title (7-231), abstract (3-2.43k), authors (5-21.5k), published_date (20-20), link (33-34), markdown (133-1.92M).
2307.01001
On the Zeta functions of supersingular isogeny graphs and modular curves
Let $p$ and $q$ be distinct prime numbers, with $q\equiv 1\pmod{12}$. Let $N$ be a positive integer that is coprime to $pq$. We prove a formula relating the Hasse--Weil zeta function of the modular curve $X_0(qN)_{\mathbb{F}_q}$ to the Ihara zeta function of the $p$-isogeny graphs of supersingular elliptic curves defined over $\overline{\mathbb{F}_q}$ equipped with a $\Gamma_0(N)$-level structure. When $N=1$, this recovers a result of Sugiyama.
Antonio Lei, Katharina Müller
2023-07-03T13:31:50Z
http://arxiv.org/abs/2307.01001v2
# On the Zeta functions of supersingular isogeny graphs and modular curves ###### Abstract. Let \(p\) and \(q\) be distinct prime numbers, with \(q\equiv 1\pmod{12}\). Let \(N\) be a positive integer that is coprime to \(pq\). We prove a formula relating the Hasse-Weil zeta function of the modular curve \(X_{0}(qN)_{\mathbb{F}_{q}}\) to the Ihara zeta function of the \(p\)-isogeny graphs of supersingular elliptic curves defined over \(\overline{\mathbb{F}_{q}}\) equipped with a \(\Gamma_{0}(N)\)-level structure. When \(N=1\), this recovers a result of Sugiyama. Key words and phrases: Hasse-Weil zeta functions, Ihara zeta functions, modular curves, supersingular isogeny graphs 2020 Mathematics Subject Classification: 11M41, 05C30 (primary), 11G18, 14G35 (secondary) ## 1. Introduction Zeta functions hold significant importance across various realms of number theory. They serve as powerful tools which encode intricate arithmetic information about mathematical objects. In this article, we study relations between two families of such zeta functions, namely the Hasse-Weil zeta functions attached to modular curves and the Ihara zeta functions attached to supersingular isogeny graphs. Given an algebraic curve \(C\) defined over a finite field \(k\), the Hasse-Weil zeta function attached to \(C\) encodes the number of rational points on \(C\) defined over the finite extensions of \(k\) (see [19, §1.50] for a detailed discussion). We are particularly interested in the case where \(C\) is a modular curve. More specifically, let \(p\) and \(q\) be two distinct prime numbers with \(q\equiv 1\pmod{12}\). Let \(X_{0}(q)\) denote the modular curve classifying isomorphism classes of elliptic curves equipped with a \(\Gamma_{0}(q)\)-level structure. Let \(W(X_{0}(q)_{\mathbb{F}_{p}},S)\in 1+S\mathbb{Z}[\![S]\!]\) be the Hasse-Weil zeta function attached to \(X_{0}(q)_{\mathbb{F}_{p}}\) (see Definition 3.1 for a precise definition). In graph theory, the Ihara zeta function is defined using prime closed geodesics. See [14, Chapter 2] for a comprehensive survey. These functions are related to the adjacency matrix and the valency matrix of a graph. Furthermore, analogous to the analytic class number formula for its number field counterpart, the Ihara zeta function can be used to compute the size of the Jacobian of a graph. In the present article, we are interested in understanding connections between the Hasse-Weil zeta functions coming from modular curves and the Ihara zeta functions of supersingular isogeny graphs, whose definition we review below. Let \(\Sigma=\{E_{1},\ldots,E_{n}\}\) denote a set of representatives of isomorphism classes of supersingular elliptic curves defined over \(\overline{\mathbb{F}_{q}}\), where \(n=\frac{q-1}{12}\). We define a graph \(X_{p}^{q}(1)\) with vertex set \(\Sigma\) and edges induced by \(p\)-isogenies (see Definition 2.1). Let \(Z(X_{p}^{q}(1),S)\in\mathbb{Z}[S]\) be the Ihara zeta function of this graph (see Definition 2.9 for a precise definition). Sugiyama showed in [10, Theorem 1.1] that the two zeta functions discussed above are related by the following explicit equation: \[W(X_{0}(q)_{\mathbb{F}_{p}},S)Z(X_{p}^{q}(1),S)=\frac{1}{(1-S)^{2}(1-pS)^{2}(1-S^{2})^{\frac{n(p-1)}{2}}} \tag{1.1}\] (note that we have replaced the symbol \(N\) in loc. cit. by \(p\) here; in the present article, \(N\) will denote a positive integer that is not necessarily a prime number). We remark that this relation has also been observed by Li in [12, p. 54].
The goal of this article is to generalize this result to more general levels. More specifically, we replace \(X_{0}(q)_{\mathbb{F}_{p}}\) by \(X_{0}(qN)_{\mathbb{F}_{p}}\), where \(N\) is a positive integer coprime to \(pq\). Correspondingly, the graph \(X_{p}^{q}(1)\) shall be replaced by the graph whose vertices are the isomorphism classes of pairs \((E,C)\), where \(E\in\Sigma\) and \(C\) is a cyclic subgroup of order \(N\) in \(E\), while the edges are still induced by \(p\)-isogenies. **Theorem A** (Corollary 3.4).: _The following equality holds:_ \[W(X_{0}(qN)_{\mathbb{F}_{p}},S)W(X_{0}(N)_{\mathbb{F}_{p}},S)^{-2}Z(X_{p}^{q}(N),S)=(1-S^{2})^{\chi(X_{p}^{q}(N))},\] _where \(\chi(X_{p}^{q}(N))\) denotes the Euler characteristic of the graph \(X_{p}^{q}(N)\)._ We recover Sugiyama's main result from [10] on noting that \(\chi(X_{p}^{q}(1))=-\frac{n(p-1)}{2}\) and \(W(X_{0}(1),S)=\frac{1}{(1-S)(1-pS)}\). We briefly describe the proof of Theorem A. Let \(\operatorname{Div}^{0}(X_{p}^{q}(N))\) be the group of zero divisors on the vertices of the graph \(X_{p}^{q}(N)\). Let \(T\) be the Hecke algebra acting on \(\operatorname{Div}^{0}(X_{p}^{q}(N))\) and \(\mathbf{T}\) be the Hecke algebra acting on the space of \(q\)-new cuspforms of weight \(2\) and level \(qN\) (see Definitions 2.5 and 2.6). We make use of a result of Ribet [11], which says that these two Hecke algebras are isomorphic, to deduce an isomorphism of \(T\otimes\mathbb{R}\)-modules \[\operatorname{Div}^{0}(X_{p}^{q}(N))\otimes\mathbb{R}\cong S_{2}(\Gamma_{0}(qN))_{q-\operatorname{new}}\] (see Proposition 2.8). This generalizes the corresponding result for \(N=1\) in [10, Proposition 3.2]. The aforementioned isomorphism allows us to relate the Brandt matrix (see Definition 2.10) to the Ihara zeta function. To conclude the proof, we relate the Brandt matrix to the Hasse-Weil zeta function, which can be described using the Fourier coefficients of cuspforms of level \(qN\). **Acknowledgement**.: The authors' research is supported by the NSERC Discovery Grants Program RGPIN-2020-04259 and RGPAS-2020-00096. ## 2. On the Ihara zeta function of a supersingular isogeny graph The goal of this section is to give an explicit formula for the Ihara zeta function of a supersingular isogeny graph. The main result of the present section is Corollary 2.12. ### Defining supersingular isogeny graphs Let \(p\) and \(q\) be two distinct prime numbers, and \(N\) a positive integer coprime to \(pq\). Assume that \(q\equiv 1\pmod{12}\). Let \(B\) be a quaternion algebra that is ramified only at \(\infty\) and \(q\). Let \(R\) be a fixed maximal order in \(B\) and let \(I_{1},\ldots,I_{n}\) be fixed representatives for the ideal classes in \(R\). For \(1\leq i\leq n\), let \(R_{i}\) be the right order of \(I_{i}\). There are \(n\) distinct isomorphism classes \(\Sigma=\{E_{1},\ldots,E_{n}\}\) of supersingular elliptic curves defined over \(\overline{\mathbb{F}_{q}}\) such that \(\operatorname{End}(E_{i})=R_{i}\). As \(q\equiv 1\pmod{12}\), it follows furthermore that \(R_{i}^{\times}=\{\pm 1\}\) for \(1\leq i\leq n\) and that \(n=\frac{q-1}{12}\) (see the discussion in [10, §3.1], bearing in mind that the symbol \(N\) in loc. cit. is replaced by \(q\) here).
**Definition 2.1**.: _We define an undirected graph \(X_{p}^{q}(N)\) whose set of vertices is given by_ \[V(X_{p}^{q}(N))=\{(E,C)\mid E\in\Sigma,\ C\subset E[N]\text{ a cyclic subgroup of order }N\}.\] _We draw an edge between \((E,C)\) and \((E^{\prime},C^{\prime})\) whenever there is a \(p\)-isogeny \(\phi\colon E\to E^{\prime}\) such that \(\phi(C)=C^{\prime}\) (loops are allowed)._ _We denote by \(\operatorname{Div}(X_{p}^{q}(N))\) and \(\operatorname{Div}^{0}(X_{p}^{q}(N))\) the divisors and zero divisors of \(X_{p}^{q}(N)\) over \(\mathbb{Z}\), respectively._ ### Modular curves and Hecke algebras Let \(X_{0}(qN)\) be the modular curve of level \(\Gamma_{0}(qN)\). It classifies isomorphism classes of pairs \((E,C)\), where \(E\) is an elliptic curve and \(C\) is a cyclic subgroup of order \(qN\) in \(E\). The curve \(X_{0}(qN)_{\mathbb{F}_{q}}\) consists of two copies of \(X_{0}(N)_{\mathbb{F}_{q}}\) intersecting at the supersingular points. We shall relate \(\operatorname{Div}^{0}(X_{p}^{q}(N))\) to the space of weight-two \(q\)-newforms of level \(qN\), which we introduce below. **Definition 2.2**.: _We write \(S_{2}(\Gamma_{0}(qN))\) for the \(\mathbb{R}\)-vector space of weight-two cuspforms of level \(\Gamma_{0}(qN)\). Analogously, we write \(S_{2}(\Gamma_{0}(N))\) for the \(\mathbb{R}\)-vector space of weight-two cuspforms of level \(\Gamma_{0}(N)\)._ We have two natural embeddings \[\iota_{1}\colon S_{2}(\Gamma_{0}(N))\to S_{2}(\Gamma_{0}(qN)),\quad f(z)\mapsto f(z)\] and \[\iota_{2}\colon S_{2}(\Gamma_{0}(N))\to S_{2}(\Gamma_{0}(qN)),\quad f(z)\mapsto f(qz).\] **Definition 2.3**.: _We call the space_ \[\iota_{1}(S_{2}(\Gamma_{0}(N)))\oplus\iota_{2}(S_{2}(\Gamma_{0}(N)))\subset S_{2}(\Gamma_{0}(qN))\] _the \(q\)**-old space** of \(S_{2}(\Gamma_{0}(qN))\), denoted by \(S_{2}(\Gamma_{0}(qN))_{q-\mathrm{old}}\). We define the \(q\)**-new space** \(S_{2}(\Gamma_{0}(qN))_{q-\mathrm{new}}\) as the orthogonal complement of \(S_{2}(\Gamma_{0}(qN))_{q-\mathrm{old}}\) in \(S_{2}(\Gamma_{0}(qN))\) with respect to the Petersson inner product._ We now introduce the definition of Hecke operators. **Definition 2.4**.: _Let \(\ell\) be a prime number. If \(\ell\) is coprime to \(qN\), we define the action of the Hecke correspondence \(T_{\ell}\) on \(X_{0}(qN)\) by_ \[T_{\ell}(E,C)=\sum_{D}(E/D,(C+D)/D),\] _where the sum runs over all cyclic subgroups \(D\) of \(E\) of order \(\ell\). If \(\ell\mid N\), we define_ \[T_{\ell}(E,C)=\sum_{D}(E/D,(C+D)/D),\] _where the sum runs over all cyclic subgroups of order \(\ell\) not intersecting \(C\). These Hecke operators preserve the two components of \(X_{0}(qN)_{\mathbb{F}_{q}}\)._ As \(q\) is coprime to \(N\), we can decompose every level-\(qN\) structure into a product \(C\times C_{q}\), where \(C\) is of level \(N\) and \(C_{q}\) is of level \(q\). Let \(w_{q}\) be the Atkin-Lehner involution on \(X_{0}(qN)\) defined by sending \((E,C,C_{q})\) to \((E/C_{q},(C+C_{q})/C_{q},E[q]/C_{q})\). It turns out that \(T_{q}\) acts as \(-w_{q}\) on the toric part of \(X_{0}(qN)\) and that \(w_{q}\) acts as the Frobenius on the supersingular points [14, Propositions 3.7 and 3.8]. Recall that the vertices of \(X_{p}^{q}(N)\) are tuples \((E,C)\) of supersingular elliptic curves \(E\) and cyclic subgroups \(C\) of order \(N\). Thus, \(T_{\ell}\) acts on \(\mathrm{Div}(X_{p}^{q}(N))\) and \(\mathrm{Div}^{0}(X_{p}^{q}(N))\).
**Definition 2.5**.: _Let \(T\) be the \(\mathbb{Z}\)-algebra generated by all the operators \(T_{\ell}\) as operators on \(\mathrm{Div}^{0}(X_{p}^{q}(N))\)._ The Hecke operator \(T_{\ell}\) preserves both \(S_{2}(\Gamma_{0}(qN))_{q-\mathrm{old}}\) and \(S_{2}(\Gamma_{0}(qN))_{q-\mathrm{new}}\). **Definition 2.6**.: _Let \(\mathbf{T}\) (resp. \(\mathbf{T}^{\prime}\)) be the subalgebra of \(\mathrm{End}(S_{2}(\Gamma_{0}(qN))_{q-\mathrm{new}})\) (resp. \(\mathrm{End}(S_{2}(\Gamma_{0}(qN)))\)) generated by the Hecke operators \(T_{\ell}\) as \(\ell\) runs through all prime numbers._ As both algebras \(T\) and \(\mathbf{T}\) are generated by the Hecke operators \(T_{\ell}\), there is a natural map \(T\to\mathbf{T}^{\prime}\to\mathbf{T}\). We recall the following result of Ribet. **Theorem 2.7**.: _The Hecke algebras \(T\) and \(\mathbf{T}\) are isomorphic; that is, an element \(t\in T\) acts trivially on \(\operatorname{Div}^{0}(X_{p}^{q}(N))\) if and only if it has trivial image in \(\mathbf{T}\)._ Proof.: See [10, Theorem 3.10]. ### Relation between the zero divisor group and \(q\)-newforms We now prove the key technical ingredient towards the proof of Theorem A, where we relate the zero divisor group of the graph \(X_{p}^{q}(N)\) to \(S_{2}(\Gamma_{0}(qN))_{q\text{-new}}\). In what follows, we shall regard \(S_{2}(\Gamma_{0}(qN))_{q\text{-new}}\) as a \(T\)-module after identifying \(T\) with \(\mathbf{T}\) via Theorem 2.7. **Proposition 2.8**.: _There is an isomorphism of \(T\otimes\mathbb{R}\)-modules_ \[S_{2}(\Gamma_{0}(qN))_{q\text{-new}}\cong\operatorname{Div}^{0}(X_{p}^{q}(N))\otimes\mathbb{R}.\] (Note that the left-hand side of this isomorphism does not depend on the prime \(p\). On the right-hand side, while \(p\) appears in the notation, the divisor group is in fact independent of \(p\): it is defined in terms of the set of vertices of the graph \(X_{p}^{q}(N)\), which does not involve \(p\); the prime \(p\) is only relevant when we define the edges of the graph.) Proof.: Let \(T_{0}\subset T\) be the subalgebra generated by the Hecke operators \(T_{\ell}\) with \((\ell,qN)=1\). The Hecke operators \(T_{\ell}\) with \((\ell,qN)=1\) are represented by commuting symmetric matrices. Let \(\mathcal{S}\) be the set of \(\mathbb{R}\)-valued characters on \(T_{0}\). Then \[\operatorname{Div}^{0}(X_{p}^{q}(N))\otimes\mathbb{R}=\bigoplus_{\gamma\in\mathcal{S}}V(\gamma),\] where \(V(\gamma)\) is a \(T_{0}\otimes\mathbb{R}\)-submodule of \(\operatorname{Div}^{0}(X_{p}^{q}(N))\otimes\mathbb{R}\) on which \(T_{0}\) acts via \(\gamma\). A priori, this is only a decomposition of \(T_{0}\otimes\mathbb{R}\)-modules. Since \(T\) is commutative, \(V(\gamma)\) is invariant under the action of all elements of \(T\). Therefore, the aforementioned decomposition is in fact a decomposition of \(T\otimes\mathbb{R}\)-modules. Note that \(S_{2}(\Gamma_{0}(qN))_{q\text{-new}}\) can equally be decomposed into \(\gamma\)-eigenspaces \(W(\gamma)\). As the Hecke algebras \(T\) and \(\mathbf{T}\) are isomorphic by Theorem 2.7, we may decompose \(S_{2}(\Gamma_{0}(qN))_{q\text{-new}}\) into submodules on which \(T_{0}\) acts via \(\gamma\) as \(\gamma\) runs over \(\mathcal{S}\). Let \(f\) be a normalized newform of level \(M^{\prime}\mid qN\) and let \(W(f)\) be the subspace of \(S_{2}(\Gamma_{0}(qN))\) generated by \(\{f(dz)\mid d\mid(qN)/M^{\prime}\}\).
It is well known that we have a decomposition \[S_{2}(\Gamma_{0}(qN))=\bigoplus_{f}W(f),\] where the sum runs over all newforms of level \(M^{\prime}\mid qN\). Note that \[S_{2}(\Gamma_{0}(qN))=S_{2}(\Gamma_{0}(qN))_{q\text{-old}}\oplus S_{2}(\Gamma_{0}(qN))_{q\text{-new}}\] is a decomposition as \(\mathbf{T}\)-modules. The multiplicity one theorem for cusp forms now implies that for each character \(\gamma\in\mathcal{S}\), there exists a unique normalized cuspform \(f_{\gamma}\) of level \(qM\) dividing \(qN\) such that \(T_{\ell}f_{\gamma}=\gamma(T_{\ell})f_{\gamma}\) for all \(\ell\nmid qN\). Let \(\mathbf{T}_{\gamma}^{\prime}\) be the subalgebra of \(\operatorname{End}(S_{2}(\Gamma_{0}(qM)))\) generated by all Hecke operators. Then there is an extension \(\gamma^{\prime}\) of \(\gamma\) such that \(T_{\ell}f_{\gamma}=\gamma^{\prime}(T_{\ell})f_{\gamma}\) for all \(T_{\ell}\in\mathbf{T}_{\gamma}^{\prime}\). By an abuse of notation, we will denote \(\gamma^{\prime}\) by \(\gamma\) from now on. Let \(\mathbf{T}_{\gamma}\) be the Hecke algebra in \(\operatorname{End}(S_{2}(\Gamma_{0}(qN)))\) generated by all Hecke operators \(T_{\ell}\) such that \((\ell,N/M)=1\). Let \(\ell\) be a prime number and write \(\ell^{k}\) for the exact power of \(\ell\) dividing \(N/M\). Let \(W(\gamma)\) be the space of modular forms generated over \(\mathbb{R}\) by \[\{f_{\gamma}(dz):d\mid(N/M)\}.\] We consider two cases. **Case 1 - \(\ell\nmid M\):** Let \[A_{\ell}=\begin{pmatrix}0&0&\dots&\dots&0\\ 1&0&\dots&\dots&0\\ 0&1&0&\dots&0\\ \dots&\dots&\dots&\dots&\dots\\ \dots&\dots&\dots&\dots&\dots\\ \dots&\dots&1&0&-\ell\\ \dots&\dots&\dots&1&\gamma(T_{\ell})\end{pmatrix}\in\operatorname{Mat}_{k+1,k+1}(\mathbb{R}).\] This describes the action of \(T_{\ell}\) on the basis \(\{f_{\gamma}(d\ell^{i}z):i=0,1,\dots,k\}\), where \(d\) is a positive integer coprime to \(\ell\) dividing \(N/M\). In particular, \(T_{\ell}\) acts by a block matrix on \(W(\gamma)\), whose blocks are given by \(A_{\ell}\). Note that the characteristic polynomial and the minimal polynomial of \(A_{\ell}\) coincide and are of degree \(k+1\). Let \(p_{\ell}(x)\) denote this polynomial. We obtain an isomorphism of \(\mathbb{R}[T_{\ell}]\)-modules \[\mathbb{R}[T_{\ell}\mid W(\gamma)]\cong\mathbb{R}[x]/p_{\ell}(x).\] **Case 2 - \(\ell\mid M\):** Define \[A_{\ell}=\begin{pmatrix}0&0&\dots&\dots&0\\ 1&0&\dots&\dots&0\\ 0&1&0&\dots&0\\ \dots&\dots&\dots&\dots&\dots\\ \dots&\dots&1&0&0\\ \dots&\dots&\dots&1&\gamma(T_{\ell})\end{pmatrix}\in\operatorname{Mat}_{k+1,k+1}(\mathbb{R}).\] We again let \(p_{\ell}(x)\) be the characteristic polynomial and see that \[\mathbb{R}[T_{\ell}\mid W(\gamma)]\cong\mathbb{R}[x]/p_{\ell}(x).\] Let \(\ell_{1},\dots,\ell_{s}\) be the primes dividing \(N/M\). Let \(I\) be the ideal generated by \(\{p_{\ell_{i}}(x_{i}):1\leq i\leq s\}\) in \(\mathbb{R}[x_{1},\dots,x_{s}]\). Then we have a \(T\)-module isomorphism \[W(\gamma)\cong\mathbb{R}[x_{1},\dots,x_{s}]/I,\] where \(\mathbf{T}_{\gamma}\) acts on both sides via \(\gamma\). For every character \(\gamma\in\mathcal{S}\), let \(T_{\gamma}\) be the quotient of \(T\) acting on \(V(\gamma)\) faithfully. It follows that \[T\otimes\mathbb{R}\cong\bigoplus T_{\gamma}\otimes\mathbb{R}\] and we have a similar decomposition for \(\mathbf{T}\otimes\mathbb{R}\). It follows from Theorem 2.7 that \(T_{\gamma}\otimes\mathbb{R}\) and \(\mathbf{T}_{\gamma}\otimes\mathbb{R}\) are isomorphic as \(\mathbb{R}\)-algebras.
In particular, \(p_{\ell}\) is the minimal polynomial of \(T_{\ell}\) as an element in \(T_{\gamma}\). It follows that \(\dim(V(\gamma))\geq\dim(W(\gamma))\). This holds for all characters \(\gamma\in\mathcal{S}\). On the other hand, the \(\mathbb{R}\)-vector spaces \(S_{2}(\Gamma_{0}(qN))_{q\text{-new}}\) and \(\operatorname{Div}^{0}(X_{p}^{q}(N))\otimes\mathbb{R}\) have the same dimension (see [13, proof of Theorem 3.10]). Thus, \(\dim_{\mathbb{R}}(V(\gamma))=\dim_{\mathbb{R}}(W(\gamma))\) for all \(\gamma\). Hence the proposition follows. ### The Brandt matrix and Ihara zeta function We recall the definition of the Ihara zeta function: **Definition 2.9**.: _Given a graph \(X\), we write \(\chi(X)\) for its Euler characteristic. The Ihara zeta function of \(X\) is defined to be_ \[Z(X,S)=\frac{(1-S^{2})^{\chi(X)}}{\det(I-AS+(D-I)S^{2})}\in\mathbb{Z}[S],\] _where \(A\) and \(D\) are the adjacency matrix and valency matrix of \(X\), respectively._ Our goal is to describe the Ihara zeta function of \(X_{p}^{q}(N)\) in terms of the Brandt matrix, which we describe below. Recall that \(n=(q-1)/12\). Let \(d_{N}\) be the number of cyclic subgroups of order \(N\) in \(\mathbb{Z}/N\mathbb{Z}\times\mathbb{Z}/N\mathbb{Z}\). Then the graph \(X_{p}^{q}(N)\) has \(nd_{N}=(q-1)d_{N}/12\) vertices. **Definition 2.10**.: _Let \(\{(E_{i},C_{i}):1\leq i\leq nd_{N}\}\) denote the set of vertices of \(X_{p}^{q}(N)\). Let \(\mathcal{X}_{i,j}^{(p)}\) denote the set of \(p\)-isogenies from \(E_{i}\) to \(E_{j}\) that map \(C_{i}\) to \(C_{j}\). The Brandt matrix \(B_{p}^{q}(N)=(b_{i,j})_{1\leq i,j\leq nd_{N}}\) is defined by_ \[b_{i,j}=\frac{1}{2}\left|\mathcal{X}_{i,j}^{(p)}\right|.\] **Lemma 2.11**.: _The Brandt matrix \(B_{p}^{q}(N)\) is the adjacency matrix of the graph \(X_{p}^{q}(N)\). In particular, \(B_{p}^{q}(N)\) represents the adjacency operator on \(\operatorname{Div}(X_{p}^{q}(N))\)._ Proof.: This essentially follows from the definitions; we give the details of the proof for the convenience of the reader. Let \(\phi\colon E_{i}\to E_{j}\) be an isogeny of degree \(p\). Then \(\ker(\phi)\) is a cyclic subgroup of \(E_{i}[p]\) of order \(p\). If, conversely, \(D\subset E_{i}[p]\) is a cyclic subgroup of order \(p\) such that \(E_{i}/D\cong E_{j}\), then the quotient map \(\phi\colon E_{i}\to E_{i}/D\) defines an isogeny of degree \(p\). We obtain a well-defined surjective map \[\kappa\colon\{p\text{-isogenies from }E_{i}\text{ to }E_{j}\}\to\{\text{cyclic subgroups }D\text{ of order }p\text{ in }E_{i}[p]\text{ with }E_{i}/D\cong E_{j}\}.\] Let \(r\in R_{i}^{\times}=\operatorname{End}(E_{i})^{\times}\). Clearly, \(\phi\circ r\) and \(\phi\) have the same kernel. Furthermore, \(\phi\circ r(C_{i})=\phi(C_{i})\) for every cyclic subgroup \(C_{i}\subset E_{i}[N]\). Recall that \(R_{i}^{\times}=\{\pm 1\}\). Thus, \(\kappa\) is a two-to-one map. In particular, there are \(b_{i,j}\) cyclic subgroups \(D\subset E_{i}[p]\) of order \(p\) such that \((E_{i}/D,(C_{i}+D)/D)=(E_{j},C_{j})\). This concludes the proof of the lemma.
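To make Definition 2.9 concrete, here is a small illustrative computation (not taken from the paper): it evaluates the three-term determinant formula for the complete graph \(K_4\), a \(3\)-regular graph standing in for a \((p+1)\)-regular isogeny graph with \(p=2\); the graph choice and all names in the code are ours.

```python
# Illustrative only: evaluate the Ihara zeta function of Definition 2.9 for a small
# regular graph.  K_4 (3-regular, 4 vertices, 6 edges) stands in for a (p+1)-regular
# isogeny graph with p = 2; none of this data comes from the paper.
import sympy as sp

S = sp.symbols('S')

A = sp.Matrix(4, 4, lambda i, j: 0 if i == j else 1)   # adjacency matrix of K_4
D = 3 * sp.eye(4)                                      # valency (degree) matrix
I = sp.eye(4)
chi = 4 - 6                                            # Euler characteristic: #V - #E

# Z(X, S) = (1 - S^2)^chi / det(I - A*S + (D - I)*S^2)
Z = sp.factor((1 - S**2)**chi / (I - A*S + (D - I)*S**2).det())
print(Z)
```

For the graph \(X_{p}^{q}(N)\) itself, \(A\) would be the Brandt matrix \(B_{p}^{q}(N)\) of Definition 2.10 and \(D-I=pI\), which is exactly the shape appearing in Corollary 2.12 below.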
We conclude the present section with the following: **Corollary 2.12**.: _The equality_ \[Z(X_{p}^{q}(N),S)=\frac{(1-S^{2})^{\chi(X_{p}^{q}(N))}}{\det(1-B_{p}^{q}(N)S+pS^{2}I)}\] _holds._ Proof.: We have a tautological exact sequence \[0\to\operatorname{Div}^{0}(X_{p}^{q}(N))\to\operatorname{Div}(X_{p}^{q}(N))\to\mathbb{Z}\to 0.\] After tensoring with \(\mathbb{R}\), this sequence splits and we can find an element \(\delta\in\operatorname{Div}(X_{p}^{q}(N))\) such that \[\operatorname{Div}(X_{p}^{q}(N))\otimes\mathbb{R}=(\operatorname{Div}^{0}(X_{p}^{q}(N))\otimes\mathbb{R})\oplus\mathbb{R}\delta\] as \(T\otimes\mathbb{R}\)-modules. For all \(\ell\nmid qN\), we have \(T_{\ell}\delta=(\ell+1)\delta\). Thus, Proposition 2.8 implies that \[\det(1-B_{p}^{q}(N)S+pS^{2})=(1-S)(1-pS)\det(1-T_{p}S+pS^{2}\mid S_{2}(\Gamma_{0}(qN))_{q\text{-new}}). \tag{2.1}\] As \(p\) is coprime to \(qN\), each \((E,C)\) admits \(p+1\) isogenies of degree \(p\). In particular, the graph \(X_{p}^{q}(N)\) is \((p+1)\)-regular. Thus, \(D-I=pI\). The corollary now follows from Lemma 2.11. ## 3. Proof of Theorem A The goal of this section is to prove Theorem A given in the introduction. As before, \(p\) is a fixed prime that is coprime to \(qN\). We consider \(X_{0}(qN)_{\mathbb{F}_{p}}\), the modular curve \(X_{0}(qN)\) viewed as a curve over \(\mathbb{F}_{p}\). Since \(p\) is fixed throughout, we shall drop the subscript \(\mathbb{F}_{p}\) from the notation for simplicity and simply write \(X_{0}(qN)\) and \(X_{0}(N)\) for the curves defined over \(\mathbb{F}_{p}\). The final step of our proof of Theorem A is to relate the Brandt matrix to the Hasse-Weil zeta function, whose definition we recall below. **Definition 3.1**.: _Given an algebraic curve \(C\) over \(\mathbb{F}_{p}\), we define the Hasse-Weil zeta function of \(C\) by_ \[W(C,S)=\prod_{x\in|C|}\frac{1}{1-S^{\deg(x)}}\in 1+S\mathbb{Z}[[S]],\] _where \(|C|\) is the set of closed points of \(C\)._ _Remark 3.2_.: If \(N=1\), we have \(X_{0}(N)=\mathbf{P}^{1}\). In this case, the Hasse-Weil zeta function of \(X_{0}(N)\) is given by \[W(X_{0}(N),S)=\frac{1}{(1-S)(1-pS)}.\] **Lemma 3.3**.: _Let \(B_{p}^{q}(N)\) be the Brandt matrix given in Definition 2.10. Then_ \[\det(1-B_{p}^{q}(N)S+pS^{2})=W(X_{0}(qN),S)W(X_{0}(N),S)^{-2}.\] Proof.: As discussed in [19, page 12], we can write \[W(X_{0}(qN),S)=(1-S)^{-1}(1-pS)^{-1}\prod_{i=1}^{g(qN)}(1-\lambda_{i}(p)S+pS^{2}),\] where \(g(qN)\) is the genus of \(X_{0}(qN)\) and the \(\lambda_{i}(p)\) are the eigenvalues of \(T_{p}\) acting on \(S_{2}(\Gamma_{0}(qN))\), counted with multiplicities. Therefore, dividing by \(W(X_{0}(N),S)^{2}\) gives \[W(X_{0}(qN),S)W(X_{0}(N),S)^{-2}=(1-S)(1-pS)\prod_{f}(1-a_{p}(f)S+pS^{2}),\] where the product runs over the set of normalized \(q\)-new eigenforms \(f\) (counted with multiplicities) in \(S_{2}(\Gamma_{0}(qN))\). On fixing a \(T_{p}\)-eigenbasis, we have \[\prod_{f}(1-a_{p}(f)S+pS^{2})=\det(1-ST_{p}+pS^{2}\mid S_{2}(\Gamma_{0}(qN))_{q\text{-new}}).\] Theorem 2.7 tells us that the right-hand side is equal to \[\det(1-ST_{p}+pS^{2}\mid\text{Div}^{0}(X_{p}^{q}(N))\otimes\mathbb{R}).\] Thus, (2.1) implies that \[\prod_{f}(1-a_{p}(f)S+pS^{2})(1-S)(1-pS)=\det(1-B_{p}^{q}(N)S+pS^{2}),\] from which the result follows. We can now conclude the proof of Theorem A: **Corollary 3.4**.: _We have_ \[W(X_{0}(qN),S)W(X_{0}(N),S)^{-2}Z(X_{p}^{q}(N),S)=(1-S^{2})^{\chi(X_{p}^{q}(N))}.\] Proof.: This follows from combining Lemma 3.3 with Corollary 2.12.
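As a quick sanity check on Definition 3.1 and Remark 3.2, one can use the standard logarithmic form of the Hasse-Weil zeta function, \(W(C,S)=\exp\bigl(\sum_{k\geq 1}\#C(\mathbb{F}_{p^{k}})S^{k}/k\bigr)\), which is equivalent to the Euler product of Definition 3.1. The short sketch below (illustrative only; the prime and the truncation order are arbitrary) confirms that the point counts \(\#\mathbf{P}^{1}(\mathbb{F}_{p^{k}})=p^{k}+1\) reproduce \(1/((1-S)(1-pS))\) as a power series.

```python
# Illustrative check of Remark 3.2: the point-count form of the Hasse-Weil zeta
# function of P^1 over F_p agrees with 1/((1-S)(1-pS)) as a power series.
import sympy as sp

S = sp.symbols('S')
p = 5        # any prime; chosen arbitrarily for the check
order = 8    # compare coefficients up to S^order

log_W = sum((p**k + 1) * S**k / sp.Integer(k) for k in range(1, order + 1))
lhs = sp.series(sp.exp(log_W), S, 0, order + 1).removeO()
rhs = sp.series(1 / ((1 - S) * (1 - p*S)), S, 0, order + 1).removeO()
print(sp.expand(lhs - rhs) == 0)   # True
```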
2306.00022
Ephemeris Updates for Seven Selected HATNet Survey Transiting Exoplanets
We refined the ephemeris of seven transiting exoplanets HAT-P-6b, HAT-P-12b, HAT-P-18b, HAT-P-22b, HAT-P-32b, HAT-P-33b, and HAT-P-52b. We observed 11 transits from eight observatories in different filters for HAT-P-6b and HAT-P-32b. Also, the Exoplanet Transit Database (ETD) observations for each of the seven exoplanets were analyzed, and the light curves of five systems were studied using Transiting Exoplanet Survey Satellite (TESS) data. We used Exofast-v1 to simulate these ground- and space-based light curves and estimate mid-transit times. We obtained a total of 11, 175 and 67 mid-transit times for these seven exoplanets from our observations, ETD and TESS data, respectively, along with 155 mid-transit times from the literature. Then, we generated transit timing variation (TTV) diagrams for each using derived mid-transit times as well as those found in the literature. The systems' linear ephemeris was then refined and improved using the Markov Chain Monte Carlo (MCMC) method. All of the studied exoplanets, with the exception of the HAT-P-12b system, displayed an increasing trend in the orbital period in the TTV diagrams.
A. Poro, F. Ahangarani Farahani, E. Jahangiri, A. Sarostad, M. Gozarandi, M. Haghgou, F. Abolhassani, A. Fakhrabadi, Y. Jongen, A. Wünsche, R. Naves, P. Guerra, A. Marchini, M. Salisbury, R. Ehrenberger, V-P. Hentunen
2023-05-30T20:21:02Z
http://arxiv.org/abs/2306.00022v2
# Ephemeris Updates for Seven Selected HATNet Survey Transiting Exoplanets ###### Abstract We refined the ephemeris of seven transiting exoplanets HAT-P-6b, HAT-P-12b, HAT-P-18b, HAT-P-22b, HAT-P-32b, HAT-P-33b, and HAT-P-52b. We observed 11 transits from eight observatories in different filters for HAT-P-6b and HAT-P-32b. Also, the Exoplanet Transit Database (ETD) observations for each of the seven exoplanets were analyzed, and the light curves of five systems were studied using Transiting Exoplanet Survey Satellite (TESS) data. We used Exofast-v1 to simulate these ground- and space-based light curves and estimate mid-transit times. We obtained a total of 11, 175 and 67 mid-transit times for these seven exoplanets from our observations, ETD and TESS data, respectively, along with 155 mid-transit times from the literature. Then, we generated transit timing variation (TTV) diagrams for each using derived mid-transit times as well as those found in the literature. The systems' linear ephemeris was then refined and improved using the Markov Chain Monte Carlo (MCMC) method. All of the studied exoplanets, with the exception of the HAT-P-12b system, displayed an increasing trend in the orbital period in the TTV diagrams. keywords: planetary systems - planets and satellites: gaseous ## 1 Introduction The number of exoplanets discovered and characterized each year has been increasing since the results of the first exoplanet detection [1]. Hot Jupiters are an important type of planetary gas giant with masses and radii similar to Jupiter but orbiting their host stars with short orbital periods (most less than 10 days), making them good targets to discover and study [2][3]. The transit technique is the most efficient way to improve our understanding of exoplanets through ground- and space-based surveys. Furthermore, photometric transit surveys combined with radial velocity data have become one of the most successful methods for detecting transiting exoplanets over the past decade [4]. High-precision transit observations provide information to refine planetary parameters such as the planet's size, mass, atmosphere, and orbital ephemerides [5][6]. Moreover, photometric transit surveys allow us to study the variations of the orbital periods through TTV analysis. Space telescopes have longer available observational time, and they are not affected by the Earth's atmosphere. TESS is one of the most significant space-based survey missions for the discovery and observation of transiting exoplanets. TESS was launched in 2018 to observe new exoplanets orbiting bright nearby stars that are brighter than the targets of the Kepler mission [7]. Furthermore, when combined with previous work, this space mission provides precise transit timing for discovered exoplanets, which is critical for obtaining a better transit ephemeris [8].
Based on our observations, TESS, ETD, and literature observations, we updated the orbital ephemerides of HAT-P-6b, HAT-P-12b (TESS ID 198108326), HAT-P-18b (TESS ID 21744120), HAT-P-22b (TESS ID 252479260), HAT-P-32b, HAT-P-33b (TESS ID 239154970), and HAT-P-52b (TESS ID 436875934). These exoplanets were discovered by the Hungarian-made Automated Telescope Network (HATNet) survey. ## 2 Observations and method ### Observation and data reduction Observations in this study have been made of the exoplanets HAT-P-6b and HAT-P-32b during the years 2018 to 2022. A total of nine observation nights have been done for these two exoplanets; five and four nights for HAT-P-6b and HAT-P-32b, respectively. All these photometric observations have been done with small telescopes at eight observatories. We used CCDs and standard filters in these observations. The information about the observatories, telescopes, CCDs, and data reduction software that we used is listed in Table 1. In Table 1, an abbreviated name has been assigned to each observatory just to identify it in this study. The basic data reduction for the dark, bias, and flat field of each CCD image was carried out in accordance with the standard technique. ### ETD data To obtain the refined orbital ephemeris of the selected HATNet exoplanets, we also collected light curves sourced from astronomers through the ETD archive [9]. Light curves were obtained in various filters and time scales. We used data in ETD that we were confident enough to be appropriate; for example, we did not use data whose declared times were given to fewer than three decimal places. We used those which generally have a quality index (DQ) of less than three [9]. All times in the data were converted from JD or HJD to \(BJD_{TDB}\) based on the geographic location of observation and RA(J2000) and DEC(J2000) from the Simbad2 astronomical database. Footnote 2: [http://simbad.u-strasbg.fr/simbad/](http://simbad.u-strasbg.fr/simbad/) In some ETD light curves, the airmass effect has been ignored, so the airmass must be calculated based on the observers' location, which influences and improves the measured mid-transit times of the related light curves. Therefore, we computed the airmass using the Astropy package in Python [10]. ### TESS data Five of these exoplanets were observed by TESS, while HAT-P-6b and HAT-P-32b have no TESS data yet. TESS observed the five host stars at 120-second cadence. We collected TESS data from the Mikulski Archive for Space Telescopes (MAST). TESS light curves were extracted from the MAST products using the LightKurve3 code in Python. Footnote 3: [https://docs.lightkurve.org/](https://docs.lightkurve.org/) ### Method We relied on the AstroImageJ software [11] to normalize all of the data. Figure 1 shows the folded TESS light curves for five selected exoplanets. Finally, all ground- and space-based light curves were modeled with Exofast-v1 [12]; the resulting mid-transit times and associated uncertainties were then employed. Figure 2 provides an example of a modeled TESS observation and a modeled observation from this study. The extracted transit mid-times from our observations and TESS data are provided in Tables 2 and 3. Tables 5-11 include the literature and ETD transit mid-times. We plotted TTV diagrams for the seven selected exoplanets using derived mid-transit times and those available within the literature. Our MCMC analysis of these timings enabled us to refine the linear ephemeris of the systems.
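As a minimal sketch of the two ETD reduction steps described above (converting the reported times to \(BJD_{TDB}\) and recomputing the airmass from the observer's location), the snippet below uses Astropy; the target coordinates, observing site, and time are placeholder values, not data from this study.

```python
# Illustrative sketch of the ETD reduction steps; all numerical values are placeholders.
import astropy.units as u
from astropy.coordinates import SkyCoord, EarthLocation, AltAz
from astropy.time import Time

target = SkyCoord(ra=23.0 * u.deg, dec=42.0 * u.deg)                   # placeholder RA/DEC
site = EarthLocation.from_geodetic(lon=2.1 * u.deg, lat=41.5 * u.deg,  # placeholder site
                                   height=100 * u.m)

# 1) Convert a reported JD (UTC) to BJD_TDB: change the time scale to TDB and add
#    the barycentric light-travel time toward the target.
t = Time(2459441.40, format='jd', scale='utc', location=site)
bjd_tdb = (t.tdb + t.light_travel_time(target, kind='barycentric')).jd

# 2) Recompute the airmass at that instant for the observer's location.
altaz = target.transform_to(AltAz(obstime=t, location=site))
print(bjd_tdb, float(altaz.secz))
```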
We applied the MCMC method, sampling from the posterior probability distributions of the coefficients (100 walkers, 10000 steps, and 1000 burn-in), using the Pymc3 package in Python [13]. Figure 3 shows all TTV diagrams of the studied exoplanets and also displays the posterior distributions for the fitted parameters obtained using the MCMC method (dT and dP). ### HAT-P-12b [22] studied light curves of HAT-P-12b, and with existing literature data, they came up with an improved ephemeris. [22] also refined the absolute physical properties of the star-planet system. [23] updated the ephemeris of HAT-P-12b according to six transits for this system by applying a least-squares linear fit to all available transit times. According to their results, no long-term TTVs were apparent. [24] studied HAT-P-12b in the \(V\) and \(I\) bands to investigate the transmission spectrum of this system. [24] observed 23 new photometric transit light curves, and the analysis showed no indication of star-spot influence on the calculated transit parameters. [25] studied this exoplanet's atmosphere. In fact, the goal of this research was to specify an appropriate solution for future studies of other exoplanetary atmospheres. Spectroscopic observations using the Large Binocular Telescope (LBT) were done by [26] to obtain an atmosphere transmission spectrum of this exoplanet. They found no evidence of Na or K absorption features in the relatively flat transmission spectrum, which is in agreement with the HST transmission spectrum. Furthermore, [27] included six new mid-transit times to determine a new ephemeris by a linear fit to a satisfactory level. [28] also reported infrared transmission photometry of HAT-P-12b, along with 48 other exoplanets, in the largest analysis of Spitzer/IRAC observations, to study the influence of infrared photometry on atmospheric chemical properties. We used mid-transit times obtained from the modeling of ETD light curves and TESS in association with data published in previous literature for plotting a new TTV diagram. We extracted 27 mid-transit times from ETD and 6 mid-transit times from sector 23 of TESS for HAT-P-12b. ### HAT-P-18b HAT-P-18b is a low-density Saturn-mass exoplanet orbiting a supersolar-metallicity K2 dwarf star [29]. The discovery observations of this exoplanet were made by [29] using the transit method to obtain the orbital and physical properties of the system. [29] reported a non-zero eccentricity (\(e=0.084\pm 0.048\)) for HAT-P-18b. Complementary new photometric observations of the full transit were also analyzed by [30] in order to independently estimate the parameters of the host star and HAT-P-18b. [31] performed the TTV study of HAT-P-18b with a limited number of existing high-quality data, and they presented ground-based transmission spectroscopy of HAT-P-18b. This exoplanet was described as a hot Jupiter by [32], who also found Rayleigh scattering in the atmosphere, and their results confirmed that ground-based observations are suitable to determine the opacity sources of exoplanets' atmospheres. [33] observed HAT-P-18b as a part of the Original Research By Young Twinkle Students (ORBYTS) program to refine its transit ephemerides. The atmosphere of this exoplanet has been studied by [34]. Moreover, [35] derived a refined ephemeris from observations provided by the ExoClock network in combination with previous literature data. For HAT-P-18b, we obtained seven mid-transit times from sectors 25 and 26 of TESS, and 21 mid-transit times from ETD.
### HAT-P-22b [17] reported the discovery of the exoplanet HAT-P-22b. It is among the moderately massive and compact hot Jupiters, orbiting a fairly metal-rich dwarf star with a \(V\)=9.732 magnitude. [36] presented the first photometric follow-up observation of bright transiting exoplanets by using a defocusing technique. Following this, [37] performed a \begin{table} \begin{tabular}{c c c c} \hline \hline Exoplanet & Observatory & \(T_{c}(BJD_{TDB})\) & Error & Filter & Epch & O-C \\ \hline HAT-P-6b & MO & 2455430.46458 & 0.00110 & Optee \(R\) cousins & 362 & 0.0075 \\ HAT-P-6b & RO & 2458312.05306 & 0.00063 & Baader imaging \(G\) & 1110 & 0.0155 \\ HAT-P-6b & AA & 2458389.56389 & 0.00101 & Baader \(J-CV\) & 1130 & 0.0143 \\ HAT-P-6b & AO & 2459441.43053 & 0.00101 & Johns-Cousins \(I\) & 1403 & 0.0161 \\ HAT-P-6b & RO & 2459441.43128 & 0.00095 & Baader imaging \(R\) & 1403 & 0.0168 \\ HAT-P-6b & RO & 2459468.40283 & 0.00124 & Baader imaging \(R\) & 1410 & 0.0175 \\ HAT-P-32b & CO & 2459107.46235 & 0.00097 & Johns-Cousins \(R_{c}\) & 2180 & -0.0015 \\ HAT-P-32b & TO & 2459191.31427 & 0.00034 & Baader Bessel photometric \(R\) & 2219 & 0.0001 \\ HAT-P-32b & BO & 2459507.36578 & 0.00017 & Johns-Cousins \(V\) & 2366 & 0.0005 \\ HAT-P-32b & PO & 2459593.36707 & 0.00024 & Johns-Cousins \(R_{c}\) & 2406 & 0.0015 \\ HAT-P-32b & BO & 2459593.37024 & 0.00041 & Johns-Cousins \(R_{c}\) & 2406 & 0.0046 \\ \hline \hline \end{tabular} \end{table} Table 2: Extracted ground-based transit times for HAT-P-6b and HAT-P-32b in this study. \begin{table} \begin{tabular}{c c c c} \hline \hline Observatory & Telescope & CCD & Data reduction Software \\ \hline Rasteau Observatory, France (RO) & PlaneWave CDK 17\({}^{*}\) & SBIG STXL11004 & Muniwin / C-munipack \\ Montcaber private observatory, Spain (MO) & SCT 12\({}^{*}\) & SBIG ST8-XME & Fotodiff \\ Observatori Astronomic Albanya, Spain (AA) & Meade ACF 16\({}^{*}\) & Moravian Instruments G4-9000 & Fotodiff \\ Astronomical Observatory, University of Sicna (K54), Italy (AO) & MCT 300 mm & SBIG STL-6303 & Muniwin / C-munipack \\ Observatoric des Baronnies Provenales, France (BO) & Cassegrain 430 mm & Zwo ASIG200 Pro mono & Muniwin / C-munipack \\ Private Observatory, Czech Republic (PO) & 400 mm & SBIG ST-10 XME & Astrolmagel 3.2.10 \\ Crow-Observatory Vranová, Czech Republic (CO) & NWT 300 mm & Moravian Instruments G2-3200 & Muniwin / C-munipack \\ Taurus Hill Observatory, Finland (TO) & SCT 14\({}^{*}\) & SBIG ST-8 XME & AlP4Win v2.4.10 \\ \hline \hline \end{tabular} \end{table} Table 1: The observatories of this study and the instruments that were employed. follow-up transit observation using a defocusing technique and they derived one complete transit and computed the mid-transit times for HAT-P-22b. The near-\(UV\) and optical photometric observations of HAT-P-22b were made by [38] to study the atmosphere of this exoplanet. [38] also refined the planetary parameters and ephemerides of HAT-P-22b hot Jupiter. Accordingly, all derived parameters were in agreement with the discovery values by [17], and any non-spherical asymmetries were not seen in their data. In order to plot a TTV diagram for this exoplanet, we extracted 30 and 14 mid-transit times resulting from the modeling of ETD and TESS light curves (sectors 21 and 48), respectively, as well as data from previous publications. 
\begin{table} \begin{tabular}{l c c} \hline \hline Exoplanet & New ephemeris (\(BJD_{TDB}\)) & Reference ephemeris (\(BJD_{TDB}\)) \\ \hline HAT-P-6b & 2454035.6769526(3) + 3.85300(15) \(\times\)\(E\) & 2454035.67652(28) + 3.852985(5) \(\times\)\(E\)[14] \\ \hline HAT-P-12b & 2454419.19585(6) + 3.21305852(8) \(\times\)\(E\) & 2454419.19556(20) + 3.2130598(21) \(\times\)\(E\)[15] \\ \hline HAT-P-18b & 2454715.022802(97) + 5.5080288(2) \(\times\)\(E\) & 2454715.02174(20) + 5.508023(6) \(\times\)\(E\)[16] \\ \hline HAT-P-22b & 2454930.22043(16) + 3.2122330(1) \(\times\)\(E\) & 2454930.22001(25) + 3.212220(9) \(\times\)\(E\)[17] \\ \hline HAT-P-32b & 2454420.44713(6) + 2.15000821(5) \(\times\)\(E\) & 2454420.44637(9) + 2.150008(1) \(\times\)\(E\)[16] \\ \hline HAT-P-33b & 2455110.92683(12) + 3.4744769(1) \(\times\)\(E\) & 2455110.92595(22) + 3.474474(1) \(\times\)\(E\)[16] \\ \hline HAT-P-52b & 2455852.10370(23) + 2.7535989(3) \(\times\)\(E\) & 2455852.10326(41) + 2.7535953(94) \(\times\)\(E\)[18] \\ \hline \hline \end{tabular} \end{table} Table 4: The new ephemeris derived by a linear fit on the TTV diagram of each exoplanet and reference ephemeris for computing epochs and the TTV values. \begin{table} \begin{tabular}{l c c c c c c c c} \hline \hline Exoplanet & \(T_{c}(BJD_{TDB})\) & Error & Epoch & O-C & Exoplanet & \(T_{c}(BJD_{TDB})\) & Error & Epoch & O-C \\ \hline HAT-P-12b & 2458933.54320 & 0.00053 & 1405 & -0.0014 & HAT-P-33b & 2459506.13961 & 0.00051 & 1265 & 0.0040 \\ HAT-P-12b & 2458936.75601 & 0.00052 & 1406 & -0.0016 & HAT-P-33b & 24595906.61506 & 0.00052 & 1266 & 0.0050 \\ HAT-P-12b & 2458939.96969 & 0.00048 & 1407 & -0.0010 & HAT-P-33b & 2459516.56424 & 0.00058 & 1268 & 0.0053 \\ HAT-P-12b & 2458946.39512 & 0.00055 & 1409 & -0.0017 & HAT-P-33b & 2459520.03824 & 0.00054 & 1269 & 0.0048 \\ HAT-P-12b & 2458949.60784 & 0.00050 & 1410 & -0.0020 & HAT-P-33b & 2459523.21520 & 0.00048 & 1270 & 0.0044 \\ HAT-P-12b & 2458952.82114 & 0.00048 & 1411 & -0.0018 & HAT-P-33b & 2459525.98761 & 0.00052 & 1271 & 0.0052 \\ HAT-P-18b & 2458989.25305 & 0.00059 & 776 & 0.0055 & HAT-P-33b & 2459533.46206 & 0.00057 & 1272 & 0.0052 \\ HAT-P-18b & 2458994.76038 & 0.00048 & 777 & 0.0048 & HAT-P-33b & 2459533.93584 & 0.00055 & 1273 & 0.0045 \\ HAT-P-18b & 2459005.77787 & 0.00050 & 779 & 0.0062 & HAT-P-33b & 2459537.41024 & 0.00057 & 1274 & 0.0044 \\ HAT-P-18b & 2459011.28554 & 0.00051 & 780 & 0.0059 & HAT-P-33b & 2459540.88529 & 0.00065 & 1275 & 0.0050 \\ HAT-P-18b & 2459016.79255 & 0.00059 & 781 & 0.0048 & HAT-P-33b & 2459544.35953 & 0.00062 & 1276 & 0.0048 \\ HAT-P-18b & 2459027.809606 & 780 & 0.0060 & HAT-P-33b & 2459547.83335 & 0.00063 & 1277 & 0.0041 \\ HAT-P-18b & 2459033.31868 & 0.00060 & 784 & 0.0069 & HAT-P-33b & 2459558.2573899 & 0.00053 & 1279 & 0.0058 \\ HAT-P-22b & 2458887.63055 & 0.00020 & 1227 & 0.0166 & HAT-P-33b & 2459558.25712 & 0.00055 & 1280 & 0.0044 \\ HAT-P-22b & 24588874.84229 & 0.00018 & 1228 & 0.0161 & HAT-P-33b & 2459561.73071 & 0.00057 & 1281 & 0.0036 \\ HAT-P-22b & 2458887.05478 & 0.00018 & 1229 & 0.0164 & HAT-P-33b & 2459568.68054 & 0.00053 & 1283 & 0.0044 \\ HAT-P-22b & 2458882.16687 & 0.00019 & 1230 & 0.0163 & HAT-P-33b & 2459572.15466 & 0.00054 & 1284 & 0.0041 \\ HAT-P-22b & 24588887.69120 & 0.00017 & 1232 & 0.0161 & HAT-P-33b & 2459573.60336 & 0.00053 & 1285 & 0.0053 \\ HAT-P-22b & 2458890.90361 & 0.00020 & 1233 & 0.0163 & HAT-P-33b & 2459582.57829 & 0.00053 & 1287 & 0.0043 \\ HAT-P-22b & 2458894.11620 & 0.00018 & 1234 & 0.0167 & HAT-P-33b & 2459586.05330 & 0.00053 & 1288 & 
0.0048 \\ HAT-P-22b & 2458897.32739 & 0.00020 & 1235 & 0.0157 & HAT-P-33b & 2459589.52750 & 0.0051 & 1289 & 0.0046 \\ HAT-P-22b & 2459613.65575 & 0.00019 & 1458 & 0.0190 & HAT-P-33b & 245959905.038 & 0.00055 & 1292 & 0.0040 \\ HAT-P-22b & 2459616.86844 & 0.00018 & 1459 & 0.0194 & HAT-P-33b & 2459603.42543 & 0.00053 & 1293 & 0.0046 \\ HAT-P-22b & 2459620.08042 & 0.00018 & 1460 & 0.0192 \\ \hline \hline \end{tabular} \end{table} Table 3: Extracted TESS transit times for the studied exoplanets in this study. ### HAT-P-32b The planet HAT-P-32b was discovered by the HATNet survey in 2011, and it is a hot Jupiter exoplanet orbiting a late-F/early-G dwarf star with \(V\)=11.289 magnitude. In this discovery, radial velocity measurements were taken with the High-Resolution Echelle Spectrometer, and the [39] transit model was used in order to describe the HATNet photometry [16]. [21] presented a \(JH\)-band photometric observation of HAT-P-32b and extracted precise mid-transit times. [21] declared that the HAT-P-32b system parameters were in agreement with those reported in the [16] study and derived a period of this exoplanet with improved uncertainty. Following this, [40] reported two primary transits of HAT-P-32b during Gemini-North Gemini Multi-Object Spectrograph observations. They used a white light curve analysis in order to refine the parameters of this exoplanet and derive a new ephemeris. [42] updated the system properties by analyzing the results of 45 transit observations, which were obtained by using the Young Exoplanet Transit Initiative (YETI) network. Moreover, [41] studied the TTV diagram to investigate the existence of an additional planet in the HAT-P-32b system. [42] performed a global fit for the HAT-P-32b system based on their new photometric observations and previously published RV data in order to update the system parameters. [42] also analyzed the TTV diagram for this system and, according to the results, there were no significant TTV signals. Some follow-up high-quality observations of this exoplanet were done with small observatories operated by citizen scientists in 2020 [43]. The accurate mid-transit times for HAT-P-32b were obtained from the available data for plotting an updated TTV diagram. We extracted a total of five and 72 mid-transit times from our observations and the ETD, respectively. ### HAT-P-33b The planet HAT-P-33b was among the first exoplanets discovered by the HATNet survey in 2011 and was confirmed by high-precision photometry and additional radial velocity measurements [16]. HAT-P-33b is an inflated hot Jupiter orbiting a late-F dwarf star with a short orbital period. [16] reported that HAT-P-33b has a radius of \(\sim 1.7~{}R_{J}\), which is among the largest measured radii of all transiting exoplanets. HAT-P-33b also has an equilibrium temperature of more than 1600 K, which is the result of the high luminosity of its host star. The TTV study of HAT-P-33b was analyzed by the Transiting Exoplanet Monitoring Project (TEMP) in the study of [44]. Figure 1: Folded TESS light curves in each sector of all selected exoplanets, obtained with the LightKurve code. Figure 2: Left: HAT-P-33b observational and theoretical light curves using TESS sector 45 data; Right: the observational light curve of HAT-P-32b from this study in the \(V\) filter and the theoretical light curve. Figure 3: The TTV diagrams of seven studied exoplanets with the linear fit on the data points and posterior distributions for the fitted parameters using MCMC (dT and dP).
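To illustrate how the refined ephemerides in Table 4 are obtained from a TTV diagram, the sketch below fits the correction terms dT and dP of a linear ephemeris, \(T_{c}(E)=(T_{0}+dT)+(P+dP)\times E\), with an MCMC sampler. It uses emcee as a stand-in for the Pymc3 sampler cited in the text, and, for concreteness, four of the HAT-P-32b mid-transit times of Table 2 together with the reference ephemeris of Table 4; the priors and starting positions are illustrative.

```python
# Illustrative linear-ephemeris refinement via MCMC (emcee used as a stand-in for
# the Pymc3 sampler of the paper).  Input values are HAT-P-32b ground-based
# timings from Table 2 and the reference ephemeris from Table 4.
import numpy as np
import emcee

T0_ref, P_ref = 2454420.44637, 2.150008                    # reference ephemeris [16]
epochs = np.array([2180, 2219, 2366, 2406])
t_obs = np.array([2459107.46235, 2459191.31427, 2459507.36578, 2459593.36707])
t_err = np.array([0.00097, 0.00034, 0.00017, 0.00024])
oc = t_obs - (T0_ref + P_ref * epochs)                      # O-C values in days

def log_prob(theta):
    dT, dP = theta
    if abs(dT) > 0.1 or abs(dP) > 1e-3:                     # flat priors on the corrections
        return -np.inf
    return -0.5 * np.sum(((oc - (dT + dP * epochs)) / t_err) ** 2)

nwalkers, ndim, nsteps, burn = 100, 2, 10000, 1000          # settings quoted in the text
p0 = 1e-5 * np.random.randn(nwalkers, ndim)
sampler = emcee.EnsembleSampler(nwalkers, ndim, log_prob)
sampler.run_mcmc(p0, nsteps)
dT, dP = np.median(sampler.get_chain(discard=burn, flat=True), axis=0)
print("refined ephemeris: T0 =", T0_ref + dT, ", P =", P_ref + dP)
```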
2305.16310
Securing Deep Generative Models with Universal Adversarial Signature
Recent advances in deep generative models have led to the development of methods capable of synthesizing high-quality, realistic images. These models pose threats to society due to their potential misuse. Prior research attempted to mitigate these threats by detecting generated images, but the varying traces left by different generative models make it challenging to create a universal detector capable of generalizing to new, unseen generative models. In this paper, we propose to inject a universal adversarial signature into an arbitrary pre-trained generative model, in order to make its generated contents more detectable and traceable. First, the imperceptible optimal signature for each image can be found by a signature injector through adversarial training. Subsequently, the signature can be incorporated into an arbitrary generator by fine-tuning it with the images processed by the signature injector. In this way, the detector corresponding to the signature can be reused for any fine-tuned generator for tracking the generator identity. The proposed method is validated on the FFHQ and ImageNet datasets with various state-of-the-art generative models, consistently showing a promising detection rate. Code will be made publicly available at \url{https://github.com/zengxianyu/genwm}.
Yu Zeng, Mo Zhou, Yuan Xue, Vishal M. Patel
2023-05-25T17:59:01Z
http://arxiv.org/abs/2305.16310v1
# Securing Deep Generative Models with Universal Adversarial Signature ###### Abstract Recent advances in deep generative models have led to the development of methods capable of synthesizing high-quality, realistic images. These models pose threats to society due to their potential misuse. Prior research attempted to mitigate these threats by detecting generated images, but the varying traces left by different generative models make it challenging to create a universal detector capable of generalizing to new, unseen generative models. In this paper, we propose to inject a universal adversarial signature into an arbitrary pre-trained generative model, in order to make its generated contents more detectable and traceable. First, the imperceptible optimal signature for each image can be found by a signature injector through adversarial training. Subsequently, the signature can be incorporated into an arbitrary generator by fine-tuning it with the images processed by the signature injector. In this way, the detector corresponding to the signature can be reused for any fine-tuned generator for tracking the generator identity. The proposed method is validated on the FFHQ and ImageNet datasets with various state-of-the-art generative models, consistently showing a promising detection rate. Code will be made publicly available at [https://github.com/zengxianyu/genwm](https://github.com/zengxianyu/genwm). ## 1 Introduction Recent advances in deep generative models [1; 2] have enabled the generation of highly realistic synthetic images, which benefits a wide range of applications such as neural rendering [3; 4; 5; 6], text-to-image generation [7; 8; 9; 10], image inpainting [11; 12], super-resolution [13], among others. As a side effect, synthetic but photo-realistic images pose significant societal threats due to their potential misuse, including copyright infringement, the dissemination of misinformation, and the compromise of biometric security systems. To mitigate these risks, one straightforward approach is to imprint digital watermarks on generated images during a post-processing phase. However, this post-processing step is usually decoupled from the main model, making it difficult to enforce. Therefore, recent work has focused on a more enforceable solution: using a deep model as a detector to identify synthetic images [14; 15; 16; 17; 18]. They manifest effectiveness against known generators, _i.e._, those involved in the training dataset, but suffer from a performance drop against unseen generators. This is due to the variability of the model "signatures" hidden in the generated images across different models, as illustrated in Fig. 1 (a). Consequently, these detection-based systems require frequent retraining upon the release of each new generator to maintain their effectiveness, which is impractical in real-world applications. In this work, we propose a more robust approach to identify synthetic images by integrating a model-agnostic "signature" into any pre-trained generator. Since the signature is concealed within the model parameters, it becomes non-trivial for malicious users to erase, and is inevitably included in the generated contents, thereby facilitating detection. By using a universal signature (_i.e._, model-agnostic signature), we can leverage the same detector to identify images from different generators, eliminating the need for retraining the detector with the introduction of new generators. 
To determine the optimal signature for images, we first train a signature injector \(W\) in an adversarial manner against a classifier \(F\) (the detector). In particular, the injector \(W\) learns to add a minimal alternation \(\kappa\) to a given image \(\mathbf{x}\) to produce a slightly modified image \(\hat{\mathbf{x}}\). The injector aims to make the alternation \(\kappa\) as small as possible in order to retain image quality, while simultaneously maximizing the detector's accuracy, as shown in Fig. 1 (b). Importantly, the detector \(F\) is not necessarily designed to be a binary classifier. It can be a multi-class classifier that produces a multi-bit binary code to convey additional information to help track and identify the source of a generated image. To implant such signatures into an arbitrary pre-trained image generative model \(G\), we fine-tune \(G\) using a set of images processed by \(W\), resulting in a secured generator \(\hat{G}\), as demonstrated in Fig. 1 (c). Images generated by the secured generator \(\hat{G}\) can be identified by \(F\) as \(\hat{G}\) inherits the signatures from \(W\). In addition, as shown in Fig. 1 (d), the detector \(F\) is no longer associated with specific generators during the training process, and therefore can be shared across different generators. As the injector \(W\) and detector \(F\) can be reused for different pre-trained generators, the adversarially learned signature becomes universal (model-agnostic) among all secured generators. To demonstrate the effectiveness of the proposed method, we conduct extensive experiments on the FFHQ [19] and ImageNet [20] datasets. Three state-of-the-art generative models, namely LDM [21], ADM [22], and StyleGAN2 [23], are used in evaluations. The experimental results demonstrate that a given generative model can learn to add the adversarial signatures to its outputs, making them more recognizable by the generated image classifier, while not sacrificing the image quality. The contributions of this paper are summarized as follows: 1. We propose to learn the optimal universal signatures through adversarial learning against a classifier (_i.e._ the detector). 2. We propose to inject the universal adversarial signatures to secure an arbitrary pre-trained image generative model. Secured generative models can share the same detector. 3. Our proposed universal adversarial signature is capable of carrying additional information, such as the generator identity, for tracking the source of the images. ## 2 Related Works **Deep Generative Models**[1; 2] have been greatly improved recently, enabling realistic large-scale image and text synthesis [7], [24]. This field has undergone various mainstream models, including autoregressive models [25], variational autoencoders [26], normalizing flows [27], generative adversarial models (GANs) [28; 23; 29], and more recently, denoising diffusion models (DDMs) [30; 21]. In particular, GANs and DDMs are capable of imposing threats to society due to their potential abuse. This paper focuses on mitigating their potential threats. Figure 1: Illustration of securing deep generative models through universal adversarial signature. **Generated Image Detection** is committed to mitigating the potential threats of the generated images. The existing methods extract features to discover artifacts either in the spatial domain [14; 16; 18] or frequency domain [15; 17]. However, these passive detection models may not generalize well for unseen generative models. 
In this paper, learning the adversarial signature entails actively modifying a generator to make its outputs more recognizable, which is different from the existing work focused on generated image detection. The scope of this paper is general-purpose generated image detection, which is not limited to a specific type of media such as deepfake. **Image Watermarking**[31; 32] can limit the misuse of digital images, including the generated ones. Although watermarks vary in their visibility [33; 34], it is difficult for them to achieve robustness, imperceptibility, and high capacity at the same time [32]. Besides, deep-learning-based methods involve adding templates to the real images [35], or inserting watermarks into the generated image [36]. However, these methods are subject to an impractical assumption that malicious users will apply watermarks. Instead, we modify generative models to make adversarial signatures inevitable. **Neural Network Fingerprinting** addresses the challenges of visual forensics and intellectual property protection posed by deep models [37; 38]. Model-specific GAN fingerprints, either learned [39] or manually-crafted [40], can be used for generated image detection and source tracking, but still have to be re-trained against new generators. In contrast, our detector can be reused for any future generator. ## 3 Our Approach Given a set of images \(X\), a typical deep generative model \(G\) learns the underlying distribution \(p_{X}\) of \(X\), and can generate a realistic image from a random noise \(\mathbf{z}\), _i.e._, \(\mathbf{y}\triangleq G(\mathbf{z})\). Due to the threats posed by the potential abuse of the outputs from the generator \(G\), it is necessary to develop a classifier \(F\) to distinguish the generated (signed) images from real ones, where \(F(\cdot)\in(0,1)\). A real image \(\mathbf{x}\in X\) is labeled as \(0\), while a generated image \(\mathbf{y}\) is labeled as \(1\). As discussed in Section 1, we explore modifying the parameters of a given generator \(G\) to make its outputs more recognizable by \(F\), and hence securing the generator \(G\). Our approach is a two-stage method. Firstly, a signature injector \(W\) learns to generate adversarial signatures, while a classifier \(F\) learns to detect them. The signature injector \(W\) is subsequently used for teaching an arbitrary generative model \(G\) to generate recognizable images. The proposed method is illustrated in Figure 1 and summarized in Algorithms 1-2. ### 3.1 Optimal Adversarial Signature Consider a system consisting of a signature injector \(W\) and a classifier \(F\). In the optimal case, \(F\) can discriminate the signed images from clean images based on the subtle and imperceptible alternation made by \(W\) (imperceptibility). The system is robust to image restoration attack if augmented by noise (persistency), _i.e._, the signature cannot be removed by an image restoration model \(M\). The following propositions state the imperceptibility and persistency of the adversarial signatures in detail.
**Proposition 3.1**.: _(**Imperceptibility**) There exist optimal pairs of signature injector \(W\) and classifier \(F\): \(\mathbb{R}^{n}{\mapsto}\{0,1\}\), so that for any image \(\forall\mathbf{x}{\in}\mathbb{R}^{n}\), \(\forall\epsilon{>}0\), its distance to the signed image \(W(\mathbf{x})\) is smaller than \(\epsilon\), and \(F\) correctly discriminates them, i.e., \(\|W(\mathbf{x})-\mathbf{x}\|{<}\epsilon\), and \(F(W(\mathbf{x}))\neq F(\mathbf{x})\)._ **Proposition 3.2**.: _Let \(\mathbf{e}\) be a zero-mean, positive-variance additive noise. There exist noise-augmented \(W,F\) that satisfy the following condition: \(\forall\epsilon>0,\mathbb{E}_{\mathbf{e}}[\|W(\mathbf{x}+\mathbf{e})-\mathbf{x}\|]<\epsilon\) and \(F(W(\mathbf{x}))\neq F(\mathbf{x})\)._ **Proposition 3.3**.: _(**Persistency**) The noise-augmented \(W,F\) stated in Proposition 3.2 is robust to image restoration attack, as optimizing \(\min_{M}\mathbb{E}_{\mathbf{x},\mathbf{e}}[\|M(W(\mathbf{x}+\mathbf{e}))-\mathbf{x}\|]\) will result in \(M\) being an identity mapping._ Proof.: Please refer to the supplementary material. _Remark 3.4_.: Intuitively, when \(W(\mathbf{x}+\mathbf{e})\) is close enough to \(\mathbf{x}\), the training of \(M\) to remove signatures tends to fall into a trivial sub-optimal solution of copying the input to the output. Therefore, even if \(W\) is disclosed to malicious users, it is still difficult to erase the signature. ### Universal Adversarial Signature Injector Given an image \(\mathbf{x}\), the signature injector model \(W\) adds an imperceptible alteration to it, resulting in a "signed" image \(\hat{\mathbf{x}}\triangleq W(\mathbf{x})\), of the same size as \(\mathbf{x}\). The difference \(\kappa\triangleq\hat{\mathbf{x}}-\mathbf{x}\) is termed the "adversarial signature", which varies with the input image \(\mathbf{x}\) and the injector \(W\). Meanwhile, the classifier \(F\) aims to discriminate the signed image \(\hat{\mathbf{x}}\) from the clean image \(\mathbf{x}\). In this paper, the signed image \(\hat{\mathbf{x}}\) is labeled as \(1\), while the clean image \(\mathbf{x}\) is labeled as \(0\). To find the desired pair of \(W\) and \(F\) as discussed above, the goal is to ensure that the signed image \(\hat{\mathbf{x}}\) is as close to the clean image \(\mathbf{x}\) as possible, while the classifier \(F\) should correctly recognize the signed images. The goal can be expressed as the following optimization problem: \[\min_{W,F}\,\mathbb{E}_{\mathbf{x}}\|W(\mathbf{x})-\mathbf{x}\|_{2}^{2},\quad\text{s.t.}\quad\mathbb{E}_{\mathbf{x}}\left[F(\mathbf{x})+(1-F(W(\mathbf{x})))\right]=0. \tag{1}\] By introducing the Lagrange multiplier, we obtain the following loss function: \[\mathcal{L}=\mathbb{E}_{\mathbf{x}}\big{[}\underbrace{\|W(\mathbf{x})-\mathbf{x}\|_{2}^{2}}_{L_{\text{res}}}+\lambda\underbrace{(F(\mathbf{x})+1-F(W(\mathbf{x})))}_{L_{\text{cls}}}\big{]}. \tag{2}\] The \(L_{\text{res}}\) term in Eq. (2) is the mean squared error that enforces the signatures to be imperceptible (not obviously impacting the image quality). The \(L_{\text{cls}}\) term can be seen as a classification loss that encourages the classifier to distinguish the signed images from the clean images. In practice, we find that directly optimizing Eq. (2) through gradient descent results in \(\lambda=0\) and the model copying the input to the output. Therefore, we empirically fix \(\lambda\) to a small value. In addition, we replace the \(L_{\text{cls}}\) part with the commonly used cross-entropy loss. 
Therefore, \(W\) and \(F\) are jointly trained by optimizing the following approximated loss function: \[L=\mathbb{E}_{\mathbf{x}\sim p_{X}}\{L_{\text{res}}(\mathbf{x};W)+\lambda\cdot L_{\text{cls}}(\mathbf{x};W,F)\}, \tag{3}\] \[\text{where}\quad L_{\text{res}}(\mathbf{x};W)=\|W(\mathbf{x})-\mathbf{x}\|_{2}^{2}, \tag{4}\] \[\text{and}\quad L_{\text{cls}}(\mathbf{x};W,F)=\log F(\mathbf{x})+\log(1-F(W(\mathbf{x}))). \tag{5}\] During training, the signature injector \(W\) and the generated image classifier \(F\) are, in fact, adversarial against each other. The minimization of \(L_{\text{cls}}\) requires the injector \(W\) to add a sufficiently large and easy-to-identify signature \(\kappa\) to make \(\hat{\mathbf{x}}\) separable from \(\mathbf{x}\); while the minimization of \(L_{\text{res}}\) requires the signature injector \(W\) to shrink the norm of \(\kappa\) for the sake of its imperceptibility, which makes the signed image \(\hat{\mathbf{x}}\) more difficult to separate from \(\mathbf{x}\). The overall process of this stage is summarized in Algorithm 1. Note that, to make the signature \(\kappa\) robust, both the original image \(\mathbf{x}\) and the signed image \(\hat{\mathbf{x}}\) are transformed before being fed to \(F\). The transformations involve commonly used augmentation operations, which are detailed in Section 4. Our method resembles letting \(W\) produce adversarial examples to flip the prediction of \(F\). Although the goal is similar to that of the C&W attack [41], our method generates the signed images in a single forward pass (instead of iteratively), and jointly trains \(F\) (instead of fixing its parameters). **Binary Code Extension.** By extending the binary classifier \(F\) to multiple outputs, the adversarial signature will be able to carry additional information such as the generator identity for tracking the source of the generated images. To achieve this, we can first determine the binary code length as \(n\) bits, which directly decides the number of all possible binary codes as \(2^{n}\). The selection of \(n\) (\(n>0\)) depends on the number of user-predefined messages to be represented by the binary codes. For instance, when \(n=2\), the binary codes for generators are 01, 10, and 11, while the code 00 is reserved for real images. During the training process, a random binary code except for 00 from the \(2^{n}-1\) possible binary codes is chosen for every generated image. Next, the single classification layer in \(F\) is extended into \(n\) classification layers in parallel for binary code prediction. Meanwhile, the binary code is converted by a code embedding module into an embedding vector. It comprises two linear layers and a SiLU [42] activation. The resulting binary code embedding is fed into the injector \(W\) via AdaIN [19] after every convolution layer for modulating the signatures. Note that in the default case where \(n=1\), a constant vector is used as the binary code embedding. ### Securing Arbitrary Generative Model In order to make the adversarial signatures inevitable, it would be better if they could be integrated into the model parameters through, for example, fine-tuning. In this way, the outputs from the generators will be detectable by \(F\), and hence the generative model is secured. Therefore, in this stage, the signature injector \(W\) will process the training data, based on which an arbitrary given (pre-trained) generative model is fine-tuned to learn the adversarial signatures. 
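Before turning to the fine-tuning stage, the following is a minimal PyTorch-style sketch of one joint update of \(W\) and \(F\) under Eqs. (3)-(5), with the cross-entropy substitution and the random transformations described in Section 4. All module and function names here are illustrative assumptions rather than the authors' released code.

```python
import torch
import torch.nn.functional as nnf   # avoid shadowing the detector symbol F

def stage_one_step(x, injector, detector, optimizer, lam=0.05, augment=lambda t: t):
    """One joint update of the injector W and detector F (a sketch of Algorithm 1)."""
    x_signed = injector(x)                                   # x_hat = W(x)
    l_res = ((x_signed - x) ** 2).mean()                     # imperceptibility term, Eq. (4)
    x_aug, x_signed_aug = augment(x), augment(x_signed)      # transform both inputs before F
    real_logit = detector(x_aug)                             # clean images labeled 0
    signed_logit = detector(x_signed_aug)                    # signed images labeled 1
    l_cls = nnf.binary_cross_entropy_with_logits(real_logit, torch.zeros_like(real_logit)) \
          + nnf.binary_cross_entropy_with_logits(signed_logit, torch.ones_like(signed_logit))
    loss = l_res + lam * l_cls                               # Eq. (3)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

Here a single optimizer is assumed to hold the parameters of both \(W\) and \(F\), mirroring the joint update in Algorithm 1.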
This conceptually shifts the distribution the generator has learned towards the distribution of the signed images. Specifically, given a set of training images \(X\), the already trained signature injector \(W\) is used to apply an adversarial signature to each image \(\mathbf{x}\in X\), resulting in a signed image \(\hat{\mathbf{x}}\). Assume we have an arbitrary already trained deep generative model \(G\), which can generate an image \(\mathbf{y}\) from a random noise \(\mathbf{z}\), _i.e._, \(\mathbf{y}=G(\mathbf{z})\). Then, the model \(G\) is fine-tuned using the signed images, resulting in the model \(\hat{G}\), which generates a signed image \(\hat{\mathbf{y}}\) from a random noise \(\mathbf{z}\), _i.e._, \(\hat{\mathbf{y}}=\hat{G}(\mathbf{z})\). By default, the concrete loss function during fine-tuning is consistent with the original training loss of \(G\). An optional loss term, _i.e._, \(\xi\cdot\log(1-F(\hat{G}(\mathbf{z})))\) can be appended to guide the training of \(\hat{G}\) using the trained classifier \(F\) (fixed), where \(\xi\) is a constant that controls the weight of this loss term. The overall procedure of stage two is summarized in Algorithm 2. ``` 1:Input: A set of images \(X\); 2:Output: (1) Signature injector \(W\); 3: (2) Binary classifier \(F\); 4: Randomly initialize \(W\) and \(F\); 5:for\(i=1\)to MaxIteration_stage1 do 6: Randomly sample \(\mathbf{x}\in X\); 7:\(\hat{\mathbf{x}}\gets W(\mathbf{x})\); 8:\(L_{\text{rec}}\leftarrow\|\hat{\mathbf{x}}-\mathbf{x}\|_{2}^{2}\); 9: Random transformation for \(\mathbf{x}\) and \(\hat{\mathbf{x}}\); 10:\(L_{\text{cls}}\leftarrow\log F(\mathbf{x})+\log(1-F(\hat{\mathbf{x}}))\); 11:\(L\gets L_{\text{rec}}+\lambda\cdot L_{\text{cls}}\); 12:\(\Delta W,\Delta F\leftarrow\nicefrac{{\partial L}}{{\partial W}},\nicefrac{{ \partial L}}{{\partial F}}\); 13:\(W,F\leftarrow\text{Adam}(W,F;\Delta W,\Delta F)\); 14:endfor ``` **Algorithm 1**Training Signature Injector As the fine-tuning process is agnostic to generator architecture, it is applicable to a wide range of generative models, including but not limited to GANs [1] and DDMs [2]. As the \(W\) and \(F\) are fixed in the second stage, they are reusable for different generators. **Binary Code Extension.** In this stage, a binary code can be assigned to a specific \(G\). Every signed image \(\hat{\mathbf{x}}\) for fine-tuning \(G\) is generated by \(W\) with the assigned code. **Inference Stage.** As the fine-tuned model \(\hat{G}\) is expected to learn the signatures, the classifier \(F\) from the first stage can be directly used to identify whether \(\hat{\mathbf{y}}\) is a generated (signed) image. ## 4 Experiments In this section, we present experimental results to demonstrate the effectiveness of the proposed method. Our method is implemented in PyTorch [43]. The code will be released in the future. **Datasets & Models.** We adopt the U-Net [30] architecture for signature injector \(W\), and ResNet-34 [44] as the classifier \(F\). The proposed method is evaluated with two datasets: FFHQ [19] and ImageNet [20]; using three generative models: LDM [21], ADM [22], and StyleGAN2 [23] at \(256\times 256\) resolution. We use their official training code for the experiments, except for StyleGAN2. A third-party implementation2 is used for StyleGAN2. We sample 1,000 images from FFHQ as the test set and use the remaining images for training. For experiments on ImageNet, we use the official training split for training, and sample 1,000 images from the validation split as our test set. 
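For reference, the detector backbone just described can be instantiated in a few lines; this is an illustrative sketch of the \(n\)-bit output head from Section 3.2 built on torchvision's ResNet-34, not the exact configuration used in the experiments.

```python
import torch
import torch.nn as nn
from torchvision.models import resnet34

def build_detector(n_bits: int = 1) -> nn.Module:
    """Detector F: a ResNet-34 whose final layer predicts n_bits binary outputs.

    With n_bits = 1 this is the plain generated/real classifier; with n_bits = 2
    the all-zero code 00 is reserved for real images.
    """
    backbone = resnet34(weights=None)  # training from scratch is an assumption here
    backbone.fc = nn.Linear(backbone.fc.in_features, n_bits)
    return backbone

def predict_code(detector: nn.Module, image: torch.Tensor) -> torch.Tensor:
    # Threshold each output bit; any non-zero code indicates a generated (signed) image.
    with torch.no_grad():
        return (torch.sigmoid(detector(image)) > 0.5).long()
```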
The image quality (FID, PSNR) and classification accuracy (Acc, ROC) are evaluated on the test sets (1,000 images). The only exceptions are the FID scores in Tab. 3, which are evaluated on 50K randomly generated images against the corresponding training sets following [21]. **Hyper-Parameters.** In stage one, the balance factor \(\lambda\) in Eq. (3) is set as \(0.05\) for the FFHQ dataset, and \(1.0\) for the ImageNet dataset. The batch size is set as \(24\). The models are trained using the Adam [45] optimizer for \(10^{6}\) iterations, with a learning rate of \(10^{-4}\). In stage two, we follow the parameter settings of the respective generative models. The parameter \(\xi\) is empirically set as \(0.05\) for StyleGAN2, and \(0\) for the remaining models. **Data Augmentation.** The image transformation operations used to process \(\mathbf{x}\) and \(\hat{\mathbf{x}}\) for training \(F\) are random rotation (the angle is uniformly sampled within \([-30^{\circ},30^{\circ}]\)), random horizontal flip (with \(0.5\) probability), and Gaussian blur (the variance is uniformly sampled within \([0.01,10]\)). Any output of \(W\) and input to \(F\) will be clipped to \([0,1]\), and padded with the smallest constant error to make it an integer multiple of \(\nicefrac{{1}}{{255}}\), to ensure validity as an image. **Binary Code.** By default, the binary code length is \(n=1\), which means \(F\) only predicts whether the input is generated or not. For the \(n>1\) case, we specifically choose \(n=2\) to ensure a certain level of generator diversity, while avoiding some unnecessary experiment cost for demonstration. **Evaluation Protocol.** The experimental results are reported based on the test sets. (1) Signature injector \(W\): The \(\kappa\) is expected to be imperceptible to retain the image quality of \(\hat{\mathbf{x}}\) compared to \(\mathbf{x}\). Therefore, \(W\) is quantitatively evaluated using the peak signal-to-noise ratio (PSNR) and FID [46] of its outputs. (2) Generated image classifier \(F\): The generated/real binary classification and binary code prediction are evaluated in classification accuracy. (3) Generator \(\hat{G}\): the fine-tuning process of \(\hat{G}\) is expected to make \(\hat{G}\) add adversarial signatures while retaining image quality. Hence, the FID of the generated signed image \(\hat{\mathbf{y}}\) and the accuracy of \(F\) against \(\hat{G}\)'s outputs are reported. ### Validating \(W\) and \(F\) in the First Stage In the first stage, we validate the signature injector \(W\) for the image quality, and the classifier \(F\) for the accuracy against the outputs of \(W\). The experiments are conducted on FFHQ and ImageNet, respectively. The corresponding results are shown in Table 1 and Table 2 for the \(n=1\) case and the \(n=2\) case, respectively. According to Table 1, when the binary code is \(1\)-bit, the adversarial signature can be added to the outputs of \(W\) while retaining good image quality. This is reflected by \(51.4\) PSNR and \(0.52\) FID on FFHQ, and \(38.4\) PSNR and \(5.71\) FID on ImageNet. The results on ImageNet (natural images) are \begin{table} \begin{tabular}{c|c c|c} \hline \hline **Dataset** & \multicolumn{2}{c|}{Signature Injector \(W\)} & Classifier \(F\) \\ & PSNR \(\uparrow\) & FID \(\downarrow\) & Acc (\%) \(\uparrow\) \\ \hline FFHQ & 44.9 & 2.68 & 99.9 \\ \hline \hline \end{tabular} \end{table} Table 2: Validating \(W\) and \(F\) in the first stage when the length of the binary code is \(n=2\). 
The symbols “\(\uparrow\)” and “\(\downarrow\)” denote “the higher the better” and “the lower the better”, respectively. \begin{table} \begin{tabular}{c|c c|c} \hline \hline **Dataset** & \multicolumn{2}{c|}{Signature Injector \(W\)} & Classifier \(F\) \\ & PSNR \(\uparrow\) & FID \(\downarrow\) & Acc (\%) \(\uparrow\) \\ \hline FFHQ & 51.4 & 0.52 & 100.0 \\ ImageNet & 38.4 & 5.71 & 99.9 \\ \hline \hline \end{tabular} \end{table} Table 1: Validating \(W\) and \(F\) in the first stage when the length of the binary code is \(n=1\). The symbols “\(\uparrow\)” and “\(\downarrow\)” mean “the higher the better” and “the lower the better”, respectively. Figure 3: Sample outputs from the signature injector \(W\) in stage one. The two columns on the left correspond to FFHQ, while the rest correspond to ImageNet. The signature \(\kappa\) is visualized as the pixel-wise L-\(2\) norm, where the peak value varies across inputs. Figure 2: The ROC curves of \(F\) against \(W\)’s outputs on the FFHQ (left) and ImageNet (right) datasets in the first stage (\(n=1\)). slightly worse than those on FFHQ (face images) due to the more complex distribution. Some images with signatures are visualized in Figure 3. Apart from the injector, the classifier \(F\) achieves \(100.0\%\) and \(99.9\%\) accuracy on FFHQ and ImageNet, respectively. The corresponding ROC curves can be found in Figure 2. These results suggest that although the learned signatures are small in L-\(2\) norm, they are still highly recognizable by \(F\). According to Table 2, when the binary code length is \(n=2\), our method remains effective, as suggested by the good image quality for \(W\) and the high classification accuracy of \(F\). Notably, since the \(n=2\) case requires \(W\) to learn different variants of \(\kappa\) for different binary codes, the learning becomes more difficult than in the \(n=1\) case, resulting in a slight performance gap. ### Validating \(\hat{G}\) and \(F\) in the Second Stage In the second stage, a pre-trained \(G\) is fine-tuned with \(W\) and \(F\) being fixed. We conduct experiments accordingly to validate the fine-tuned generator \(\hat{G}\), and the classifier \(F\) against the outputs of \(\hat{G}\). The results can be found in Table 3 and Table 4 for the \(n{=}1\) and \(n{=}2\) cases, respectively. According to Table 3, when the binary code length is \(n=1\), the generator \(\hat{G}\) can learn the adversarial signatures from \(W\), which makes its outputs more recognizable by \(F\). Take the LDM model on the FFHQ dataset as an example. The fine-tuned model \(\hat{G}\) achieves a similar FID to its original counterpart \(G\). This indicates no significant output quality difference between \(G\) and \(\hat{G}\). To demonstrate this qualitatively, we visualize some generated images in Figure 4. Although the adversarial signatures the generator \(\hat{G}\) has “inherited” are imperceptible, they are still highly recognizable by \(F\). This is quantitatively demonstrated by the \(100.0\%\) generated/real image classification accuracy. The corresponding ROC curves can be found in Figure 5. Figure 4: Sample outputs from the fine-tuned generator \(\hat{G}\) in stage two. The three columns on the left correspond to the FFHQ dataset, while the two on the right correspond to the ImageNet dataset. 
\begin{table} \begin{tabular}{c c|c} \hline \hline **Dataset** & **Model** (code) & \(\hat{G}\) FID \(\downarrow\) \\ \hline \multirow{3}{*}{FFHQ} & LDM (01) & 9.86 \\ & LDM (10) & 9.20 \\ & LDM (11) & 10.35 \\ \hline \hline \end{tabular} \end{table} Table 4: Validating \(\hat{G}\) and \(F\) in the second stage when the length of the binary code is \(n=2\). See the caption for Table 3 for the meaning of “FID*”, “FID”, and “Acc.” (“FID*” denotes the reproduction with the respective official code). According to Table 4, when the binary code length is \(n=2\), the adversarial signatures can also be effectively learned by the generators, which can still be detected by \(F\). ### Comparison to State-of-the-art Methods After verifying the effectiveness of our proposed method, we compare it with a baseline method and the state-of-the-art methods on FFHQ. The baseline method corresponds to directly training the classifier \(F\) (ResNet-34) to differentiate the generated images \(\mathbf{y}\) from the original images \(\mathbf{x}\). As shown in the first row of Table 5, if all three generators (_i.e._, LDM, ADM, and StyleGAN2) are _seen_ by \(F\), its accuracy is close to \(100\%\). However, in the second row, the baseline method suffers from poor generalization against _unseen_ generators under the leave-one-out setting. For instance, in the first column, the ADM and StyleGAN2 are seen by \(F\), but not LDM. The accuracy of \(F\) against the LDM outputs drops to a mere \(51.6\%\). The corresponding ROC curves can be found in Figure 5. The generalization issue against _unseen_ generators also exists with the state-of-the-art methods including [47; 48], as shown in Table 5. In contrast, our method can reuse the \(W\) and \(F\) for any generative model, and achieve high accuracy as long as its input is from a fine-tuned generator. **Impact of \(\lambda\).** A clear trend can be seen in Table 6, where \(W\) tries to sacrifice image quality in exchange for a lower cross-entropy loss as \(\lambda\) increases. When \(\lambda=0\), \(W\) is expected to learn the identity mapping, and \(F\) is not trained. As a result, the reconstructed image is of high quality, and \(F\) behaves the same as a random classifier. Most importantly, a nearly optimal pair of \(W\) and \(F\) can be found even if \(\lambda\) is very small, which leads to a negligible image quality drop. This supports our theory in Proposition B.1. **Pre-trained \(F\).** To better understand the distinction between adversarial signatures and the features used by baseline detectors, we replace the \(F\) with the pre-trained and fixed “Baseline (Seen)” classifier from Table 5 in the first stage. This leads to significantly worse performance as shown in Table 8. The results suggest that there is hardly any resemblance in features between our signature-based classifier and a baseline classifier. 
Therefore, adversarial signature is different from the features used by the baseline detectors, and \(W\) and \(F\) should be jointly optimized. ### Characteristics of Adversarial Signature **Imperceptibility.** This is enforced by Eq. (4). The imperceptibility is demonstrated by Table 1-2, Figure 3 for the outputs of \(W\); and Table 3-4, Figure 4 for the outputs of \(\hat{G}\). **Persistency. (1)** To make the signature in \(\hat{\mathbf{y}}\) hard to be invalidated by image transformations, it is learned with data augmentation (see Section 4). According to Table 7, \(\hat{F}\) has gained robustness against the image transformations. **(2)** A possible adaptive attack from a malicious user may involve obtaining the inverse function of \(W\), namely the restoration attack mentioned in Proposition B.5. To achieve this, \(M\) learns to restore the original image \(\mathbf{x}\) from the signed image \(\hat{\mathbf{x}}\): \(L_{\text{M}}=\|M[W(\mathbf{x})]-\mathbf{x}\|^{2}\). Accordingly, the classifier \(F\) has to recognize the outputs of \(M\) by an extra loss term on top of Eq. (3): \(L_{\text{aux}}\)=\(\mathbb{E}_{M}\{\log(1-F(M[W(\mathbf{x})]))\}\). In the implementation, we approximate the expectation over \(M\) using multiple snapshots of \(M\) jointly trained with \(W,F\). The experimental results on FFHQ can be found in Table 9 and Fig. 6. The default setting (Table 1) is without the noise \(\mathbf{e}\) (see Section 3.1), nor the \(L_{\text{aux}}\). When both the noise \(\mathbf{e}\) and \(L_{\text{aux}}\) are applied, it is still difficult to remove the adversarial signatures even if the proposed method is disclosed to malicious users. The results support Proposition B.5. **Inevitability.** Once the generative model is fine-tuned, the adversarial signature will be inevitably included in its outputs. Restoring \(G\) from \(\hat{G}\) may require access to the training images without signatures, with which a malicious user can already train new generators instead of using \(\hat{G}\). **Efficiency.** (1) Inference: Our method only changes the generative model parameters. The inference cost for \(\hat{G}\) is identical to that of \(G\). (2) Training: Assume \(r\) generative models are to be released one by one. The complexity of re-training a detector every time upon the release of a new generator is \(O(r^{2})\). In contrast, the complexity of the proposed method is \(O(r)\), because \(W\) and \(F\) are reused once trained. Our method is efficient in terms of complexity. **Limitations.** (1) The binary code length \(n\) limits the amount of additional information it can represent. (2) We assume the training dataset without adversarial signature is not available to malicious users. But once it is available, the malicious user is able to train a new generator instead of using \(\hat{G}\). ## 6 Conclusions The proposed method aims to modify a given generative model, making its outputs more recognizable due to adversarial signatures. The adversarial signature can also carry additional information for tracking the source of generated images. The experimental results on two datasets demonstrate the effectiveness of the proposed method. Figure 6: ROC for \(F(W(\mathbf{x}))\) & \(F(M(W(\mathbf{x})))\) in Table 9. **Supplementary Material** ## Appendix A Additional Results Table 6 in the main paper shows the effect of varying the parameter \(\lambda\) on the PSNR, FID and classification accuracy. Here we visualize the signed images with different \(\lambda\) in Fig A. 
We can see that the signed images are almost visually indistinguishable from the original images for \(\lambda\in[10^{-5},0.1]\). ## Appendix B Proof of Propositions **Proposition B.1**.: _(**Imperceptibility**) There exist optimal pairs of signature injector \(W\) and classifier \(F\): \(\mathbb{R}^{n}{\mapsto}\{0,1\}\), so that for any image \(\forall\mathbf{x}{\in}\mathbb{R}^{n}\), \(\forall\epsilon{>}0\), its distance to the signed image \(W(\mathbf{x})\) is smaller than \(\epsilon\), and \(F\) correctly discriminates them, i.e., \(\|W(\mathbf{x}){-}\mathbf{x}\|{<}\epsilon\), and \(F(W(\mathbf{x}))\neq F(\mathbf{x})\)._ Proof.: For simplicity, we consider the case when \(x\in\mathbb{R}^{1}\). Let \(W(x)\) be an arbitrary irrational number within \((x-\epsilon,x+\epsilon)\) when \(x\) is rational, and otherwise an arbitrary rational number within \((x-\epsilon,x+\epsilon)\). Let \(F\) be a classifier that discriminates rational/irrational numbers. This pair of \(W,F\) satisfies the given condition, and proves the existence of optimal watermarking systems. _Remark B.2_.: The \(W,F\) presented in the proof are not feasible for implementation in practice. However, when \(W,F\) are deep neural networks, the existence of adversarial samples [49] implies that one can find a \(W(\mathbf{x})\) that flips the prediction of \(F\) while being very close to \(\mathbf{x}\). **Proposition B.3**.: _Let \(\mathbf{e}\) be a zero-mean, positive-variance additive noise. There exist noise-augmented \(W,F\) that satisfy the following condition: \(\forall\epsilon>0,\mathbb{E}_{\mathbf{e}}[\|W(\mathbf{x}+\mathbf{e})-\mathbf{x}\|]<\epsilon\) and \(F(W(\mathbf{x}))\neq F(\mathbf{x})\)._ Proof.: The existence of such \(W,F\) can be proved by constructing an example similar to the one in Proposition B.1, setting \(\mathbf{e}\) to a rational noise. Then we have \(\mathbb{E}[\|W(\mathbf{x}+\mathbf{e})-\mathbf{x}\|]=\mathbb{E}[\|W(\mathbf{x}+\mathbf{e})-(\mathbf{x}+\mathbf{e})+\mathbf{e}\|]\leq\mathbb{E}[\|W(\mathbf{x}+\mathbf{e})-(\mathbf{x}+\mathbf{e})\|]+\mathbb{E}[\|\mathbf{e}\|]<\epsilon\). **Lemma B.4**.: _Let \(\mathbf{x}\) and \(\mathbf{e}\) be zero-mean positive-variance random variables. For any non-constant mapping \(M\), we have \(\mathbb{E}_{\mathbf{x},\mathbf{e}}[\|M(W(\mathbf{x}+\mathbf{e}))-\mathbf{x}\|]>0\)._ Proof.: Assume that \(\mathbb{E}[\|M(W(\mathbf{x}+\mathbf{e}))-\mathbf{x}\|]=0\). Then \(\forall\mathbf{x},\mathbf{e}\), \(M(W(\mathbf{x}+\mathbf{e}))=\mathbf{x}\). If we let \(\mathbf{x}=\mathbf{0}\), then \(M(W(\mathbf{e}))=\mathbf{0}\), which is contradictory to the definition of \(M\). Since the equal sign does not hold, and an L-2 norm is always greater than or equal to \(0\), we have \(\mathbb{E}[\|M(W(\mathbf{x}+\mathbf{e}))-\mathbf{x}\|]>0\). Figure A: Visualization of the signed images with varying \(\lambda\). **Proposition B.5**.: _(**Persistency**) The noise-augmented \(W,F\) stated in Proposition B.3 is robust to image restoration attack, as optimizing \(\min_{M}\mathbb{E}_{\mathbf{x},\mathbf{e}}[\|M(W(\mathbf{x}+\mathbf{e}))-\mathbf{x}\|]\) will result in \(M\) being an identity mapping._ Proof.: As shown in Proposition B.3, \(\forall\epsilon>0\), \(\mathbb{E}[\|W(\mathbf{x}+\mathbf{e})-\mathbf{x}\|]<\epsilon\). According to Lemma B.4, we have \(\mathbb{E}[\|M(W(\mathbf{x}+\mathbf{e}))-\mathbf{x}\|]>0\). 
Therefore, for any non-constant mapping \(M\), \(\mathbb{E}[\|W(\mathbf{x}+\mathbf{e})-\mathbf{x}\|]\leq\mathbb{E}[\|M(W(\mathbf{x}+\mathbf{e}))-\mathbf{x}\|]\). Hence, the identity mapping, \(M(W(\mathbf{x}+\mathbf{e}))=W(\mathbf{x}+\mathbf{e})\), is the solution for \(\min_{M}\mathbb{E}[\|M(W(\mathbf{x}+\mathbf{e}))-\mathbf{x}\|]\). ## Appendix C Broader Impact This work is intended to develop a system to mitigate the risk of image generation models by tracking the source of generated images based on signatures. Malicious users may attack this system with fake signatures, _e.g._ by adding a signature to a real image to make it classified as generated, and compromise the credibility of true information. Potential mitigation strategies include gated release of the watermark injectors, the use of longer multi-bit codes, and only releasing the codes to the corresponding owners of generative models. ## Appendix D Limitations A limitation of the proposed method is that it requires fine-tuning a pre-trained generative model to embed the signature. A direction for future work is to explore training-free frameworks to secure deep generative models, _e.g._ by directly modifying model parameters. ## Appendix E Compute The signature injector is trained on an RTX A6000 GPU. The generative models are fine-tuned using 4 RTX A6000 GPUs.
2306.10231
GLIMMER: generalized late-interaction memory reranker
Memory-augmentation is a powerful approach for efficiently incorporating external information into language models, but leads to reduced performance relative to retrieving text. Recent work introduced LUMEN, a memory-retrieval hybrid that partially pre-computes memory and updates memory representations on the fly with a smaller live encoder. We propose GLIMMER, which improves on this approach through 1) exploiting free access to the powerful memory representations by applying a shallow reranker on top of memory to drastically improve retrieval quality at low cost, and 2) incorporating multi-task training to learn a general and higher quality memory and live encoder. GLIMMER achieves strong gains in performance at faster speeds compared to LUMEN and FiD on the KILT benchmark of knowledge-intensive tasks.
Michiel de Jong, Yury Zemlyanskiy, Nicholas FitzGerald, Sumit Sanghai, William W. Cohen, Joshua Ainslie
2023-06-17T01:54:25Z
http://arxiv.org/abs/2306.10231v1
# GLIMMER: generalized late-interaction memory reranker ###### Abstract Memory augmentation is a powerful approach for efficiently incorporating external information into language models, but leads to reduced performance relative to retrieving text. Recent work introduced lumen, a memory-retrieval hybrid that partially pre-computes memory and updates memory representations on the fly with a smaller live encoder. We propose glimmer, which improves on this approach through 1) exploiting free access to the powerful memory representations by applying a shallow reranker on top of memory to drastically improve retrieval quality at low cost, and 2) incorporating multi-task training to learn a general and higher quality memory and live encoder. glimmer achieves strong gains in performance at faster speeds compared to lumen and FiD on the KILT benchmark of knowledge-intensive tasks. ## 1 Introduction Retrieval-augmented language models achieve strong performance, but are computationally expensive due to the need to process retrieved passages. A large body of work attempts to reduce the cost of reading retrieved passages through conditional computation (Ainslie et al., 2023; Varshney et al., 2022; Schuster et al., 2022), reranking (Wang et al., 2018; Yu et al., 2022; Wang et al., 2018), or memory (de Jong et al., 2022; Wu et al., 2022; Li et al., 2022). Reranking improves retrieval quality and therefore reduces the number of passages that need to be processed by the reader. However, neural reranking is expensive, as each retrieved candidate is processed by a neural network. Late interaction rerankers (Khattab and Zaharia, 2020; Cohen et al., 2022; MacAvaney et al., 2020) pre-compute intermediate token representations and apply a smaller neural model on the fly to combine query and document representations and produce a ranking score. Late interaction drastically improves speed at the cost of storage and pre-computation overhead and machinery. Recently the idea of late interaction has also been applied to retrieval-augmented generation: lumen (de Jong et al., 2023) interpolates between memory and retrieval augmentation to achieve a better quality-compute trade-off. We propose glimmer (Generalized Late-Interaction Memory Reranker), a late interaction approach that combines these lines of work by _unifying reranking and memory into a single end-to-end model_. Like lumen, glimmer consists of a memory encoder that generates pre-computed token representations for retrieval documents, and a live encoder that combines the representations of retrieved documents with the query. After the first layers of the live encoder, a ranking layer selects the most relevant passages, which are retained for further processing. The model is trained to rank passages by usefulness to the reader through a perplexity distillation auxiliary loss (Izacard et al., 2022). glimmer also improves on lumen by using a single general memory and live encoder over all tasks, trained with multi-task fine-tuning over knowledge-intensive datasets. We evaluate on the KILT benchmark of knowledge-intensive tasks (Petroni et al., 2020). We first find that multi-task training of the memory and live encoders strongly improves model quality relative to training on a single task, especially when devoting less capacity to the live encoder. Moreover, glimmer strongly improves over both multi-task trained lumen and FiD in both quality and speed. In general, glimmer successfully unifies reranking and memory into a single efficient, high-quality model. 
## 2 Background We are interested in achieving the best possible trade-off between quality and inference compute. The following section describes FiD and lumen, the baseline methods that glimmer is built on, and their computational properties. A more in-depth analysis of these methods can be found in de Jong et al. (2023). ### Fusion-in-Decoder Fusion-in-Decoder (Izacard and Grave, 2021) is based on a T5 encoder-decoder model (Raffel et al., 2020). For each input, a number of relevant text passages are retrieved, and the input is prepended to each passage. The resulting input-passage pairs are encoded separately by the encoder, and the encoded pairs are then concatenated into a flat sequence of token representations and attended to by the decoder to produce a target output. For each model, live components are in blue and components pre-computed before inference in orange. \[G=\textbf{Dec}\Big{[}\textbf{Enc}(Q;\textbf{Passage}_{1});\ldots\textbf{Enc}(Q ;\textbf{Passage}_{k})\Big{]}\] Let \(k\) be the number of passages, \(n_{p}\) be the number of tokens per passage, \(n_{t}\) the number of target tokens, \(L\) the number of layers, and \(d\) the dimension of the model. Following analysis from de Jong et al. (2022, 2023), the FLOPs for a single inference sample of FiD (ignoring attention score computation) is given by \[F_{FiD}=\underbrace{kn_{p}\cdot L\cdot 14d^{2}}_{\text{Encoder and cross-attention}}+ \underbrace{n_{t}\cdot L\cdot 14d^{2}}_{\text{Decoder}}\] with factors \(8d^{2}\) per token from feedforward layers, \(4d^{2}\) from self-attention projection layers, and \(2d^{2}\) from cross-attention projection layers. de Jong et al. (2023) contains a derivation of FiD model complexity in greater detail. ### lumen Typically the combined length of retrieved passages is much larger than the target length, such that the majority of FLOPs are consumed by the encoder processing retrieved passages. lumen reduces encoder inference cost by partially pre-computing the encoder representation for retrieved passages. At inference time, lumen retrieves the intermediate layer representations rather than the text. More precisely, lumen is initialized from a pre-trained T5 encoder-decoder model. The decoder functions the same as the standard FiD decoder, but the T5 encoder is divided into a large memory encoder which contains the first \(1-\alpha\) proportion of layers, and a smaller live encoder with the remaining \(\alpha\) proportion of layers. The memory encoder is applied offline to passages in the corpus to pre-compute memory representations, which are later updated conditioned on input and task on the fly by Figure 1: Overview of glimmer architecture. **Memory:** The memory encoder is updated during multi-task training, unlike lumen, before being applied to the corpus to generate partially pre-computed memory representations. The memory encoder is also applied during inference to generate partial question representations that are compatible with the memory. **Live:** Each passage memory is concatenated with the question representation, and a live encoder (proportion \(\alpha\) of the total model) is then applied to condition the passage on the input in two stages. After the first stage, consisting of a fraction \(\beta\) of live layers, a scoring layer selects a small subset of high-scoring relevant passages to keep and less relevant passages are discarded. The selected passage representations are updated by the second stage of the live encoder. 
Finally, the conditioned representations are concatenated and attended to by the decoder as in FiD. the fine-tuned live encoder. In order to ensure that memory representations and input are compatible, lumen applies the memory encoder1 to the input before prepending the question representation to the memory representation. Footnote 1: The original lumen implementation used a separate question encoder, but we show this is unnecessary. \[H_{i}=\begin{bmatrix}\mathbf{MemEnc}(Q);&\mathbf{MemEnc}(\text{ Passage}_{i})\end{bmatrix}\] \[G=\mathbf{Dec}\Big{[}Q;\mathbf{LiveEnc}(H_{1});\ldots\mathbf{ LiveEnc}(H_{k})\Big{]}\] Choosing \(\alpha=1\) yields a model very close to FiD while \(\alpha=0\) is a full memory model. During inference lumen applies only a proportion \(\alpha\) of the layers, leading to a fraction \(\alpha\) of FiD reader FLOPs for any given model size. \[F_{\textsc{lumen}} =\underbrace{kn_{p}\cdot\alpha L\cdot 12d^{2}}_{\text{ Encoder}}\] \[+\underbrace{kn_{p}\cdot L\cdot 2d^{2}}_{\text{Cross-attention}}+ \underbrace{n_{t}\cdot L\cdot 14d^{2}}_{\text{Decoder}}\] ## 3 glimmer glimmer builds on lumen with two major differences: glimmer incorporates a built-in reranker, and shares the memory and live encoder across many tasks. Standard reranking approaches struggle with a trade-off: smaller models may not be sufficiently powerful to judge whether a passage is relevant to an input, while the cost of larger models defeats a large part of the purpose of using a reranker in the first place. The lumen architecture offers an opportunity to circumvent this trade-off, as the majority of the passage representations are pre-computed. glimmer re-uses the initial layers of the live encoder for reranking, yielding a powerful re-ranking model at relatively modest computational cost. Sharing weights across tasks, meanwhile, allows for training the memory encoder without storing duplicate pre-computed representations, and strongly increases the effectiveness of the live encoder. Figure 1 shows an overview of the glimmer architecture. ### Architecture Compared to lumen, glimmer divides the live encoder into two components, where the first component is responsible for initial interaction and reranking and the second component performs further processing on representations of selected passages. The first component contains \(\beta\) proportion of live encoder layers with the remainder of layers in the second component. After the first live encoder, a linear projection layer is applied to the first token of each input-passage pair to generate a relevance score for the passage. The top-\(m\) passages with the highest scores out of the original \(k\) are processed by the second live encoder, and the other passages are discarded. The output of the second live encoder is fed to the decoder as in FiD and lumen. \[H_{i}=\begin{bmatrix}\mathbf{MemEnc}(Q);&\mathbf{MemEnc}(\text{ Passage}_{i})\end{bmatrix}\] \[H^{\prime}_{i}=\mathbf{LiveEnc}\mathbf{A}(H_{i})\] \[R_{j}=H^{\prime}_{i}\text{ s.t. Rank }[\mathbf{Score}(H^{\prime}_{i})]=j\] \[G=\mathbf{Dec}\Big{[}Q;\mathbf{LiveEnc}\mathbf{B}(R_{1});\ldots \mathbf{LiveEnc}\mathbf{B}(R_{m})\Big{]}\] ### Training The memory encoder, both live encoder components, the scoring projection and the decoder are all trained end-to-end. Unlike in lumen, the memory encoder does not need to be frozen as we share a single memory encoder between all tasks. 
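To make the data flow of Section 3.1 explicit before describing the training losses, the following Python-style sketch traces one forward pass: each pre-computed passage memory is concatenated with the partial question representation, scored after the first live stage, and only the top-\(m\) passages reach the second live stage and the decoder. The module names and tensor layout are simplifying assumptions (the actual implementation is in JAX/Flax), not the released code.

```python
import torch

def glimmer_forward(question_ids, passage_memories, mem_enc, live_enc_a, live_enc_b,
                    score_proj, decoder, m):
    """Sketch of the glimmer forward computation; passage_memories holds the
    pre-computed MemEnc(passage) token representations, one (n_p, d) tensor per passage."""
    q_mem = mem_enc(question_ids)                                       # partial question representation
    h = [torch.cat([q_mem, p], dim=0) for p in passage_memories]       # H_i = [MemEnc(Q); MemEnc(P_i)]
    h_a = [live_enc_a(x) for x in h]                                    # first live-encoder stage
    scores = torch.stack([score_proj(x[0]) for x in h_a]).squeeze(-1)   # relevance score from first token
    top = torch.topk(scores, k=m).indices.tolist()                      # keep the m highest-scoring passages
    h_b = [live_enc_b(h_a[i]) for i in top]                             # second live-encoder stage
    fused = torch.cat([q_mem] + h_b, dim=0)                             # G = Dec[Q; LiveEncB(R_1); ...; LiveEncB(R_m)]
    return decoder(fused), scores                                       # scores are reused by the ranking loss
```

The scores returned here are exactly what the ranking loss described next is applied to.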
In order to train the scoring projection and encourage the memory and first live encoder to produce representations suitable for reranking, we employ an auxiliary perplexity distillation loss Izacard et al. (2022). This loss encourages the model to rank passages by how much they lower the perplexity of the final generation, if that input-passage was fed to the decoder by itself. In particular, perplexity distillation minimizes the KL-divergence between the distribution implied by the reranking scores (computed from the output of the first live encoder component applied to concatenation of input and passage representations) and the distribution implied by the resulting perplexities: \[p_{k}^{\text{rank}}=\frac{\exp(\text{Score}(\text{Passage}_{k},Q)/\tau)}{\sum_{ i}\exp(\text{Score}(\text{Passage}_{i},Q)/\tau)}\] \[p_{k}^{\text{LM}}=\frac{\exp(\log p_{LM}(\text{Answer}|\text{Passage}_{k},Q)/ \tau)}{\sum_{i}\exp(\log p_{LM}(\text{Answer}|\text{Passage}_{i},Q)/\tau)}\] \[\mathcal{L}_{\text{pdist}}=KL(p^{\text{rank}},\,p^{\text{LM}})\] ### Computational analysis The difference in computational complexity between glimmer and lumen lies in reranking. The \(m\) selected passages are processed by the entire live encoder and then fed through the decoder, yielding computational cost equal to applying lumen with \(m\) passages (less than the full number of retrieved passages \(k\)). However, for the passages that were not selected, glimmer still applied the first live encoder component, leading to a reranking cost: \[F_{\text{glimmer}}=F_{\text{lumen}}^{m}+\underbrace{(k-m)n_{p}\cdot\beta\alpha L \cdot 12d^{2}}_{\text{Reranking}}\] If we use a small number of selected passages \(m<<k\) and small fraction of reranking layers \(\beta<<1\), then glimmer is significantly less computationally intensive than lumen. with k retrievals. We note that this computational analysis is limited to FLOPs, rather than practical latency. For autoregressive inference, the decoder is often bottlenecked by memory bandwidth rather than FLOPs (Shazeer, 2019; de Jong et al., 2022). However, many recent techniques ameliorate this constraint, such as flavors of multi-query attention (Shazeer, 2019; Ainslie et al., 2023), layer sparsity (de Jong et al., 2022), speculative decoding (Leviathan et al., 2022; Chen et al., 2023), and others. Any model deployed in an environment where inference speed is important will likely employ one or more such techniques, such that FLOPs are a binding constraint. For the rest of this paper, we will measure computational cost in FLOPs; de Jong et al. (2023) contains analysis for how FLOPs and latency interact for lumen. As we will show, glimmer represents a better quality-compute trade-off than lumen and FiD. ## 4 Experiments ### Experimental setup Model configurationglimmer is based on the T5.1.1 architecture (Raffel et al., 2020) like lumen, implemented in JAX (Heek et al., 2020), Flax (Heek et al., 2020) and Flaxformer. All models are initialized from public T5.1.1 checkpoints. FiD is fine-tuned according to the recipe from the original paper (Izacard and Grave, 2021). For lumen and glimmer, given proportion of live layers \(\alpha\), the memory encoder is initialized with the first 1 - \(\alpha\) proportion of layers of the T5 encoder, and the live encoder is initialized with the last \(\alpha\) proportion of layers of the T5 encoder. Main experiments use \(\alpha=\frac{1}{3}\). 
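For a rough sense of the resulting inference cost, the FLOPs expressions from Sections 2.1, 2.2 and 3.3 can be evaluated directly for this configuration; the reader dimensions below (roughly T5-Large-sized, with the passage length used in this paper and an assumed target length) are illustrative assumptions only.

```python
def flops_fid(k, n_p, n_t, L, d):
    # Encoder and cross-attention over k (question; passage) pairs, plus the decoder.
    return k * n_p * L * 14 * d**2 + n_t * L * 14 * d**2

def flops_lumen(k, n_p, n_t, L, d, alpha):
    # Only a fraction alpha of encoder layers runs live; the rest is pre-computed memory.
    return (k * n_p * alpha * L * 12 * d**2   # live encoder
            + k * n_p * L * 2 * d**2          # cross-attention
            + n_t * L * 14 * d**2)            # decoder

def flops_glimmer(k, m, n_p, n_t, L, d, alpha, beta):
    # lumen cost on the m selected passages plus the reranking cost on the remaining k - m.
    rerank = (k - m) * n_p * beta * alpha * L * 12 * d**2
    return flops_lumen(m, n_p, n_t, L, d, alpha) + rerank

# Illustrative numbers: 304 passage tokens, 32 target tokens, 24 layers, d = 1024.
n_p, n_t, L, d = 304, 32, 24, 1024
print("FiD,     5 passages:       %.2e FLOPs" % flops_fid(5, n_p, n_t, L, d))
print("lumen,   10 passages:      %.2e FLOPs" % flops_lumen(10, n_p, n_t, L, d, alpha=1/3))
print("glimmer, 25 -> 5 passages: %.2e FLOPs" % flops_glimmer(25, 5, n_p, n_t, L, d, alpha=1/3, beta=1/4))
```

With these settings, glimmer's per-example reader cost stays below that of lumen with 10 passages and of FiD with 5, consistent with the speed comparison in Figure 2.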
Fine-tuningFor fine-tuning we use the Adafactor optimizer (Shazeer and Stern, 2018) with constant learning rate of 0.0001, batch size 128, and dropout rate 0.1 for all tasks. For multi-task training we sample uniformly from tasks. We allocate 48 tokens for the question and 304 tokens for each passage. In addition to the standard language modeling loss, reranking experiments use an auxiliary Figure 2: **glimmer is faster and higher quality than lumen which in turn is faster and higher quality than FiD.** Comparison of glimmer, lumen and FiD XXL model average performance on KILT dev set, and inference speed. FiD uses 5 retrieved passages, lumen uses 10 retrieved passages, and glimmer uses 25 retrieved passages, reranked to 5 final passages. lumen and glimmer have live proportion \(\alpha=\frac{1}{3}\). perplexity distillation loss with weight and temperature 1.0. We train until convergence and select the checkpoint with the highest performance on the dev set. We use greedy decoding for inference. DataWe train and evaluate on a subset of datasets from the KILT benchmark of knowledge-intensive tasks Petroni et al. (2020). In particular, this includes question answering datasets Natural Questions Kwiatkowski et al. (2019), TriviaQA Joshi et al. (2017), and HotPotQA Yang et al. (2018), fact verification dataset FEVER Thorne et al. (2018), and slot-filling datasets Zero Shot RE Levy et al. (2017) and T-REx ElSahar et al. (2018). We apply the relevance filtering procedure from Hofstatter et al. (2022) to ameliorate problems from imbalanced datasets. RetrievalWe employ the retrieval procedure from Hofstatter et al. (2022). Wikipedia is divided into chunks up to 200 words, and we retrieve the passages with the highest similarity score to the query, computed by a pre-trained GTR-Base model Ni et al. (2021). ### Main results For our main results, we compare FiD, lumen (with updated architecture and multi-task training) and glimmer. Due to in-built reranking, glimmer processes passages more efficiently and can therefore retrieve more documents than lumen, which in turn can retrieve more documents than FiD. As Figure 2 shows, this efficiency translates into a higher quality and faster model, with glimmer outperforming lumen and FiD at faster speed. ### Retrieval and reranking The main results indicate that glimmer can achieve higher quality at lower cost than FiD and lumen by retrieving more passages initially and reranking to a much smaller number of passages. Here we investigate how different choices regarding retrieval and reranking affect the results. Number of retrieved and selected passagesFigure 3 shows how performance varies with the total number of retrieved passages and the number of selected passages after reranking. Performance strongly increases in the total number of retrieved passages, with sharply diminishing returns in the number of _selected_ passages. These results indicate that the reranker effectively selects useful passages, such that the bottleneck is whether or not the relevant information is present in original retrieved passages. The former intuition is further supported by Figure 4, as applying sufficient reranking layers almost Figure 4: Average dev performance on KILT for glimmer-Large with live proportion \(\frac{1}{3}\), 25 retrieved passages and 5 selected passages as a function of rerank proportion \(\beta\). Baseline \(\beta\) is 0.25, equivalent to 2 reranking layers out of 8 total live layers. 
Figure 3: Average dev performance on KILT for glimmer-Large with live proportion \(\frac{1}{3}\) and rerank proportion \(\frac{1}{4}\) as a function of number of retrievals with 5 selected passages (left) and number of selected passages with 25 retrievals (right). recovers the performance of using all 25 retrievals. On the other hand, some neural reranking with full interaction is clearly helpful, as using rerank proportion fewer than 0.25 (fewer than 2 reranking layers) strongly harms performance. Interestingly, as shown in Figure 5, with a large number of retrievals, selection is sufficiently accurate that selecting more passages harms performance due to distraction from irrelevant context. The optimal number of selected passages is lower with more reranking layers, as the top ranked passages better capture all useful information. Separate rerankerIt is also informative to consider the effect of using the live encoder to perform the reranking, as opposed to a separate reranker. Table 1 compares performance of glimmer with using a separate reranker, initialized from T5 or trained from scratch. We note that using a separate reranker achieves comparable performance at the cost of a more complicated model, and additional memory and computation overhead. Initializing the reranker from pre-trained weights is important - attempting to learn reranking layers from scratch significantly lowers performance. ### Multi-task training The second major improvement in glimmer is sharing the memory and live encoder between tasks, and consequently training the memory encoder. We present experiments that attempt to disentangle the effects of these improvements. Figure 6 demonstrates the effect of multi-task training by comparing performance on NQ between models trained only on NQ and models trained on KILT. To isolate the effect of multi-task training, we compare FiD and lumen, and train the memory for all models in this comparison. Multi-task training significantly benefits all models, but is disproportionately impactful for lumen, especially with lower live proportions. Figure 7 shows the difference between single and multi-task training as a function of live proportion, with multi-task performance leveling out earlier, further showing larger impact for smaller live proportion. The late interaction that the live encoder is responsible for is rather different from its pre-training task, so it is intuitive that the live encoder would disproportionately benefit from increased size and diversity of data. Multi-task training also enables learning a memory encoder. Table 2 shows that training the memory encoder is important for performance, which is expected as the pre-trained encoder is not designed to function as a memory encoder out of the box. ### Other ablations There are a number of other interesting decisions in the glimmer architecture and training proce \begin{table} \begin{tabular}{l c} **Reranker** & **Performance** \\ \hline glimmer (shared) & 69.8 \\ Separate (from T5) & 70.0 \\ Separate (from scratch) & 68.7 \\ \hline \hline \end{tabular} \end{table} Table 1: Average performance on KILT dev sets for glimmer-Large with 25 retrieved and 5 selected passages for different configurations of the reranker: shared, separately initialized from T5, and separately initialized from scratch. Figure 5: Average dev performance on KILT for glimmer-Large with live proportion \(\frac{1}{3}\) with 40 retrievals as a function of number of selected passages. 
Figure 6: **Multi-task training disproportionately benefits lumen relative to FiD. Exact match on Natural Questions dev set when trained only on Natural Questions vs on set of KILT tasks for FiD, glimmer-\(\frac{1}{3}\) and glimmer-\(\frac{1}{8}\) Large models.** dure. Table 3 presents ablations of some of these decisions. The original lumen implementation featured a separate question encoder, which was necessary because the memory encoder was not fine-tuned. Here, we update the memory encoder with multi-task training, so we opt to re-use the memory encoder for encoding the question, simplifying the architecture and reducing the number of parameters. We see that this simplification comes at a small cost in performance. There are also a number of parameter choices regarding the reranking: the weight of the perplexity distillation loss, the temperature of the score and perplexity distributions, and the method for generating a reranking score. Over or under-weighting reranking loss leads to lower performance. However, using a lower temperature for the score and perplexity distributions does help - Izacard et al. (2022) argue that the effect of most individual passages on perplexity is small, and a lower temperature helps distinguish those differences. Finally, it appears that using the first token of each passage performs similarly to generating a score from mean-pooled representations. ## 5 Related Work Retrieval augmentation (Izacard and Grave, 2021; Borgeaud et al., 2022; Lewis et al., 2020; Khandelwal et al., 2020; Guu et al., 2020) is a powerful technique to improve language model performance by augmenting the input with additional context. Our work is focused on improving the quality-compute trade-off for retrieval-augmented language models. It does so by unifying three lines of research: late-interaction memory, late-interaction reranking, and learning to retrieve. Our approach uses the architecture skeleton from Fusion-in-Decoder (Izacard and Grave, 2021), one of the most common retrieval augmented models. We employ multi-task training on KILT (Petroni et al., 2020) as in Hofstatter et al. (2022). MemoryRetrieval augmentation is expensive due to the additional context that needs to be processed by the language model. Memory models such as TOME (de Jong et al., 2022), Memorizing Transformer (Wu et al., 2022), and many others (Li et al., 2022; Zhong et al., 2022; Chen et al., 2022; Wu et al., 2022; Yogatama et al., 2021; Bertsch et al., 2023) attempt to avoid this cost by pre-computing representations and storing them into a memory, such that representations can be retrieved directly rather than processed on the fly. However, such approaches sacrifice quality as memory representations are not conditioned on each individual input (Li et al., 2022; de Jong et al., 2023). _Late-interaction memory_(de Jong et al., 2023; Milbauer et al., 2023) improves the quality of memory approaches by only partially pre-computing retrieval representations, and performing some interaction between memory and input \begin{table} \begin{tabular}{l c} **Model** & **Performance** \\ \hline glimmer & 69.8 \\ Frozen memory & 69.0 \\ \hline \hline \end{tabular} \end{table} Table 2: **Training memory is a significant factor in strong glimmer performance.** Average performance on KILT dev sets for glimmer-Large with 25 retrieved and 5 selected passages, with and without training memory. 
\begin{table} \begin{tabular}{l c} **Model** & **Performance** \\ \hline glimmer & 69.8 \\ \hline Separate Qenc & 70.0 \\ PDist \(\lambda=0.1\) & 69.5 \\ PDist \(\lambda=10\) & 69.5 \\ PDist \(\tau=0.1\) & 70.1 \\ PDist \(\tau=5\) & 69.4 \\ Mean pool & 69.8 \\ \hline \hline \end{tabular} \end{table} Table 3: glimmer ablations: separate question encoder, different perplexity distillation loss weight, perplexity distillation temperature, and mean pool scoring method. Each model is Large size with 25 retrievals and 5 selected passages, evaluated on the KILT dev set. Figure 7: Performance on Natural Questions dev set for lumen-Large trained on KILT vs NQ-only as a function of live proportion. on the fly. In particular, our work is very closely based on lumen (de Jong et al., 2023). RerankingLike the language model itself, retrieval procedures face a trade-off between expensive online ranking with full interaction (Chen et al., 2020) and the more common dual encoder approaches such as DPR (Karpukhin et al., 2020) and GTR (Ni et al., 2021) that scores based on inner product similarity with a corpus of pre-computed passage representations. Often different models for retrieval are applied in a pipeline approach, with an initial cheap scoring model followed by a more powerful and expensive reranker (Mao et al., 2021; Wang et al., 2018; Yu et al., 2022). Many rerankers also make use of late interaction to obtain a good trade-off between ranking quality and speed, such as COLBERT (Khattab and Zaharia, 2020; Santhanam et al., 2022), PreTTR (MacAvaney et al., 2020), SDR (Cohen et al., 2022), and Poly-encoders (Humeau et al., 2020). glimmer combines late-interaction memory and reranking into a single model, sharing the pre-computed representations for both use cases. Learning to retrieveRetrieval models are often trained with supervised data (Karpukhin et al., 2020; Ni et al., 2021), using gold retrievals from datasets such as MS-MARCO (Nguyen et al., 2016) or TREC CAR (Dietz et al., 2018). When selecting passage to use for retrieval-augmented generation, we have an additional signal, namely which passages are most helpful for the reader model. A number of existing works use this signal to improve retrieval (Guu et al., 2020; Sachan et al., 2021; Jiang et al., 2022; Sachan et al., 2021; Izacard et al., 2022). We follow ATLAS (Izacard et al., 2022) and employ perplexity distillation to train our reranker to select passages that help lower reader model perplexity. ## 6 Conclusion Retrieval-augmented language models are powerful but slow in inference, while pre-computed memory-augmented models are fast at the cost of quality. Hybrid late-interaction models such as lumen present a good quality-compute trade-off. We introduce glimmer, an improved late-interaction model that also incorporates learned end-to-end reranking and multi-task training to achieve an even better trade-off. glimmer achieves strong gains in quality at faster speeds compared to lumen and FiD on the KILT benchmark of knowledge-intensive tasks. ## Acknowledgements We thank Luke Vilnis, Tania Bedrax-Weiss and others at Google Research for insightful comments and discussion.
2305.09230
Lower Bounds for Non-Adaptive Shortest Path Relaxation
We consider single-source shortest path algorithms that perform a sequence of relaxation steps whose ordering depends only on the input graph structure and not on its weights or the results of prior steps. Each step examines one edge of the graph, and replaces the tentative distance to the endpoint of the edge by its minimum with the tentative distance to the start of the edge, plus the edge length. As we prove, among such algorithms, the Bellman-Ford algorithm has optimal complexity for dense graphs and near-optimal complexity for sparse graphs, as a function of the number of edges and vertices in the given graph. Our analysis holds both for deterministic algorithms and for randomized algorithms that find shortest path distances with high probability.
David Eppstein
2023-05-16T07:17:11Z
http://arxiv.org/abs/2305.09230v1
# Lower Bounds for Non-Adaptive Shortest Path Relaxation ###### Abstract We consider single-source shortest path algorithms that perform a sequence of relaxation steps whose ordering depends only on the input graph structure and not on its weights or the results of prior steps. Each step examines one edge of the graph, and replaces the tentative distance to the endpoint of the edge by its minimum with the tentative distance to the start of the edge, plus the edge length. As we prove, among such algorithms, the Bellman-Ford algorithm has optimal complexity for dense graphs and near-optimal complexity for sparse graphs, as a function of the number of edges and vertices in the given graph. Our analysis holds both for deterministic algorithms and for randomized algorithms that find shortest path distances with high probability. ## 1 Introduction Dijkstra's algorithm finds shortest paths in directed graphs when all edge weights are non-negative, but the problem becomes more difficult when negative edge weights (but not negative cycles) are allowed. In this case, despite recent breakthroughs on near-linear time bounds for graphs with small integer edge weights [5], the best strongly-polynomial time bound for single-source shortest paths remains that of the Bellman-Ford algorithm [18, 4, 10], which takes time \(O(mn)\) on graphs with \(m\) edges and \(n\) vertices, or \(O(n^{3})\) on dense graphs. Both Dijkstra's algorithm and the Bellman-Ford algorithm (as well as an unnamed linear-time algorithm for single-source shortest paths in directed acyclic graphs) can be unified under the framework of _relaxation algorithms_, also called _label-correcting algorithms_ [8]. These algorithms initialize tentative distances \(D[v]\) from the source vertex to each other vertex \(v\), by setting \(D[s]=0\) and \(D[v]=+\infty\) for \(v\neq s\). Then, they repeatedly _relax_ the edges of the graph. This means that, for a given edge \(u\to v\), the algorithm replaces \(D[v]\) by \(\min(D[v],D[u]+\text{length}(u\to v))\). In Dijkstra's algorithm, each edge \(u\to v\) is relaxed once, in sorted order by the tentative distance \(D[u]\). In the Bellman-Ford algorithm, an edge can be relaxed many times. The algorithm starts with the tentative distance equal to the correct distance for \(s\), but not for the other vertices. Whenever the algorithm relaxes an edge \(u\to v\) in the shortest path tree, at a time when \(u\) already has the correct distance, the tentative distance to \(v\) becomes correct as well. Thus, the goal in designing the algorithm is to perform these distance-correcting relaxations while wasting as little effort as possible on other relaxations that do not correct any distance, and on the overhead in selecting which relaxation to perform. We would like to prove or disprove the optimality of the Bellman-Ford algorithm among a general class of strongly-polynomial shortest path algorithms, without restricting the types of computation such an algorithm can perform, but such a result appears to remain far out of reach. Instead, in this work we focus only on relaxation algorithms, asking: how few relaxation steps are needed? Note that, without further assumptions, a shortest path algorithm could "cheat", computing a shortest path tree in some other way and then performing only \(n-1\) relaxation steps in a top-down traversal of a shortest path tree.
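For concreteness, the following is a minimal sketch of the relaxation primitive and of a non-adaptive, round-robin Bellman-Ford schedule built from it; the edge-list representation and the function names are illustrative assumptions, not part of the original algorithm descriptions.

```python
INF = float("inf")

def relax(D, u, v, length):
    """One relaxation step: replace D[v] by min(D[v], D[u] + length)."""
    if D[u] + length < D[v]:
        D[v] = D[u] + length

def round_robin_bellman_ford(n, edges, s):
    """Non-adaptive Bellman-Ford: n - 1 rounds over a fixed edge order.

    n     -- number of vertices, labelled 0 .. n - 1
    edges -- list of (u, v, length) triples, in a fixed order
    s     -- source vertex
    """
    D = [INF] * n
    D[s] = 0
    for _ in range(n - 1):            # the schedule never depends on the
        for (u, v, length) in edges:  # weights or on earlier outcomes
            relax(D, u, v, length)
    return D
```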
To focus purely on relaxation, and prevent such cheating, we consider _non-adaptive relaxation algorithms_, in which the sequence of relaxation steps is determined only by the structure of the given graph, and not on its weights nor on the outcome of earlier relaxation steps. Dijkstra's algorithm is adaptive, but the linear-time DAG algorithm is non-adaptive. Another example of a non-adaptive algorithm comes from past work on the graphs in which, like DAGs, it is possible to relax every edge once in a fixed order and guarantee that all tentative distances are correct [12]. As usually described, the Bellman-Ford algorithm is adaptive. Its typical optimizations include adaptive rules that disallow repeatedly relaxing any edge \(u\to v\) unless the tentative distance to \(u\) has decreased since the previous relaxation, and that stop the entire algorithm when no more allowed relaxations can be found. However, its same asymptotic time bounds can be achieved by a non-adaptive version of the Bellman-Ford algorithm, with a _round-robin_ relaxation sequence, one that merely repeats \(n-1\) rounds of relaxing all edges in the same order per round. A non-adaptive asynchronous distributed form of the Bellman-Ford algorithm is widely used in _distance vector routing_ of internet traffic, to maintain paths of minimum hop count between major internet gateways [13]. ### Known Upper Bounds We do not require non-adaptive relaxation algorithms to be round-robin, but we are unaware of any way to take advantage of this extra flexibility. Nevertheless, among round-robin algorithms, there is still freedom to choose the ordering of edges within each round, and this freedom can lead to improved constant factors in the number of relaxation steps performed by the Bellman-Ford algorithm. Yen [21] described a method based on the following idea. Choose an arbitrary linear ordering for the vertices, and partition the edges into two subsets: the edges that are directed from an earlier vertex to a later vertex in the ordering, and the edges that are directed from a later vertex to an earlier vertex. Both of these two edge subsets define directed acyclic subgraphs of the given graph, with the chosen linear ordering or its reverse as a topological ordering. Use a round-robin edge ordering that first relaxes all of the edges of the first subgraph, in its topological order, and then relaxes all of the edges of the second subgraph, in its topological order. If any shortest path is divided into contiguous subpaths that lie within one of these two DAGs, then each two consecutive subpaths from the first and second DAG will be relaxed in order by each round of the algorithm. In the worst case, there is a single shortest path of \(n-1\) edges, alternating between the two DAGs, requiring \(\lceil n/2\rceil\) rounds of relaxation. For complete directed graphs, this method uses \(\big{(}\frac{1}{2}+o(1)\big{)}n^{3}\) relaxation steps, instead of the \(\big{(}1+o(1)\big{)}n^{3}\) that might be used by a less-careful round-robin method. As we showed in earlier work [2], an additional constant factor savings can be obtained by a randomized algorithm that selects from a random distribution of non-adaptive relaxation sequences, and that obtains a correct output with high probability rather than with certainty. To do so, use Yen's method, but choose the vertex ordering as a uniformly random permutation of the vertices, rather than arbitrarily. 
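A minimal sketch of how one round of Yen's ordering can be generated from a random vertex permutation, as just described, is shown below; the edge representation and names are illustrative assumptions.

```python
import random

def yen_round_order(n, edges, rng=random):
    """One round of Yen's round-robin order, from a random permutation.

    Edges directed forward along the permutation are relaxed first, in
    topological (permutation) order of their tails; backward edges follow,
    in the reverse order.  Per the discussion above, repeating this round
    about n/2 times suffices in the worst case, and about n/3 times with
    high probability when the permutation is random.
    """
    perm = list(range(n))
    rng.shuffle(perm)
    pos = {v: i for i, v in enumerate(perm)}
    forward = sorted((e for e in edges if pos[e[0]] < pos[e[1]]),
                     key=lambda e: pos[e[0]])
    backward = sorted((e for e in edges if pos[e[0]] > pos[e[1]]),
                      key=lambda e: -pos[e[0]])
    return forward + backward
```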
In any shortest path tree, each vertex with more than one child reduces the number of steps from the source to the deepest leaf by one, reducing the number of alternations between the two DAGs. For each remaining vertex with one child in the tree, the probability that it lies between its parent and child in the randomly selected ordering is \(\frac{1}{3}\), and when this happens, it does not contribute to the bound on the number of alternations. With high probability, the number of these non-contributing vertices is close to one third of the single-child vertices. Therefore, with high probability, the maximum number of alternations between the two DAGs among paths on the shortest path tree is \(\big{(}\frac{2}{3}+o(1)\big{)}n\), and an algorithm that uses this method to perform \(\big{(}\frac{1}{3}+o(1)\big{)}n^{3}\) relaxation steps will find the correct shortest paths with high probability. The worst-case asymptotic time of these methods remains \(O(n^{3})\) for complete graphs, and \(O(mn)\) for arbitrary graphs with \(m\) edges and \(n\) vertices. Both Yen's method and the randomized permutation method can also be used in adaptive versions of the Bellman-Ford algorithm, with better constant factors and in the randomized case leading to a Las Vegas algorithm rather than a Monte Carlo algorithm, but it is their non-adaptive variants that concern us here. ### New Lower Bounds We provide the following results: * Any deterministic non-adaptive relaxation algorithm for single-source shortest paths on a complete directed graph with \(n\) vertices must use \(\big{(}\frac{1}{6}-o(1)\big{)}n^{3}\) relaxation steps. * Any randomized non-adaptive relaxation algorithm for shortest paths on a complete directed graph with \(n\) vertices, that with high probability sets all distances correctly, must use \(\big{(}\frac{1}{12}-o(1)\big{)}n^{3}\) relaxation steps. * For any \(m\) and \(n\) with \(n\leq m\leq 2\binom{n}{2}\), there exists a directed graph on \(m\) edges and \(n\) vertices on which any deterministic or high-probability randomized non-adaptive relaxation algorithm for shortest paths must use \(\Omega(mn/\log n)\) relaxation steps. When \(m=\Omega(n^{1+\varepsilon})\) for some \(\varepsilon>0\), the lower bound improves to \(\Omega(mn)\). These lower bounds hold even on graphs for which all edge weights are zero and one, for which an adaptive algorithm, Dial's algorithm, can find shortest paths in linear time [9]. ### Related Work Although we are not aware of prior work in the precise model of computation that we use, variants of the Bellman-Ford algorithm have been studied and shown optimal for some other related problems: * The \(k\)-walk problem asks for a sequence of exactly \(k\) edges, starting at one vertex and ending at the other, allowing repeated edges. The Bellman-Ford algorithm can be modified to find the shortest \(k\)-walk between two vertices in time \(O(kn^{2})\), non-adaptively. In any non-adaptive relaxation algorithm, the only arithmetic operations on path lengths and edge weights are addition and minimization, and these operations are performed in a fixed order. Therefore, the sequence of these operations can be expanded into a circuit, with two kinds of gates: minimization and addition. The resulting \((\min,+)\)-circuit model of computation is somewhat more general than the class of relaxation algorithms, because the sequence of operations performed in this model does not need to come from a sequence of relaxation steps.
The \(k\)-walk version of the Bellman-Ford algorithm is nearly optimal in the \((\min,+)\)-circuit model: circuit size \(\Omega\big{(}k(n-k)n\big{)}\) is necessary [14]. However, this \(k\)-walk problem is different from the shortest path problem, so this bound does not directly apply to shortest paths. * Under conditional hypotheses that are standard in fine-grained complexity analysis, the \(O(km)\) time of Bellman-Ford for finding paths of at most \(k\) steps, for graphs of \(m\) edges, is again nearly optimal: neither the exponent of \(k\) nor the exponent of \(m\) can be reduced to a constant less than one. For large-enough \(k\), the shortest path of at most \(k\) steps is just the usual shortest path, but this lower bound applies only for choices of \(k\) that are small enough to allow the result to differ from the shortest path [15]. * Another related problem is the all hops shortest path problem, which asks to simultaneously compute \(k\) paths, having distinct numbers of edges from one to a given parameter \(k\). Again, this can be done in time \(O(km)\) by a variant of the Bellman-Ford algorithm, and it has an unconditional \(\Omega(km)\) lower bound for algorithms that access the edge weights only by path length comparisons, as Bellman-Ford does [6, 11]. Because it demands multiple paths as output, this lower bound does not apply to algorithms that compute only a single shortest path. * Meyer et al. [17] study a version of the Bellman-Ford algorithm, in which edges are relaxed in a specific (adaptive) order. They construct sparse graphs, with \(O(n)\) edges, on which this algorithm takes \(\Omega(n^{2})\) time, even in the average case for edge weights uniformly drawn from a unit interval. This bound applies only to this algorithm and not to other relaxation orders. ## 2 Deterministic Lower Bound for Complete Graphs The simplest of our results, and the prototype for our other results, is a lower bound on the number of relaxations needed by a deterministic non-adaptive relaxation algorithm, in the worst case, on a complete directed graph with \(n\) vertices. Theorem 3.1: _Any deterministic non-adaptive relaxation algorithm for single-source shortest paths on a complete directed graph with \(n\) vertices must use at least \(\left(\frac{1}{6}-o(1)\right)n^{3}\) relaxation steps._ Proof: Fix the sequence \(\sigma\) of relaxation steps chosen by any such algorithm. We will find an assignment of weights for the complete directed graph, such that the distances obtained by the relaxation algorithm are not all correct until \(\left(\frac{1}{6}-o(1)\right)n^{3}\) relaxation steps have taken place. Therefore, in order for the algorithm to be correct, it must make this many steps. For the weights we choose, the shortest path tree will form a single directed path, of \(n-1\) edges, starting at the source vertex. In order for the relaxation algorithm to achieve correct distances to all vertices, its sequence of relaxations must include a subsequence consisting of all path edges in order. The weights of these edges are unimportant (because we are considering only non-adaptive algorithms) so we may set all path edges to have weight zero and all other edges to have weight one. To determine this path, we choose one at a time its edges in even positions: its second, fourth, sixth, etc., edge. These chosen edges include every vertex in the path, so choosing them will also determine the edges in odd positions. 
When choosing the \(i\)th edge (for an even number \(i\)), we make the choice greedily, to maximize the position in \(\sigma\) of the step that relaxes this edge and makes its endpoint have the correct distance. Let \(s_{i}\) denote this position, with \(s_{0}=0\) as a base case recording the fact that, before we have relaxed any edges, the source vertex already has the correct distance. Then the length of \(\sigma\) is at least equal to the telescoping sum \[(s_{2}-s_{0})+(s_{4}-s_{2})+(s_{6}-s_{4})+\cdots.\] When choosing edge \(i\), for an even position \(i\), there are \(i-1\) earlier vertices, whose position in the shortest path is already determined, and \(n-i+1\) remaining vertices. Between step \(s_{i-2}\) and step \(s_{i}\) of the relaxation sequence \(\sigma\), it must relax all \(n-i+1\) edges from the last endpoint of edge \(i-2\) to one of these remaining vertices, and all \(2\binom{n-i+1}{2}\) edges between pairs of the vertices that remain to be corrected. For, if it did not do so, there would be an edge that it had not relaxed, and choosing this edge next would cause \(s_{i}\) to be greater; but this would violate the greedy choice of edge \(i\) to make \(s_{i}\) as large as possible. Therefore, \[s_{i}-s_{i-2}\geq(n-i+1)+2\binom{n-i+1}{2}=(n-i+1)^{2}.\] Summing over all \(\lfloor(n-1)/2\rfloor\) choices of edges in even positions gives, as a lower bound on the total number of relaxation steps, \[\sum_{i=2,4,6,\ldots}s_{i}-s_{i-2}\geq\sum_{i=2,4,6,\ldots}(n-i+1)^{2}=\frac{ n^{3}-n}{6},\] where the closed form for the summation follows easily by induction. Randomized Lower Bound for Complete Graphs It does not make much sense to consider expected time analysis for non-adaptive algorithms, because these algorithms have a fixed stopping time (determined as a function of the given graph), and we want their output to be correct with high probability rather than in any expected sense. Nevertheless, it is often easier to lower-bound the expected behavior of randomized algorithms, by using Yao's principle [20], according to which the expected cost of a randomized algorithm on its worst-case input can be lower bounded by the cost of the best deterministic algorithm against any random distribution of inputs. In order to convert high-probability time bounds into expectations, we consider randomized non-adaptive algorithms that are guaranteed to produce the correct distances, and we define the _reduced cost_ of such an algorithm to be the number of relaxations that it performs until all distances are correct, ignoring any remaining relaxations after that point. Lemma 1: _If a randomized non-adaptive relaxation algorithm \(\mathcal{A}\) takes \(s(G)\) steps on any weighted input graph \(G\) and computes all distances from the source vertex correctly with probability \(1-o(1)\), then there exists a randomized non-adaptive relaxation algorithm \(\mathcal{B}\) that is guaranteed to produce correct distances and whose expected reduced cost, on weighted graphs \(G\) with \(n\) vertices and \(m\) edges, is at most \(s(G)+o(mn)\)._ Proof: Construct algorithm \(\mathcal{B}\) by using the relaxation sequence from algorithm \(\mathcal{A}\), appending onto it the sequence of relaxations from a conventional non-adaptive deterministic Bellman-Ford algorithm. Then with probability \(1-o(1)\) the relaxed cost of \(\mathcal{B}\) counts only the relaxation sequence from algorithm \(\mathcal{A}\), of length \(s(G)\). 
With probability \(o(1)\) the relaxed cost extends into the deterministic Bellman-Ford part of the sequence, of length \(O(mn)\). Because this happens with low probability, its contribution to the expected reduced cost is \(o(mn)\). Corollary 1: _Any lower bound on expected reduced cost is also a valid lower bound, up to an additive \(o(mn)\) term, on the number of relaxation steps for a randomized non-adaptive relaxation algorithm that produces correct distances with high probability._ With this conversion to expected values in hand, we may now formulate Yao's principle as it applies to our problem. We need the following notation: Definition 1: For any graph \(G\), with a specified source vertex, let \(W_{G}\) be the family of assignments of real weights to edges of \(G\). Let \(\mathcal{D}_{G}\) be the family of probability distributions of weights in \(W_{G}\), and let \(\Sigma_{G}\) be the class of relaxation sequences on \(G\) that are guaranteed to produce correct distances from the specified source vertex. For any randomized non-adaptive relaxation algorithm \(\mathcal{A}\) and weight vector \(w\in W_{G}\), let \(r_{G}(\mathcal{A},w)\) denote the expected reduced cost of running algorithm \(\mathcal{A}\) on \(G\) with edges weighted by \(w\). For \(\sigma\in\Sigma_{G}\) and \(D\in\mathcal{D}_{G}\) let \(\rho_{G}(\sigma,D)\) be the expected reduced cost of sequence \(\sigma\) on weight vectors drawn from \(D\). Lemma 2 (Yao's principle): _For any graph \(G\) with specified source vertex, and any randomized non-adaptive relaxation algorithm \(\mathcal{A}\),_ \[\min_{\mathcal{A}}\max_{w\in W_{G}}r_{G}(\mathcal{A},w)=\max_{D\in\mathcal{D}_{ G}}\min_{\sigma\in\Sigma_{G}}\rho_{G}(\sigma,D).\] Proof: This is just the minimax principle for zero-sum games, applied to a game in which one player chooses a relaxation sequence \(\sigma\in\Sigma_{G}\), the other player chooses a weight vector \(w\in W_{G}\), and the outcome of the game is the reduced cost for \(\sigma\) on \(w\). According to that principle, the value of the best mixed strategy for the sequence player, against its worst-case pure strategy (the left hand side of the equality in the lemma) equals the value of the best mixed strategy for the weight player, against its worst-case pure strategy (the right hand side). Corollary 2: _For any weight distribution \(D\in\mathcal{D}_{G}\), \(\min_{\sigma\in\Sigma_{G}}\rho_{G}(\sigma,D)\) is a valid lower bound on the expected reduced cost of any randomized non-adaptive relaxation algorithm that is guaranteed to produce correct distances._ Proof: An arbitrary algorithm \(\mathcal{A}\) can only have a greater or equal value to the left hand side of Lemma 2, and an arbitrary weight distribution \(D\) can only have a smaller or equal value to the right hand side. So the expected reduced cost of the algorithm, on a worst-case input, can only be greater than or equal to the value given for \(D\) in the statement of the corollary. 
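To make the reduced-cost quantity that these lemmas manipulate concrete, the following brute-force sketch measures it for a given relaxation sequence and weight assignment; the data layout and names are illustrative assumptions.

```python
def reduced_cost(n, s, sequence, weight, true_dist):
    """Relaxation steps performed until all tentative distances are correct.

    sequence  -- list of edges (u, v): the non-adaptive relaxation order
    weight    -- dict mapping each edge (u, v) to its length
    true_dist -- correct distances from s, computed separately
    """
    D = [float("inf")] * n
    D[s] = 0
    if D == true_dist:
        return 0
    for step, (u, v) in enumerate(sequence, start=1):
        if D[u] + weight[u, v] < D[v]:
            D[v] = D[u] + weight[u, v]
        if D == true_dist:
            return step
    return len(sequence)  # sequence did not set every distance correctly
```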
Theorem 2.2: _Any randomized non-adaptive relaxation algorithm for shortest paths on a complete directed graph with \(n\) vertices, that with high probability sets all distances correctly, must use at least \(\big{(}\frac{1}{12}-o(1)\big{)}n^{3}\) relaxation steps._ Proof: We apply Corollary 2 to a weight distribution \(D\) defined as follows: we choose a random permutation of the vertices of the given complete graph, starting with the source vertex, we make the weight of edges connecting consecutive vertices in order along this permutation zero, and we make all other weights one. Thus, each weighting of the complete graph drawn from this distribution will have a unique shortest path tree in the form of a single path, with all paths from the source vertex equally likely. For any weight vector \(w\) drawn from \(D\), let \(\pi_{w}\) be this path. Let \(\sigma\) be any relaxation sequence in \(\Sigma_{D}\). As in the proof of Theorem 2.1, we define \(s_{i}\) (for a weight vector \(w\) to be determined) to be the step at which the second endpoint of the \(i\)th edge of \(\pi_{w}\) has its shortest path distance set correctly. Let \(C_{i}\) denote the conditional probability distribution obtained from \(D\) by fixing the choice of the first \(i\) edges of \(\pi_{w}\). Under condition \(C_{i}\), the remaining \(n-i-1\) vertices remain equally likely to be permuted in any order. There are \(2\binom{n-i-1}{2}\) choices for edge \(i+2\), each of which is equally likely. Therefore, the expected value of \(s_{i+2}-s_{i}\) is greater than or equal to the average, among these edges, of their distance along sequence \(\sigma\) from position \(s_{i}\). (It is greater than or equal, rather than equal, because this analysis does not take into account the requirement that edge \(i+1\) must be relaxed first, before we relax edge \(i+2\).) Sequence \(\sigma\) can minimize this average if, in \(\sigma\), the next \(2\binom{n-i-1}{2}\) relaxation steps after \(s_{i}\) are exactly these distinct edges. When \(\sigma\) packs the edges in this minimizing way, the average is \(2\binom{n-i-1}{2}/2\); for other sequences it can only be greater. Therefore, \[E[s_{i+2}-s_{i}\mid C_{i}]\geq\binom{n-i-1}{2}.\] Summing these expected differences, over the sequence of values \(s_{i}\) for even \(i\), and applying Corollary 1 and Corollary 2, gives the result. ## 4 Lower Bounds for Incomplete Graphs In our lower bounds for complete graphs, the edges in even and odd positions of the shortest paths perform very different functions. The edges in even positions are the ones that, at each step in the shortest path, force the relaxation sequence to have a large subsequence of relaxation steps. Intuitively, this is because there are many possible choices for the edge at the next step and all of these possibilities (in the deterministic bound) or many of these possibilities (in the randomized bound) must be relaxed before reaching the edge that is actually chosen. The edges in odd positions, on the other hand, do not contribute much directly to the length of the sequence of relaxation steps. Instead, they are used to connect the edges in the even positions into a single shortest path. To construct graphs that are not complete, for which we can prove analogous lower bounds, we make this dichotomy more explicit. For a chosen "capacity" parameter \(c\), we will construct graphs that have two designated subsets of \(c\) vertices, \(S\) and \(T\) (with the source vertex contained in subset \(S\)). 
We will connect the vertices in \(T\) to the vertices in \(S\) by a biregular bipartite directed graph of some degree \(d\approx m/2c\), a graph in which each vertex in \(T\) has exactly \(d\) outgoing neighbors and each vertex in \(S\) has exactly \(d\) incoming neighbors. This biregular graph will perform the function of the even position edges in our complete graph lower bounds: it will have many edges to choose from, forcing any relaxation algorithm to make a long subsequence of relaxations between each two chosen edges. The detailed structure of this graph is not important for our bounds. In the other direction, from \(S\) to \(T\), we will construct a special graph with the property that, no matter which sequence of disjoint edges we choose from the biregular graph, we can complete this sequence to a path. A schematic view of this construction is depicted in Fig. 1. Figure 1: Schematic view of the graphs used for our lower bound construction. We begin the more detailed description of this structure by defining the graphs we need to connect from \(S\) to \(T\). The following definition is standard: Definition 2: A _rearrangeable non-blocking network_ of capacity \(c\) is a directed graph \(G\) with \(c\) vertices labeled as inputs, and another \(c\) vertices labeled as outputs, with the following property. For all systems of pairs of inputs and outputs that include each input and output vertex at most once, there exists in \(G\) a system of vertex-disjoint paths from the input to the output of each pair. Observation 3: _A complete bipartite graph \(K_{c,c}\), with its edges directed from \(c\) input vertices to \(c\) output vertices, is a rearrangeable non-blocking network of capacity \(c\), with \(2c\) vertices and \(c^{2}\) edges. In this case, the disjoint paths realizing any system of disjoint input-output pairs are just a matching, formed by the edges from the input to the output in each pair._ Lemma 4: _For any capacity \(c\), there exist rearrangeable non-blocking networks of capacity \(c\) with \(O(c\log c)\) vertices and edges._ Pippenger [19] credits the proof of Lemma 4 to Beizer [3], who used a recursive construction. A more recent construction of Alon and Capalbo [1] is based on blowing up an expander graph, producing enough copies of each vertex that a system of edge-disjoint paths in the expander can be transformed into a system of vertex-disjoint paths in the non-blocking network. Their networks are non-blocking in a stronger sense (the vertex-disjoint paths can be found incrementally and efficiently), but we do not need that additional property. A simple counting argument shows that \(o(c\log c)\) edges is not possible: to have enough subsets of edges to connect \(c!\) possible systems of pairs, the number of edges must be at least \(\log_{2}c!\). For non-blocking networks with fewer vertices and more edges we turn to an older construction of Clos [7]: Lemma 5 (Clos [7]): _Suppose that there exists a rearrangeable non-blocking network \(G_{c}\) of capacity \(c\) with \(n\) vertices and \(m\) edges. Then there exists a rearrangeable non-blocking network of capacity \(c^{2}\) with \(3cn-2c^{2}\) vertices and \(3cm\) edges._ Proof: Construct \(3c\) copies of \(G_{c}\), identified as \(c\) input subunits, \(c\) internal subunits, and \(c\) output subunits. The input subunits have together \(c^{2}\) input vertices, which will be the inputs of the whole network.
Similarly, the output subunits have together \(c^{2}\) output vertices, which will be the outputs of the whole network. Identify each output vertex of an input subunit with an input vertex of an internal subunit, in such a way that each pair of these subunits has exactly one identified vertex. Similarly, identify each output vertex of an internal subunit with an input vertex of an output subunit, in such a way that each pair of these subunits has exactly one identified vertex. An example of this network, for \(c=4\) and \(G_{c}=K_{4,4}\), can be seen in an expanded form as the middle network of Fig. 2. For greater legibility of the figure, instead of identifying pairs of vertices between subunits, these pairs have been connected by added edges. Contracting these edges would produce the network described above. Figure 2: Three rearrangeable non-blocking networks of capacity 16. Each network's input vertices are in its left column and its output vertices are in the right column. Left: Complete bipartite graph. Center: Three-stage Clos network, with pairs of input and output vertices in consecutive stages connected by edges rather than being identified as single vertices. Right: Nine-stage network obtained by expanding each subunit of the center network into a three-stage network. To produce vertex-disjoint paths connecting any system of disjoint pairs of inputs and outputs, consider these pairs as defining a multigraph connecting the input subunits to the output subunits of the overall network. This multigraph has maximum degree \(c\) (each input or output subunit participates in at most \(c\) pairs), and we may apply a theorem of Dénes Kőnig according to which every bipartite multigraph with maximum degree \(c\) has an edge coloring using \(c\) colors [16]. These colors may be associated with the \(c\) internal subunits, and used to designate which internal subunit each path should pass through. Once this designation is made, each subunit has its own system of disjoint pairs of inputs and outputs through which its paths should go, and the paths through each subunit can be completed using the assumption that it is rearrangeable non-blocking. Corollary 3: _For any constant \(\varepsilon>0\) and any integer \(c\geq 1\), there exist rearrangeable non-blocking networks of capacity \(c\) with \(O(c)\) vertices and \(O(c^{1+\varepsilon})\) edges._ Proof: We prove the result by induction on the integer \(i=\lceil\log_{2}1/\varepsilon\rceil\). As a base case this is true for \(\varepsilon=1\) (for which \(i=0\)) and for arbitrary \(c\), using the complete bipartite graph as the network. For smaller values of \(\varepsilon\), apply the induction hypothesis with the parameters \(2\varepsilon\) and \(\lceil\sqrt{c}\rceil\), to produce a rearrangeable non-blocking network \(N\) of capacity \(\lceil\sqrt{c}\rceil\) with \(O(\sqrt{c})\) vertices and \(O(c^{1/2+\varepsilon})\) edges. Applying Lemma 5 to \(N\) produces a rearrangeable non-blocking network of capacity \(\geq c\) with \(O(c)\) vertices and \(O(c^{1+\varepsilon})\) edges, as desired. Deleting excess vertices to reduce the capacity to exactly \(c\) completes the induction. Theorem 3.1: _For any m and n with \(n\leq m\leq 2\binom{n}{2}\), there exists a directed graph on m edges and n vertices on which any deterministic or high-probability randomized non-adaptive relaxation algorithm for shortest paths must use \(\Omega(mn/\log n)\) relaxation steps.
When \(m=\Omega(n^{1+\varepsilon})\) for some \(\varepsilon>0\), the lower bound improves to \(\Omega(mn)\)._ Proof: We construct a graph according to the construction outlined above, in which we choose a capacity \(c\), set up two disjoint sets \(S\) and \(T\) of \(c\) vertices, connect \(T\) to \(S\) by a biregular bipartite digraph of some degree \(d\), and connect \(S\) to \(T\) by a rearrangeable non-blocking network of capacity \(c\). We allocate at least \(m/2\) edges to the biregular graph, and the rest to the non-blocking network, giving \(d\approx m/2c\). For the \(\Omega(mn/\log n)\) bound, we use the non-blocking network of Lemma 4, with \(c=\Theta(n/\log n)\). For the \(\Omega(mn)\) bound, we use the non-blocking network of Corollary 3, with \(c=\Theta(n)\). In both cases, we can choose the parameters of these networks to achieve these asymptotic bounds without exceeding the given numbers \(n\) and \(m\) of vertices and edges. We pad the resulting graph with additional vertices and edges in order to make the numbers of vertices and edges be exactly \(n\) and \(m\), and set the weights of these padding edges to be high enough that they do not interfere with the remaining construction. Next, we choose a random distribution on weights for the resulting network so that, for every relaxation sequence \(\sigma\), the expected reduced cost of \(\sigma\), for weights from this distribution, matches the lower bound in the statement of the lemma. For deterministic non-adaptive relaxation algorithms, this will give the desired lower bound directly, via the simple fact that the worst case of any distribution is always at least its expectation. For randomized algorithms, the lower bound will follow using Corollary 1 and Corollary 2 to convert the lower bound on expected reduced cost into a high-probability lower bound. As in Theorem 2.2, the random distribution on weights that we use is determined from a random distribution on paths from the source, such that the shortest path tree for the weighted graph will contain the chosen path. We can accomplish this by setting the lengths of the path edges to zero and all other edge lengths to one. Unlike in Theorem 2.2, these paths will not necessarily include all vertices in the graph and the shortest path tree may contain other branches. To choose a random path, we simply choose a sequence of edges in the biregular graph, one at a time, in order along the path. In each step, we choose uniformly at random among the subset of edges in the biregular graph that are disjoint from already-chosen edges. Because of the biregularity of the biregular part of our graph, each chosen edge is incident to at most \(2(d-1)\) other edges, and eliminates these other edges from being chosen later. At least \(c/2\) choices are possible before there are no more disjoint edges, and throughout the first \(c/4\) choices there will remain at least \(m/4\) edges to choose from, disjoint from all previous edges. The sequence ends when there are no more such edges to choose. Once we have chosen this sequence of edges from the biregular graph, we construct a set of vertex-disjoint paths in the rearrangeable nonblocking network that connects them in sequence into a single path. 
For any given relaxation sequence \(\sigma\), as in the proof of Theorem 2.1, let \(\tau\) be the subsequence of edges in \(\sigma\) that belong to the biregular part of the graph, and consider a modified relaxation algorithm that, after relaxing each edge in \(\tau\), immediately relaxes all edges of the non-blocking network. Define the reduced cost for \(\tau\) to be the number of relaxation steps made from \(\tau\) before all distances are correct, not counting the relaxation steps in the non-blocking network. Clearly, this is at most equal to the reduced cost for \(\sigma\), because \(\sigma\) might fail to relax a path in the non-blocking network when \(\tau\) succeeds, causing the computation of shortest path distances using \(\sigma\) to fall behind that for \(\tau\). Define \(t_{i}\) to be the step in the relaxation sequence for \(\tau\) that relaxes the \(i\)th chosen edge from the biregular graph, making the distance to its ending vertex correct. Then the expectation of \(t_{i}-t_{i-1}\) (conditioned on the choice of the first \(i-1\) edges is at least the average, over all edges that were available to be chosen as the \(i\)th edge, of the number of steps along \(\tau\) from \(t_{i-1}\) to the next occurrence of that edge. This expectation is minimized when the edges occurring immediately following position \(t_{i-1}\) in \(\tau\) are exactly the next available edges, and is equal to half the number of available edges; for other possibilities for \(\tau\), the expectation can only be even larger. The expected reduced cost for \(\tau\) equals the sum of these differences \(t_{i}-t_{i-1}\). Since there are \(\Omega(c)\) steps in which the number of available edges is \(\Omega(m)\), the expected reduced cost for \(\tau\) is \(\Omega(cm)\). The expected reduced cost for \(\sigma\) can only be larger, and plugging in the value of \(c\) (coming from our choice of which type of non-blocking network to use) gives the result. ## 5 Conclusions and Open Problems We have shown that, for a wide range of choices for \(m\) and \(n\), the Bellman-Ford algorithm is asymptotically optimal among non-adaptive relaxation algorithms. Adaptive versions of the Bellman-Ford algorithm are faster, but only by constant factors. Is it possible to prove that, among adaptive relaxation algorithms, Bellman-Ford is optimal? Doing so would require a careful specification of what information about the results of relaxation steps can be used in choosing how to adapt the relaxation sequence. The constant factors of \(\frac{1}{6}\) and \(\frac{1}{12}\) in our deterministic and randomized lower bounds for complete graphs are far from the constant factors of \(\frac{1}{2}\) and \(\frac{1}{3}\) in the corresponding upper bounds. Can these gaps be tightened? Is it possible to make them tight enough to distinguish deterministic and randomized complexity? Alternatively, is it possible to improve the deterministic methods to match the known randomized upper bound? For sparse graphs (\(m=O(n)\)), our lower bound falls short of the Bellman-Ford upper bound by a logarithmic factor. Can the lower bound in this range be improved, or can the Bellman-Ford algorithm for sparse graphs be improved? In this work, we considered the worst-case number of relaxation steps used by non-adaptive relaxation algorithms for the parameters \(m\) and \(n\). But it is also natural to look at this complexity for individual graphs, with unknown weights. 
For any given graph, there is some relaxation sequence that is guaranteed to find shortest path distances for all weightings of that graph, with as few relaxation steps as possible. An algorithm of Haddad and Schaffer [12] can find such a sequence for the special case of graphs for which it is as short as possible, one relaxation per edge. What is the complexity of finding or approximating it more generally? ## Acknowledgements This research was supported in part by NSF grant CCF-2212129.
2307.10118
Some examples of quasiperiodic tilings obtained with a simple grid method
A grid method using tilings by fundamental domains of simple 2D lattices is presented. It refers to previous work done by Stampfli in $1986$ using two tilings by regular hexagons, one rotated by $\pi/2$ relative to the other. This yields a quasiperiodic structure with a twelve-fold symmetry. The quasiperiodic structure is a tiling of the plane by regular triangles, squares and rhombuses. This can be extended to other examples of tilings by fundamental domains. Two other examples are proposed. The first example is also based on the hexagonal lattice, but with grids defined by the fundamental rhombic domain formed by two regular triangles. The second example presents the case of a square lattice with a square fundamental domain.
Jean-François Sadoc, Marianne Imperor-Clerc
2023-07-19T16:37:31Z
http://arxiv.org/abs/2307.10118v1
# Some examples of quasiperiodic tilings obtained with a simple grid method ###### Abstract A grid method using tilings by fundamental domains of simple 2D lattices is presented. It refers to previous work done by Stampfli in 1986 using two tilings by regular hexagons, one rotated by \(\pi/2\) relative to the other. This yields a quasiperiodic structure with a twelve-fold symmetry. The quasiperiodic structure is a tiling of the plane by regular triangles, squares and rhombuses. This can be extended to other examples of tilings by fundamental domains. Two other examples are proposed. The first example is also based on the hexagonal lattice, but with grids defined by the fundamental rhombic domain formed by two regular triangles. The second example presents the case of a square lattice with a square fundamental domain. ## I Introduction Quasiperiodic tilings are more and more observed in experimental systems such as 2D materials, where they are directly linked to the superposition of periodic layers. A recent example is the case of dodecagonal graphene [1]. In this context, the use of grid methods for building quasiperiodic tilings is highly relevant. In a short paper published in 1986, Peter Stampfli [2] introduced the construction of a quasiperiodic tiling of the plane with regular triangles, squares and rhombuses having a global dodecagonal symmetry. He gave a way to generate this tiling by a hierarchical decoration of tiles, but he also introduced a construction derived from the overlap of two similar grids. The two grids are two periodic hexagonal tilings by identical regular hexagons. One grid is rotated by \(\pi/2\) relative to the other, so the two grids together have a dodecagonal symmetry. Two years later, inspired by the Stampfli proposition, Korepin [3] published a more developed and general paper, making a bridge between the grid method and the cut-and-project method. The purpose of this paper is to give details of this construction by showing how it is possible to go from the two tilings by regular hexagons to the quasiperiodic dodecagonal tiling by squares, regular triangles and rhombuses. It is interesting to notice that this construction can be extended to other tilings. We present two examples. One uses a square lattice to define a grid which is a tiling by squares, together with a second grid obtained by a \(\pi/4\) rotation. The resulting quasiperiodic tiling is the Ammann-Beenker tiling by squares and rhombuses. The other example uses the hexagonal lattice as in the Stampfli case, but the grids are tilings by rhombuses with \(\pi/3,2\pi/3\) angles (two regular triangles). The quasiperiodic tiling contains squares, regular triangles and rhombuses as in the Stampfli example, but also three-fold star-like additional tiles. It seems that the choice of different fundamental domains associated with lattices in order to get grids leads to a large choice of resulting quasiperiodic tilings. ## II The two hexagonal grids of the Stampfli example The first grid is generated using a hexagonal lattice defined by the two base vectors \(\mathbf{e_{1}}=\{1,0\},\mathbf{e_{3}}=\{-1/2,\sqrt{3}/2\}\). At each node of this lattice a hexagonal motif is reproduced, leading to a tiling of the plane by hexagons.
The vertices of this hexagonal motif are \(\{0,\frac{\sqrt{3}}{3}\},\{-\frac{1}{2},\frac{\sqrt{3}}{6}\},\{-\frac{1}{2},-\frac{\sqrt{3}}{6}\},\{0,-\frac{\sqrt{3}}{3}\},\{\frac{1}{2},-\frac{\sqrt{3}}{6}\},\{\frac{1}{2},\frac{\sqrt{3}}{6}\}.\) The second grid is then obtained by rotating the first one by \(\pi/2\) around the origin, so it is constructed relative to a lattice having base vectors \((\mathbf{e_{2}},\mathbf{e_{4}})\) orthogonal to \((\mathbf{e_{1}},\mathbf{e_{3}})\). The choice of these notations for the four vectors \((\mathbf{e_{1}}=\{1,0\},\mathbf{e_{2}}=\{\sqrt{3}/2,1/2\},\mathbf{e_{3}}=\{-1/2,\sqrt{3}/2\},\mathbf{e_{4}}=\{0,1\})\) refers to a previous publication [5] and is coherent with the notations used for the cut-and-project method (4D to 2D). Figure 1 displays the two grids of hexagons and shows how the edges of hexagons cut each other. Two edges can cut at \(\pi/3\) when they belong to the same hexagon; they can be orthogonal, with one edge belonging to each grid; or they can cut at a \(\pi/6\) angle, again with the two edges coming from different grids. Stampfli states that each edge overlap defines a tile of the quasiperiodic tiling: a regular triangle if the edges meet at a \(\pi/3\) angle, a square for a \(\pi/2\) angle and a rhombus for a \(\pi/6\) angle. Nevertheless Stampfli does not give a clear demonstration. The rule which gives the tiling shows that any overlap point corresponds to a tile, even if the position of the points is not simply related to the tile position (it will even appear that an overlap point can lie outside the related tile). ## Domains defined by the two grids Overlaps of hexagons, one from each grid, define polygonal domains entirely covering the \(2D\)-plane. A domain has edges which are parts of edges of hexagons, and vertices which are the intersection points on hexagon edges, sometimes from the same hexagon and otherwise from two hexagons of the two grids. The two grids of hexagons (figure 1) are built by reference to two hexagonal lattices, one related to the other by a rotation of \(\pi/2\). A domain is the polygonal surface common to one hexagon of the first grid and another of the other grid, when both are sufficiently close to intersect. Hexagons are characterized by the lattice vectors positioning their centers. Call \((i,k)\) a first one, defined in the basis \((\mathbf{e_{1}},\mathbf{e_{3}})\), and \((j,l)\) another, defined in the basis \((\mathbf{e_{2}},\mathbf{e_{4}})\), for the two hexagons intersecting to form the domain we are considering. We attribute to each domain a reference point, located at the vector \((i\mathbf{e_{1}}+j\mathbf{e_{2}}+k\mathbf{e_{3}}+l\mathbf{e_{4}})/2\), which is the middle point between the two centers of the overlapping hexagons. Notice that the vectors \((i,k)\) and \((j,l)\) could be seen as vectors in an \(E_{//}\) space projected from a space of higher dimension, as is often done in quasicrystal constructions [5]. This is the relation between the grid method and the construction of quasicrystals by projection from a higher dimensional space. The aim of this paper is to show that the reference points of the domains are the vertices of the quasiperiodic tiling. Figure 1: The two grids, one represented in blue, the other in red. Intersections between edges of hexagons are represented by points: red or blue for hexagon edges meeting at hexagon vertices, yellow for two orthogonal edges, green for edges at \(\pi/6\).
Each edge intersection will be associated with a tile: a triangle for red or blue points, a square for yellow points and a rhombus for green points. A polygonal domain is the overlap of two hexagons, one of the first grid (red, \((i,k)\)) and one of the other grid (blue, \((j,l)\)). We define, as the reference point of this domain, the point which bisects the vector joining the two centres of the intersecting hexagons. Consider a domain which is a close neighbour of the previous one, sharing one of its edges. This domain is necessarily the overlap of one of the previous hexagons (for instance the red one of the first grid, \((i,k)\)) and, in the other grid, a close neighbour of the hexagon \((j,l)\); we call it \((j^{\prime},l^{\prime})\). The vector joining \((j,l)\) and \((j^{\prime},l^{\prime})\), which are close neighbours in their own lattice, is a unit vector. So the length of the vector joining the two close reference points is half of a unit vector defining the lattices of the grids (see figure 3). The line joining the reference point of the first considered domain with this new one is now considered as an edge of the tiling. Its length is half of the modulus of the base lattice vectors used to construct the grids (that is, \(1/2\)). It is orthogonal to the common edge of the hexagons \((j,l)\) and \((j^{\prime},l^{\prime})\). Extending this construction to all edges of the first domain, we conclude that from the reference point of this domain there are segments of length \(1/2\) orthogonal to the domain edges, possibly making \(\pi/3\), \(\pi/2\) or \(\pi/6\) angles, which are the angles possibly made by the edges of the domains (the red and blue points or the yellow and green points of figure 1). A reference point of a domain is connected by a segment to a reference point of a neighbouring domain, the two domains sharing a common edge. The important property is that the segment orthogonal to the shared edge has a length half of that of the base lattice vectors, so all such segments have the same length. The angles appearing between segments joining close domains are the angles characteristic of regular triangles, squares or rhombuses. This confirms that the reference points of the domains are the vertices of a tiling by polygons whose edges all have the same modulus and whose angles are in the set \(\pi/3\), \(\pi/2\), \(\pi/6\). All tiles are necessarily regular triangles, squares or rhombuses forming a quasiperiodic tiling. The quasiperiodic tiling can be constructed from the set of all reference points of all domains (which can be obtained using a recent version, at least \(13.1\), of Mathematica [7]); then we make a Delaunay triangulation of all these points. In this triangulation there are edges with length different from \(1/2\), like square diagonals. By keeping in the triangulation only edges of length \(1/2\), one obtains the quasiperiodic tiling by regular triangles, squares and rhombuses. ## III Another example of this grid method: the octagonal tiling In place of the two hexagonal grids we consider two grids of squares, one with squares centered on the vertices of a square lattice with unit vectors of unit length, the other grid being simply rotated around the origin by a \(\pi/4\) angle. Then we construct all the polygonal domains corresponding to the overlaps of squares of the two grids. The reference point of such a domain is the mid-point between the centres of the two squares intersecting to form the domain. This set of points can be triangulated to form a Delaunay set.
In this set some edges have a length of \(1/2\) (in terms of the base vector length); selecting these edges we get a tiling by squares and rhombuses which is quasiperiodic. In fact we recognize the well known Ammann-Beenker [8; 9] tiling, which is usually obtained using inflation rules or by a different grid method. Figure 2: Example of three different overlaps of two hexagons centered on nodes of the two lattices, one with coordinates \((i,k)\) in the basis \((\mathbf{e_{1}},\mathbf{e_{3}})\) (red point) and one with coordinates \((j,l)\) in \((\mathbf{e_{2}},\mathbf{e_{4}})\) (blue points). Their common domain is shown with its characteristic point (black point), corresponding to the middle point between the two centers of the overlapping hexagons. This point is a vertex of the quasiperiodic tiling. Figure 4: Left: black points are the characteristic points of domains resulting from the overlap of hexagons. The figure is the Delaunay triangulation of this set of points (dual of the Voronoi partition). Right: tiles of the quasiperiodic tiling are regular triangles, squares and rhombuses. This is derived from the Delaunay triangulation by suppressing edges whose length is not 1/2 (diagonals of squares and rhombuses, and lines at the border). Figure 3: Overlapping domains of two hexagons. The central red hexagon, centered on the central red point, intersects blue hexagons centered on vertices of the other lattice (blue points). Reference points (black points) of domains (vertices of the tiling) are mid-points of the vectors connecting the central red point to the centres of the blue hexagons. Edges of the tiling (in grey) join black points, for instance the one between the blue and the green domains. The vector joining the centres of the right and bottom blue hexagons, which are close neighbours, is a vector of the blue lattice of unit modulus. So it appears that the length of the edge between the reference point of the blue domain and that of the green domain is half of this unit modulus. That is the same for all edges of the quasiperiodic tiling. ## IV An example with hexagonal lattices but with less symmetric tiles for grids In order to get a grid, the tile which is reproduced by the lattice has to be a fundamental domain of the lattice. In the Stampfli example, the fundamental domain of the hexagonal lattice is a regular hexagon (a Voronoi cell of the lattice nodes). But we can also choose a rhombus formed by two glued regular triangles (the unit cell of the lattice). The resulting quasiperiodic tiling (figure 6) contains regular triangles, squares and rhombuses like the Stampfli one, but also stars made of three half rhombuses radiating around a small triangle. ## V Conclusion There are different methods which generate quasiperiodic tilings. The present one, based on grids of regular tessellations of the plane, is interesting as it makes a bridge between methods derived from projections of higher dimensional lattices and methods related to Moiré patterns. The advantage of this method is that it works directly in the plane and consists of two simple steps. First the overlapping domains and the set of their reference points are built. Then the quasiperiodic tiling is obtained from the Delaunay triangulation of this set of points after removing extra edges. Both steps can be implemented using Mathematica software. Figure 5: Left: the two square grids with intersections of edges. In the quasiperiodic tiling, vertices of grids lead to the two orientations of squares; overlaps between blue and red edges lead to rhombuses.
Center: black points are the characteristic points of domains resulting from the overlap of squares. The figure is the Delaunay triangulation of this set of points (dual of the Voronoi partition). Right: tiles of the quasiperiodic tiling are squares and rhombuses. This is derived from the Delaunay triangulation by suppressing edges whose length is not \(1/2\) (diagonals of squares and rhombuses, and lines at the border). This is the Ammann-Beenker tiling. Figure 6: Left: grids which are rhombuses formed by gluing two regular triangles. Intersections of edges form \(\pi/3\), \(\pi/2\) or \(\pi/6\) angles (with the same color code as in figure 1), here again related to regular triangles, squares or rhombuses (or half rhombuses) in the quasiperiodic tiling. Right: the quasiperiodic tiling resulting from this grid construction. Using the grid method, the types of tiles in the quasiperiodic tiling depend on the intersections between the edges of the tiles of the grids, which have to be fundamental domains of the considered lattices. For instance, using a hexagonal fundamental domain, the intersections are at \(\pi/6\) or \(\pi/2\) angles between grid tiles of the two grids, and at \(\pi/3\) angles at grid vertices. The ratio between the different types of tiles is the ratio between the different types of intersections. An open question is that fundamental domains can have very complex shapes, possibly fractal [10]; nevertheless, the number of types of intersections probably remains small.
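As a concrete illustration of the two-step procedure summarised above (collect the reference points of the overlap domains, then keep only the length-\(1/2\) edges of their Delaunay triangulation), here is a small Python sketch for the square-lattice (Ammann-Beenker) case. The use of shapely for the polygon overlaps, scipy for the triangulation, and the chosen patch size and tolerances are illustrative assumptions; the authors implement both steps in Mathematica.

```python
import numpy as np
from shapely.geometry import Polygon
from scipy.spatial import Delaunay

def unit_square(center, angle):
    """Unit square centred at `center`, rotated by `angle`."""
    c, s = np.cos(angle), np.sin(angle)
    R = np.array([[c, -s], [s, c]])
    pts = np.array([[.5, .5], [-.5, .5], [-.5, -.5], [.5, -.5]]) @ R.T + center
    return Polygon([tuple(p) for p in pts])

span = range(-6, 7)
rot = np.array([[np.cos(np.pi/4), -np.sin(np.pi/4)],
                [np.sin(np.pi/4),  np.cos(np.pi/4)]])
grid1 = [(np.array([i, j], float), unit_square(np.array([i, j], float), 0.0))
         for i in span for j in span]
grid2 = [(rot @ np.array([i, j], float),
          unit_square(rot @ np.array([i, j], float), np.pi/4))
         for i in span for j in span]

# Step 1: the reference point of each overlap domain is the midpoint of the
# centres of the two overlapping squares.
refs = np.array([(c1 + c2) / 2
                 for c1, p1 in grid1 for c2, p2 in grid2
                 if p1.intersects(p2) and p1.intersection(p2).area > 1e-9])
refs = np.unique(np.round(refs, 9), axis=0)

# Step 2: Delaunay triangulation of the reference points, keeping only the
# edges of length 1/2 (half the lattice unit vector).
tri = Delaunay(refs)
edges = set()
for simplex in tri.simplices:
    for a, b in ((0, 1), (1, 2), (0, 2)):
        i, j = sorted((simplex[a], simplex[b]))
        if abs(np.linalg.norm(refs[i] - refs[j]) - 0.5) < 1e-6:
            edges.add((i, j))
# `refs` and `edges` now describe a patch of the Ammann-Beenker tiling.
```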
2304.09119
Safety Guaranteed Manipulation Based on Reinforcement Learning Planner and Model Predictive Control Actor
Deep reinforcement learning (RL) has been endowed with high expectations in tackling challenging manipulation tasks in an autonomous and self-directed fashion. Despite the significant strides made in the development of reinforcement learning, the practical deployment of this paradigm is hindered by at least two barriers, namely, the engineering of a reward function and ensuring the safety guarantee of learning-based controllers. In this paper, we address these challenging limitations by proposing a framework that merges a reinforcement learning planner that is trained using sparse rewards with a model predictive controller (MPC) actor, thereby offering a safe policy. On the one hand, the RL planner learns from sparse rewards by selecting intermediate goals that are easy to achieve in the short term and promising to lead to target goals in the long term. On the other hand, the MPC actor takes the suggested intermediate goals from the RL planner as the input and predicts how the robot's action will enable it to reach that goal while avoiding any obstacles over a short period of time. We evaluated our method on four challenging manipulation tasks with dynamic obstacles and the results demonstrate that, by leveraging the complementary strengths of these two components, the agent can solve manipulation tasks in complex, dynamic environments safely with a $100\%$ success rate. Videos are available at https://videoviewsite.wixsite.com/mpc-hgg.
Zhenshan Bing, Aleksandr Mavrichev, Sicong Shen, Xiangtong Yao, Kejia Chen, Kai Huang, Alois Knoll
2023-04-18T16:33:58Z
http://arxiv.org/abs/2304.09119v2
Safety Guaranteed Manipulation Based on Reinforcement Learning Planner and Model Predictive Control Actor ###### Abstract Deep reinforcement learning (RL) has been endowed with high expectations in tackling challenging manipulation tasks in an autonomous and self-directed fashion. Despite the significant strides made in the development of reinforcement learning, the practical deployment of this paradigm is hindered by at least two barriers, namely, the engineering of a reward function and ensuring the safety guaranty of learning-based controllers. In this paper, we address these challenging limitations by proposing a framework that merges a reinforcement learning planner that is trained using sparse rewards with a model predictive controller (MPC) actor, thereby offering a safe policy. On the one hand, the RL planner learns from sparse rewards by selecting intermediate goals that are easy to achieve in the short term and promising to lead to target goals in the long term. On the other hand, the MPC actor takes the suggested intermediate goals from the RL planner as the input and predicts how the robot's action will enable it to reach that goal while avoiding any obstacles over a short period of time. We evaluated our method on four challenging manipulation tasks with dynamic obstacles and the results demonstrate that, by leveraging the complementary strengths of these two components, the agent can solve manipulation tasks in complex, dynamic environments safely with a \(100\%\) success rate. Videos are available at [https://videoviawsite.wiksite.com/mpc-hgg](https://videoviawsite.wiksite.com/mpc-hgg). ## I Introduction Deep reinforcement learning (RL) has been widely used to solve complex decision-making tasks in robotics, such as controlling robotic arms to perform manipulation tasks [1], generating agile locomotion gaits for legged robots [2], and planning trajectories for autonomous vehicles [3]. As there is no modeling computation or optimization involved, RL-based methods are superior to traditional control methods in solving long-horizon planning and dynamic tasks in a timely manner. However, RL methods are constantly facing two major challenges, namely, requiring handcrafted reward functions that are tailored to individual tasks and lacking rigorous guarantee to ensure the safety of operations. For the first challenge, in most complex robotic tasks, where a concrete representation of efficient or even admissible behavior is unknown, it is extremely difficult and time-consuming to design an adequate task tailored reward, thereby making this strategy impractical for wide robotic applications of RL. One promising concept to address the reward engineering problem is to use a binary reward to simply indicate the completion of the task based on its success or failure condition. This kind of binary reward is also known as a sparse reward and is easy to derive from task definition with minimum effort. Although a sparse reward is easy to derive from task definition with minimum effort, RL algorithms that support sparse rewards usually suffer from bad learning efficiency. This is because the sparse reward only delivers shallow and insufficient information during training, which can limit the performance of the RL algorithm. Hindsight experience replay (HER) [4] is one of fundamental methods which improves the success of off-policy RL algorithms in multigoal RL problems with sparse rewards. 
The core idea behind HER is to train an agent using handcrafted, easy-to-achieve intermediate goals, and then gradually increase the difficulty of the goals. To achieve this, HER constructs hindsight goals from previously achieved states, replays known trajectories with these hindsight goals, and uses the results to train a goal-dependent value function. Fig. 1: Overview of the proposed MPC-HGG framework. The RL planner performs long horizon planning by proposing intermediate goals. The MPC actor performs short horizon planning to ensure the safety of the action by avoiding dynamic obstacles. An extremely useful extension to HER is the hindsight goal generation (HGG) algorithm, which can generate more meaningful hindsight goals in the direction of the desired goal and accelerate the learning of distant targets [5]. However, HGG shows poor performance in the presence of obstacles since the distance-norm measure for the hindsight goal distribution does not account for the occupied area. For the second challenge, learning methods are often criticized for their limited interpretability and their poor adaptability to environments with uncertainty. Take manipulation tasks as an example: although robotic arms mostly operate in a relatively enclosed space, the unpredictable movement of human operators or a sudden intrusion of unknown objects poses a great safety threat to the completion of the task, or even to life and property. As an effective control approach, model predictive control (MPC) is extensively used for path planning and collision avoidance [6, 7]. Model predictive control requires a dynamic model of the environment to predict the future states and optimize control over a finite time horizon. This means that MPC provides online trajectory optimization with the ability to rapidly react to changes in the environment. To overcome these challenges, we propose a control framework that combines the advantages of RL-based methods in long-horizon planning and traditional model-based control methods in safety-guaranteed performance (see Figure 1). Specifically, the HGG algorithm is used as the high-level planner to propose intermediate goals for solving a long-horizon task, while the MPC algorithm is used as an actor to execute the sub-task, taking the intermediate goal proposed by the planner as the input. We show that the combination of both methods can guarantee collision avoidance in randomized dynamic environments. Our results demonstrate how the MPC actor can effectively address the issue of unreliable safety in an RL policy and how the RL policy can guide MPC through the infinite horizon. Our contributions to the literature are summarized as follows. First, we propose an RL planner that utilizes the hindsight goal generation algorithm to generate intermediate goals for long-term manipulation tasks. We enhance the collision avoidance capability of the HGG algorithm by introducing a multi-objective sparse reward concept. This reward function incentivizes not only reaching the goal, but also avoiding any obstacle collision, all while minimizing the required engineering effort. Second, we formulate an MPC actor that can optimize the trajectory of a robotic arm to reach the goal proposed by the RL planner and avoid colliding with any obstacle over a finite horizon. Last, we show that the proposed framework is able to solve complex manipulation tasks effectively and safely in the simulation and the real world. 
The controller solves all the tasks with a success rate of \(100\%\) and attains a real-time performance with less than \(3\) ms per timestep. ## II Related Works This section aims to provide a brief overview of the literature that explores the combination of model predictive control with reinforcement learning algorithms. A closely related work by [8] introduces the GO-MPC algorithm for autonomous navigation in crowded scenarios. The authors propose a pre-trained on-policy RL algorithm to provide long-term sub-goals to the local motion planner based on MPC. Hansen et al. [9] incorporate MPC and twin delayed deep deterministic policy gradient (TD3) to improve sample efficiency via model learning. The authors jointly train a terminal value function and a latent dynamics model, which are utilized for local trajectory optimization and global guidance, respectively. By leveraging the latent space representation, the learned model can reduce the state space by eliminating irrelevant features, such as background noise and shading. To address the problem of high computational costs over a long planning horizon, Negenborn et al. [10] propose to utilize the learned value function in the cost function of a conventional MPC controller. Bhardwaj et al. [11] propose a novel approach to capitalize on the advantages of both model-free and model-based methods. Specifically, they propose a Model Predictive Control (MPC) based Q-Learning algorithm that utilizes local optimization to improve the value function and demonstrate faster learning speeds with fewer system interactions. This method provides a promising alternative that combines the strengths of both model-based and model-free methods for control tasks. Greatwood et al. [12] employ MPC to regulate a quadrotor micro air vehicle (MAV) using guidance from a RL policy. The approach is designed to operate under the assumption of only rectangular obstacles and employs two distinct MPCs for each axis to effectively control the MAV. Xue et al. [13] leverage MPC within the deep deterministic policy gradient (DDPG) algorithm to predict the trajectory of dynamic obstacles. The proposed approach employs a complex reward function that comprises target attraction, obstacle repulsion, collision penalty, and reward for reaching the target. This technique demonstrates promise in navigating dynamic environments with obstacles. Another interesting approach was introduced by [14], in which the authors suggest learning a control policy directly in the real-world environment. The approach utilizes an additional safe policy that can be triggered to return the robot to a safe state during training. An approximated dynamics model is then used to decide when to revert back to the learning policy to continue the training. MPC is employed to achieve more stable operation due to the reduced action space and to avoid direct control of motor torques. This methodology offers a promising technique for training control policies in real-world environments while ensuring safety. ## III Preliminaries ### _Goal-Conditioned RL_ In goal-conditioned RL, an agent interacts with its environment to reach some goals, which can be modeled as a goal-conditioned Markov decision process (MDP) with a state space \(\mathcal{S}\), an action space \(\mathcal{A}\), a goal space \(\mathcal{G}\), a probabilistic transition function \(P:S\times\mathcal{A}\rightarrow\mathcal{S}\), a reward function \(r_{g}:\mathcal{S}\times\mathcal{A}\rightarrow\mathbb{R}\), and a discount factor \(\gamma\). 
The agent's action \(a_{t}\) is defined by a probabilistic policy \(\pi(s_{t}||g)\) at every time step \(t\), given the current state \(s_{t}\) and the goal \(g\) (we use \(||\) as a symbol for concatenation into \(\mathcal{S}\times\mathcal{G}\)). The goal is to find a policy that maximizes the expected cumulative reward starting from an initial state sampled from the initial state distribution \(s\in\mathcal{S}_{0}\), which is defined as \[V^{\pi}(s||g)=\mathbb{E}_{s_{0},a_{t}\sim\pi(s_{t}||g),\,s_{t+1}\sim P(s_{t},a_{t})}\big[\sum_{t=0}^{\infty}\gamma^{t}r_{g}(s_{t},a_{t})\big]. \tag{1}\] ### _Hindsight Experience Replay_ Hindsight Experience Replay (HER [4]) is an RL algorithm specifically engineered for goal-oriented tasks characterized by sparse rewards, which are often difficult for agents to learn efficiently. Despite its simplicity, HER has been demonstrated to be a highly effective method for improving agent performance in these challenging scenarios. HER is designed to enhance learning efficiency by using a relabeling approach that exploits the idea that experiences that are uninformative for a given goal may still contain valuable information for other goals. HER assumes that, in a multi-goal RL task with sparse rewards, each goal \(g\) is associated with a predicate \(f_{g}:\mathcal{S}\rightarrow\{0,1\}\). Once the agent reaches a state \(s\) that satisfies \(f_{g}(s)=1\), it is considered that the goal has been achieved. The reward function is defined as sparse if it satisfies \(r_{g}(s,a)=-[f_{g}(s)=0]\). This implies that until the agent reaches the goal, it continuously receives negative rewards. In HER, each transition \((s_{t}||g,a_{t},r_{t},s_{t+1}||g)\) is not only stored with the original episode goal \(g\), but also with a subset of hindsight goals \(g^{\prime}\) as \((s_{t}||g^{\prime},a_{t},r_{t},s_{t+1}||g^{\prime})\). As a result, when replaying the resulting transitions \((s_{t}||g^{\prime},a_{t},r_{t},s_{t+1}||g^{\prime})\), the agent is more likely to encounter informative rewards. An interpretation of HER is that it acts as an implicit curriculum, focusing initially on simpler intermediate goals and subsequently progressing towards more challenging goals that are nearer to the ultimate target goals. ## IV Methodology This section first gives an overview of our proposed algorithm MPC-HGG. Then we explain the RL planner and the MPC actor in detail. Finally, we summarize MPC-HGG with its pseudocode. ### _Overview_ The overall architecture of the MPC-HGG algorithm is shown in Figure 1. The algorithm is briefly explained in two phases as follows. 1. In the first stage, we design an RL planner that can solve complex, long-horizon planning manipulation tasks via a curriculum learning approach. This RL controller can adapt itself to multi-goal tasks, but is not able to guarantee the safety of the proposed action. 2. In the second stage, we develop an MPC actor that can provide safe actions to reach an intermediate goal in a short planning horizon. The intermediate goals are suggested by the RL planner from stage one. ### _RL Planner_ Inspired by HER [4] and HGG [5], we train the RL planner in the following fashion. The episode starts with sampling an initial state-goal pair \((s_{0},g)\) from an initial state distribution \(\mathcal{S}_{0}\) and a target goal distribution \(\mathcal{G}_{T}\). A state \(s\) can be mapped to a goal \(g_{s}\) by \(g_{s}=m(s)\). 
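To make the relabeling step described above concrete, the following minimal sketch (Python) stores each transition once with the episode goal and again with hindsight goals taken from states achieved later in the same episode. The \(0.05\) m success threshold, the replay-buffer layout, and the "future" goal-sampling strategy are illustrative assumptions rather than the authors' implementation.

```python
import numpy as np

def sparse_reward(achieved_goal, desired_goal, threshold=0.05):
    # Sparse reward r_g(s) = -[f_g(s) = 0]: 0 once the goal predicate holds, -1 otherwise.
    return 0.0 if np.linalg.norm(achieved_goal - desired_goal) <= threshold else -1.0

def her_relabel(episode, k=4, rng=np.random.default_rng(0)):
    """Augment an episode with hindsight transitions (s_t || g', a_t, r', s_{t+1} || g').

    `episode` is a list of dicts with keys: state, action, next_state, achieved_goal, goal.
    """
    buffer = []
    T = len(episode)
    for t, tr in enumerate(episode):
        # transition with the original episode goal g
        buffer.append({**tr, "reward": sparse_reward(tr["achieved_goal"], tr["goal"])})
        # hindsight transitions: replace g by goals g' = m(s_j) that were actually reached later
        for j in rng.integers(t, T, size=min(k, T - t)):
            g_prime = episode[j]["achieved_goal"]
            buffer.append({**tr, "goal": g_prime,
                           "reward": sparse_reward(tr["achieved_goal"], g_prime)})
    return buffer

# toy usage: a three-step episode in a 2-D goal space
episode = [{"state": None, "action": None, "next_state": None,
            "achieved_goal": np.array([0.1 * t, 0.0]), "goal": np.array([1.0, 0.0])}
           for t in range(3)]
print(len(her_relabel(episode)))  # original plus relabelled transitions
```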
At the beginning of the learning stage, the exploration is random since no meaningful policy has been developed yet. Exploration naturally starts from \(s_{0}\sim\mathcal{S}_{0}\), thus goals that are close to \(m(s_{0})\) are reached more easily. The agent can leverage the generalization capabilities inherent in neural networks, enabling it to extrapolate from past experiences and extend its ability to achieve goals similar to those previously reached. The idea is illustrated in Figure 2. The distribution \(\mathcal{T}^{*}:\mathcal{G}\times\mathcal{S}\rightarrow\mathbb{R}\) determines how these initial state-goal pairs are sampled. Instead of optimizing the value function \(V^{\pi}\) with difficult target goals, which carries the risk of being too far from the known goals, we try to optimize with a set of intermediate goals sampled from \(\mathcal{T}\). On the one hand, the goals contained in \(\mathcal{T}\) should be easy to reach, which requires a high \(V^{\pi}(\mathcal{T})\). On the other hand, goals in \(\mathcal{T}\) should be close enough to \(\mathcal{T}^{*}\) to be challenging for the agent. Inspired by HGG [5], we adopt a guided schedule for selecting suitable intermediate goals \(g^{\prime}\in\mathcal{G}\) that will be used by the agent instead of \(g\in\mathcal{G}_{T}\). This will guide the agent to learn from easy to difficult, so it can learn to reach the goals from \(\mathcal{G}_{T}\) gradually. Therefore, it is necessary to find a substitute distribution \(\mathcal{T}:\mathcal{G}\times\mathcal{S}\rightarrow\mathbb{R}\) which chooses appropriate intermediate goals. On the one hand, such goals must be close to goals that the agent can already reach, and on the other hand, they should still keep some distance from already achieved goals so that the agent learns something new and approaches the final goal. This trade-off can be formalized as \[\max_{\mathcal{T},\pi}V^{\pi}(\mathcal{T})-L\cdot\mathcal{D}(\mathcal{T}^{*},\mathcal{T}). \tag{2}\] The Lipschitz constant \(L\) is treated as a hyper-parameter. In practice, to select these goals, we first approximate \(\mathcal{T}^{*}\) by taking \(K\) samples from \(\mathcal{T}^{*}\) and storing them in \(\hat{\mathcal{T}}^{*}\). Then, for an initial state and goal \((\hat{s}^{i}_{0},\hat{g}^{i})\in\hat{\mathcal{T}}^{*}\), we select a trajectory \(\tau=\{s_{t}\}_{t=1}^{T}\) that minimizes the following function: \[\begin{split} w(\hat{s}^{i}_{0},\hat{g}^{i},\tau):=c&\|m(\hat{s}^{i}_{0})-m(s_{0})\|\\ &+\min_{s_{t}\in\tau}\left(\|\hat{g}^{i}-m(s_{t})\|-\frac{1}{L}V^{\pi}(s_{0}\|m(s_{t}))\right).\end{split} \tag{3}\] \(c>0\) provides a trade-off between 1) the distance between target goals and 2) the distance between the goal representations of the initial states. Finally, from each of the \(K\) selected trajectories \(\tau^{i}\), the hindsight goal \(g^{i}\) is selected from the state \(s^{i}_{t}\in\tau^{i}\) that minimizes (3). More formally, \[g^{i}:=\underset{s_{t}\in\tau}{\text{arg min}}\left(\|\hat{g}^{i}-m(s_{t})\|-\frac{1}{L}V^{\pi}(s_{0}\|m(s_{t}))\right). \tag{4}\] Through experimentation, it was observed that the agent was able to achieve the target goal despite encountering obstacles and colliding during task execution, which should be punished as we expect the agent to complete the task while avoiding collisions. Prior work addresses this issue by creating a dense reward signal linked to specific obstacle measurements to ensure collision-free movement. However, designing such a well-crafted reward tailored to a specific task is challenging and demands time and effort. 
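Before turning to the reward design, the goal-selection rule of (3)-(4) can be written out directly. The sketch below (Python) is only a minimal illustration under assumed placeholders: `value_fn` stands for the learned value function \(V^{\pi}\), `m` for the state-to-goal mapping, and the hyper-parameters \(L\) and \(c\) are arbitrary example values.

```python
import numpy as np

def goal_score(target_goal, s, value_fn, m, s0, L):
    # the quantity minimized in Eq. (4): ||g_hat - m(s_t)|| - (1/L) V^pi(s_0 || m(s_t))
    return np.linalg.norm(target_goal - m(s)) - value_fn(s0, m(s)) / L

def select_hindsight_goal(target_goal, trajectory, value_fn, m, s0, L=5.0):
    # Eq. (4): return m(s_t) for the trajectory state with the smallest score
    scores = [goal_score(target_goal, s, value_fn, m, s0, L) for s in trajectory]
    return m(trajectory[int(np.argmin(scores))])

def trajectory_cost(target_s0, target_goal, trajectory, value_fn, m, s0, L=5.0, c=3.0):
    # Eq. (3): initial-state distance plus the best per-state score of the trajectory;
    # HGG matches the K sampled target tasks to K stored trajectories using these costs
    init_term = c * np.linalg.norm(m(target_s0) - m(s0))
    best = min(goal_score(target_goal, s, value_fn, m, s0, L) for s in trajectory)
    return init_term + best

# toy usage in a 2-D goal space with a dummy value function
m = lambda s: s                                      # states already live in goal space here
value_fn = lambda s0, g: -np.linalg.norm(g - m(s0))  # placeholder for the learned V^pi
traj = [np.array([0.0, 0.0]), np.array([0.4, 0.1]), np.array([0.8, 0.2])]
print(select_hindsight_goal(np.array([1.0, 0.0]), traj, value_fn, m, traj[0]))
print(trajectory_cost(traj[0], np.array([1.0, 0.0]), traj, value_fn, m, traj[0]))
```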
We suggest a way to balance the advantages of using sparse rewards with the need for task-oriented behavior. Our solution is a multi-objective conditioned binary reward function that assigns different magnitudes to each objective. Specifically, the multi-objective sparse reward is defined as \[r_{g}(s):=\begin{cases}\eta,&\text{if collision}\\ 0,&\text{if }f_{g}(s)=1\\ -1,&\text{otherwise.}\end{cases} \tag{5}\] The reward \(\eta\), as a hyperparameter, is a constant negative value that is tuned to penalize collisions. If the agent encounters an obstacle, it will receive a reward of \(\eta<-1\). However, if the agent does not collide with any obstacles but still fails to reach the goal, it will receive a reward of \(-1\). The agent will only receive a reward of \(0\) when it successfully reaches the goal. The pseudocode of the HGG algorithm is shown in Algorithm 1. ``` 1:Given: * An off-policy RL algorithm \(\mathbb{A}\), \(\triangleright\) e.g. DDPG * A strategy \(\mathbb{S}\) for sampling goals for replay, \(\triangleright\) e.g. \(\mathbb{S}(s_{0},...,s_{T})=m(s_{T})\) * A set of reward functions \(r_{g}:\mathcal{S}\times\mathcal{A}\rightarrow\mathbb{R}\)\(\triangleright\) e.g. \(r_{g}(s,a)=-\mid f_{g}(s)==0\mid\) 2:Initialize \(\mathbb{A}\) 3:Initialize replay buffer \(R\) 4:for\(iteration\)do 5: Construct a set of \(M\) intermediate tasks \(\{(\hat{s}_{0}^{i},g^{i})\}_{i=1}^{M}\): \(\triangleright\) HGG * Sample target tasks \(\{(\hat{s}_{0}^{i},\hat{g}^{i})\}_{i=1}^{K}\sim\mathcal{T}^{*}\) * Find \(K\) distinct trajectories \(\{\tau^{i}\}_{i=1}^{K}\) that together minimize (3) via weighted bipartite matching * Find \(M\) intermediate tasks \((\hat{s}_{0}^{i},g^{i})\) by selecting an intermediate goal \(g^{i}\) from each \(\tau^{i}\) 6:for\(episode=1\), \(M\)do 7:\((s_{0},g)\leftarrow(\hat{s}_{0}^{i},g^{i})\)\(\triangleright\) hindsight goal-oriented exploration 8:for\(t=0,T-1\)do 9: Sample an action \(a_{t}\) using the policy from \(\mathbb{A}\) with noise: \[a_{t}\leftarrow\pi(s_{t}\parallel g)+\mathcal{N}_{t}\] (6) 10: Execute action \(a_{t}\) and observe new state \(s_{t+1}\) 11:endfor 12:for\(t=0,T-1\)do 13:\(r_{t}:=r_{g}(s_{t},a_{t})\) 14: Store transition \((s_{t}\parallel g,\;a_{t},\;r_{t},\;s_{t+1}\parallel g)\)\(\triangleright\) DDPG experience replay 15: Sample a set of additional goals for replay \(G:=\mathbb{S}(current\ episode)\) 16:for\(g^{\prime}\in G\)do 17:\(r^{\prime}:=r_{g^{\prime}}(s_{t},a_{t})\) 18: Store the transition \((s_{t}\parallel g^{\prime},\;a_{t},\;r^{\prime},\;s_{t+1}\parallel g^{\prime})\) in \(R\)\(\triangleright\) HER 19:endfor 20:endfor 21:endfor 22:for\(t=1,N\)do 23: Sample a minibatch \(B\) from the replay buffer \(R\)\(\triangleright\) HER or EBP 24: Perform one step of optimization using \(\mathbb{A}\) and minibatch \(B\)\(\triangleright\) DDPG 25:endfor 26:endfor ``` **Algorithm 1** Hindsight Goal Generation (HGG) ### _MPC Actor_ Model predictive control, which is a model-based method for trajectory planning, uses a dynamic model of the system to predict the future position of the agent and optimizes the sequence of control inputs to achieve lower costs. #### Iii-C1 Definition Our proposed model is designed based on the idea that modern industrial robots have optimization algorithms that can quickly convert a desired location into specific torque rates for each joint of the robot arm. This allows for precise movement of the arm to the desired location. 
As the action space used in the RL planner is the gripper's coordinates \([x,y,z]\) of the robotic arm, we use a simple point-mass model for the MPC actor: \[\dot{x} =v_{x}, \dot{v}_{x}=\frac{F_{x}}{m} \tag{7}\] \[\dot{y} =v_{y}, \dot{v}_{y}=\frac{F_{y}}{m}\] \[\dot{z} =v_{z}, \dot{v}_{z}=\frac{F_{z}}{m}\] We define the state vector as \(x=[x,y,z,v_{x},v_{y},v_{z}]\in\mathcal{X}=\mathbb{R}^{6}\) and the control input vector as \(u=[F_{x},F_{y},F_{z},\xi]\in\mathcal{U}=\mathbb{R}^{4}\). \(\xi\) is used to soften the hard constraints. \(m\) is the mass of the manipulable object. Fig. 2: Visualization of the intermediate goal distributions generated by the hindsight goal generation. #### Iii-C2 Formulation We define the problem as a non-convex, finite-time nonlinear optimal control problem with horizon length \(N\), which takes the following form: \[\begin{array}{llll}\text{minimize}&\sum_{k=1}^{N}f_{k}(z_{k},p_{k})&\text{cost function}\\ \text{subject to}&z_{1}(\mathcal{I})=z_{\text{init}}&\text{initial equality}\\ &\underline{z}_{k}\leq z_{k}\leq\bar{z}_{k}&\text{upper-lower bounds}\\ &\underline{h}_{k}\leq h_{k}(z_{k},p_{k})\leq\bar{h}_{k}&\text{nonlinear constraints}\end{array} \tag{8}\] where \(z_{k}\in\mathcal{Z}=\mathcal{U}\times\mathcal{X}=\mathbb{R}^{10}\) is a stage variable which stacks the input and differential state variables together. \(p_{k}\in\mathcal{G}\times\mathcal{O}_{i,k}\), \(\forall i\in\{1,\dots,N_{o}\}\) contains the real-time data, such as the current goal and obstacles. \(f_{k}(z_{k},p_{k})\) represents the cost function, which should be minimized during the optimization process, and \(h_{k}(z_{k},p_{k})\) represents the nonlinear constraint function used for collision avoidance. For each of these variables, the respective inequalities must be satisfied during each optimization step. #### Iii-C3 Cost function The cost function \(f_{k}(z_{k},p_{k})\) is formulated as follows: \[f_{k}(z_{k},p_{k})=\left\{\begin{array}{ll}w_{1}\|\mathbf{s}-\mathbf{g}\|_{2}^{2}+w_{5}\xi^{2}\\ +w_{2}F_{x}^{2}+w_{3}F_{y}^{2}+w_{4}F_{z}^{2}&\text{for}&k<N\\ f_{k-1}(z_{k},p_{k})\\ +w_{6}v_{x}^{2}+w_{7}v_{y}^{2}+w_{8}v_{z}^{2}&\text{for}&k=N\end{array}\right. \tag{9}\] where we separate it into the stage cost and the terminal cost. The stage cost includes the Euclidean distance from the current position \(\mathbf{s}\) to the target goal \(\mathbf{g}\) and a penalization of the control commands to achieve a smooth trajectory. For the terminal cost, the robot should reduce the velocity to zero when reaching the goal. We denote \(w_{i}\) as weights \(\forall i\in\{1,\dots,8\}\), which should be fine-tuned to prioritize the penalties. #### Iii-C4 Constraints The constraint for avoiding an obstacle is defined in the following form: \[\begin{split} h_{rect}(s_{k},s_{o},dim_{o})&=\frac{1}{2}\max\left(\frac{|x-x_{o}|}{w_{o}+w_{r}},\frac{|y-y_{o}|}{h_{o}+h_{r}},\frac{|z-z_{o}|}{d_{o}+d_{r}}\right)\\ & s_{k}=[x,y,z]\in\mathbb{R}^{3}\\ & s_{o}=[x_{o},y_{o},z_{o}]\in\mathcal{P}\\ & dim_{o}=[w_{o},h_{o},d_{o}]\in\mathcal{D}\end{split} \tag{10}\] where \(w_{r}\), \(h_{r}\), \(d_{r}\) are the width, height and depth of the rectangle describing the manipulated box together with the gripper, and \(w_{o}\), \(h_{o}\), \(d_{o}\) are the width, height and depth of the bounding box of the obstacle. Figure 3 illustrates the proposed hard-constraint definition. 
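Putting (7)-(10) together, the sketch below (Python) illustrates the quantities the solver works with: an Euler-discretized point-mass step, the stage cost, and the rectangular collision measure. The time step, mass, weights and box dimensions are illustrative assumptions, not the values used in the paper, and the non-smooth \(\max\) is kept here for clarity even though the solver requires a smooth surrogate.

```python
import numpy as np

DT, MASS = 0.05, 1.0  # integration step [s] and object mass [kg] (illustrative values)

def point_mass_step(x, u):
    # Euler-discretized dynamics of Eq. (7): x = [p, v] in R^6, u = [F_x, F_y, F_z, xi] in R^4
    pos, vel, force = x[:3], x[3:], u[:3]
    return np.concatenate([pos + DT * vel, vel + DT * force / MASS])

def stage_cost(x, u, goal, w=(1.0, 1e-3, 1e-3, 1e-3, 10.0)):
    # Stage cost of Eq. (9): squared distance to the goal, slack penalty, and control effort
    w1, w2, w3, w4, w5 = w
    return (w1 * np.sum((x[:3] - goal) ** 2) + w5 * u[3] ** 2
            + w2 * u[0] ** 2 + w3 * u[1] ** 2 + w4 * u[2] ** 2)

def rect_constraint(pos, obs_pos, obs_dim, robot_dim):
    # Collision measure of Eq. (10) for a box-shaped obstacle; the optimizer keeps this
    # value above its lower bound. The hard max is shown here for readability only.
    ratios = np.abs(np.asarray(pos) - np.asarray(obs_pos)) / (
        np.asarray(obs_dim) + np.asarray(robot_dim))
    return 0.5 * np.max(ratios)

# toy usage: one predicted step towards a goal with a static box obstacle nearby
x = np.zeros(6)
u = np.array([0.5, 0.0, 0.0, 0.0])
goal = np.array([0.3, 0.0, 0.0])
x_next = point_mass_step(x, u)
print(stage_cost(x_next, u, goal))
print(rect_constraint(x_next[:3], [0.15, 0.0, 0.0], [0.05] * 3, [0.03] * 3))
```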
Since the \(\max\) operator is not differentiable and thus not suitable for the gradient descent algorithms used in the MPC controller, we propose to approximate it using the smooth maximum [15]: \[\begin{split}\mathcal{S}_{\alpha}(x_{1},\dots,x_{n})&=\frac{\sum_{i=1}^{n}x_{i}e^{\alpha x_{i}}}{\sum_{i=1}^{n}e^{\alpha x_{i}}}\\ &\mathcal{S}_{\alpha}\rightarrow\max\text{ as }\alpha\rightarrow\infty\end{split} \tag{11}\] ``` 1:Given: RL pre-trained policy \(\pi_{\theta}\), observation \(s_{0}\), horizon \(N\) and goal \(g\) 2:\(u_{0}\leftarrow(0,0,0)\) 3:\(m\gets 1\) 4:for\(t\gets 0\) to \(T-1\)do 5:\(p_{t}\gets getPos(s_{t})\) 6:\(v_{t}\gets getVel(s_{t})\) 7:\(a_{t}\leftarrow\pi_{\theta}(s_{t}\mid g)\)\(\triangleright\) Get RL action 8:\(g_{t}\gets p_{t}+a_{t}\)\(\triangleright\) Convert action to a new target 9:if\(\|\mathbf{p}_{t}-\mathbf{g}\|_{2}\leq Nv_{\max}d_{t}\)then\(\triangleright\) Use MPC directly when goal is reachable 10:\(g_{t}\gets g\) 11:endif 12:\(z_{1}\gets u_{t}\times p_{t}\times v_{t}\)\(\triangleright\) Setup first MPC state to start solver from 13:\(p\leftarrow\{g_{t}\times(o_{k}^{i})_{i=1}^{N_{o}}\}_{k=1}^{N}\)\(\triangleright\) Setup MPC parameters 14:\(U_{t}\gets minimize\sum_{k=1}^{n}f_{k}(z_{k},p_{k})\)\(\triangleright\) Solve MPC 15:if\(U_{t}\) is feasible then\(\triangleright\) A feasible solution found 16:\(u_{t}\gets U_{t,1}\)\(\triangleright\) Take first MPC action 17:\(a_{t}^{\prime}\gets getAction(u_{t},a_{t})\)\(\triangleright\) Convert MPC action to the MuJoCo action 18:else 19:\(m\gets m+1\) 20:if\(m<N\)then 21:\(u_{t}\gets U_{t-1,m}\)\(\triangleright\) Try next prediction from the previous solution 22:\(a_{t}^{\prime}\gets getAction(u_{t},a_{t})\) 23:else 24:\(a_{t}^{\prime}\gets noAction(a_{t})\)\(\triangleright\) Decelerate robot 25:endif 26:endif 27: Perform \(a_{t}^{\prime}\), get \(s_{t+1}\)\(\triangleright\) Perform action in the simulator and get a new observation 28:endfor ``` **Algorithm 2** MPC-HGG algorithm ### _MPC-HGG Algorithm_ The overall MPC-HGG algorithm is provided as Algorithm 2. First, we extract the proposed RL action based on the current observation and convert it to a new intermediate goal for the MPC in line 7. If the task can be solved within the planning horizon, we provide the main goal directly to the MPC in line 10. Note that we still need the RL action to control the gripper. Once a sub-goal is selected by the RL planner, the MPC can be formalized in line 13. We choose the horizon length \(N\) for the MPC actor, so that the robot is able to reach the proposed position within the planning horizon \(t_{p}\) and to stop there. It is also important to choose the right integration time step \(t_{i}<t_{p}\), so that the robot will still be able to compute the next action within the time of the movement \(t_{i}\). Then we solve the MPC problem in line 14. Finally, we perform the optimized action in the simulation (in line 27) and receive a new observation which is used in the next iteration. Fig. 3: Hard constraint definition for the rectangle-shaped obstacles in 2D. ## V Experiments In this section, we present an empirical evaluation of the performance of MPC-HGG in comparison to HGG and vanilla MPC across four distinct MuJoCo environments. These environments are variants of the Fetch gripper environments introduced by [16], and are characterized by the presence of long-horizon goal-reaching tasks with static or dynamic obstacles. 
### _Simulations_ All our tasks are simulated in MuJoCo [17] and also performed in the real world, where a Panda robot with a gripper is controlled to push a puck through environments with dynamic obstacles (see Figure 4). These tasks share the following characteristics. First, the agent receives a state containing the joint positions and velocities of the robotic arm. This information is directly retrieved from the simulation or from the robotic arm. Second, the robot is controlled by a three-dimensional vector describing the end effector's position. In the case of enabled gripper control, the gripper's opening control parameter is added as the fourth component. Third, the accessible goal space \(\mathcal{G}_{A}\) is defined by a 2D region on the table. Fourth, we obtained the positional information of obstacles directly from the simulation. In contrast, in the physical experiment, a side-view camera was employed to capture the relevant positional data. Notably, we determined the distance threshold required to ascertain the successful attainment of a goal to be \(0.05\) m. The difficulty of the four tasks is gradually increased, and a brief description of each environment is given as follows. 1. _DynamicSquareObstacles_ (see Figure 4(a)): In this environment, there are two dynamic square obstacles that can move linearly in two directions with a velocity randomly sampled between \(0.6\) m/s and \(0.9\) m/s. The direction of the movement is also randomized. The transparent red regions represent the area that can be blocked by the obstacles. 2. _DynamicMixedObstacles_ (see Figure 4(b)): In this environment, there is one dynamic square obstacle and one large static rectangular obstacle. Both obstacles are sampled randomly inside the red region. The square obstacle is moving with a velocity sampled between \(0.6\) m/s and \(0.9\) m/s. 3. _DynamicRectObstacles_ (see Figure 4(c)): In this environment, the speed of the rectangular obstacle is chosen randomly from the interval between \(0.2\) m/s and \(0.6\) m/s. 4. _DynamicLiftedObstacles_ (see Figure 4(d)): In this environment, a rectangular static obstacle is placed under one of the dynamic obstacles. Therefore, the robot must lift the object above the static obstacle and subsequently lower it down to the goal position. Dynamic obstacles are sampled randomly inside the red region. Fig. 4: Robotic manipulation environments in the simulation and the real world. Figure 5 shows the testing success rate of HGG, MPC, and MPC-HGG, which is calculated by averaging the performance of the best policy from each algorithm in \(100\) episodes. These tests have a tolerance parameter \(N\in\{0,1,2\}\) for the number of collisions that can be allowed per episode. If the number of collisions surpasses \(N\), then the episode is terminated as a failure. The most remarkable result can be observed in all four environments: MPC-HGG can learn safe obstacle-avoiding behavior with a success rate of \(100\%\) across all values of the tolerance parameter, while the other algorithms are not able to solve the tasks without collisions. In the _DynamicSquareObstacles_ environment (see Figure 5(a)), the HGG controller achieves a success rate around \(75\%\) when \(N=0\) and slightly increases its performance when \(N=1\) and \(N=2\). Since the MPC controller is a deterministic method, it demonstrates a consistently stable performance without any associated error bars. 
The proposed MPC-HGG controller combines the advantage of long-horizon planning of the HGG controller with the short-horizon safety guarantee of the MPC controller, leading to a highly effective solution with a remarkable success rate of \(100\%\). Similar results can also be observed in the other three scenarios (see Figure 5 (b,c,d)). Figure 6 shows a successful obstacle-avoiding behavior in the _DynamicSquareObstacles_ environment. At step \(15\), the subgoals proposed by the RL planner (marked by "\(\times\)") lead to an intersection between the puck and the second dynamic obstacle. Owing to the MPC actor, the agent predicts the approach of the obstacle and calculates a safe path by shifting its moving direction away from the obstacle (see Figure 6(c), step 19). At step \(25\), the puck is successfully placed at the goal position (see Figure 6(d)). The simulation videos show the obstacle-avoidance movements in detail 1. Footnote 1: [https://videoviewsite.wixsite.com/mpc-hgg](https://videoviewsite.wixsite.com/mpc-hgg) Fig. 5: Success rates of the collision avoidance testing of MPC-HGG, HGG, and the MPC controller. Fig. 6: Example of a resolved collision in the _DynamicSquareObstacles_ environment. The red dot represents the goal position and the cross marks are the intermediate goals suggested by the RL planner. The dashed blue circles are the current position of the obstacle and the dashed green circles are the predicted states in the near future. ### _Real-world Experiments_ As illustrated in Figure 4(e), the real-world experiment setup uses a Franka Emika Panda robotic arm and an Intel RealSense camera to obtain the coordinates of the manipulable object. Two lead screws that carry white blocks are controlled by stepper motors and act as the dynamic obstacles. The control rate is consistent with the simulation, which is \(20\) Hz across all experiments. ForcesPro [18] is used as the MPC solver for fast computation. The overall computational time is less than \(3\) ms for a planning horizon of \(8\) steps. Mirroring the four simulated environments, we also create four real-world environments featuring the Panda robotic arm with the obstacles (see Figure 4). It should be noted that unlike the simulation, which obtains the coordinates of the obstacles directly, in the real world we gather this information with a camera by tracking the ArUco markers attached to the moving obstacles. We took the policy directly from each of the tasks trained in the simulation and deployed it in the real world without any fine-tuning. Inspired by the experiment performed by HER [4], we also add Gaussian noise to the observed object's position during policy training to compensate for the small errors introduced by the camera, which can increase the success rate of these tasks. The performance on these four tasks demonstrates that the policy can be successfully transferred to the corresponding tasks in the real world. In each scenario, the task is performed three times by selecting different goal locations. As shown in the video, the robot arm can always successfully approach the target position in all trajectories while actively avoiding the dynamic obstacles. Once the MPC actor anticipates a collision, it will control the robotic arm to move away from the moving direction of the obstacle and replan its trajectory to the target goal position. The real-world experiment videos can be found at the project's website. 
As shown in the video, the robot arm can always successfully avoid the obstacles and approach the target position in every trial. ## VI Conclusion In this paper, we have introduced a safety-guaranteed planning framework for manipulation tasks based on an RL planner and an MPC actor. We showed that our RL planner is able to choose a sequence of intermediate goals that culminate in reaching the final targets while avoiding the dynamic obstacles encountered along the way. We also demonstrated that our MPC actor is able to accomplish the designated goals proposed by the RL planner while ensuring no collision with dynamic obstacles. For future work, we will explore more advanced methods to ensure the safety of the manipulation operation. ## Acknowledgment This project/research has received funding from the European Union's Horizon 2020 Framework Programme for Research and Innovation under the Specific Grant Agreement No.945539 (Human Brain Project SGA3). The authors also acknowledge the financial support by the Bavarian State Ministry for Economic Affairs, Regional Development and Energy (StMWi) for the Lighthouse Initiative KI.FABRIK (Phase 1: Infrastructure as well as the research and development program under grant no. DIK0249).
2304.00883
Conjugacy classes of real analytic one-dimensional maps are analytic connected manifolds
An important question is to describe topological conjugacy classes of dynamical systems. Here we show that within the space of real analytic one-dimensional maps with critical points of prescribed order, the conjugacy class of a map is a real analytic manifold. This extends results of Avila-Lyubich-de Melo \cite{ALM} for the quasi-quadratic unimodal case and of Clark \cite{C} for the more general unimodal case. Their methods fail in the case where there are several critical points, and for this reason we introduce the new notions of {\em pruned Julia set} of a real analytic map, and associate to a real analytic map an {\em external map} of the circle {\em with discontinuities} and {\em a pruned polynomial-like} complex extension of the real analytic map. Using this we are also able to show that topological conjugacy classes are connected (something which was not even known in the general unimodal setting). Even more, this space is contractible. In a companion paper, further applications of this paper will be given. It will be shown that within any real analytic family of real analytic one-dimensional maps, hyperbolic parameters form an open and dense subset.
Trevor Clark, Sebastian van Strien
2023-04-03T11:06:44Z
http://arxiv.org/abs/2304.00883v1
# Conjugacy classes of real analytic one-dimensional maps are analytic connected manifolds ###### Abstract. An important question is to describe topological conjugacy classes of dynamical systems. Here we show that within the space of real analytic one-dimensional maps with critical points of prescribed order, the conjugacy class of a map is a real analytic manifold. This extends results of Avila-Lyubich-de Melo [ALM] for the quasi-quadratic unimodal case and of Clark [C] for the more general unimodal case. Their methods fail in the case where there are several critical points, and for this reason we introduce the new notions of _pruned Julia set_ of a real analytic map, and associate to a real analytic map an _external map_ of the circle _with discontinuities_ and _a pruned polynomial-like_ complex extension of the real analytic map. Using this we are also able to show that topological conjugacy classes are connected (something which was not even known in the general unimodal setting). Even more, this space is contractible. In a companion paper, further applications of this paper will be given. It will be shown that within any real analytic family of real analytic one-dimensional maps, hyperbolic parameters form an open and dense subset. The authors were supported by ERC AdG RGDD No 339523. We are grateful to K. Drach for many helpful comments, and D. Preiss and M. Lyubich for several useful discussions. Part of this paper was written during the spring 2022 programme at MSRI on holomorphic dynamics. ###### Contents * 1 Introduction and statement of results 
* 16 Hybrid-conjugacy in the parabolic case * 17 Topological and analytic structure on the space of real analytic functions and pruned polynomial-like mappings * 18 The space \(\mathcal{A}_{a}^{\nu}\) is a Banach manifold * 19 Conjugacy classes of semi-hyperbolic maps form Banach manifolds * 20 Mating pruned polynomial-like mappings * 21 Hybrid classes form immersed analytic manifold * 22 Topological conjugacy classes form immersed analytic manifold * 23 Infinitesimal theory, horizontal, vertical and transversal vector fields * 24 Estimates for vertical vector fields * 25 Estimates for horizontal vectors fields * 26 The codimension of conjugacy classes * 27 Hybrid conjugacies are embedded manifolds * 28 Hybrid classes are Banach manifolds * 29 Conjugacy classes of real analytic maps are path connected * 30 Hybrid conjugacies form a partial lamination * 31 Conjugacy classes of real analytic maps are contractible ## Part C Open questions **Appendices and References** * 1. Local connectivity of the pruned Julia set and complex box mappings which include domains that do not intersect the real line * 2. Continuity of the pruned Julia set * 3. A topological and analytic structure on the space of real analytic functions * 4. Summary of notation and definitions ## 1. Introduction and statement of results One of the main contributions of Sullivan into the field of dynamical systems is his introduction of quasiconformal mappings, and in particular the Measurable Riemann Mapping Theorem into the subject. For example, in his proof of the absence of wandering domains for rational maps, he shows that if a rational map has a wandering domain, then it is possible to construct an infinite dimensional space of distinct deformations of this map, contradicting the fact that the space of rational maps is finite dimensional, see [Su1]. Similarly, he proposed a strategy for proving density of hyperbolicity for real polynomial maps by establishing quasisymmetric rigidity. This was successfully implemented in the real quadratic case in [Ly2, GS2] and in the general case in [KSvS1, KSvS2]. For a survey on the techniques developed in the latter papers, see [CDKvS]. One of the limitations of quasisymmetric rigidity is that there is no analogue of the Measurable Riemann Mapping Theorem in the real setting. In this paper we will show how to overcome this in a number of settings, by showing that two interval maps which are topologically conjugate can be connected by a real-analytic path of maps in the same topological conjugacy class. Moreover we will show that topological conjugacy classes form real analytic manifolds, generalising a result in the unimodal setting due to Avila-Lyubich-de Melo [ALM] and Clark [C]. It turns out that these results imply density of hyperbolicity within full families. In a sequel to this paper, we will use the results obtained here to show that in the space of real analytic maps with _small basins_, topological conjugacy classes laminate the entire space. Here we say that a real analytic map has small basins, if the complex extension of the real basins of its periodic attractors are compactly contained in the domain of analyticity of the map; for a formal definition see Definition 4.1. This is quite a surprising result, as topological conjugacy classes _do not_ laminate the space of all unimodal real analytic mappings. 
One application of this laminar structure is that the topological entropy of real analytic families of unimodal maps which are close to the family of quadratic maps depends monotonically on the parameter. In another sequel of this paper we will use the results in this paper to show that hyperbolic maps form a dense subset within real analytic families of real analytic one-dimensional maps. Let us state our results more precisely. Let \(I\) be a compact interval; to be definite, let us take \(I=[-1,1]\). Let \(\nu\in\mathbb{N}\), and fix a vector \(\underline{\nu}=(\ell_{1},\ell_{2},\dots,\ell_{\nu})\in\mathbb{N}^{\nu}\) with each \(\ell_{i}\geq 2\), and let \(\mathcal{A}^{\underline{\nu}}\) denote the space of real analytic mappings \(f:I\to I\) with \(f(\partial I)\subset\partial I\), with precisely \(\nu\) critical points \(-1<c_{1}<c_{2}<\dots<c_{\nu}<1\) such that \(Df(c_{i})=\dots=Df^{\ell_{i}-1}(c_{i})=0\) and \(Df^{\ell_{i}}(c_{i})\neq 0\). For convenience, let us also assume that \(\partial I\) is hyperbolic repelling and that \(f\) is in the class \(\mathcal{A}^{\underline{\nu}}_{a}\) of maps in \(\mathcal{A}^{\underline{\nu}}\) which have a holomorphic extension to \(\Omega_{a}=\{z\in\mathbb{C}:\operatorname{dist}(z,I)<a\}\), with precisely \(\nu\) critical points in \(\Omega_{a}\) of order \(\ell_{1},\dots,\ell_{\nu}\) and which extends continuously to \(\overline{\Omega}_{a}\). The space \(\mathcal{A}^{\underline{\nu}}_{a}\) is endowed with the supremum metric \(d(f,g)=\sup_{z\in\overline{\Omega}_{a}}|f(z)-g(z)|\). Following Appendix 2 of [Ly1], we will also introduce a topology on the space \(\mathcal{A}^{\underline{\nu}}\) and the concept of _real analytic manifold modelled on Banach spaces_ in Section 17. If \(f\in\mathcal{A}^{\underline{\nu}}\) has only hyperbolic periodic points (i.e. with multipliers not \(0,\pm 1\)), then the _topological conjugacy class_\(\mathcal{T}^{\underline{\nu}}_{f}\) denotes the set of mappings \(g\in\mathcal{A}^{\underline{\nu}}\) with _only_ hyperbolic periodic points that are topologically conjugate to \(f\) on \(I\) by an order preserving topological conjugacy which maps the critical points of \(f\) to those of \(g\). Since \(f,g\in\mathcal{A}^{\underline{\nu}}\), this conjugacy necessarily preserves the order \(\ell_{i}\) of the critical points. Let \(\zeta(f)\) be the maximal number of critical points _in the basins_ of periodic attractors of \(f\) with pairwise disjoint infinite orbits. **Theorem A** (Manifold structure).: _Assume that all periodic orbits of \(f\in\mathcal{A}^{\underline{\nu}}\) are hyperbolic. Then_ 1. \(\mathcal{T}^{\underline{\nu}}_{f}\) _is an embedded real analytic submanifold of_ \(\mathcal{A}^{\underline{\nu}}\) _modelled on a family of Banach spaces of codimension_ \(\nu-\zeta(f)\)_;_ 2. _for each_ \(a>0\)_,_ \(\mathcal{T}^{\underline{\nu}}_{f}\cap\mathcal{A}^{\underline{\nu}}_{a}\) _is an embedded real Banach submanifold of_ \(\mathcal{A}^{\underline{\nu}}_{a}\) _of codimension_ \(\nu-\zeta(f)\)_._ In Section 17 we will give a formal definition of the term _embedded real analytic submanifold modelled on a family of Banach spaces_. **Theorem B** (Topological conjugacy classes are contractible).: _Assume that all periodic orbits of \(f\in\mathcal{A}^{\underline{\nu}}\) are hyperbolic. Then_ 1. 
_for each_ \(g\in\mathcal{T}^{\underline{\nu}}_{f}\) _there exists a one-parameter family of real analytic maps_ \(f_{t}\colon I\to I\) _in_ \(\mathcal{A}^{\underline{\nu}}_{a}\) _with_ \(f_{0}=f\)_,_ \(f_{1}=g\) _so that_ \(f_{t}\) _depends analytically on_ \(t\) _and_ \(f_{t}\) _is topologically conjugate on_ \(I\) _to_ \(f\) _for each_ \(t\in[0,1]\)_._ 2. _Moreover, if_ \(f\in\mathcal{A}^{\underline{\nu}}_{a}\) _then the manifold_ \(\mathcal{T}^{\underline{\nu}}_{f}\cap\mathcal{A}^{\underline{\nu}}_{a}\) _is contractible._ In Sections 19-31 we will state corresponding theorems with the _real-hybrid class_\(\mathcal{H}^{\mathbb{R}}_{f}\) of \(f\), where (in the case that all periodic orbits of \(f\colon I\to I\) are hyperbolic) \(\mathcal{H}^{\mathbb{R}}_{f}\) is defined as the subset of \(\mathcal{T}_{f}\) where the topological conjugacy extends as a holomorphic conjugacy in a small complex neighbourhood of the periodic attractors, or equivalently in a complex neighbourhood of the real basin of the periodic attractors. In particular, the multipliers at periodic attractors for each map \(g\in\mathcal{H}_{f}\) are the same as those for the corresponding periodic attractors of \(f\). The assumption that all periodic attractors are hyperbolic is not required in the 'hybrid' analogues of Theorems A and B, but the presence of parabolic periodic points requires additional assumptions (namely that the parabolic periodic points are'simple') and arguments in the proof. **Theorem C** (Partial lamination).: _Assume that all periodic orbits of \(f\in\mathcal{A}^{\underline{\nu}}_{a}\) are hyperbolic. Then \(f\) has a neighbourhood which is laminated by hybrid conjugacy classes. More precisely, for each neighbourhood \(\mathcal{V}_{2}\) of \(f\) in \(\mathcal{A}^{\underline{\nu}}_{a}\) there exists a neighbourhood \(\mathcal{V}_{1}\subset\mathcal{V}_{2}\) of \(f\) in \(\mathcal{A}^{\underline{\nu}}_{a}\) so that for each \(g\in\mathcal{V}_{1}\) and each \(g_{0},g_{1}\in\mathcal{V}_{1}\cap\mathcal{H}^{\mathbb{R}}_{g}\) there exists a path \(g_{t}\in\mathcal{A}^{\underline{\nu}}_{a}\), \(t\in[0,1]\) inside \(\mathcal{V}_{2}\cap\mathcal{H}^{\mathbb{R}}_{g}\) connecting \(g_{0},g_{1}\)._ As mentioned, Theorems A and C were obtained in [ALM] for hybrid classes in the context of unimodal maps with quadratic critical points. In [GSm], using completely different methods, an analogous result to Theorem A is proved in the setting of piecewise expanding mappings. One of the main steps in the proof of Theorems A and B is **Theorem D** (External maps and pruned polynomial-like structure).: _Associated to each map \(f\in\mathcal{A}^{\underline{\nu}}\) with only hyperbolic periodic points there exist_ 1. _a circle map with discontinuities (called an_ external map_) associated to_ \(f\)_;_ 2. _a pruned-polynomial-like mapping_ \(F\colon U\to U^{\prime}\) _which is an extension of the real analytic_ \(f\) _and so that its domain_ \(U\) _is a neighbourhood of_ \(I\)_._ 3. _For each_ \(f\in\mathcal{A}^{\underline{\nu}}_{a}\) _there exists a neighbourhood_ \(\mathcal{U}\subset\mathcal{A}^{\underline{\nu}}_{a}\) _so that each map_ \(g\in\mathcal{U}\) _has a pruned polynomial-like extension_ \(G\colon U_{g}\to U^{\prime}_{g}\) _which is obtained from_ \(F\colon U\to U^{\prime}\) _by holomorphic motion, see Theorem_ 3.1 _for a more precise statement. (Note that_ \(g\) _is not required to be conjugate to_ \(f\)_.)_ The definition of the notion of pruned polynomial-like map will be given in Section 3. 
The main point is that they share essentially all useful properties of polynomial-like maps; an example is shown in Figure 1. For a more formal statement, see Theorem 3.1. The main ingredients for proving these results are quasisymmetric rigidity, which was proved in [CvS], and the complex bounds from [CvST]. These results build on the enhanced nest construction from [KSvS1], see also [KvS, CDKvS]. The theorems above are stated for maps \(f\in\mathcal{A}^{\underline{\nu}}_{a}\) without parabolic periodic points. Most theorems also go through when \(f\) does have parabolic periodic points, see for example Section 11. The lamination structure from Theorem C does not hold near maps with parabolic periodic points, and this is the topic of a companion paper which is currently in preparation. ### Further applications In a companion paper it will be shown that the results in this paper imply that one has density of hyperbolicity within real analytic families of real analytic one-dimensional maps. ### Notation and terminology We let \(\mathbb{C}\) denote the complex plane and we say that a subset \(A\subset\mathbb{C}\) is _real symmetric_ if \(z\in A\) iff \(\bar{z}\in A\). A function \(g\colon\partial\mathbb{D}\to\partial\mathbb{D}\) is called real symmetric if \(g(\bar{z})=\overline{g(z)}\). If \(U\subset\mathbb{C}\) is an open set, we let \(\mathcal{B}_{U}\) denote the set of holomorphic mappings on \(U\) which are continuous on \(\overline{U}\). For \(F\in\mathcal{B}_{U}\) we will define \[B_{U}(F,\epsilon)=\{G\in\mathcal{B}_{U};|G(z)-F(z)|<\epsilon\text{ for all }z\in \overline{U}\}.\] As before, for \(a>0\), we let \(\Omega_{a}=\{z\in\mathbb{C}:\operatorname{dist}(z,I)<a\}\) where \(\operatorname{dist}\) is the Euclidean distance on \(\mathbb{C}\). Let \(\mathcal{A}^{\underline{\nu}},\mathcal{A}^{\underline{\nu}}_{a}\) be defined as in the introduction. We let \(\operatorname{Cr}(f)\) (or simply \(\operatorname{Cr}\)) denote the set of critical points of a differentiable mapping \(f\). We should emphasise that \(f\in\mathcal{A}^{\underline{\nu}}\) has at most a finite number of attracting periodic points, see [MMvS, dMvS], but attracting periodic points do not necessarily contain a critical point in their basin. We will denote by \(B_{0}(f)\subset I\) the union of immediate basins of the periodic attractors of \(f\colon I\to I\). Note that some periodic attractors may not contain critical points in their immediate basin. Given a set \(K\subset\mathbb{C}\) we let \(\operatorname{cc}_{x}(K)\) (or \(\operatorname{cc}_{x}K\)) denote the connected component of \(K\) containing \(x\) and if \(A\subset\mathbb{C}\) then \(\operatorname{cc}_{A}(K)\) is the union of the connected components of \(K\) containing points of \(A\). ## 2. Organisation and outline of the ideas used in this paper In Part A of this paper we introduce the notion of pruned polynomial-like mapping. As we will show, having a pruned polynomial-like mapping is almost as good as having a polynomial-like mapping. Indeed, it allows us to extend the results of [ALM] for non-renormalizable unimodal maps to the general setting. In particular, we do not need to assume 'big bounds' as in that paper. In Section 3 we formulate the notion of pruned polynomial-like mapping and state Theorem 3.1 which asserts that _each_ real analytic interval map has a complex analytic extension which is a pruned polynomial-like mapping. Thus we obtain a good Markov structure in a complex neighbourhood of the entire dynamical interval \(I\). 
The proof of the existence of such a pruned polynomial-like extension of a real analytic map goes in several steps: 1. First we associate to a real analytic map a pruned Julia set. If the real analytic map is in fact a polynomial, then this pruned Julia set \(K_{X}\) is simply a subset of the usual Julia set, but pruned so that it is in a small neighbourhood of the interval \(I\). Where the Julia set is pruned depends on a set \(X\), where \(X\) consists of the boundary points of intervals around the critical values of \(f\). 2. Using the uniformisation of \(\mathbb{C}\setminus K_{X}\) and the map \(f\) near \(K_{X}\) we obtain a real analytic order preserving circle map (whose 'degree' depends on the degree of the critical points of \(f\)) with discontinuities (each critical point of \(f\) corresponds to a discontinuity of \(\hat{f}_{X}\)). This circle map is called the external map \(\hat{f}_{X}\colon\partial\mathbb{D}\to\partial\mathbb{D}\) associated to \(f\) and the pruning data \(X\). Later on we will encode the pruning data in a more combinatorial way by a subset \(Q\subset\partial\mathbb{D}\) which is forward invariant under the map \(z\mapsto z^{|\nu|}\) where \(|\nu|=\sum_{i=1,\ldots,\nu}\ell_{i}\). 3. The expanding properties of the map \(\hat{f}_{X}\) can be used to obtain a pruned polynomial-like structure near the interval \(I\). This construction is given in Sections 4-9 and in Section 13 it is then shown that the pruned polynomial-like extensions of two topologically conjugate interval maps are qc conjugate and that this conjugacy preserves the structure of these pruned polynomial-like mappings. In Part B of the paper we will then show that pruned polynomial-like mappings can be treated more or less like polynomial-like mappings. This means that with this structure in hand, we no longer need to use a necklace neighbourhood around the interval \(I\) as in [ALM], but obtain a quasiconformal conjugacy between two conjugate interval maps on a full complex neighbourhood of \(I\). This allows us to prove a mating result as in [Ly2]: associated to two pruned polynomial-like mappings with the same (combinatorial) pruning data there exists a new pruned polynomial-like mapping which is qc conjugate to the first mapping and has the same external mapping as the second one, see Section 20. This gives that one conjugacy class inherits the manifold structure from another conjugacy class. To determine the codimension of these manifolds, we extend the infinitesimal pullback argument and key lemma of [ALM] to our setting. Once these techniques are in place, the theorems readily follow. The main differences with [ALM] are: 1. We do not (and cannot) assume that there are big bounds. In [C] a polynomial-like argument is used for the case when the geometry is bounded (that argument here would not work when the post-critical set is non-minimal) and arguments similar to [ALM] when the geometry is unbounded. Here we do not need to make this distinction. We overcome the lack of big bounds by introducing the notion of pruned polynomial-like map and showing that each real analytic interval map has a complex extension with such a structure. 2. 
In [ALM] a manifold structure \(\mathcal{H}_{f}^{\mathbb{R}}\) is obtained for each unimodal \(f\in\mathcal{A}_{a}^{\nu}\) by first obtaining the manifold structure for hyperbolic unimodal maps (using the implicit function theorem) and then using the density of hyperbolic maps and a lambda-lemma argument (to complex perturbations of these maps) to extend this manifold structure for non-hyperbolic maps. Here we cannot use this lambda-lemma as our manifolds are of higher codimension (as our maps have \(\nu\) critical points). Instead we use our pruned polynomial-like structure to obtain a mating result: given two maps one can find a third map which is hybrid-conjugate to the first one and which has the same external map as the second one. Then we proceed essentially as in [Ly1] to inherit a manifold structure on the hybrid class of non-hyperbolic maps from the manifold structure of a given nearby hyperbolic map. 3. In [ALM] vertical vectors are obtained, as in [Koz], by constructing first smooth vertical vector fields and then using a polynomial approximation. This is one of the most subtle arguments in [ALM], for which the notion of puzzle maps is introduced. These are maps whose domains form a necklace neighbourhood of the interval \(I\) (rather than an actual neighbourhood). Here we can avoid this discussion and argue as in the polynomial-like case more or less in the spirit of [Ly1]. A difference with the polynomial-like case dealt with in [Ly1] is that 4. in our case (of pruned polynomial-like mappings) the space of these external maps is more complicated than in the case of polynomial-like mappings, due to the existence of discontinuities. For that reason we do not attempt to prove nor use that the space of external maps (in our setting) has a manifold structure. This means that, for example, our proof of the contractibility of conjugacy classes is quite different from [AL]. Note that such a result is not claimed in [ALM]. An important step in our proof of Theorem B2 (that the hybrid class is contractible), is to assign a pruned polynomial-like map to each map in a real hybrid class, using a quasiconformal motion (which is obtained using a partition of unity argument). Here we use the notion of quasiconformal motion (the continuous analogue of a holomorphic motion) introduced in [ST]. ## Part A: Pruned polynomial-like maps and their external maps ### 3. Pruned polynomial-like mappings One of the main technical innovations in this paper is the introduction of the notion of pruned polynomial-like map, and the theorem that shows that to each real analytic map one can associate such a pruned polynomial-like map. **Definition 3.1** (Pruned polynomial-like mapping).: We say that a holomorphic map \(F\colon U\to U^{\prime}\) between two open sets \(U,U^{\prime}\) consisting of finitely many components together with a finite union of arcs or closed curves \(\Gamma,\Gamma_{a}^{*},\Gamma_{a}\) is a _pruned polynomial-like mapping_ if * \(F\) extends holomorphically to a neighbourhood of \(\overline{U}\). Figure 1. A pruned polynomial-like map \(f\colon U\to U^{\prime}\) associated to a quadratic interval map \(f\colon I\to I\) with only repelling periodic points and whose pruned Julia set \(K_{X}(f)\) consists of the closure of pre-images of \(J\), i.e. pruned at pre-images of the points \(X=\partial J\). The set \(\Gamma\) consists of the union of the drawn curves (intersected with \(\overline{U}\)), consisting of preimages of a ray landing at a periodic point. 
* \(U^{\prime}=F(U)\) and \(F(\partial U)\subset\partial U^{\prime}\cup\Gamma\);
* \(U\cap U^{\prime}\neq\emptyset\), \(\partial U\cap\partial U^{\prime}\subset\Gamma\) and \(\partial U^{\prime}\cap U\subset\Gamma\cup\Gamma_{a}^{*}\cup\Gamma_{a}\);
* each component of \(\Gamma\) is either a piecewise smooth arc in \(U^{\prime}\) connecting boundary points of \(U^{\prime}\) or is contained in \(\partial U^{\prime}\cup\partial U\);
* \(F(\Gamma\cap\overline{U})\supset\Gamma\);
* each component of \(\Gamma_{a}^{*}\) and \(\Gamma_{a}\) is contained in the basin of a periodic attractor of \(F\);
* each component of \(U^{\prime}\setminus\Gamma^{\prime}\), \(U^{\prime}\setminus(U\cup\Gamma^{\prime})\), \(U\setminus\Gamma^{\prime}\) and \(U\setminus(U^{\prime}\setminus\Gamma^{\prime})\) is a quasidisk. Here \(\Gamma^{\prime}=\Gamma\cup\Gamma_{a}^{*}\cup\Gamma_{a}\).

We say that a pruned polynomial-like mapping is _real_ if \(U,U^{\prime}\) are real-symmetric and \(F\) commutes with complex conjugation.

_Remark 3.1_.: When \(U\subset U^{\prime}\) and \(\Gamma=\emptyset\) then \(U\) is compactly contained in \(U^{\prime}\) and this definition reduces to that of a polynomial-like map. Moreover, if \(F\) has no periodic attractors, then \(\Gamma_{a}=\Gamma_{a}^{*}=\emptyset\).

_Remark 3.2_.: An example of such a pruned polynomial-like map is shown in Figure 1. This example shows that in general \(U\setminus U^{\prime}\neq\emptyset\) and \(U^{\prime}\setminus U\neq\emptyset\). In this paper the curves of \(\Gamma\) will consist of 'rays' landing on real periodic or pre-periodic points of \(F\). Note that the pruned polynomial-like map from Figure 1 is not proper.

_Remark 3.3_.: If \(f\) has a periodic attractor with a critical point in its basin, then there exists a part of \(\partial U^{\prime}\) which is mapped to a curve which iterates to the attracting fixed point, see Figure 10.

If \(F:U\to U^{\prime}\) is a pruned polynomial-like mapping, we call \[K_{F}:=\{z\in\mathbb{C};F^{n}(z)\in\overline{U}\text{ for all }n\geq 0\}\] the _pruned filled Julia set_ of \(F:U\to U^{\prime}\), and \(J_{F}:=\partial K_{F}\) its _pruned Julia set_. Observe that while \(K_{F}\subset\overline{U}\), \(K_{F}\) need not be contained in \(U\) and also does not need to be backward invariant, see for example Figure 1. We will see that for the pruned polynomial-like mappings we construct in the theorem below, one has \(K_{F}\cap\partial U=K_{F}\cap\partial U^{\prime}\).

Let us now show that one can associate a pruned polynomial-like map to a real analytic map \(f\in\mathcal{A}_{a}^{\underline{\nu}}\). Since \(\Omega_{a}\) might be a very small neighbourhood of \(I\), one cannot expect \(f^{-1}(\mathbb{R})\) (or even \(f^{-1}(I)\)) to be contained in the interior of \(\Omega_{a}\), and so one should not expect \(f\colon I\to I\) to have a genuine polynomial-like extension. In this section we will construct something almost as good as a polynomial-like extension, namely the extension given by the following theorem (which contains part of Theorem D).

**Theorem 3.1** (Pruned polynomial-like mapping associated to \(f\)).: _Suppose that \(f\in\mathcal{A}_{a}^{\underline{\nu}}\) has only hyperbolic periodic points.
Associated to \(f\colon I\to I\) there exist open sets \(U,U^{\prime}\) in the complex plane and a finite union of curves \(\Gamma,\Gamma_{a}^{*},\Gamma_{a}\) so that_

1. \(f\) _extends to a map_ \(F\colon U\to U^{\prime}\) _which together with_ \(\Gamma,\Gamma_{a}^{*},\Gamma_{a}\) _forms a pruned polynomial-like mapping._
2. \(U\) _is a neighbourhood of_ \(I\) _and for each_ \(\delta>0\) _the sets_ \(U,U^{\prime}\) _can be chosen so that they are contained in a_ \(\delta\)_-neighbourhood of_ \(I\)_._
3. _there exists a neighbourhood_ \(\mathcal{U}\subset\mathcal{A}_{a}^{\underline{\nu}}\) _of_ \(f\) _so that each map_ \(g\in\mathcal{U}\) _has a pruned polynomial-like extension_ \(G\) _which is obtained from_ \(F\colon U\to U^{\prime}\) _by holomorphic motion. More precisely, there exists a holomorphic motion_ \(H_{\lambda}\)_,_ \(\lambda\in\mathcal{U}\) _of_ \(U,U^{\prime}\) _where_ \(H_{f}=id\) _so that each_ \(g\in\mathcal{U}\) _has a complex extension_ \(G\colon H_{g}(U)\to H_{g}(U^{\prime})\) _which is a pruned polynomial-like mapping. Moreover,_ \(G(z)=H_{g}^{-1}\circ F\circ H_{g}(z)\) _for_ \(z\in\partial U\)_._

To obtain the pruned polynomial-like extension \(F\colon U\to U^{\prime}\) of the real analytic map \(f\colon I\to I\) we will:

* associate to \(f\) a 'pruned Julia set' \(K_{X}\subset\mathbb{C}\);
* associate to \(f\) an 'external' map \(\hat{f}_{X}\colon\partial\mathbb{D}\to\partial\mathbb{D}\) which is real analytic, except in a finite number of points where it is discontinuous;
* exploit the expanding structure of \(\hat{f}_{X}\) together with the attracting structure near the immediate basins of the periodic attractors of \(\hat{f}_{X}\);
* prove the persistence of the pruned polynomial-like structure in Proposition 10.3.

_Remark 3.4_.: If \(f\) has no periodic attractors, then \(\Gamma^{*}_{a}=\Gamma_{a}=\emptyset\) and \(\Gamma\) consists of 'rays' landing at points which are eventually mapped onto repelling periodic points. If the periodic attractors of \(f\) are _small_ in the sense of Definition 4.1 (roughly speaking this means that the basins are contained in the domain of analyticity of \(f\)) then we can also ensure that \(\Gamma^{*}_{a}=\Gamma_{a}=\emptyset\), see Theorem 4.1.

_Remark 3.5_.: If \(f\) has periodic attractors, then \(\Gamma\) may also consist of pieces of curves which land on periodic attractors, see Section 8. However, if the basins of the periodic attractors of \(f\) are compactly contained in the domain of analyticity of \(f\), then one can ensure that these basins are compactly contained in the domain of \(F\colon U\to U^{\prime}\) and one can take again \(\Gamma^{*}_{a}=\Gamma_{a}=\emptyset\), see Subsection 4.1.

_Remark 3.6_.: Even if \(f\) is allowed to have simple parabolic periodic points (see Definition 11.1), the previous theorem still holds, see Section 11, except that the statement of part (4) of Theorem 3.1 needs to be amended, see Subsection 11.1.

A somewhat similar result to Theorem 3.1 is obtained in [ALM, Appendix B] for unimodal maps, with non-degenerate critical points, which are at most finitely many times renormalizable. Their proof requires that the mappings have a large scaling factor at some level of the principal nest; it does not go through for multimodal mappings or unicritical mappings of higher degree. Consequently, our construction is necessarily quite different and more abstract, since we need to deal with multimodal maps and infinitely renormalizable maps.
To construct the associated pruned polynomial-like map, we will first define the notion of a pruned Julia set, show that this is near the real line, and then associate to this object (and the map \(f\)) some circle map with discontinuities.

### Notation used for interval maps and their extensions

Interval maps will be denoted by \(f\colon I\to I\), and the same letter will be used for the complex extension \(f\colon U\to\mathbb{C}\). An extension which has the structure of a pruned polynomial-like mapping will be denoted by \(F\colon U\to U^{\prime}\). The map \(f\colon U\to\mathbb{C}\) will induce (via some Riemann mapping \(\phi_{X}\)) an _external map_ \(\hat{f}_{X}\colon\partial\mathbb{D}\to\partial\mathbb{D}\). The complex extension of this map will again be denoted by \(\hat{f}_{X}\), but for the part where this extension to a neighbourhood of \(\partial\mathbb{D}\) has an expanding Markov structure we will write \(\hat{F}_{X}\colon\hat{E}\to\hat{E}^{\prime}\). The sets \(\hat{E},\hat{E}^{\prime}\) will be used to construct the 'expanding' part of the pruned polynomial-like extension of \(f\), denoted by \(F\colon E\to E^{\prime}\). We also have a suitable structure near basins of \(f\), which gives an extension \(F\colon B\to B^{\prime}\) of \(f\) near basins. Taking \(U:=E\cup B\) and \(U^{\prime}=E^{\prime}\cup B^{\prime}\) we will obtain the required pruned polynomial-like map \(F\colon U\to U^{\prime}\).

## 4. A pruned Julia set associated to \(f\colon\Omega_{a}\to\mathbb{C}\)

As mentioned, to prove Theorem 3.1 we will first define the notion of a pruned Julia set. Let \(1\leq\nu^{\prime}\leq\nu\) denote the number of distinct critical values of \(f\colon I\to I\). Take (real) disjoint interval neighbourhoods \(J_{1},\dots,J_{\nu^{\prime}}\subset I\) of the critical values \(f(c_{1}),\dots,f(c_{\nu})\in I\subset\mathbb{R}\) so that the closure of the component of \(f^{-1}(J_{i})\) containing \(c_{i}\) is contained in \(\Omega_{a}=\{z\in\mathbb{C}:\operatorname{dist}(z,I)<a\}\). Next set \(J=\cup J_{i}\) and define \[J^{-1}=\overline{f^{-1}(J)\setminus\mathbb{R}}\text{ and }X=\partial J^{-1}.\] For later use, also define \(\operatorname{Cr}^{\prime}(f)\) to be the subset of critical points of \(f\) which are periodic. We shall call the \(J_{i}\) _pruning intervals_ and the points of \(X\) _pruning points_. By choosing the intervals \(J_{i}\) sufficiently small, we can ensure that

1. each \(J_{i}\) contains exactly one critical value of \(f\);
2. \(f^{-1}(J)\) consists of arcs whose closures are contained in \(\Omega_{a}\);
3. if the critical value \(f(c_{i})\) in \(J_{i}\) is contained in the basin of a periodic attractor, then we assume that \(J_{i}\) is compactly contained in this basin, and moreover does not contain any point in the backward orbit of a non-periodic critical point.

The intervals \(J\) (and the finite set \(X\)) will be used to define the pruned Julia set associated to \(f\colon\Omega_{a}\to\mathbb{C}\). To define the pruned Julia set \(K_{X}(f)\), inductively define \(K_{n}\) by taking \(K_{0}=I\), \[K_{1}=K_{0}\cup\operatorname{cc}_{K_{0}}J^{-1} \tag{4.1}\] and for \(n\geq 1\), \[K_{n+1}:=K_{n}\cup\operatorname{cc}_{K_{n}^{\prime}}f^{-n}(J^{-1}), \tag{4.2}\] where \(\operatorname{cc}_{K_{n}^{\prime}}\) stands for the connected components which intersect \(K_{n}^{\prime}:=K_{n}\setminus\operatorname{Cr}^{\prime}(f)\). So in the definition of \(K_{n+1}\) we exclude any preimage of \(J^{-1}\) which once again intersects a critical point.
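The inductive construction (4.1)-(4.2) is easy to experiment with numerically. The following is a minimal illustrative sketch (not code from the paper, and with the connectivity pruning \(\operatorname{cc}_{K_{n}^{\prime}}\) omitted): it samples the backward orbit of \(J^{-1}\) for the real quadratic \(f(z)=(1-c)z^{2}+c\), normalised so that \(f(\pm 1)=1\), with critical point \(0\) and critical value \(c\); compare Figure 2.

```python
import numpy as np
import matplotlib.pyplot as plt

# Illustrative sketch, not code from the paper: the backward orbit of the
# non-real preimage J^{-1} of a pruning interval J for the real quadratic
# f(z) = (1-c) z^2 + c, normalised so that f(+-1) = 1 (cf. Figures 2 and 3).
c = -0.2
eps = 0.05                                   # half-length of J = (c - eps, c + eps)

# J^{-1} = closure of f^{-1}(J) \ R is a segment of the imaginary axis through 0:
t = np.linspace(0.0, eps / (1.0 - c), 200)
seed = 1j * np.sqrt(t)
seed = np.concatenate([seed, -seed])         # both halves of J^{-1}

levels = [seed]
for _ in range(8):                           # sample f^{-1}(J^{-1}), ..., f^{-8}(J^{-1})
    w = levels[-1]
    z = np.sqrt((w - c) / (1.0 - c))         # principal inverse branch of f
    levels.append(np.concatenate([z, -z]))   # the other branch is -z

# Unlike (4.2), no connectivity pruning (cc_{K_n'}) is applied here: this plots
# the full backward orbit of J^{-1} rather than the sets K_n themselves.
pts = np.concatenate(levels)
plt.scatter(pts.real, pts.imag, s=0.2)
plt.gca().set_aspect("equal")
plt.show()
```

Only the components connected to \(I\) (those kept by \(\operatorname{cc}_{K_{n}^{\prime}}\)) enter the sets \(K_{n}\); the remaining arcs are exactly what the pruning discards.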
Implicit in the definition (4.2) is that \(J\) can be chosen so that \(\operatorname{cc}_{K_{n}^{\prime}}f^{-n}(J^{-1})\) is again in \(\Omega_{a}\) for each \(n\geq 0\). In Theorem 4.2 we will show that one can choose \(J\) so that this property indeed holds.

For example, the set \(K_{1}\) consists of \(I\) together with \(2(\ell_{i}-1)\) arcs in \(\mathbb{C}\setminus\mathbb{R}\) attached to \(c_{i}\) for each \(i\). Note that \(K_{n+1}\setminus K_{n}\subset\mathbb{C}\setminus\mathbb{R}\), and is contained in \(f^{-n}(K_{1}\setminus I)\). It is immediately clear from the definition that for all \(n\geq 0\), \[f^{-1}(K_{n})\supset K_{n}\text{ and }K_{n+1}\supset K_{n}.\] Next define \[K_{X}(f)=\text{ closure of }\bigcup_{n=0}^{\infty}K_{n}. \tag{4.3}\] Note that \(K_{0}\subset K_{1}\subset K_{2}\subset\dots\) are finite trees with finite degree whose 'complex endpoints' or 'pruning points' are preimages of \(X\). We will call \(K_{X}(f)\) the _pruned Julia set_ of \(f\colon\Omega_{a}\to\mathbb{C}\) associated to the set \(X\). Note that for the pruned polynomial-like extension \(F\colon U\to U^{\prime}\) that we will construct we may not have \(K_{X}(f)\subset K_{F}\). However, if \(f\) has only repelling periodic orbits then we will obtain \(K_{F}=K_{X}(f)\), see Lemma 9.2. The reason we use the notation \(K_{X}(f)\) rather than \(K_{J_{i}}(f)\) is that for real polynomials without periodic critical points, \(K_{X}(f)\) is obtained from the filled Julia set of \(f\) by 'pruning' the filled Julia set at all preimages of \(X\), see Example 4.1(ii). If \(f\) has periodic critical points, then we prune more because otherwise we would not be able to ensure that \(K_{X}(f)\) is close to the real line.

Figure 2. The set \(\cup_{i=1,\ldots,8}f^{-i}(J^{-1})\) corresponding to two slightly different small 'pruning' intervals \(J\) (drawn in red) around the critical value of a real quadratic map \(f\). Note the different places where the tree structure is _pruned_. If we denote by \(X=f^{-1}(\partial J)\setminus\mathbb{R}\), then the pruned Julia set \(K_{X}\) corresponds to the complex pre-images of \(J\) connected to \(I\). By taking the interval \(J\) small enough, we can ensure that \(K_{X}\) lies near \(I\) (and so in the domain of definition of an extension of \(f\) to the complex plane).

Figure 3. The quadratic map \(F_{c}(z)=(-c+1)z^{2}+c\), normalised so that \(F_{c}(\pm 1)=1\) with \(c=-0.2\). Here we take an interval \(J\) around the critical value \(c\) in the basin of an attracting fixed point for two choices of \(J\): the set \(K_{X}(F)\) shrinks as \(J\) gets smaller, and in particular when \(J\) does not contain a backward iterate of the critical point. Here, as everywhere, \(X=\partial J^{-1}\).

_Example 4.1_ (The set \(K_{X}\)).: Let us illustrate the notion of a pruned Julia set in some cases:

(i) If \(I=[-1,1]\), \(J=(-\epsilon,\epsilon)\) with \(\epsilon<1\) and \(f(x)=x^{\ell}\) then \(K_{0}=I\) and for \(n\geq 1\), \[K_{n}=\bigcup_{k=1,\ldots,\ell-1}e^{\pi ik/\ell}\cdot J^{\prime}\ \bigcup\ I\text{ where }J^{\prime}=[-\epsilon^{1/\ell},\epsilon^{1/\ell}].\] Note that \(K_{n}=K_{1}\) for \(n\geq 1\) because the only pre-image of the critical point \(c_{1}=0\) is itself and because of the definition of \(K_{1}^{\prime}=K_{1}\setminus\{0\}\). So in this case \(K_{X}(f)\) consists of \(\ell\) curves through \(0\). If we replaced in the definition (4.2) \(\operatorname{cc}_{K_{n}^{\prime}}f^{-n}(J)\) by \(\operatorname{cc}_{K_{n}}f^{-n}(J)\) then we
would have obtained that \(K_{X}(f)\) is equal to the closed unit disc, which is undesirable for our purposes. Note that the relevance of \(K_{n}^{\prime}\) in the inductive definition of \(K_{n+1}\) only occurs when \(f\) has a periodic critical point.

(ii) Let \(I=[-1,1]\), \(f(z)=(-c+1)z^{2}+c\), normalised so that \(f(\pm 1)=1\). If \(c<0\) then \(f\) has an attracting orientation reversing fixed point \(a\in(c,0)\). Let \(x\in(-1,a)\) be so that \(f(x)=0\). If the pruning interval \(J\ni c\) is chosen so large that it contains \([x,c]\) then \(\mathbb{C}\setminus K_{X}\) has bounded components, for example the region \(A\) which lies NW of \(0\), see Figure 3(a). Note that \(A\) is mapped by \(f\) to its complex conjugate \(\overline{A}\), and that \(A\) and \(\overline{A}\) each contain a periodic point of period two in its boundary. Moreover, \(f\) maps \(B:=\operatorname{int}(A\cup\overline{A})\) into \(B\setminus[x,c]\) and so is contracting w.r.t. the hyperbolic metric. Note that we require that \(J\) does not contain backward iterates of non-periodic critical points, so in actual fact \(\mathbb{C}\setminus K_{X}\) will have no bounded components, see Figure 3(b).

(iii) Assume that \(I=[-1,1]\) and \(f\colon I\to I\) is a polynomial so that all its critical points are real and non-periodic. Then for each \(n\geq 0\) we have \(K_{n}=\{z\in\mathbb{C};f^{n}(z)\in I\}\) when \(J=(-1,1)\) and so \(K_{X}\) is the usual Julia set of \(f\). On the other hand, if \(J\Subset(-1,1)\) then \(K_{n}\) is a subset of the Julia set of \(f\) pruned at preimages of \(X=\partial f^{-1}(J)\setminus\mathbb{R}\). Again \(K_{n}\) consists of a finite union of smooth curves and \(K_{X}\) is the usual Julia set but pruned at infinitely many preimages of \(X\). The sets \(K_{n}\) are illustrated in Figures 2, 3 and 4.

(iv) If one critical point \(c_{1}\) of degree \(\ell_{1}\) is mapped after \(k\) steps onto another critical point \(c_{2}\) of degree \(\ell_{2}\), then \(K_{1}\) has \(\ell_{i}\) curves emanating from \(c_{i}\), \(i=1,2\), but then \(K_{k+1}\) has \(\ell_{1}\cdot\ell_{2}\) curves emanating from \(c_{1}\).

(v) If \(f\) has no periodic attractor, then the backward orbits of critical points are dense in \(I\) (this follows from the absence of wandering intervals). If \(z\in I\) is such a point, and \(f^{n}(z)\) is a critical point, then \(f^{-(n+1)}(J)\) will contain a curve through \(z\) transversal to \(I\). On the other hand, if \(f\) has a periodic attractor containing a critical point \(c\) and \(J\subset I\) is a very small neighbourhood of \(f(c)\), then there may not be any points in the backward orbit of any critical point in \(J\). In that case \(K_{n}\) has a particularly simple tree structure.

### Maps whose periodic attractors have small basins

In general, the basin of a periodic attractor containing a critical point \(c\) could intersect the boundary of the domain of analyticity of \(f\). In this case, we will choose the pruning interval \(J\ni f(c)\) so that it is contained in the basin of the periodic attractor. However, if \(f\) has periodic attractors whose basins are small enough to fit compactly inside the domain of analyticity of \(f\), then we can define a larger version of \(K_{X}(f)\) as follows.

Figure 4. If the basins are near the interval \(I\), then one can prune along all preimages of the finite set \(X=\partial J^{-1}\) suggested in this figure.
If \(f\) is real analytic, then \(K_{X}(f)\) is the analogue of the Julia set 'near' the interval \(I\). More precisely:

**Definition 4.1** (Small basins).: We say that \(f\) has _small basins_ or that the _basins of \(f\) fit inside the domain of analyticity_ if for each periodic attractor which contains a real critical point in its basin the following holds:

* For each periodic attractor \(a\), let \(B_{0}(a)\) be the immediate (complex) basin of a periodic attracting point \(a\) (possibly parabolic) and let \(B_{\mathbb{R}}(a)\) be the union of the preimages of \(B_{0}(a)\) that intersect the real line. Then \[\overline{B_{\mathbb{R}}(a)}\subset\Omega,\] where \(\Omega\) is a domain of analyticity of \(f\) on which \(f|\Omega\) has only critical points on the real line.

In this definition it is allowed that \(f\) has parabolic periodic points. If \(f\) has small basins, we take the pruning intervals \(J_{i}\) so that for each critical point \(c_{i}\) of \(f\) one has that \(\operatorname{Comp}_{f(c_{i})}B_{0}(a)\cap\mathbb{R}\) is compactly contained in \(J_{i}\ni f(c_{i})\) and so that the set \(K_{X,O}(f)\) is compactly contained in a domain of analyticity of \(f\), where \[K_{X,O}(f):=K_{X}\cup\bigcup_{i}\operatorname{cc}_{K_{X}}\overline{B_{i,O}}\] and where \(B_{i,O}\) are the connected components of \(B_{O}\). Implicit in this definition is that \(J_{i}\) can be chosen so that \(\operatorname{cc}_{K_{X}}\overline{B_{i,O}}\) is again in \(\Omega\). In the proof of Theorem 4.2 we will show that if \(f\) has small basins, then we can indeed choose \(J_{i}\) so that this is the case. If \(f\) has small basins, then we have the following analogue of Theorem 3.1:

**Theorem 4.1**.: _If \(f\in\mathcal{A}^{\underline{\nu}}\) has small basins, then the properties of Theorem 3.1 hold with \(\Gamma_{a}^{*}=\Gamma_{a}=\emptyset\)._

_Remark 4.1_.: The above enlargement \(K_{X,O}(f)\) ensures that the attracting periodic orbits are not visible for the external map \(\hat{f}_{X}\) that we define in the next section. Indeed, if \(f\) has small basins, then the circle map \(\hat{f}_{X}\) that we will construct below will have no periodic attractors, see Lemma 5.1 below.

_Remark 4.2_.: Assume that \(f\) has only repelling periodic points. Then not only will we obtain a pruned polynomial-like extension \(F\) of \(f\), but we will even obtain (via holomorphic motion) a pruned polynomial-like extension \(\tilde{F}\) for all maps \(\tilde{f}\) near \(f\). Of course \(\tilde{f}\) may have attracting periodic orbits (which will necessarily have small basins) and the basins of these attractors will be compactly contained in the domain of the pruned polynomial-like extension \(\tilde{F}\colon\tilde{U}\to\tilde{U}^{\prime}\), as we will show in Section 6.

Figure 5. The tree structure of the sets \(K_{n}\). The critical points are marked in the figure by the symbol \(\bullet\). The symbol \(\circ\) indicates the points in \(f^{-1}(z(m-1))\) in the components of \(f^{-1}(J(m-1))\) containing \(c(m-1)\), respectively the points in \(f^{-1}(z(m-2))\) in the components of \(f^{-1}(J(m-2))\) containing \(c(m-2)\).

### How the tree structure of \(K_{n}\) is created

Although not needed in our discussion, let us discuss the tree structure of \(K_{n}\). Take \(z\in K_{n}\setminus\mathbb{R}\).
Then the component \(L_{n}(z)\) of \(K_{n}\setminus\mathbb{R}\) containing \(z\) contains a path connecting \(z\) to \(\mathbb{R}\) consisting of a finite number of smooth arcs which cross each other transversally and which are generated as follows: there exist \(0\leq m\leq n\) and \(k\geq 0\), and for each \(1\leq i\leq m\) there exist a critical point \(c(i)\) of \(f\), an interval \(J(i)\) from the collection \(\{J_{1},\ldots,J_{\nu^{\prime}}\}\) containing the critical value \(f(c(i))\), a point \(z(i)\in J(i)\setminus\{c(i)\}\) and an integer \(k(i)\geq 1\) with the following properties. Take the smooth curve \(f^{-1}(J(m))\) through \(c(m)\), and consider \(z(m-1)\in J(m-1)\) and \(k(m-1)\geq 0\) so that \(f^{k(m-1)}(z(m-1))=c(m)\). Then \(f^{-(k(m-1)+1)}(J(m))\) contains a smooth arc through \(z(m-1)\in J(m-1)\) transversal to \(\mathbb{R}\) and \(f^{-(k(m-1)+2)}(J(m))\) contains a smooth arc transversally crossing the curve \(f^{-1}(J(m-1))\) through \(c(m-1)\). Next consider \(z(m-2)\in J(m-2)\) and \(k(m-2)\geq 0\) so that \(f^{k(m-2)}(z(m-2))=c(m-1)\). Then \(f^{-(k(m-1)+k(m-2)+2)}(J(m))\) contains a smooth arc, which crosses the smooth arc \(f^{-(k(m-1)+1)}(J(m-1))\) transversally, which in turn intersects \(J(m-2)\) transversally. Hence \(f^{-(k(m-1)+k(m-2)+3)}(J(m))\) crosses a component of \(f^{-(k(m-2)+2)}(J(m-1))\) transversally, which in turn intersects \(f^{-1}(J(m-2))\ni c(m-2)\) transversally, see Figure 5. Continuing in this way, \(f^{-(k(m-1)+\cdots+k(1)+m)}(J(m))\) contains a smooth arc, which crosses a smooth arc in \(f^{-(k(m-2)+\cdots+k(1)+(m-1))}(J(m-1))\) transversally, which in turn crosses \(f^{-(k(m-3)+\cdots+k(1)+(m-2))}(J(m-2))\), etc., until \(f^{-(k(1)+2)}(J(2))\), which intersects \(f^{-1}(J(1))\ni c(1)\) transversally. If the component \(L_{n}(z)\) contains a critical point, then the shortest path in \(L_{n}(z)\) connecting \(z\) to \(\mathbb{R}\) is of this form. If \(L_{n}(z)\) does not contain a critical point, then any path in \(L_{n}(z)\) connecting \(z\) to \(\mathbb{R}\) is mapped onto such a path by some map \(f^{k}\).

### The set \(K_{X}(f)\) can be chosen arbitrarily near the real line and is locally connected

To obtain control on the geometry of \(K_{X}\) we have to develop a construction which incorporates certain complex pullbacks in the complex box construction from [CvST]. This will be done in Appendix A, where also the proof of the following theorem will be given.

**Theorem 4.2**.: _For each \(a>0\) and each \(f\in\mathcal{A}_{a}^{\nu}\) there exists \(\delta>0\) so that if the pruning intervals \(J_{i}\) are disjoint and have length \(<\delta\) then_

1. \(K_{X}(f)\) _is well-defined: in the inductive definition (4.2) we have that_ \(\operatorname{cc}_{K_{n}^{\prime}}f^{-n}(J^{-1})\subset\Omega_{a}\)_. Moreover,_ \(K_{X}(f)\Subset\Omega_{a}\) _and_ \(K_{X}(f)\) _is real symmetric._
2. \(f^{-1}(K_{X}(f))\supset K_{X}(f)\) _and_ \(K_{X}(f)\) _is connected;_
3. \(K_{X}(f)\) _has no interior, is full (i.e._ \(\mathbb{C}\setminus K_{X}(f)\) _is connected) and locally connected;_
4. _each point of_ \(K_{X}(f)\) _has finitely many accesses;_
5. _every periodic point of_ \(f\colon K_{X}(f)\to K_{X}(f)\) _is either repelling or in the real line (where_ \(f\) _also denotes the complex extension of the real analytic interval map)._

_Moreover, let \(O\) be a union of periodic attractors and assume that \(O\) has a small basin in the sense of Definition 4.1. Then_
1. _properties (1)-(5) hold for_ \(K_{X,O}(f)\)_, except that in this case_ \(K_{X,O}(f)\) _will have interior;_
2. _all periodic points of_ \(\hat{f}_{X}\) _in_ \(\partial K_{X,O}(f)\) _are repelling._

As mentioned, the proof of this theorem will be given at the end of Appendix A.

_Remark 4.3_.: Note that periodic points are in general not dense in \(K_{X}(f)\). For example, consider a unimodal map with a maximum at \(c\). Then points to the right of \(J\ni f(c)\) have no pre-images in \(K_{X}(f)\), and therefore any component of \(K_{X}(f)\setminus I\) which is 'to the right' of \(J\) does not contain periodic points. In the next subsection we will associate to \(f\) and \(K_{X}(f)\) a circle map \(\hat{f}_{X}\). This circle map will have discontinuities and periodic points of this circle map are again not dense in the circle.

_Remark 4.4_.: We should emphasise that \(f\in\mathcal{A}^{\underline{\nu}}\) has at most a finite number of attracting or parabolic periodic points, see [MMvS, dMvS].

## 5. The associated external map: a circle mapping with discontinuities

The external mapping associated to a polynomial-like mapping \(f\) is an analytic expanding map of the circle \(\hat{f}:\partial\mathbb{D}\to\partial\mathbb{D}\) that has an extension to a holomorphic mapping \(\hat{f}:A\to A^{\prime}\) where \(A\) and \(A^{\prime}\) are annuli so that \(\partial\mathbb{D}\Subset A\Subset A^{\prime}\). In this section, we will construct an external mapping which is associated to a real analytic map \(f\colon I\to I\) together with its pruned Julia set \(K_{X}\), and use it to prove Theorem 3.1. We will go on to discuss this class of external mappings in general. To do this we will make the following assumption from now on: \[\begin{cases}&\text{the pruning intervals $J$ and the corresponding pruned Julia set}\\ &\text{$K_{X}$ satisfy the conclusions of Theorem 4.2.}\end{cases}\]

Let \(\psi_{X}\colon\overline{\mathbb{C}}\setminus\overline{\mathbb{D}}\to\overline{\mathbb{C}}\setminus K_{X}\) be the uniformising map that fixes \(\infty\) and has real derivative at \(\infty\), and write \(\phi_{X}=\psi_{X}^{-1}\colon\overline{\mathbb{C}}\setminus K_{X}\to\overline{\mathbb{C}}\setminus\overline{\mathbb{D}}\). Set \[\hat{f}_{X}=\phi_{X}\circ f\circ\psi_{X}, \tag{5.1}\] defined on \(\psi_{X}^{-1}(\Omega_{a}\setminus K_{X})\subset\overline{\mathbb{C}}\setminus\overline{\mathbb{D}}\), which is an open annulus with inner boundary \(\partial\mathbb{D}\). Since \(K_{X}\) is locally connected and each point of \(K_{X}(f)\) has finitely many accesses, by Carathéodory's Theorem \(\psi_{X}\) extends continuously to the boundary of \(\mathbb{D}\) and \(\phi_{X}\) extends as a multi-valued function to the boundary of \(K_{X}\). So we can write \[\psi_{X}\colon\partial\mathbb{D}\to K_{X}\text{ and }\phi_{X}\colon K_{X}\to\partial\mathbb{D}\] where the latter map is multi-valued. Note that the normalization of \(\psi_{X}\), the assumption that \(I=[-1,1]\) and that \(K_{X}\) is real-symmetric, imply that \(\phi_{X}(-1)=-1\) and \(\phi_{X}(1)=1\). Since there are finitely many accesses at each point in \(K_{X}\), \[\hat{X}=\phi_{X}(X) \tag{5.2}\] is a finite subset of \(\partial\mathbb{D}\). Note that \(\pm 1\notin X\) and therefore \[\pm 1\notin\hat{X}.\]

_Example 5.1_.: Let us describe the external maps corresponding to the examples from above:

(i) \(f\) has a super-attractor.
Let \(I=[-1,1]\), \(J=(-\epsilon,\epsilon)\) with \(\epsilon<1\) and \(f(x)=x^{\ell}\); then, as we saw in Example 4.1, \[K_{X}=\bigcup_{k=1,\dots,\ell-1}e^{\pi ik/\ell}\cdot J^{\prime}\ \bigcup\ I\text{ where }J^{\prime}=[-\epsilon^{1/\ell},\epsilon^{1/\ell}].\] Since the pruned filled Julia set is not backward invariant, the associated external map \(\hat{f}_{X}\) is not going to extend continuously to \(\partial\mathbb{D}\). In the next lemma we will show that nevertheless \(\hat{f}_{X}\) extends to a piecewise real analytic map of \(\partial\mathbb{D}\) with discontinuities at pre-images under \(\psi_{X}\) of \(X=f^{-1}(\partial J)\setminus\mathbb{R}\) and of the critical point.

(ii) Assume that \(I=[-1,1]\), \(J=(-1,1)\) and \(f\colon I\to I\) is a polynomial so that all its critical points are real and non-periodic. Then \(K_{n}=\{z\in\mathbb{C};f^{n}(z)\in I\}\), \(\forall n\geq 0\), and therefore \(K_{X}\) is equal to the filled Julia set of \(f\). On the other hand, if \(J\Subset(-1,1)\) then \(K_{n}\) is a strict subset of the Julia set of \(f\) pruned at preimages of \(X=\partial J^{-1}\).

### A semi-conjugacy with an expanding circle map

**Definition 5.1**.: Define the _sign_ \(\epsilon(f)\in\{-1,1\}\) of \(f\colon I\to I\) by \[\epsilon(f)=\begin{cases}1&\text{ if }f(1)=1\\ -1&\text{ if }f(1)=-1.\end{cases}\]

In this section we will show that associated to \(\hat{f}_{X}\) there is a map in the class of maps \(\mathcal{E}_{\epsilon}^{d}\) defined below.

**Definition 5.2** (\(\mathcal{E}_{\epsilon}^{d}\)).: Assume that \(\epsilon\in\{-1,1\}\). We say that \(g\in\mathcal{E}_{\epsilon}^{d}\) if

1. there exists a subset \(\hat{X}\subset\partial\mathbb{D}\) of finite cardinality so that \(g\) is a real-symmetric analytic map \(g\colon\partial\mathbb{D}\setminus\hat{X}\to\partial\mathbb{D}\), so that \(g\) has discontinuities at each point of \(\hat{X}\) and so that \(\pm 1\notin\hat{X}\);
2. \(g(\bar{z})=\overline{g(z)}\) for all \(z\in\partial\mathbb{D}\setminus\hat{X}\);
3. \(g(1)=\epsilon\);
4. there exists a map \(g^{*}\colon\mathbb{R}\setminus\pi^{-1}(\hat{X})\to\mathbb{R}\) so that \(\pi\circ g^{*}(x)=g\circ\pi(x)\) for all \(x\in\mathbb{R}\setminus\pi^{-1}(\hat{X})\), where \(\pi\colon\mathbb{R}\to\partial\mathbb{D}\) is the covering map \(t\mapsto e^{2\pi it}\), so that \(g^{*}\colon\mathbb{R}\setminus\pi^{-1}(\hat{X})\to\mathbb{R}\) is strictly monotone increasing on each component of \(\mathbb{R}\setminus\pi^{-1}(\hat{X})\) and so that the jumps at discontinuities of \(g^{*}\) are of size \(<1\);
5. \(g^{*}(x+1)-g^{*}(x)=d\) for all \(x\in\mathbb{R}\setminus\pi^{-1}(\hat{X})\).

**Lemma 5.1**.: _Assume that \(f\in\mathcal{A}_{a}^{\nu}\) has only hyperbolic periodic points and assume that the intervals \(J_{i}\ni f(c_{i}),\,c_{i}\in\operatorname{Cr}(f)\), are disjoint and are compactly contained in \(I\). Then \(\hat{f}_{X}\) extends to a map on \(\partial\mathbb{D}\) (minus a finite number of points):_

1. \(\hat{f}_{X}\colon\partial\mathbb{D}\setminus\hat{X}\to\partial\mathbb{D}\) _which is real analytic, order preserving and has no critical point;_
2. \(\hat{f}_{X}\) _has a jump discontinuity at each point of_ \(\hat{X}\) _and_ \(\phi_{X}(\operatorname{Cr}(f))\cap\hat{X}=\emptyset\)_;_
3. _each periodic point of_ \(\hat{f}_{X}\colon\partial\mathbb{D}\setminus\hat{X}\to\partial\mathbb{D}\) _is either hyperbolically repelling or corresponds to a (real) periodic attractor or parabolic point of_ \(f\colon I\to I\)_._

_Moreover, \(\hat{f}_{X}\) is in the class \(\mathcal{E}_{\epsilon}^{d}\) defined in the definition above, where \(d=\ell_{1}+\cdots+\ell_{\nu}\), where \(\ell_{i}\) is the order of the \(i\)-th critical point of \(f\), and where \(\epsilon=\epsilon(f)\) (defined above)._

_If \(f\) has no periodic attractors then each periodic orbit of \(\hat{f}_{X}\colon\partial\mathbb{D}\setminus\hat{X}\to\partial\mathbb{D}\) is repelling. Moreover, if \(O\) is the set of all (real) periodic attractors of \(f\) and \(O\) is as in the second part of Theorem 4.2, then each periodic point of \(\hat{f}_{X,O}\colon\partial\mathbb{D}\setminus\hat{X}\to\partial\mathbb{D}\) is repelling. Here \(\hat{f}_{X,O}\) is the external map corresponding to \(K_{X,O}\)._

Proof of Lemma 5.1.: Let us for simplicity assume in the proof below (except at the end of the proof) that no critical point of \(f\) is eventually mapped to itself or to another critical point.

To prove the first statement of the lemma, let us for simplicity write \(K=K_{X}\) and show that \(\hat{f}_{X}\) extends to a real analytic map outside \(\hat{X}=\phi_{X}(X)\) and has discontinuities at each point in \(\hat{X}\). Indeed, for each \(x\in K\setminus X\), there exists a neighbourhood \(W\subset\mathbb{C}\) of \(x\) so that \(f\) maps each component \(W^{\prime}\) of \(W\setminus K\) homeomorphically onto some component \(W^{\prime\prime}\) of \(f(W)\setminus K\). It follows that \(\hat{f}_{X}\) maps \(\hat{W}^{\prime}=\psi_{X}^{-1}(W^{\prime})\) homeomorphically onto \(\hat{W}^{\prime\prime}=\psi_{X}^{-1}(W^{\prime\prime})\). Taking the reflection of the sets \(\hat{W}^{\prime}\) and \(\hat{W}^{\prime\prime}\) in \(\partial\mathbb{D}\), and using the Schwarz reflection principle, it follows that \(\hat{f}_{X}\) is a well-defined analytic map in a neighbourhood of each point of \(\partial\mathbb{D}\setminus\hat{X}\), and \(\hat{f}_{X}\) maps \(\partial\mathbb{D}\setminus\hat{X}\) into \(\partial\mathbb{D}\).

Another way of seeing the previous sentence is by considering curves in \(\mathbb{C}\setminus\overline{\mathbb{D}}\) which land at a point in \(\overline{\mathbb{D}}\). For example, consider a critical point \(c_{i}\in K\) which is not eventually mapped into a critical point. Then, locally, each of the \(2\ell_{i}\) sectors near \(c_{i}\) of \(\mathbb{C}\setminus K_{X}\) is mapped by \(f\) to a half-sector near \(f(c_{i})\). Hence the above argument shows that \(\hat{f}_{X}\) extends continuously to \(\phi_{X}(c_{i})\), and therefore as a real analytic map near \(\phi_{X}(c_{i})\). If \(c_{i}\) is mapped by \(f\) to another non-periodic critical point, then by construction the number of sectors of \(\mathbb{C}\setminus K_{X}\) at \(c_{i}\) will be \(\ell_{i}\) times the number of sectors of \(\mathbb{C}\setminus K_{X}\) at \(f(c_{i})=c_{j}\) and so the previous argument still goes through. In particular, even in this case, \(\hat{f}_{X}\) is real analytic at \(\phi_{X}(c_{i})\).

To prove the second statement of the lemma, we need to analyse what happens at points of \(\hat{X}\) and show they are discontinuities of \(\hat{f}_{X}\).
This is described in the following **claim:** For each point \(x\in X\) there exists a unique point \(\hat{x}\) (which is in \(\hat{X}\)) so that \(\psi_{X}(\hat{x})=x\), but there are (precisely) two points \(\hat{x}_{1}^{f},\hat{x}_{2}^{f}\) so that \(\psi_{X}(\hat{x}_{1}^{f})=\psi_{X}(\hat{x}_{2}^{f})=x^{f}=f(x)\).

To prove this claim, note that for _each_ small neighbourhood \(W\) of \(x\), we have that \(W\setminus K\) consists of a single component, but the set \(f(W)\setminus K\) consists of two components, one in the upper half-plane and one in the lower half-plane, see Figure 7. This proves the claim.

It follows that \(\hat{f}_{X}\colon\partial\mathbb{D}\setminus\hat{X}\to\partial\mathbb{D}\) has a discontinuity at each point \(\hat{x}\in\hat{X}\) and its left and right limit values correspond to \(\hat{x}_{1}^{f}\) and \(\hat{x}_{2}^{f}\). Since \(\psi_{X}(\hat{x}_{1}^{f})=\psi_{X}(\hat{x}_{2}^{f})\), it follows that \(\hat{x}_{1}^{f}\) and \(\hat{x}_{2}^{f}\) are each other's complex conjugates. Again we can also see that \(\hat{f}_{X}\) has a discontinuity at such a point by considering two curves \(\gamma_{1},\gamma_{2}\) in \(\mathbb{C}\setminus K_{X}\) landing on \(x_{1}\) from the right respectively from the left of the curve \(f^{-1}(J^{\prime})\) as in Figure 7. Then \(f(\gamma_{1})\) lands on \(f(x_{1})\) from above and \(f(\gamma_{2})\) lands on \(f(x_{2})\) from below. The same argument also shows that one has discontinuities at \(\phi_{X}(c_{i})\) when \(c_{i}\) is a periodic critical point of \(f\).

To prove the third and fourth statements of the lemma, observe that each periodic point \(\hat{p}\) of period \(s\) for \(\hat{f}_{X}\) corresponds to a periodic point \(p\) of period \(s\) for \(f\colon\Omega_{a}\to\mathbb{C}\). By part (4) of Theorem 4.2, \(p\) is repelling unless it lies on the real line. Let us first assume that \(p\) is a repelling periodic point of \(f\colon\Omega_{a}\to\mathbb{C}\). Then we can choose a neighbourhood \(W\) of \(p\), so that \(f^{s}(W)=W^{\prime}\Supset W\). We have that \(\psi_{X}^{-1}(W)\) is a union of disjoint topological disks, each with the property that its boundary can be decomposed into two components, one in \(\phi_{X}(\partial W)\) and one in \(\partial\mathbb{D}\). Reflect each of these disks about \(\partial\mathbb{D}\) to obtain a union of topological disks \(W_{\phi}\). One of these components contains \(\hat{p}\); label it by \(\hat{W}\). Repeat this procedure with \(W^{\prime}\) to obtain a topological disk \(\hat{W}^{\prime}\Supset\hat{W}\ni\hat{p}\), so that \(\hat{f}_{X}^{s}(\hat{W})=\hat{W}^{\prime}\). Thus we have that \(\hat{p}\) is a repelling periodic point of \(\hat{f}_{X}\). If \(p\) is an attracting periodic point of \(f\), then we can find a neighbourhood \(W\) of \(p\) so that \(f^{s}(W)=W^{\prime}\Subset W\) and repeat a similar argument. The parabolic case also goes similarly. If \(O\) is as in (6) of Theorem 4.2, then all periodic points of \(f\) in \(\partial K_{X,O}\) are repelling, and so \(\hat{f}_{X,O}\) has only repelling periodic points.

Now it is easy to see that the properties from Definition 5.2 hold. For example, take a point \(p\in f^{-1}(J_{i})\setminus\mathbb{R}\). There are two curves landing at \(p\) approaching \(f^{-1}(J_{i})\setminus\mathbb{R}\) from opposite sides. The image under a complex extension of \(f\) of one of these curves lies in the upper half-plane and the other one in the lower half-plane, but they land at the same point.
The number of curves in \(f^{-1}(J)\setminus\mathbb{R}\) is determined by \(\ell_{1}+\cdots+\ell_{\nu}\). From all this we obtain that \(g\) has degree \(d=\ell_{1}+\cdots+\ell_{\nu}\), as claimed in property (5) of Definition 5.2.

Later on it will be useful to apply a further pruning in a combinatorially well-defined way. For this we will use the following

**Lemma 5.2** (Semi-conjugacy with expanding covering map of \(\partial\mathbb{D}\)).: _Let \(g\in\mathcal{E}_{\epsilon}^{d}\) with \(\epsilon\in\{-1,1\}\) and let \(Q_{g}\) be a finite forward invariant set disjoint from the set \(\hat{X}\) of discontinuities of \(g\). Then_

1. _there exists an orientation preserving degree_ \(d\) _continuous covering map_ \(g_{*}\colon\partial\mathbb{D}\to\partial\mathbb{D}\) _which agrees with_ \(g\) _outside a neighbourhood_ \(U\) _of_ \(\hat{X}\) _with_ \(U\cap Q_{g}=\emptyset\) _and so that the image under_ \(g_{*}\) _of each component of_ \(U\) _has length_ \(<1\)_;_
2. _there exists a unique semi-conjugacy_ \(h_{g}\) _of_ \(g_{*}\) _with the map_ \(\partial\mathbb{D}\to\partial\mathbb{D}\) _defined by_ \[\begin{cases}z\mapsto z^{d}&\text{ if }\epsilon=+1\\ z\mapsto-z^{d}&\text{ if }\epsilon=-1.\end{cases}\] _Moreover,_ \(h_{g}(1)=1\) _and_ \(h_{g}\) _is real symmetric, i.e.,_ \(h_{g}(\overline{z})=\overline{h_{g}(z)}\)_;_
3. \(Q=h_{g}(Q_{g})\) _only depends on_ \(g\) _and_ \(Q_{g}\) _and not on the extension_ \(g_{*}\) _of_ \(g\) _(and so not on_ \(h_{g}\)_)._

Proof.: That \(g_{*}\) exists immediately follows from the properties of \(\mathcal{E}^{d}\). Parts (2) and (3) immediately follow from [Sh].

Using this lemma, we can give the following definition.

**Definition 5.3** (\(\mathcal{E}_{\pm,Q}^{d}\)).: Take \(\epsilon\in\{-1,1\}\) and assume that \(Q\subset\partial\mathbb{D}\) is a finite forward invariant set under \(z\mapsto z^{d}\) if \(\epsilon=1\) and invariant under \(z\mapsto-z^{d}\) if \(\epsilon=-1\). Then \(\mathcal{E}_{\pm,Q}^{d}\) is defined to be the class of maps \(g\in\mathcal{E}^{d}\) so that \(g\) has an invariant subset \(Q_{g}\) which does not intersect the set of discontinuities of \(g\), so that \(h_{g}(Q_{g})=Q\). Sometimes we tacitly use \(\mathcal{E}^{d}\) for the space of circle maps which actually do arise from interval maps.

### Further pruning and the forward invariant set \(\Lambda_{N}\)

Since the analytic behaviour of the map \(\hat{f}_{X}\) near its discontinuities is difficult to describe, we will take for each critical value \(f(c_{i})\) a closed, nice interval \(J_{i}^{*}\Subset J_{i}\), containing \(f(c_{i})\), so that its boundary points are periodic, preperiodic or in the basin of a periodic attractor. Here the \(J_{i}\) are the pruning intervals used in the construction of the pruned Julia sets. Such intervals \(J_{i}^{*}\) exist by the real bounds, see [vSV, Theorem A\({}^{\prime}\)]. Let \(J^{*}=\cup_{i}J_{i}^{*}\) and \[X^{*}=f^{-1}(\partial J^{*})\setminus\mathbb{R}\] and let \(Y\) be the union of the components of \(K_{X}\setminus X^{*}\) which do not intersect \(I\). Associated to each critical point \(c_{i}\) there are \(2(\ell_{i}-1)\) such components. Note that \[\hat{Y}=\phi_{X}(Y)\] consists of open intervals around the discontinuity points of \(\hat{f}_{X}\), i.e., around the finite set \(\hat{X}=\phi_{X}(X)\subset\partial\mathbb{D}\).
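To illustrate the semi-conjugacy of Lemma 5.2 concretely, here is a minimal numerical sketch. It is not the construction used in the paper: as a stand-in for the extension \(g_{*}\) of Lemma 5.2(1) it takes a smooth orientation preserving degree \(d\) covering map of the circle (written through a lift \(G\) with \(G(x+1)=G(x)+d\)), and computes the semi-conjugacy with \(x\mapsto dx\pmod 1\) as the uniform limit \(h(x)=\lim_{n}G^{n}(x)/d^{n}\).

```python
import numpy as np

# Assumption-laden stand-in for g_* of Lemma 5.2: a smooth degree-d circle
# covering written through its lift G with G(x+1) = G(x) + d.  For such a lift,
# h(x) = lim_n G^n(x)/d^n converges uniformly, since
# |G^{n+1}(x)/d^{n+1} - G^n(x)/d^n| <= sup|G(y) - d*y| / d^{n+1},
# and the limit satisfies h(G(x)) = d*h(x) and h(x+1) = h(x) + 1.
d = 2
G = lambda x: d * x + 0.3 * np.sin(2 * np.pi * x) / (2 * np.pi)  # perturbation small enough to keep G increasing

def semi_conjugacy(x, n_iter=40):
    y = np.asarray(x, dtype=float)
    for _ in range(n_iter):
        y = G(y)
    return y / d ** n_iter

x = np.linspace(0.0, 1.0, 5)
h = semi_conjugacy(x)
# check the functional equation h(G(x)) = d*h(x) up to rounding:
print(np.max(np.abs(semi_conjugacy(G(x)) - d * h)))
```

Since \(h\) is monotone and satisfies \(h(x+1)=h(x)+1\), it descends to a degree-one circle map semi-conjugating the covering map with \(z\mapsto z^{d}\), which is the shape of statement (2) of Lemma 5.2 in this smooth toy setting.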
Let \(B_{0,i}\subset\mathbb{C}\) be the immediate basins of the (real) periodic attractors of \(f\colon\Omega_{a}\to\mathbb{C}\), let \(B_{0}=\cup B_{0,i}\) and let \(B_{0,i}^{*}\) be the connected components of \(K_{X}\setminus\partial B_{0}\) containing points of \(B_{0,i}\). Then define \[\hat{B}_{0,i}=\phi_{X}(B_{0,i}^{*})\] and let \(B,B^{*},\hat{B}_{0}\) be the union of these sets. Note that \(f\) has at most a finite number of real periodic attractors, see [MMvS]. Next we define the \(\hat{f}_{X}\)-forward invariant set \[\Lambda_{N}:=\{z\in\partial\mathbb{D};\hat{f}_{X}^{n}(z)\notin\hat{Y}\text{ for all }0\leq n\leq N\}\text{ and }\Lambda_{\infty}=\cap_{N\geq 0}\Lambda_{N}.\]

**Lemma 5.3**.: _The maps \(\phi_{X}\), \(\hat{f}_{X}\) and the set \(\hat{Y}\) have the following properties._

1. \(\phi_{X}(I)\subset\Lambda_{\infty}\) _and therefore_ \(\Lambda_{\infty}\) _corresponds to the real points;_
2. \(\partial\hat{Y}=\phi_{X}(X^{*})\)_,_ \(\partial\hat{Y}\subset\Lambda_{\infty}\) _and_ \(\partial\hat{Y}\) _is contained in a finite forward invariant set which avoids the set of discontinuities of_ \(\hat{f}_{X}\)_;_
3. \(\partial\hat{Y}\subset\partial\mathbb{D}\) _consists of periodic or eventually periodic points or is in the basin of a periodic attractor of_ \(\hat{f}_{X}\)_;_
4. \(\hat{B}_{0}\subset\partial\mathbb{D}\) _consists of a finite number of intervals, each of which contains a point of_ \(\phi_{X}(a)\) _for some attracting periodic point_ \(a\) _of_ \(f\)_. Moreover,_ \(\partial\hat{B}_{0}\) _is a finite forward invariant set._

_Remark 5.1_.: If each component of the basin of a (real) periodic attractor of \(f\colon\Omega_{a}\to\mathbb{C}\) has small diameter, as in (6) of Theorem 4.2, then by Lemma 5.1 all periodic points of \(f\) in \(\partial K_{X,O}\) are hyperbolically repelling, and so \(\hat{f}_{X,O}\) has only repelling periodic points. In that case, it will not be necessary in the next section to use the set \(\hat{B}_{0}\) we just defined (provided we use \(\hat{f}_{X,O}\) instead of \(\hat{f}_{X}\)).

Proof.: By construction the closure of \(Y\) is disjoint from \(I\). In particular, \(f\)-forward iterates of \(x\in I\) never intersect the closure of \(Y\). It follows that \(\hat{f}_{X}\)-forward iterates of points \(z\in\phi_{X}(I)\) do not intersect \(\hat{Y}\), proving assertion (1).

\(\partial\hat{Y}=\phi_{X}(X^{*})\) is obvious. Since \(f(X^{*})\subset I\) it follows that \(\partial\hat{Y}\subset\Lambda_{\infty}\). For each periodic point \(p\), \(K_{X}\setminus\{p\}\) consists of at most finitely many components. So at most finitely many accesses (or 'rays') land at each periodic point of \(f\colon K_{X}\to K_{X}\). Hence periodic (preperiodic) points of \(f\colon K_{X}\to K_{X}\) correspond to periodic (resp. preperiodic) points of \(\hat{f}_{X}\colon\partial\mathbb{D}\setminus\hat{X}\to\partial\mathbb{D}\) and in particular \(\partial\hat{Y}\) consists of periodic or preperiodic points of \(\hat{f}_{X}\). This implies the final part of (2). That \(\partial\hat{Y}\) consists of periodic or eventually periodic points follows from the assumption that \(\partial J^{*}\) consists of periodic or eventually periodic points (or points in the basin of a periodic attractor), proving (3).

Let us prove (4). Let \(K_{1}\) be a connected component of \(K_{X}\setminus\partial B_{0}\) that intersects \(B_{0}\). Then \(\phi_{X}(K_{1})\) consists either of

1. one component, and \(\partial K_{1}\cap\mathbb{R}\) contains an endpoint of \(I\), or
2. two disjoint, connected sets, one corresponding to curves through the upper half-plane landing at \(K_{1}\), and the other to curves passing through the lower half-plane.

In either case, \(\psi_{X}(\partial\hat{B}_{0})\subset\partial K_{1}\cap I\subset\partial B_{0}\cap I\), which is a finite forward invariant set, and so \(\partial\hat{B}_{0}\) is a finite forward invariant set too.

## 6. An expanding Markov structure of the external map

Assume that \(f\) has only hyperbolic periodic points (or that the basins of the parabolic points are sufficiently small so that they can be included in a set \(K_{X,O}\) as in Theorem 4.2). Then define \[\Lambda^{\prime}_{N}=\{z\in\partial\mathbb{D};\hat{f}_{X}^{n}(z)\notin\hat{Y}\cup\hat{B}_{0}\text{ for all }0\leq n\leq N\}\text{ and }\Lambda^{\prime}_{\infty}=\cap_{N\geq 0}\Lambda^{\prime}_{N},\] where we should note that \(\hat{Y}\) contains a neighbourhood of the discontinuities of \(\hat{f}_{X}\). By Lemma 5.3(3) each point in \(\partial(\hat{Y}\cup\hat{B}_{0})\) is periodic or eventually periodic. Therefore we define \[Q_{\hat{f}_{X}}=\partial(\hat{Y}\cup\hat{B}_{0})\] and set \[Q_{f}:=h_{\hat{f}_{X}}(Q_{\hat{f}_{X}})\] where \(h_{\hat{f}_{X}}\) is the semi-conjugacy from Lemma 5.2.

_Remark 6.1_.: In fact, if the basins of the periodic attractors have sufficiently small diameter, as described in the second part of Theorem 4.2, then we can replace \(\hat{f}_{X}\) by \(\hat{f}_{X,O}\). In this case, as pointed out in Remark 5.1, all periodic points of \(\hat{f}_{X,O}\) will be repelling, and so we do _not_ need to include \(\hat{B}_{0}\) in the above definition of \(\Lambda^{\prime}_{N}\). In this case in the next lemma one can replace \(\Lambda^{\prime}_{\infty}\) by \(\Lambda_{\infty}\). It follows that in this case the basins of the periodic attractors of \(f\colon\Omega_{a}\to\mathbb{C}\) will be well inside the domain of the pruned polynomial-like mapping \(F\colon E\to E^{\prime}\) that we will construct.

**Lemma 6.1** (\(\Lambda^{\prime}_{\infty}\) is expanding).: _The set \(\Lambda^{\prime}_{\infty}\) is a forward invariant hyperbolic repelling set and there exists a Riemannian metric \(|\cdot|_{x}\) on \(\Lambda^{\prime}_{\infty}\) and \(\lambda>1\) so that \(|D\hat{f}_{X}(x)v|_{\hat{f}_{X}(x)}\geq\lambda|v|_{x}\) for \(v\in T_{x}\partial\mathbb{D}\), \(x\in\Lambda^{\prime}_{\infty}\). Moreover,_

1. _there exist_ \(N<\infty\) _and_ \(\lambda>1\) _so that_ \(|D\hat{f}_{X}(x)v|_{\hat{f}_{X}(x)}\geq\lambda|v|_{x}\) _for each_ \(v\in T_{x}\partial\mathbb{D}\)_,_ \(x\in\Lambda^{\prime}_{N}\)_._
2. _Let_ \(I^{\prime}_{1},\ldots,I^{\prime}_{l}\subset\partial\mathbb{D}\) _be the components of_ \(\Lambda^{\prime}_{N}\) _and let_ \(I_{1},\ldots,I_{k}\subset\partial\mathbb{D}\) _be so that for each_ \(i\) _there exists_ \(j_{i}\) _so that_ \(\hat{f}_{X}(I_{i})=I^{\prime}_{j_{i}}\) _and so that_ \(\Lambda^{\prime}_{N+1}=\cup I_{i}\subset\cup I^{\prime}_{i}\)_._
3. _Each boundary point of_ \(I_{i}\) _and_ \(I^{\prime}_{j}\) _is periodic or eventually periodic._

Proof.: That \(\Lambda^{\prime}_{\infty}\) is hyperbolic follows from Mañé, see [dMvS, Theorem III.5.1]. For example, one can modify \(\hat{f}_{X}\) near \(\hat{X}\) to obtain a new \(C^{2}\) covering map of the circle \(\partial\mathbb{D}\). Since the statement only concerns points which stay outside a neighbourhood of \(\hat{X}\), one can indeed apply the above theorem of Mañé. Part (1) of this lemma therefore follows by taking an adapted metric, see [dMvS, Lemma III.1.3].
Parts (2) and (3) follow from the fact that each point in \(\partial(\hat{Y}\cup\hat{B}_{0})\) is eventually periodic.

### The complex extension of the external map and its Markov structure

Since \(\hat{f}_{X}\colon\partial\mathbb{D}\setminus\hat{X}\to\partial\mathbb{D}\) is real analytic, it is defined on a complex neighbourhood of its domain.

**Lemma 6.2**.: _For each \(p\in Q_{\hat{f}_{X}}\) there exists a smooth curve \(\gamma_{p}\) through \(p\) which is transversal to \(\partial\mathbb{D}\), so that \(\hat{f}_{X}(\gamma_{p})\supset\gamma_{\hat{f}(p)}\) and so that the curves \(\gamma_{p}\), \(p\in Q_{\hat{f}_{X}}\), are pairwise disjoint._

Proof.: To construct a ray \(\gamma_{p}\) through \(p\in Q_{\hat{f}_{X}}\), observe that \(p\) is a finite forward or backward iterate of a repelling periodic point \(q\in\partial\mathbb{D}\) of period \(s\) under \(\hat{f}_{X}\). Using a local linearizing coordinate at \(q\) we have that \(\hat{f}_{X}^{s}\) is conjugate in a neighbourhood of \(q\) to \(z\mapsto\lambda z\) in a neighbourhood of \(0\), for some \(\lambda>1\). The mapping \(z\mapsto\lambda z\) preserves a line landing at \(0\) that corresponds, under the inverse of the linearization, to a ray \(\gamma_{q}\) transverse to the circle at \(q\). We transfer this ray to \(p\) through the appropriate iterate of \(\hat{f}_{X}\). Moreover, since \(\hat{f}_{X}\) is conformal, we have that each \(\hat{f}_{X}^{i}(\gamma_{p})\) is transverse to \(\partial\mathbb{D}\), so provided that \(\gamma_{q}\) was chosen short enough, we have that the \(\hat{f}_{X}^{i}(\gamma_{p})\) are pairwise disjoint.

Now define \[\Gamma_{\hat{f}_{X}}=\cup_{p\in Q_{\hat{f}_{X}}}\gamma_{p}.\]

**Definition 6.1**.: We say that \(\hat{F}_{X}\colon\hat{E}\to\hat{E}^{\prime}\) has an _expanding Markov structure_ if the following holds:

* \(\hat{F}_{X}\colon\hat{E}\to\hat{E}^{\prime}\) is a locally univalent covering map;
* \(\hat{E}\cap\partial\mathbb{D}=I_{1}\cup\dots\cup I_{k}\), \(\hat{E}^{\prime}\cap\partial\mathbb{D}=I^{\prime}_{1}\cup\dots\cup I^{\prime}_{l}\) where \(I_{i},I^{\prime}_{j}\) are intervals;
* \(\hat{F}_{X}\) maps \(I_{i}\) onto some \(I^{\prime}_{j}\) and each boundary point of \(I_{i}\) and \(I^{\prime}_{j}\) is periodic or eventually periodic. If such a point \(p\) is periodic, then \(\psi_{X}(p)\in I\);
* \(\partial\hat{E}\cap\partial\hat{E}^{\prime}\subset\Gamma_{\hat{f}_{X}}\);
* \(\hat{F}_{X}(\partial\hat{E}\setminus\Gamma_{\hat{f}_{X}})\subset\partial\hat{E}^{\prime}\setminus\Gamma_{\hat{f}_{X}}\);
* the set \(\Gamma_{\hat{f}_{X}}\) is forward invariant in the sense that \(\hat{F}_{X}(\Gamma_{\hat{f}_{X}}\cap\overline{\hat{E}})=\Gamma_{\hat{f}_{X}}\cap\hat{E}^{\prime}\);
* the diameters of puzzle pieces, i.e. components of \(\hat{F}_{X}^{-n}(\hat{E}^{\prime})\), tend to zero as \(n\to\infty\);
* every point in \(\hat{E}\setminus\partial\mathbb{D}\) eventually escapes \(\hat{E}\);
* \(\hat{E}\) contains a tubular neighbourhood of \(\partial\mathbb{D}\setminus\hat{B}_{0}\) where \(\hat{B}_{0}\subset\partial\mathbb{D}\) is the set corresponding to the immediate basins of periodic attractors of \(f\).

**Proposition 6.3** (Existence of expanding Markov structure).: _Let \(\hat{f}_{X}\) and \(\Gamma_{\hat{f}_{X}}\) be as above.
There exist open sets \(\hat{E},\hat{E}^{\prime}\) near \(\partial\mathbb{D}\subset\mathbb{C}\) so that \(\hat{f}_{X}\) extends to a map \(\hat{F}_{X}\colon\hat{E}\to\hat{E}^{\prime}\) which has an expanding Markov structure, in the sense defined above._

Proof.: For each \(i\) take a 'rectangular' set \(\hat{E}^{\prime}_{i}\supset I^{\prime}_{i}\) bounded by two arcs from \(\Gamma_{\hat{f}_{X}}\) and two curves from \(\{z\in\mathbb{C};d(z,\partial\mathbb{D})=\tau\}\) and let \(\hat{E}^{\prime}=\cup\hat{E}^{\prime}_{i}\). Here \(d\) is the distance induced by the metric from Lemma 6.1. Let \(\hat{E}=\hat{f}_{X}^{-1}(\hat{E}^{\prime})\). By the previous lemma, provided \(\tau>0\) is chosen small enough, \(\hat{f}_{X}\) maps \(\hat{E}\) onto \(\hat{E}^{\prime}\) (locally univalently).

**Definition 6.2**.: The parts of \(\partial\hat{E},\partial\hat{E}^{\prime}\) consisting of curves transversal to \(\partial\mathbb{D}\) will be called _rays_ and the other curves will be called _roofs_ or the _equipotentials_ of \(\hat{E}\). Each ray is eventually mapped to a ray through a periodic point. We shall also use the corresponding terminology for the boundary curves of \(E:=\psi_{X}(\hat{E})\) and \(E^{\prime}:=\psi_{X}(\hat{E}^{\prime})\).

_Remark 6.2_.: Note that we use the notation \(\hat{F}_{X}\) to emphasise that this complex extension of \(\hat{f}_{X}\) has additional structure, see Remark 3.1. Also note that \(\hat{E}\) contains \(\Lambda^{\prime}_{\infty}\).

### Associating a pruned polynomial-like map to a real analytic map: the proof of Theorem 3.1 when \(f\) has only repelling periodic points

Let \(E^{\prime}=\psi_{X}(\hat{E}^{\prime})\), \(E=\psi_{X}(\hat{E})\) and \(\Gamma_{F}=\psi_{X}(\Gamma_{\hat{f}_{X}})\). Let us show that \[F\colon E\to E^{\prime}\] is a pruned polynomial-like map if \(f\) has no periodic attractors. By Proposition 6.3, \((K_{X}(f)\setminus X)\subset E\). Moreover, we have that if \(z\in\partial E\cap\partial E^{\prime}\), then \(z\in\Gamma_{F}\). By Lemma 5.3(1), since \(\hat{E}\) contains \(\Lambda_{\infty}\) and since we assume that there are no periodic attractors (or parabolic periodic points), we have that \(E\) contains a neighbourhood of \(I\). So in that case we define \(U=E\) and \(U^{\prime}=E^{\prime}\) and obtain a pruned polynomial-like extension \(F\colon U\to U^{\prime}\) of \(f\colon I\to I\).

_Remark 6.3_.: If \(f\) does have periodic attractors, then \(E\) does not necessarily contain \(I\). To address this issue we will define sets \(B,B^{\prime}\) corresponding to the basins of the periodic attractors so that \(F\colon E\cup B\to E^{\prime}\cup B^{\prime}\) is a pruned polynomial-like map.

## 7. The pruning data \(Q\)

Since the boundary points of \(\hat{Y}\cup\hat{B}_{0}\) are (eventually) periodic there exists a smallest finite \(\hat{f}_{X}\)-forward invariant set \(Q(\hat{f}_{X},\hat{Y},\hat{B}_{0})\subset\partial\mathbb{D}\) so that \[\partial I_{i},\partial I^{\prime}_{j},\partial\hat{B}_{0}\subset Q(\hat{f}_{X},\hat{Y},\hat{B}_{0})\quad\forall i,j,\] where the \(I_{i}\) are the intervals from the previous section. We call \(Q(\hat{f}_{X},\hat{Y},\hat{B}_{0})\) the _pruning data_ of the external map \(\hat{f}_{X}\). The set \(Q(\hat{f}_{X},\hat{Y},\hat{B}_{0})\) gives some finite combinatorial data about \(f\) together with the choices made for \(J\) and \(J^{*}\).

**Definition 7.1**.: A set \(Q\subset\partial\mathbb{D}\) is an _admissible pruning set for \(f\in\mathcal{A}^{\underline{\nu}}\)_ if it is of the above form.
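To see what such pruning data looks like combinatorially, here is a toy example (the particular angles are a hypothetical illustration, not data from the paper): a finite subset of the circle, written as angles in \(\mathbb{R}/\mathbb{Z}\), which is forward invariant under \(t\mapsto dt\pmod 1\), i.e. invariant under \(z\mapsto z^{d}\) in the case \(\epsilon(f)=+1\).

```python
from fractions import Fraction

# Toy example of combinatorial pruning data: a finite set of angles in R/Z
# forward invariant under t -> d*t (mod 1), i.e. under z -> z^d when eps = +1.
d = 2
Q = {Fraction(1, 7), Fraction(2, 7), Fraction(4, 7)}   # the doubling orbit of 1/7

def forward(t, d=d):
    return (d * t) % 1

assert all(forward(t) in Q for t in Q), "Q is not forward invariant"
print(sorted(Q), "->", sorted(forward(t) for t in Q))
```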
We have already shown that if \(f\) has only repelling periodic orbits, then it has a pruned polynomial-like extension \(F\colon U\to U^{\prime}\) with respect to an admissible pruning set \(Q\).

_Remark 7.1_.: If the basins of all real periodic attractors of \(f\) have small diameter (as discussed before), then we can replace \(\hat{f}_{X}\) by \(\hat{f}_{X,O}\). As \(\hat{f}_{X,O}\) will have no periodic attractor, we have that \(\hat{B}_{0}=\emptyset\) in this case. On the other hand, if \(f\) has a periodic attractor whose basin is not small, then it may be necessary to choose the pruning intervals \(J_{i}\) and \(J_{i}^{*}\) to be contained in the basin of this attractor. In that case, the boundary points of \(\hat{Y}\) would be in the basin corresponding to \(\hat{B}_{0}\). Nevertheless the set \(Q\) defined above would be finite.

By Lemma 5.2 the map \(\hat{f}_{X}\) is semi-conjugate via a map \(h_{\hat{f}_{X}}\) to either \(z\mapsto z^{d}\) or to \(z\mapsto-z^{d}\), depending on the sign \(\epsilon(f)\) of \(f\). This means that \(h_{\hat{f}_{X}}(Q(\hat{f}_{X},\hat{Y},\hat{B}_{0}))\) is a finite set which is forward invariant under \(z\mapsto z^{d}\) or under \(z\mapsto-z^{d}\), and so this set is 'combinatorially' defined.

**Definition 7.2**.: We say that the external maps \(\hat{f}_{X}\) and \(\hat{g}_{X}\) associated to two interval maps are _pruning equivalent_ if the degrees of \(\hat{f}_{X}\) and \(\hat{g}_{X}\) are the same, the signs of \(f,g\) are the same and if moreover \[h_{\hat{f}_{X}}(Q(\hat{f}_{X},\hat{Y},\hat{B}_{0}))=h_{\hat{g}_{X}}(Q(\hat{g}_{X},\hat{Y}_{g},\hat{B}_{0,g})).\] We will write \(Q(\hat{f}_{X})\sim Q(\hat{g}_{X})\) if one can make choices for \(\hat{Y}\) and \(\hat{Y}_{g}\) so that the above equality holds.

_Remark 7.2_.: \(Q(\hat{f}_{X})\sim Q(\hat{g}_{X})\) does not give any information on the (orbits of the) discontinuities of \(\hat{f}_{X}\) and \(\hat{g}_{X}\), and in particular does not imply that \(\hat{f}_{X},\hat{g}_{X}\) are topologically conjugate, nor that \(f,g\) are topologically conjugate.

**Definition 7.3** (The set \(Q(F)\) associated to a pruned polynomial-like map).: Given a pruned polynomial-like map \(F\colon U\to U^{\prime}\) with rays \(\Gamma\), we also have a set \(K_{F}\), a conformal map \(\phi_{X}\colon\mathbb{C}\setminus K_{F}\to\mathbb{C}\setminus\mathbb{D}\), and an external map \(\hat{F}\) with discontinuities. The set \(\Gamma\cap K_{F}\) is forward \(F\)-invariant and eventually periodic. Thus the set \(\phi_{X}(\Gamma\cap K_{F})\) corresponds to an \(\hat{F}\)-invariant subset of \(\partial\mathbb{D}\) which is eventually periodic. Then \(Q(F)\) is defined as the finite subset of \(\partial\mathbb{D}\) given by the image of \(\phi_{X}(\Gamma\cap K_{F})\) under the semi-conjugacy with the map \(z\mapsto\pm z^{d}\), as in Lemma 5.2.

## 8. An attracting structure near hyperbolic attracting basins

In this section, we will add some additional structure to the pruned polynomial-like map which takes care of the 'large' basins of the interval map \(f\colon I\to I\) (namely the ones that cannot be treated by considering the sets \(K_{X,O}\)).
**Definition 8.1**.: We say that \(f\colon B\to B^{\prime}\) has an _attracting structure near attracting basins_ if there exists a finite union \(\Gamma_{a}\) of curves so that * \(B\cap\mathbb{R}\) agrees with the immediate basin of the periodic attractors of \(f\) and each component of \(B\) is contained in the basin of a periodic attractor of \(f\); * \(B\) has finitely many components and \(B\subset\Omega_{a}\); * \(\partial B\cup\partial B^{\prime}\subset\Gamma_{a}\), \(B^{\prime}=f(B)\subset B\) and \(f(\partial B)\subset\partial B^{\prime}\cup\Gamma_{a}\); * \(\Gamma_{a}=\Gamma_{a}^{*}\cup\Gamma_{a}^{v}\cup\Gamma_{a}^{a}\), where * \(\Gamma_{a}^{*}\) is so that \(f(\Gamma_{a}^{*}\cap B)=\Gamma_{a}^{*}\) and each component of \(\Gamma_{a}^{*}\) is a smooth curve connecting an attracting periodic point \(a\) to a repelling periodic point in the boundary of the basin of \(a\), or a pre-image of such a curve; * each component \(\gamma\) of \(\Gamma_{a}^{v}\) is a piecewise smooth arc in \(B\) connecting boundary points of \(B\) and iterates of \(\gamma\) are disjoint from \(\Gamma_{a}^{v}\); * each component \(\gamma\) of \(\Gamma_{a}^{a}\) bounds a disk \(D_{0}(a)\ni a\), where \(a\) is an attracting periodic point of \(f\), so that \(D_{0}(a)\) is in the basin of \(a\) and so that \(\gamma\cup f^{n}(\gamma)\) bounds an annulus, where \(n\) is the period of \(a\); * each component of \(B^{\prime}\setminus(\Gamma_{a}\cup\mathbb{R})\) and of \(B^{\prime}\setminus\Gamma_{a}\) is a quasidisk. An example of such a structure near a periodic attractor for \(f\colon I\to I\) is shown in Figure 10. **Proposition 8.1**.: _Near each hyperbolic periodic attractor of \(f\colon I\to I\), the complex extension \(f\colon\Omega_{a}\to\mathbb{C}\) has an attracting structure \(f\colon B\to B^{\prime}\) in the sense of the previous definition. There is a corresponding attracting structure \(\hat{F}_{X}\colon\hat{B}\to\hat{B}^{\prime}\) near \(\partial\mathbb{D}\cap\hat{B}\)._ _Remark 8.1_.: Note that by [MMvS, dMvS] the periods of periodic attractors of \(f\colon I\to I\) are bounded. Since \(f\) is real analytic, it follows that \(f\) can have at most a finite number of periodic attractors. Figure 10. The attracting structure \(f\colon B\to B^{\prime}\) near an attracting fixed point \(a\) associated to a map \(f\) whose graph is as shown on the left, with the rectangular fundamental domain \(N\) and its image \(f(N)\), and with a curve through the critical point \(c\) marked with the symbol \(*\) and its image through \(f(c)\) marked with the symbol \(\bullet\). The set \(B_{0}^{*}\) is the set bounded by the curves \(\Gamma_{a}^{*}\). By adding a disk \(D_{0}(a)\) round \(a\) and its preimage under \(f\), one obtains an attracting structure. Proof.: Assume that \(f\) has only hyperbolic periodic points, and assume that it has one or more periodic attractors. Let \(a\) be one of these periodic attractors and assume it has period \(n\). Let \(B_{0}(a)\) be the component of the basin which contains \(a\). Choose fundamental domains \(N,N^{\prime}\) in \(B_{0}(a)\subset I\), so that all critical points that are in the basin of \(a\) have forward iterates in \(N\cup N^{\prime}\) and so that \(c\in\partial N\).
Here we take \(N=N^{\prime}\) if \(Df^{n}(a)<0\) or if \(Df^{n}(a)=0\) and \(x\mapsto f^{n}(x)\) has a local extremum at \(x=a\), while if \(Df^{n}(a)>0\) or \(Df^{n}(a)=0\) and \(x\mapsto f^{n}(x)\) has an inflection point at \(x=a\) then \(N\) and \(N^{\prime}\) are on opposite sides of \(a\), see Figure 10. Abusing notation, we also denote by \(N,N^{\prime}\) rectangular sets in \(\mathbb{C}\) whose real traces agree with the fundamental domains in \(\mathbb{R}\), whose boundaries consist of pieces of smooth curves and so that \(N,f^{n}(N)\) (respectively \(N,f^{2n}(N)\)) have a smooth common boundary in the orientation preserving (resp. reversing) case. Choose \(N,N^{\prime}\) so that their'vertical' boundaries are smooth curves, orthogonal to \(\partial\mathbb{D}\) and add an additional smooth vertical curve \(\gamma\in\Gamma^{v}\) in \(N,N^{\prime}\) through each iterate of a critical point in \(N,N^{\prime}\), orthogonal to \(\partial\mathbb{D}\) and connecting the top and bottom boundaries of \(N,N^{\prime}\). Adding forward and backward iterates of \(N\) (intersecting \(\mathbb{R}\)) we obtain a set \(B_{0}^{*}\subset\mathbb{C}\) whose boundary is a union of smooth curves landing on the boundary points of \(\partial B_{0}\cap I\) and on \(a\) (and possibly some of its preimages in the immediate basin of \(a\) as is shown in Figure 10). Note that preimages of the curves \(\gamma\in\Gamma^{v}\) which go through a critical value will no longer be orthogonal, and thus we get that some preimages of the fundamental domain \(N\) will be 'complex', see Figure 10. Now add a topological disk \(D_{0}(a)\) around \(a\) so that \(f^{n}(D_{0}(a))\subset D_{0}(a)\) and so that \(B_{0}^{*}\cap\partial D_{0}(a)\) coincides with an iterate of the'vertical curve' in \(\partial N\). Next add preimages \(D_{0}^{\prime}\) of \(D_{0}(a)\) centered at preimages of \(a\) in the immediate basin of \(a\). Thus we obtain a set \[B=B_{0}^{*}\cup D_{0}(a)\cup D_{0}^{\prime}\] so that \(f^{n}(B)\subset B\) and so that \(f^{n}(\partial B)\subset\partial B\cup\Gamma\) where \[\Gamma=f^{n}(\partial D_{0}(a))\cup\partial B_{0}^{*}.\] Taking forward iterates of \(f\), we have shown that around the entire immediate basin of the periodic attractor \(\{a,\dots,f^{n-1}(a)\}\) we have an _attracting structure_ completing the construction. Using the map \(\phi_{X}\) the attracting structure \(f\colon B\to B^{\prime}\) induces an attracting structure for \(\hat{f}_{X}\colon\hat{B}\to\hat{B}^{\prime}\). ## 9. A global pruned polynomial-like structure associated to interval maps Let us now combine the expanding Markov structure and the attracting structure near basins of periodic attractors, to a global structure on a neighbourhood of the dynamical interval: **Definition 9.1**.: We say that \(\hat{F}_{X}\colon\hat{E}\cup\hat{B}\to\hat{E}^{\prime}\cup\hat{B}^{\prime}\) is a _global pruned polynomial-like structure_, if \(\hat{F}_{X}\colon\hat{E}\to\hat{E}^{\prime}\) is an expanding Markov structure and \(\hat{F}_{X}\colon\hat{B}\to\hat{B}^{\prime}\) is attracting structure and if these are _compatible_ in the sense that 1. \(\hat{B}\cup\hat{E}\) contains a tubular neighbourhood of \(\phi_{X}(I)\subset\partial\mathbb{D}\); 2. each component of \(\hat{B}\) is either strictly contained in a component of \(\hat{E}^{\prime}\setminus\Gamma\) or disjoint from \(\hat{E}^{\prime}\) and in the former case \(\hat{E}^{\prime}\setminus\hat{B}\) forms a quasidisk. 
Similarly, an extension \(F\colon E\cup B\to E^{\prime}\cup B^{\prime}\) of an interval map \(f\colon[-1,1]\to[-1,1]\) is said to have a _global pruned polynomial-like structure_ if \(E\cup B\supset[-1,1]\) and if \(F\colon E\to E^{\prime}\) (coming from the expanding map \(\hat{F}_{X}\colon\hat{E}\to\hat{E}^{\prime}\)) matches the 'attracting structure' \(F\colon B\to B^{\prime}\) we constructed in the previous section near basins in the sense above. **Theorem 9.1**.: _There exists a global pruned polynomial-like structure \(F\colon E\cup B\to E^{\prime}\cup B^{\prime}\)._ Proof.: To construct this structure, extend \(\hat{F}_{X}\colon\hat{E}\to\hat{E}^{\prime}\) near the boundary of \(\partial B_{0}\) to a map \(\hat{F}_{X}\colon\hat{E}_{1}\to\hat{E}^{\prime}_{1}\), so that \(\hat{E}_{1}\setminus\hat{E}\), \(\hat{E}^{\prime}_{1}\setminus\hat{E}^{\prime}\) are 'rectangular' regions, so that \((\hat{E}^{\prime}_{1}\setminus\hat{E}_{1})\cap\hat{B}\) agrees with one of the sets \(\hat{F}^{-i}_{X}(N)\) where \(N\) is from the construction in the previous section. This is illustrated in Figure 11. We will add these sets to \(\hat{E}\) and \(\hat{E}^{\prime}\) and denote the resulting map again by \(\hat{F}_{X}\colon\hat{E}\to\hat{E}^{\prime}\). Taking the corresponding sets \(E,E^{\prime},B,B^{\prime}\) we thus obtain a global pruned polynomial-like extension \(F\colon E\cup B\to E^{\prime}\cup B^{\prime}\) of \(f\). ### Associating a pruned polynomial-like map to a real analytic map: the proof of Theorem 3.1 when there are only hyperbolic periodic points As in Subsection 6.2 we obtain a global pruned polynomial-like mapping even in the presence of hyperbolic periodic attractors. ### Summarising the proof of Theorem 3.1(1-3) in a diagram In the previous subsection we showed how to assign to a real analytic map \(f\) and intervals \(J_{i}\) around the critical values \(f(c_{i})\) of \(f\), a pruned polynomial-like mapping. This was done by considering the pruned Julia set \(K_{X}\), where \(X=\partial J^{-1}\). Using the corresponding Riemann mapping \(\bar{\mathbb{C}}\setminus K_{X}\to\bar{\mathbb{C}}\setminus\mathbb{D}\) we obtained a circle map and, if \(f\) has no periodic attractors, an expanding structure near \(\partial\mathbb{D}\) (away from the discontinuities of \(\hat{f}_{X}\colon\partial\mathbb{D}\to\partial\mathbb{D}\)). Thus we obtain a pruned polynomial-like map \(F\colon E\to E^{\prime}\) which is a complex extension of \(f\). In this case we write \(U=E\) and \(U^{\prime}=E^{\prime}\). If \(f\) has periodic attractors whose basins are sufficiently small (in the sense of Theorem 4.2) then we can replace \(K_{X}\) by a larger set \(K_{X,O}\) and then we again get an expanding structure near \(\partial\mathbb{D}\) and a pruned polynomial-like map \(F\colon E\to E^{\prime}\). In this case the basins of the periodic attractors of \(F\) are compactly contained in \(E\), and we again obtain a pruned polynomial-like map \(F\colon U\to U^{\prime}\) when setting \(U=E,U^{\prime}=E^{\prime}\). In this case we have that \(U\) contains \(I\). If \(f\) has attractors with larger basins, then we need to treat the attracting structure near those basins as in Section 8 and Figure 10 and ensure that it is compatible with the expanding structure as in Figure 11. We thus obtain a global pruned polynomial-like map \(F\colon E\cup B\to E^{\prime}\cup B^{\prime}\). In this case we set \(U=E\cup B\) and \(U^{\prime}=E^{\prime}\cup B^{\prime}\).
We should also remark that if \(f\in\mathcal{A}^{\underline{\nu}}\), and given sufficiently small interval neighbourhoods \(J_{i}\), the following diagram commutes. Here, \(\hat{f}_{X}:=\Phi_{X}(f)=\phi_{X}\circ f\circ\psi_{X}\), \(F_{X}:=\Psi_{X}(\hat{F}_{X})=\psi_{X}\circ\hat{F}_{X}\circ\phi_{X}\big{|}_{\psi_{X}(\hat{E})}\). Here the restriction is to \(I\subset\mathbb{C}\) or \(\partial\mathbb{D}\subset\mathbb{C}\) and extension refers to a complex extension. Note that \(\hat{E},\hat{E}^{\prime}\) and therefore \(E,E^{\prime}\) depend on the choice of the set \(Y\) and the intervals \(I_{i}\supset\Lambda^{\prime}\) in \(\partial\mathbb{D}\). _Remark 9.1_.: Let us motivate the terminology: the sets \(\hat{E},\hat{E}^{\prime}\) come from the _expanding_ map \(\hat{F}_{X}\colon\hat{E}\to\hat{E}^{\prime}\), whereas \(\hat{B},\hat{B}^{\prime}\) are associated to the _basins_ of attractors. The sets \(E,E^{\prime},B,B^{\prime}\) are the corresponding sets for \(F\), and \(U=E\cup B\) is a neighbourhood of \(I\). It may be useful to add the following: **Lemma 9.2**.: _Let \(f\) be a real analytic map with only repelling periodic points. Given sufficiently small interval neighbourhoods \(J_{i}\) around the critical values \(f(c_{i})\) of \(f\), and setting \(X=\partial J^{-1}\), we obtain_ \[K(F_{X})=K_{X}(f)\] _where we let \(F_{X}\colon U\to U^{\prime}\) be the pruned polynomial-like mapping constructed above._ Proof.: This follows from part 3 of Proposition 6.3. ## 10. Holomorphic motions of pruned polynomial-like mappings and line fields **Definition 10.1**.: A _holomorphic motion_ of \(X\subset\mathbb{C}\) over a complex Banach manifold \(T\ni 0\) is a family of maps \[h_{\lambda}\colon X\to\mathbb{C}\] so that 1. \(\lambda\mapsto h_{\lambda}(x)\) depends holomorphically on \(\lambda\in T\); 2. \(x\mapsto h_{\lambda}(x)\) is injective; 3. \(h_{0}=id\). Holomorphic motions have very useful properties: **Theorem 10.1**.: _[_BR, ST_]_ _A holomorphic motion \(h_{\lambda}\) of \(X\subset\mathbb{C}\) over a Banach ball \(T=B_{r}\) admits an extension to a holomorphic motion \(h_{\lambda}\colon\mathbb{C}\to\mathbb{C}\) over \(T^{\prime}=B_{r/3}\)._ **Theorem 10.2**.: _[_BR_, MMS]_ _A holomorphic motion \(h_{\lambda}\) of \(X\subset\mathbb{C}\) over \(T=\mathbb{D}\) admits an extension to a holomorphic motion of \(\mathbb{C}\) over \(T=\mathbb{D}\). Moreover, the quasiconformal dilatation \(\varkappa(h_{\lambda})\) of \(h_{\lambda}\) is at most \(\dfrac{1+|\lambda|}{1-|\lambda|}\) for \(\lambda\in\mathbb{D}\)._ Proof.: See page 209 of [AIM]. At the end of this paper, we will also need to consider deformations over some infinite dimensional manifold. Therefore we will also consider quasiconformal motions in Section 31 and in that setting we will no longer use the above theorems. **Definition 10.2**.: Let \(\mathcal{B}_{a}^{\underline{\nu}}\) be the set of maps which are holomorphic on \(\Omega_{a}\), with precisely \(\nu\) critical points in \(\Omega_{a}\) of order \(\ell_{1},\dots,\ell_{\nu}\) and which extend continuously to \(\overline{\Omega_{a}}\). Obviously \(\mathcal{A}_{a}^{\underline{\nu}}\subset\mathcal{B}_{a}^{\underline{\nu}}\). It is easy to see that there exists an open set \(\mathcal{U}\ni f\) in \(\mathcal{B}_{a}^{\underline{\nu}}\) which is a complex analytic manifold, see Lemma 18.1 below. The box mappings we constructed above move holomorphically over open subsets of \(\mathcal{B}_{a}^{\underline{\nu}}\).
**Proposition 10.3**.: _[Persistence of pruned polynomial-like maps via holomorphic motion] Let \(f\in\mathcal{A}_{a}^{\underline{\nu}}\) and assume that all periodic points points of \(f\) hyperbolic. Then there exists an open complex neighbourhood \(\mathcal{U}\) in \(\mathcal{B}_{a}^{\underline{\nu}}\) of \(f\) so that there exists a pruned polynomial-like extension \(F\colon U\to U^{\prime}\) of \(f\) with rays \(\Gamma\) so that each map \(g\in\mathcal{U}\cap\mathcal{A}_{a}^{\underline{\nu}}\) has a pruned polynomial-like extension \(G\colon U_{G}\to U_{G}^{\prime}\) which is obtained from \(F\colon U\to U^{\prime}\) by holomorphic motion over \(\mathcal{U}\). More precisely,_ * _there exists a holomorphic motion_ \(h_{G}\) _of_ \(\partial U\cup\partial U^{\prime}\cup\Gamma\) _over_ \(G\in\mathcal{U}\) _where_ \[h_{F}=id,\] _so that if we take_ \(U_{G},U_{G}^{\prime}\) _as the regions bounded by the_ \(h_{G}\)_-images of_ \(\partial U,\partial U^{\prime}\) _then_ \(G\colon U_{G}\to U_{G}^{\prime}\) _is a pruned polynomial-like map;_ * _we have_ \[h_{G}\circ G(z)=F\circ h_{G}(z)\text{ for all }z\in\partial U;\] * _if_ \(f\) _has periodic attractors, then in the previous statement we can take_ \(U_{G}=E_{G}\cup B_{G}\) _and_ \(U_{G}^{\prime}=E_{G}^{\prime}\cup B_{G}^{\prime}\)_._ * _We can choose_ \(\mathcal{U}\) _so that_ \(h_{G}\) _extends to a holomorphic motion of_ \(\mathbb{C}\) _over_ \(G\in\mathcal{U}\)_._ Proof.: Let us first consider the case that all periodic points of \(f\) are repelling. Choose a pruned polynomial-like extension \(F\colon U\to U^{\prime}\) so that \(U,U^{\prime},\Gamma\Subset\Omega_{a}\). Note that the boundary of \(U\) and \(U^{\prime}\) consists of rays and equipotentials (using the terminology of Definition 6.2). The finitely many rays (which are in the set \(\Gamma\)) are eventually mapped to an invariant curve through hyperbolic repelling periodic points. So choose the neighbourhood \(\mathcal{U}\) of \(f\) in \(\mathcal{A}_{a}^{\underline{\nu}}\) so that each of these (finitely many) periodic points remains repelling. As mentioned, each curve in \(\Gamma\) is eventually mapped to a ray \(\gamma\) through some repelling periodic point \(p\in I\). For each such \(\gamma\) pick an arc \(\alpha_{t}\subset\gamma\), \(t\in[0,1]\) which is a fundamental domain (so each orbit hits this arc at most once) so that \(\alpha_{0}\in\partial U^{\prime}\) and so that \(F^{n}(\alpha_{1})=\alpha(0)\) or \(F^{2n}(\alpha_{1})=\alpha_{0}\) where \(n\) is the period of the periodic orbit \(p\). Here the case \(F^{2n}\) corresponds to the situation that the multiplier of \(p\) is negative. Now choose a family of arcs \(\alpha_{g,t}\in\mathbb{C}\), \(t\in[0,1]\) depending analytically on \(g\), so that this arc has no self-intersections and so that \(\alpha_{g,0}=\alpha_{0}\) and \(G^{n}(\alpha_{g,1})=\alpha_{g,0}\). So this defines a holomorphic motion of \(\alpha\) over a small neighbourhood \(\mathcal{U}\). Now extend this holomorphic motion over each \(\gamma\subset\Gamma\) by considering \(F^{-n}(\alpha)\) for any \(n\) and thus over all of \(\Gamma\). Also define \(h_{G}|\partial U^{\prime}=id\) and define \(h_{G}\) restricted to the equipotential arcs in \(U\) so that there \(h_{G}\circ G(z)=F\circ h_{G}(z)\). 
Since the equipotential arcs in \(\partial U,\partial U^{\prime}\) have a positive distance from each other, one can choose a neighbourhood \(\mathcal{U}^{\prime}\subset\mathcal{U}\) of \(f\) in \(\mathcal{B}^{\nu}_{a}\) so that this defines a holomorphic motion of \(\partial U,\partial U^{\prime},\Gamma\) over \(\mathcal{U}^{\prime}\). If we define \(U_{G},U^{\prime}_{G}\) as the regions bounded by the \(h_{G}\)-images of \(\partial U,\partial U^{\prime}\) then we immediately get that the complex extension of \(g\in\mathcal{U}^{\prime}\cap\mathcal{A}^{\nu}_{a}\) forms a pruned polynomial-like map \(G\colon U_{G}\to U^{\prime}_{G}\) with rays \(\Gamma_{G}=h_{G}(\Gamma)\), with the above properties. If \(f\) has also hyperbolic periodic attractors, then one argues similarly considering the arcs in \(\Gamma_{a}\) which are in the basin of the periodic attractors. Theorem 10.1 implies that there exists a neighbourhood \(\mathcal{U}^{*}\subset\mathcal{U}^{\prime}\) of \(f\) in \(\mathcal{B}^{\nu}_{a}\) so that the holomorphic motion \(h_{G}\) of \(\partial U,\partial U^{\prime},\Gamma\) over \(\mathcal{U}^{\prime}\) extends to a holomorphic motion of \(\mathbb{C}\) over \(\mathcal{U}^{*}\). Relabeling \(\mathcal{U}^{*}\) by \(\mathcal{U}\) gives the last assertion. Combining the results of the previous sections gives the proof of Theorem 3.1. _Remark 10.1_.: Of course \(f\) may only have repelling periodic points, whereas a nearby map \(g\) might have periodic attractors or parabolic periodic points. In that case the immediate basin of these periodic attractors of \(G\colon U_{G}\to U^{\prime}_{G}\) will be compactly contained in \(U_{G}\) and will not be relevant for the above discussion. In fact, later on we will also need that the pruned polynomial-like structure persists over the entire topologically conjugacy class (or hybrid class) of \(f_{0}\). For this we cannot use holomorphic motions, but will use quasiconformal motions, see Section 31. ## 11. A global pruning structure in the presence of simple parabolic periodic points Lemma 6.1 no longer holds in the presence of parabolic periodic points. However, we can still treat in a more or less similar way provided all parabolic points are simple, in the following sense: **Definition 11.1**.: We say that a periodic point \(a\) of \(f\colon I\to I\) with (minimal) period \(n\) is a _simple_ parabolic periodic point if \(|Df^{n}(a)|=1\) and it is of saddle-node, period-doubling or of pitchfork type. Here we say that \(a\) is of * _saddle-node_ type if one can write \[Df^{n}(a)=1\text{ and }f^{n}(x)\ =a+(x-a)+\tau(x-a)^{2}+O((x-a)^{3})\text{ for }x\approx a\] with \(\tau\neq 0\). * _period-doubling_ type if \[Df^{n}(a)=-1\text{ and }f^{2n}(x)=a+(x-a)-\tau(x-a)^{3}+O((x-a)^{4})\text{ for }x \approx a,\] with \(\tau\neq 0\). * _pitchfork type_ if \[Df^{n}(a)=1\text{ and }f^{n}(x)\ =a+(x-a)+\tau_{-}(x-a)^{3}+O((x-a)^{4})\text{ for }x \approx a\] with \(\tau_{-}<0\). **Lemma 11.1**.: _If the Schwarzian derivative of \(f\) is negative then each parabolic periodic point of \(f\) is simple. Moreover, each parabolic periodic point is attracting from (at least) one side._ Proof.: This follows from a simple computation. (One can show that if \(f\) has only one critical point then only the first case can hold when \(Df^{n}(a)=1\).) _Remark 11.1_.: If \(Df^{n}(a)=-1\) then necessarily \(D^{2}f^{2n}=0\). More precisely, if \(f^{n}(x)=-x+ax^{2}+bx^{3}+O(x^{4})\) then \(f^{2n}(x)=x-2(b+a^{2})x^{3}+O(x^{4})\). 
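For completeness, the expansion in Remark 11.1 can be checked directly (here \(a,b\) are the Taylor coefficients at the parabolic point, which we place at \(0\)): writing \(y=f^{n}(x)=-x+ax^{2}+bx^{3}+O(x^{4})\), one gets \[f^{2n}(x)=-y+ay^{2}+by^{3}+O(x^{4})=\big(x-ax^{2}-bx^{3}\big)+a\big(x^{2}-2ax^{3}\big)-bx^{3}+O(x^{4})=x-2(b+a^{2})x^{3}+O(x^{4}).\] In particular, such a period-doubling parabolic point is simple in the sense of Definition 11.1 exactly when \(b+a^{2}\neq 0\).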
So let us assume that all parabolic periodic points of \(f\) are simple. Let \(B_{0,hyp}\) be the immediate basins of hyperbolic periodic points and \(B_{0,par}\) be the immediate basins of parabolic periodic points of \(f\). Let \(\hat{B}_{0,hyp}\) and \(\hat{B}_{0,par}\) be the corresponding immediate basins for \(\hat{f}_{X}\). Choose a set \(\hat{Z}_{0,par}\) consisting of intervals whose union forms a neighbourhood of \(\hat{B}_{0,par}\) so that each boundary point of \(\hat{B}_{0,par}\) is eventually periodic. Next for each \(N\in\mathbb{N}\), define \[\Lambda^{\prime}_{N,\delta}=\{z\in\partial D;\hat{f}_{X}^{n}(z)\notin\hat{Y} \cup\hat{B}_{0,hyp}\cup\hat{Z}_{0,par}\text{ for all }0\leq n\leq N\}\] and \[\Lambda^{\prime}_{\infty,\delta}=\cap_{N\geq 0}\Lambda_{N,\delta}.\] **Lemma 11.2** (\(\Lambda^{\prime}_{\infty}\) is semi-expanding).: _The set \(\Lambda^{\prime}_{\infty}\) is a forward invariant semi-expanding repelling set: there exists a Riemannian metric \(|\cdot|_{x}\) on \(\Lambda^{\prime}_{\infty}\) and \(\lambda>1\) so that_ \[|D\hat{f}_{X}(x)v|_{\hat{f}_{X}(x)}\geq\lambda^{\prime}_{\delta}(x)|v|_{x} \text{ for all }v\in T_{x}\partial\mathbb{D},x\in\Lambda^{\prime}_{\infty}\] _where_ \[\lambda^{\prime}_{\delta}(x)=\begin{cases}\lambda&\text{ if }x\text{ has distance }\geq\delta\text{ from any parabolic periodic point}\\ 1&\text{ otherwise.}\end{cases}\] _Moreover,_ 1. _there exist_ \(N<\infty\) _and_ \(\lambda>1\) _so that_ \(|D\hat{f}_{X}(x)v|_{\hat{f}_{X}(x)}\geq\lambda^{\prime}(x)|v|_{x}\) _for each_ \(v\in T_{x}\partial\mathbb{D}\)_,_ \(x\in\Lambda^{\prime}_{N,\delta}\)_._ 2. _Let_ \(I^{\prime}_{1},\dots,I^{\prime}_{l}\subset\partial\mathbb{D}\) _be the components of_ \(\Lambda^{\prime}_{N,\delta}\) _and let_ \(I_{1},\dots,I_{k}\subset\partial\mathbb{D}\) _be so that for each_ \(i\) _there exists_ \(j_{i}\) _so that_ \(\hat{f}_{X}(I_{i})=I^{\prime}_{j_{i}}\) _and so that_ \(\Lambda^{\prime}_{N+1,\delta}=\cup I_{i}\subset\cup I^{\prime}_{i}\)_._ 3. _Each boundary points of_ \(I_{i}\) _and_ \(I^{\prime}_{j}\) _is periodic or eventually periodic._ Proof.: This lemma is obtained by a minor modification of Mane's Lemma and the proof of Lemma 6.1. Using this lemma we then obtain the analogues of Lemma 6.2 and Proposition 6.3 in the setting when simple parabolic periodic points are allowed. The only difference is that now we do not obtain an expanding structure on \(\partial\mathbb{D}\) outside \(\hat{B}_{0}\) but outside \(\hat{B}_{0,hyp}\cup\hat{Z}_{0,par}\). Here \(\hat{Z}_{0,par}\subset\partial\mathbb{D}\) is defined as \(\phi_{X}(Z_{0,par})\). To construct an attracting structure in the the basin of a parabolic periodic point, we start with a ray \(\gamma\) through \(\partial Z_{0,par}\) (on the repelling side of parabolic point). We then extend the ray \(\gamma\) to a curve connecting to the parabolic periodic point \(a\), as in Figure 12. More detail on how to construct \(\gamma\) is given in the proof of Theorem 16.1. Next consider the iterates of \(\gamma\) inside a disc \(D\) tangent to \(\mathbb{R}\) at \(a\). The region \(S_{+,f}\) between \(\gamma\) and \(f(\gamma)\) forms a fundamental crescent: each orbit near \(a\) passes precisely once through \(S_{+,f}\). One can get a precise description on the shape of these regions using Fatou coordinates. Next complete the construction as in the proof of Proposition 8.1. Let \(\gamma^{\prime}\) be the invariant curve as shown in Figure 12 (analogous to the curves \(\Gamma^{*}_{a}\) in Figure 10). 
Then the region \(B\) bounded between \(\gamma\) and \(\bar{\gamma}^{\prime}\) is no longer a quasidisk. This is because the curves \(\gamma\) and \(\gamma^{\prime}\) will be tangent at \(a\). Nevertheless, if we add the disc discussed above (with the crescent-like regions inside) we obtain an attracting structure near a simple parabolic periodic point. Thus we get a global pruning structure for maps in \(\mathcal{A}^{\underline{\nu}}\) for which all parabolic periodic points are simple, analogously to Theorem 9.1. ### Holomorphic motion (in a restricted sense) in the presence of parabolic periodic points If \(f\) has parabolic periodic points then the pruned polynomial-like structure that we constructed does not persist under a holomorphic deformation. However, some version of this holds nevertheless. Assume that \(f\) has a simple parabolic periodic point of period \(n\). Let \(\mathcal{B}^{\underline{\nu}}_{a}\) be defined as in Definition 10.2. Let \(\mathcal{PAR}_{f}\) be the set of maps \(g\in\mathcal{B}^{\underline{\nu}}_{a}\) so that \(g\) has a simple parabolic periodic point of period \(n\). It is not hard to show that there exists a neighbourhood \(\mathcal{U}\) of \(f\) in \(\mathcal{B}_{a}\) so that \(\mathcal{U}\cap\mathcal{B}^{\underline{\nu}}_{a}\) is a complex analytic manifold, see Subsection 19.3. It follows that the analogue of Proposition 10.3 holds: there exists a neighbourhood \(\mathcal{U}\) of \(f\) in \(\mathcal{B}_{a}\) so that there exists a holomorphic motion \(h_{G}\) of \(\partial E,\partial E^{\prime},\partial B,\partial B^{\prime}\) over \(g\in\mathcal{U}\cap\mathcal{PAR}_{f}\) for which the conclusion of Proposition 10.3 holds. In particular each \(g\in\mathcal{A}^{\underline{\nu}}_{a}\cap\mathcal{PAR}_{f}\) near \(f\) has a pruned polynomial-like extension \[G\colon U_{G}:=E_{G}\cup B_{G}\to E^{\prime}_{G}\cup B^{\prime}_{G}=:U^{\prime}_{G}\] where \(B_{G},B^{\prime}_{G}\) have the parabolic structure as described in the previous subsection. ## 12. Absence of line fields It is useful to observe that using the arguments of [McM, She1], see also [CDKS], the complex bounds of [CvST, CvS] imply: **Proposition 12.1**.: _Suppose that \(F:U\to U^{\prime}\) is a pruned polynomial-like mapping with a filled pruned Julia set \(K_{F}\). Then the boundary of \(K_{F}\) does not support a measurable invariant line field._ ## 13. Two external maps with the same pruning data are qc-conjugate **Proposition 13.1**.: _Let \(\hat{f}_{X},\hat{g}_{X}\) be two external maps with global pruned polynomial-like extensions \(\hat{F}_{X}\colon\hat{E}_{\hat{F}}\cup B_{\hat{F}}\to\hat{E}^{\prime}_{\hat{F}}\cup B^{\prime}_{\hat{F}}\) and \(\hat{G}_{X}\colon\hat{E}_{\hat{G}}\cup B_{\hat{G}}\to\hat{E}^{\prime}_{\hat{G}}\cup B^{\prime}_{\hat{G}}\) so that \(Q(\hat{f}_{X})=Q(\hat{g}_{X})\)._ _Then there exists a \(\partial\mathbb{D}\)-symmetric quasiconformal map \(\hat{H}\colon\hat{E}_{\hat{F}}\cup\hat{E}^{\prime}_{\hat{F}}\cup B_{\hat{F}}\to\hat{E}_{\hat{G}}\cup\hat{E}^{\prime}_{\hat{G}}\cup B_{\hat{G}}\) which sends \((\hat{E}^{\prime}_{\hat{F}},\hat{E}_{\hat{F}},B_{\hat{F}},B^{\prime}_{\hat{F}})\) to the corresponding sets \((\hat{E}^{\prime}_{\hat{G}},\hat{E}_{\hat{G}},B_{\hat{G}},B^{\prime}_{\hat{G}})\) and which conjugates \(\hat{F}_{X}\) and \(\hat{G}_{X}\)._ Figure 12. The attracting structure near a parabolic one-sided attractor with quadratic order of contact. The region bounded by the dashed curves comes from the expanding structure.
The set \(B\) (which is bounded between the curves \(\gamma^{\prime}\) and \(\bar{\gamma}^{\prime}\)) is not a quasidisk. Proof.: Since \(Q(\hat{f}_{X})=Q(\hat{g}_{X})\), and since all the relevant sets are quasidisks, there exists a \(\partial\mathbb{D}\)-symmetric map which is quasiconformal and maps the following sets to the corresponding sets \[H:(\hat{E}_{\hat{F}},\hat{E}^{\prime}_{\hat{F}},B_{\hat{F}},B^{\prime}_{\hat{F}},\Gamma_{\hat{F}})\rightarrow(\hat{E}_{\hat{G}},\hat{E}^{\prime}_{\hat{G}},B_{\hat{G}},B^{\prime}_{\hat{G}},\Gamma_{\hat{G}})\] and so that \(H\) is a conjugacy between \(\hat{F}_{X}\) and \(\hat{G}_{X}\) restricted to \((\partial\hat{E}_{\hat{F}},\partial\hat{E}^{\prime}_{\hat{F}},B_{\hat{F}},B^{\prime}_{\hat{F}},\Gamma_{\hat{F}})\). Now set \(H_{0}=H\) and define \(H_{n+1}\) by \(\hat{F}_{X}\circ H_{n+1}=H_{n}\circ\hat{G}_{X}\). Since \(\hat{F}_{X}\) and \(\hat{G}_{X}\) do not have critical points, we obtain a qc conjugacy between the external mappings \(\hat{F}:\hat{E}_{\hat{F}_{X}}\cup B_{\hat{F}_{X}}\rightarrow\hat{E}^{\prime}_{\hat{F}_{X}}\cup B^{\prime}_{\hat{F}_{X}}\) and \(\hat{G}:\hat{E}_{\hat{G}_{X}}\cup B_{\hat{G}_{X}}\rightarrow\hat{E}^{\prime}_{\hat{G}_{X}}\cup B^{\prime}_{\hat{G}_{X}}\). The previous result shows that two maps \(f,g\in\mathcal{A}_{a}^{\underline{\nu}}\) with the same pruning data are 'externally' conjugate: **Proposition 13.2** ('External' qc-conjugacy of the pruned polynomial-like maps).: _Assume that all periodic points of \(f,g\in\mathcal{A}_{a}^{\underline{\nu}}\) are hyperbolic and that \(Q(\hat{f}_{X})=Q(\hat{g}_{X})\). Then the following holds._ 1. _If_ \(f,g\) _have either no periodic attractors, or all periodic attractors have small basins, these maps have pruned polynomial-like complex extensions_ \(F\colon U_{F}\to U_{F}^{\prime}\) _and_ \(G\colon U_{G}\to U_{G}^{\prime}\) _and there exists a qc map_ \(h\colon U_{F}\cup U_{F}^{\prime}\to U_{G}\cup U_{G}^{\prime}\) _so that_ \[U_{G}=h(U_{F})\text{, }U_{G}^{\prime}=h(U_{F}^{\prime})\text{, }\Gamma_{G}=h(\Gamma_{F})\] _and_ \[h\circ F(z)=G\circ h(z)\text{ for }z\in\partial U_{F}\cup\Gamma_{F}.\] _Any such qc map extends to a conjugacy_ \(H\colon(U_{F}\cup U_{F}^{\prime})\setminus K_{F}\rightarrow(U_{G}\cup U_{G}^{\prime})\setminus K_{G}\) _between_ \[F\colon U_{F}\setminus K_{F}\to U_{F}^{\prime}\text{ and }G\colon U_{G}\setminus K_{G}\to U_{G}^{\prime}\] _with the same dilatation as_ \(h\)_._ 2. _If_ \(f,g\) _do have periodic attractors, then they have global pruned polynomial-like complex extensions_ \(F\colon E_{F}\cup B_{F}\to E_{F}^{\prime}\cup B_{F}^{\prime}\) _and_ \(G\colon E_{G}\cup B_{G}\to E_{G}^{\prime}\cup B_{G}^{\prime}\) _so that if we set_ \(U_{F}:=E_{F}\cup B_{F}\) _and_ \(U_{G}:=E_{G}\cup B_{G}\) _then_ \[F\colon U_{F}\setminus K_{F}\to U_{F}^{\prime}\text{ and }G\colon U_{G}\setminus K_{G}\to U_{G}^{\prime}\] _are qc-conjugate and so that this qc-conjugacy maps_ \(\Gamma_{F}\) _to_ \(\Gamma_{G}\)_._ Proof.: Let \(E_{F},E_{F}^{\prime},B_{F},B_{F}^{\prime},\Gamma_{F}\) and \(E_{G},E_{G}^{\prime},B_{G},B_{G}^{\prime},\Gamma_{G}\) be the sets corresponding to \(\hat{E}_{\hat{F}},\hat{E}^{\prime}_{\hat{F}}\), \(\hat{B}_{\hat{F}},\hat{B}^{\prime}_{\hat{F}},\Gamma_{\hat{F}}\) and \(\hat{E}_{\hat{G}},\hat{E}^{\prime}_{\hat{G}},\hat{B}_{\hat{G}},\hat{B}^{\prime}_{\hat{G}},\Gamma_{\hat{G}}\); then the proposition follows immediately from Proposition 13.1.
_Remark 13.1_.: Note that \(H\) does not necessarily map \(\phi_{X_{F}}(c_{i})\) to \(\phi_{X_{G}}(c_{i})\) (where \(\phi_{X_{F}}\colon K_{F}\rightarrow\partial\mathbb{D}\) and \(\phi_{X_{G}}\colon K_{G}\rightarrow\partial\mathbb{D}\) are the boundary maps associated to the Riemann mappings from above) and so the above proposition definitely does NOT imply that \(F\colon U_{F}\to U_{F}^{\prime}\) and \(G\colon U_{G}\to U_{G}^{\prime}\) are conjugate. ## 14. Hybrid conjugacy **Definition 14.1**.: Assume that all periodic points of \(f,g\in\mathcal{A}^{\underline{\nu}}\) are either hyperbolic or simple parabolic. We say that \(f,g\colon I\to I\) are _real-hybrid conjugate_ if they are qc conjugate on a neighbourhood of \(I\), and there exist a choice \(h\) of this qc-conjugacy and a neighbourhood \(W\) of the attracting periodic points of \(f\), so that \(h\) is holomorphic restricted to the intersection of \(W\) with the basin of the attracting periodic points of \(f\). We denote by \(\mathcal{H}_{f}^{\mathbb{R}}\) the real hybrid class of \(f\). The above definition does not impose anything about the domain on which the qc conjugacy is defined. By contrast, the next definition requires the qc conjugacy to be defined on the domain of pruned polynomial-like mappings. **Definition 14.2**.: We say pruned polynomial-like maps \(F\colon U_{F}\to U_{F}^{\prime}\) and \(G\colon U_{G}\to U_{G}^{\prime}\) are _hybrid conjugate_ if there exists a qc topological conjugacy \(H\) between \(F\) and \(G\) such that \(\bar{\partial}H=0\) almost everywhere on \(K_{F}\). We denote by \(\mathcal{H}_{F}\) the hybrid class of \(F\), i.e. the set of pruned polynomial-like mappings which are hybrid conjugate to \(F\). If \(F\) is real then \(\mathcal{H}_{F}^{\mathbb{R}}\) denotes the set of real pruned polynomial-like mappings which are qc conjugate to \(F\). _Remark 14.1_.: If all periodic points of \(f,g\) are repelling then [CvS], [CvST] and absence of line fields imply that these maps have pruned polynomial-like extensions \(F\) and \(G\) which are hybrid conjugate, see Theorem 15.1. _Remark 14.2_.: If two maps \(F,G\) are hybrid conjugate and they have periodic attractors, then the conjugacy is conformal on the basins of these attractors. The domain and range \(U_{F},U_{F}^{\prime}\) of a pruned polynomial-like extension of a real analytic map are not uniquely defined. Hence: **Definition 14.3**.: We say pruned polynomial-like maps \(F\colon U_{F}\to U_{F}^{\prime}\) and \(G\colon U_{G}\to U_{G}^{\prime}\) are _equivalent_ if \(F=G\) on \(U_{F}\cap U_{G}\) and \(Q(F)=Q(G)\). _Remark 14.3_.: Note that this definition does _not_ consider the germ equivalence class. Indeed one way to obtain an equivalent pruned polynomial-like map from \(F\colon U_{F}\to U_{F}^{\prime}\) is by keeping, but shortening, the rays \(\Gamma_{F}\) and lowering the roofs (i.e. equipotentials) in \(\partial\hat{E},\partial\hat{E}^{\prime}\), see Figure 8, corresponding to the part of the boundary \(\partial U_{F},\partial U_{F}^{\prime}\) of the domains which is not contained in \(\Gamma_{F}\). Since the pruning data of two equivalent pruned polynomial-like maps is the same, their Julia sets are also the same.
**Lemma 14.1**.: _If \(F\colon U_{F}\to U_{F}^{\prime}\) and \(G\colon U_{G}\to U_{G}^{\prime}\) are equivalent, then there exist domains \(U,U^{\prime}\) so that \(U,U^{\prime}\subset U_{F}\cap U_{G}\) and so that \(F=G\colon U\to U^{\prime}\) is a pruned polynomial-like mapping._ Proof.: Let \(U^{\prime}\) be the connected component of \(U_{F}^{\prime}\cap U_{G}^{\prime}\) which intersects \(I\). Then let \(U\) be the connected component of \(F^{-1}(U^{\prime})\) containing \(I\). ## 15. Topologically conjugate maps have qc-conjugate pruned extensions One of the main consequences of having a pruned polynomial-like structure is that topologically conjugate maps in \(\mathcal{A}^{\underline{\nu}}\) (without parabolic periodic points) are qc conjugate on a neighbourhood of the interval \(I\). The next theorem is based on the usual pullback argument. The most important part of the next theorem is point 3, where it is shown that the dilatation of the qc conjugacy between \(F\) and \(G\) _only_ depends on the dilatation of the qc conjugacy between \(U^{\prime}_{F}\setminus(U_{F}\cup\Gamma^{\prime}_{F})\), \(U_{F}\setminus\Gamma^{\prime}_{F}\) and \(U_{F}\setminus(U^{\prime}_{F}\setminus\Gamma^{\prime}_{F})\) and \(U^{\prime}_{G}\setminus(U_{G}\cup\Gamma^{\prime}_{G})\), \(U_{G}\setminus\Gamma^{\prime}_{G}\) and \(U_{G}\setminus(U^{\prime}_{G}\setminus\Gamma^{\prime}_{G})\). A consequence of this is that nearby hybrid conjugate maps are \(\kappa\)-qc conjugate with \(\kappa>1\) close to one, see Corollary 15.2. **Theorem 15.1** (Pullback argument).: _Assume that \(f,g\in\mathcal{A}_{a}^{\nu}\) are topologically conjugate on \(I\) and do not have parabolic periodic points. Then the following holds._ 1. _If_ \(f,g\) _have either no periodic attractors, or all periodic attractors have small basins, these maps have pruned polynomial-like complex extensions_ \(F\colon U_{F}\to U^{\prime}_{F}\) _and_ \(G\colon U_{G}\to U^{\prime}_{G}\) _which are also qc conjugate._ 2. _If_ \(f,g\) _do have periodic attractors, then they have global pruned polynomial-like complex extensions_ \(F\colon U_{F}\to U^{\prime}_{F}\) _and_ \(G\colon U_{G}\to U^{\prime}_{G}\) _which are qc conjugate. In this case_ \(U_{F}:=E_{F}\cup B_{F}\)_,_ \(U^{\prime}_{F}:=E^{\prime}_{F}\cup B^{\prime}_{F}\)_,_ \(U_{G}:=E_{G}\cup B_{G}\) _and_ \(U^{\prime}_{G}:=E^{\prime}_{G}\cup B^{\prime}_{G}\)_._ 3. _Moreover, if_ \(f,g\) _are real-hybrid conjugate, and if the external qc conjugacy between their pruned polynomial-like extensions_ \(F,G\) _from Proposition_ 13.2 _has dilatation at most_ \(\varkappa\) _outside_ \(K_{F}\)_, or equivalently on_ \(U^{\prime}_{F}\setminus(U_{F}\cup\Gamma^{\prime}_{F})\)_,_ \(U_{F}\setminus\Gamma^{\prime}_{F}\) _and_ \(U_{F}\setminus(U^{\prime}_{F}\setminus\Gamma^{\prime}_{F})\)_, then_ \(F,G\) _are hybrid-conjugate and_ \(\varkappa\)_-qc conjugate on their entire domain._ Proof.: Since \(f,g\) are topologically conjugate, one can choose pruning intervals \(J_{i,f},J_{i,f}^{*}\ni f(c_{i})\) and \(J_{i,g},J_{i,g}^{*}\ni g(c_{i})\) so that \(Q(\hat{f}_{X})=Q(\hat{g}_{X})\). Provided we take these pruning intervals sufficiently small, there exist pruned polynomial-like extensions \(F,G\) which are complex extensions of \(f,g\) and so that \(U_{F},U^{\prime}_{F},U_{G},U^{\prime}_{G}\) are compactly contained in \(\Omega_{a}\). By Proposition 13.2, we also have that there exists a qc conjugacy \(h_{0}\) between \(F,G\) outside \(K_{F}\) and \(K_{G}\). By the Main Theorem in [CvS] the maps \(f,g\) are qs conjugate.
Extend this qs conjugacy on the interval \(I\) to \(U\cup U^{\prime}\) so that it agrees with \(h_{0}\) on \(U^{\prime}\setminus U\), \(U\setminus U^{\prime}\), \(\Gamma\cap(U^{\prime}\setminus U),B\). Here we do not claim any bound on the qc bound of this extension. Denote this new qc map by \(h_{0,g}\) and let \(K\) be its quasiconformal dilatation. Since \(h_{0,g}\) is a conjugacy between the postcritical sets of \(f,g\), one can define a sequence of qc maps \(h_{n,g}\) so that \(F\circ H_{n+1,g}=H_{n,g}\circ G\) and so that the qc dilatation of \(h_{n+1,g}\) is the same as that of \(H_{n,g}\). Since the space of \(\kappa\)-qc maps is compact, \(H_{n,g}\) has a convergent subsequence. Moreover, for each \(z\in U\setminus K_{F}\) there exists \(n\) so that \(F^{n}(z)\notin U\) and therefore \(H_{n+i,g}(z)=H_{n,g}(z)\) for all \(i\geq 0\), Since \(h_{0}\colon B_{F}\to B_{G}\) is already a holomorphic conjugacy, and since \(K_{F}\setminus B_{F}\) has no interior, in fact \(h_{n}\) converges to a \(\kappa\)-qc map \(H_{\infty}\). Hence \(h_{\infty}\) conjugates \(F,G\). The last assertion follows immediately since \(F\) has no invariant line fields on \(\partial K_{F}\) and so the quasiconformal dilatation of \(h_{\infty}\) is equal to \(\varkappa(h_{g})\). If \(f,g\) have no periodic attractors then the second assertion holds: since then the filled Julia set of \(F\colon U\to U^{\prime}\) has no interior, and since there are no invariant line fields on the boundary of the filled Julia set. So assume \(f,g\) do have periodic attractors and that \(f,g\) are real-hybrid conjugate. Then by definition there exists a qc conjugacy \(\varphi\) which is holomorphic on a neighbourhood \(W\) of the set of periodic attractors. Now choose the sets \(B,B^{\prime}\) of the pruned polynomial-like maps \(F\colon E\cup B\to E^{\prime}\cup B^{\prime}\) so that \(W\) contains the closed curves \(\Gamma^{a}_{a}\) surrounding the periodic attractors, see Figure 10. Now extend the conjugacy \(\varphi\) to the basin \(B_{0}\) of the set of periodic attractors. Use this extension of \(\varphi\) to construct \(B_{G},B^{\prime}_{G}\) (so define \(B_{G}:=\varphi(B)\), \(B^{\prime}_{G}:=\varphi(B^{\prime})\)). Thus we get a pruned polynomial-like map \(G\colon E_{G}\cup B_{G}\to E^{\prime}_{G}\cup B^{\prime}_{G}\). Now modify the qc-conjugacy \(H_{0}\) near the periodic attractors \(O\) so that it agrees with \(\varphi\) on \(W\). Then using the pullback argument from above, we obtain a \(\kappa\)-qc conjugacy \(H_{\infty}\) which is holomorphic on the basins of \(O\). This completes the proof of the second and third assertion. _Remark 15.1_.: Using the pruned polynomial-like structure, one can also immediately extend the proof in [10] to obtain a qc conjugacy between \(F,G\). The previous theorem gives: **Corollary 15.2** (qc bound).: _For each \(f\in\mathcal{A}_{a}^{\underline{\nu}}\) so that all its periodic points are hyperbolic there exist \(\delta>0\) and \(L>0\) so that the following holds. Assume that \(g_{0},g_{1}\in\mathcal{A}_{a}^{\underline{\nu}}\) are real-hybrid conjugate to each other and that \(||g_{i}-f||_{\infty}<\delta\). 
Then there exist pruned polynomial-like extensions \(G_{i}\colon U_{G_{i}}\to U_{G_{i}}^{\prime}\) of \(g_{i}\), \(i=0,1\) and a qc conjugacy \(h_{G_{0},G_{1}}\) between them whose qc dilatation satisfies_ \[\varkappa(h_{G_{0},G_{1}})\leq 1+L||g_{0}-g_{1}||_{\infty}.\] _Here \(||\cdot||_{\infty}\) is the supremum norm on \(\overline{\Omega_{a}}\)._ Proof.: Let \(G_{i}\colon U_{G_{i}}\to U_{G_{i}}^{\prime}\) be the pruned polynomial-like extensions of \(g_{i}\) obtained by holomorphic motion from \(F\colon U_{F}\to U_{F}^{\prime}\) on some neighbourhood \(\mathcal{U}\) of \(f\) in \(\mathcal{B}_{a}^{\underline{\nu}}\), see Proposition 10.3. The previous theorem, Theorem 15.1, implies that the dilatation of \(h_{G_{0},G_{1}}\) is bounded by the dilatation of the qc conjugacy between the sets \(U_{G_{i}}^{\prime}\setminus U_{G_{i}},U_{G_{i}}\setminus U_{G_{i}}^{\prime},\Gamma\cap(U_{G_{i}}^{\prime}\setminus U_{G_{i}})\). Let us now obtain an upper bound for the qc-dilatation of (some) such conjugacy. Assume that \(\mathcal{U}\) is a \(\delta_{0}\)-ball in \(\mathcal{B}_{a}^{\underline{\nu}}\) and assume \(||g_{i}-f||_{\infty}<\delta_{0}/2\). It follows that the \(\delta_{0}/2\)-ball around \(g_{0}\) is contained in \(\mathcal{U}\). Let \(A=(\delta_{0}/2)/||g_{0}-g_{1}||_{\infty}\) and consider the complex one-dimensional slice \(T=\{g^{t};g^{t}=g_{0}+At(g_{1}-g_{0})\) with \(t\in\mathbb{D}\}\) parametrised by \(t\in\mathbb{D}\). It follows that \(T\subset\mathcal{U}\). Denote by \(G_{t}\) the pruned polynomial-like extension of \(g^{t}\) coming from the holomorphic motion. By Theorem 10.2, there exists a holomorphic motion \(h^{t}\) of \(\mathbb{C}\) over \(\mathbb{D}\) so that its quasiconformal dilatation is at most \[\varkappa(h^{t})\leq\frac{1+|t|}{1-|t|}\quad\forall t\in\mathbb{D}.\] Note that \(g_{1}=g^{t_{1}}\) for \(t_{1}=1/A\). Hence \[\varkappa(h_{G_{0},G_{1}})\leq\varkappa(h^{t_{1}})\leq\frac{1+t_{1}}{1-t_{1}} \leq\frac{1+2||g_{0}-g_{1}||_{\infty}/\delta_{0}}{1-2||g_{0}-g_{1}||_{\infty}/\delta_{0}}\leq 1+L||g_{0}-g_{1}||_{\infty} \tag{15.1}\] for some \(L\) which depends on \(\delta_{0}\). _Remark 15.2_.: In the previous statement it is allowed for \(g_{i}\in\mathcal{A}_{a}^{\underline{\nu}}\) to have attracting or parabolic periodic points that are not present for \(f\), as their basins would be sufficiently small and be compactly contained in \(U_{G_{i}}\) and be contained in \(K(G_{i})\). ## 16. Hybrid-conjugacy in the parabolic case For maps with parabolic periodic orbits we have the following analogue of Corollary 15.2. **Theorem 16.1** (QC-rigidity of maps with simple parabolic periodic points).: _Consider a map \(f\) with simple parabolic periodic orbits \(O_{1},\ldots,O_{p}\). Then there exist \(\delta>0\) and \(L>0\) so that for any two topologically conjugate maps \(g_{1},g_{2}\) with precisely one simple parabolic orbit in a \(\delta\)-neighbourhood of \(O_{i}\) for each \(i=1,\ldots,p\) and with \(||g_{1}-g_{2}||_{\infty}<\delta\) the following holds. Let \(G_{1},G_{2}\) be pruned polynomial-like extensions of \(g_{1},g_{2}\). Then there exists a \(\varkappa\)-qc conjugacy \(h\) between \(G_{1},G_{2}\) so that_ \[\varkappa(h)\leq\varkappa\leq 1+L\big{\|}g_{1}-g_{2}\big{\|}_{\infty}.
\tag{16.1}\] _If \(g_{1},g_{2}\) are hybrid conjugate, then \(\bar{\partial}h=0\) on the basins of the periodic attractors and parabolic attractors of \(G_{1},G_{2}\)._ Proof.: Note that \(g_{1},g_{2}\) are both contained in an infinite dimensional manifold \(\mathcal{P}_{f}\) consisting of maps with parabolic periodic points near \(O_{i}\), \(i=1,\ldots,p\). Consider a simple parabolic periodic point of \(f\). For simplicity assume it is attracting from one side and repelling from the other side, and that this parabolic point is a fixed point at \(x=a\). This means that \(f(x)=a+(x-a)-(x-a)^{2}+h.o.t.\) for \(x\) near \(a\). As usual one can take the coordinate change \(w=1/(x-a)\) for \(x\in\mathbb{C}\) near \(x=a\), and in these coordinates one gets that \(f\) takes the form \(T(w)=w+1+O(1/w)\) for \(w\approx\infty\). Take a curve \(\gamma\) through the preperiodic point in \(\partial Z_{0,par}\) (on the repelling side of the parabolic point) which forms part of the boundary of the expanding set \(E\). Note that \(\gamma\) is a preimage of an invariant curve through a periodic point, and we can assume that \(\gamma\) (and all its forward iterates) are orthogonal to \(I\). Let \(\hat{\gamma}\) be the corresponding curve in \(w\) coordinates. Extend \(\gamma\) to a curve \(\tau\) containing the fixed point \(a\) in such a manner that the corresponding curve \(\hat{\tau}\supset\hat{\gamma}\) in \(w\) coordinates is disjoint from \(T(\hat{\tau})\). This implies that the region \(\hat{S}_{-}\) between \(\hat{\tau}\) and \(T(\hat{\tau})\) forms a fundamental domain. The corresponding region \(S_{-}\) in the \(x\)-plane has a crescent shape. Now fix some large \(R>0\) and consider the forward iterates of the region \(\hat{S}^{\prime}=\hat{S}_{-}\cap\{w;\operatorname{Re}(w)\geq R\}\). This region is contained in \(H=\{w;\operatorname{Re}(w)\geq R-C\}\) where \(C>0\) is a constant which is independent of \(R\). The corresponding region in the \(x\)-plane is contained in a disc \(D\) whose boundary goes through \(a\). Choose \(R>0\) so large that \(D\) is disjoint from \(\gamma\) and from \(E^{\prime}\supset E\). Note that the basin of \(a\) is contained in the union of \(D\) and the forward iterates of a crescent shaped region \(S_{+}\). Let us show that there exists a \(\kappa\)-qc conjugacy \(h\) between \(g_{1},g_{2}\) with the required property. Denote the regions corresponding to these maps by \(E^{\prime}_{i},E_{i},S_{i,-}\). To construct \(h\), first define \(h\colon E^{\prime}_{1}\setminus E_{1}\to E^{\prime}_{2}\setminus E_{2}\) and choose a hybrid conjugacy \(h\) on the immediate basin of the parabolic fixed point \(a\). Next extend this to a homeomorphism \(S_{1,-}\to S_{2,-}\) so that it is a conjugacy on the boundary of \(S_{1,-}\), by extending the conjugacy in the region between \(S_{1,-}\cap D\) and \(S_{1,-}\cap E^{\prime}\). Next extend \(h\) to a neighbourhood of \(a\) by pulling back \(h|S_{i,-}\). Note that this can be done using the (restricted) holomorphic motion, and that is why, as before, we obtain the desired inequality (16.1). Figure 13. The construction of the qc conjugacy between \(g_{1}\) and \(g_{2}\) on a ‘large’ neighbourhood of \(a\).
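Before turning to Part B, let us record the computation behind the normal form \(T(w)=w+1+O(1/w)\) used in the proof of Theorem 16.1. With \(f(x)=a+(x-a)-(x-a)^{2}+h.o.t.\) and \(w=1/(x-a)\), \[\frac{1}{f(x)-a}=\frac{1}{(x-a)\big(1-(x-a)+O((x-a)^{2})\big)}=\frac{1}{x-a}\big(1+(x-a)+O((x-a)^{2})\big)=w+1+O(1/w),\] so in the \(w\)-coordinate the map indeed acts as translation by \(1\) up to an error of order \(1/w\).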
## Part B: Applications ## 17. Topological and analytic structure on the space of real analytic functions and pruned polynomial-like mappings As before we define a metric on the space \(\mathcal{A}_{a}\) by the supremum metric \[d(f,g):=||f-g||_{\overline{\Omega}_{a}}:=\sup_{z\in\overline{\Omega}_{a}}|f(z)-g(z)|.\] This makes \(\mathcal{A}_{a}^{\underline{\nu}}\) a real analytic Banach submanifold of the linear real Banach space \(\mathcal{A}_{a}\) (of real analytic maps on \(\Omega_{a}\) without any conditions on critical points), see Section 18. Using the Implicit Function Theorem, we will show in Section 19 that real-hybrid classes for so-called semi-hyperbolic maps are real analytic Banach submanifolds of \(\mathcal{A}_{a}^{\underline{\nu}}\). To show that this also holds for the real-hybrid class of an arbitrary map in \(\mathcal{A}_{a}^{\underline{\nu}}\) is one of the main aims of the rest of this paper. The _tangent space to \(\mathcal{A}_{a}^{\underline{\nu}}\) at \(f\)_, denoted by \(T_{f}\mathcal{A}_{a}^{\underline{\nu}}\), consists of holomorphic vector fields \(v\) defined on \(\Omega_{a}\) which vanish at \(\partial I\) (since for any \(f\in\mathcal{A}_{a}\) we have \(f(\partial I)\subset\partial I\)) and so that at each critical point \(c_{i}\) of \(f\), \(1\leq i\leq\nu\), we have \(v^{(j)}(c_{i})=0\) for \(1\leq j\leq\ell_{i}-2\). The condition on the derivatives \(v^{(j)}(c_{i})\) ensures that the critical points do not change order. Note that critical points are allowed to vary within the space \(\mathcal{A}_{a}^{\underline{\nu}}\), and also that \(\mathcal{A}_{a}^{\underline{\nu}}\) is not a linear space. Indeed, if \(f\in\mathcal{A}_{a}^{\underline{\nu}}\) and \(v\in T_{f}\mathcal{A}_{a}^{\underline{\nu}}\) then in general \(f+tv\notin\mathcal{A}_{a}^{\underline{\nu}}\) for \(t\neq 0\). ### The real analytic topology By definition the space \(\mathcal{A}^{\underline{\nu}}\) is the union of \(\mathcal{A}_{a}^{\underline{\nu}}\) over all \(a>0\). Of course, it is also equal to the union of \(\mathcal{A}_{U}^{\underline{\nu}}\) for all open sets \(U\supset I\), where \(\mathcal{A}_{U}^{\underline{\nu}}\) denotes the set of real analytic maps in \(\mathcal{A}^{\underline{\nu}}\) which have an analytic extension to \(U\) and extend continuously to \(\overline{U}\). The _real analytic \(C^{\omega}\)_ topology on \(\mathcal{A}^{\underline{\nu}}\) (also called the _inverse limit topology_) is defined by saying that a set \(\mathcal{O}\) is open if \(\mathcal{O}\cap\mathcal{A}_{a}^{\underline{\nu}}\) is open for all \(a>0\) (in the topology induced by the supremum norm on \(\mathcal{A}_{a}^{\underline{\nu}}\)). This defines a Hausdorff topology on \(\mathcal{A}^{\underline{\nu}}\). In Appendix C we will describe more properties of this topology, in particular: * for \(f_{n},f\in\mathcal{A}^{\underline{\nu}}\) we have \(f_{n}\to f\) (in the real analytic topology) if and only if there exists \(a>0\) so that \(f_{n},f\in\mathcal{A}_{a}^{\underline{\nu}}\) and \(f_{n}\to f\) uniformly on \(\overline{\Omega}_{a}\). * for any \(f\in\mathcal{A}_{a}^{\underline{\nu}}\), any \(a^{\prime}>0\) and any open set \(\mathcal{O}\ni f\) in the real analytic topology, there exists \(g\in\mathcal{O}\setminus\mathcal{A}_{a^{\prime}}^{\underline{\nu}}\); in particular \(\mathcal{A}_{a}^{\underline{\nu}}\) is not an open subset of \(\mathcal{A}^{\underline{\nu}}\).
* \(f\in\mathcal{A}_{a}^{\underline{\nu}}\) can be approximated (in the \(C^{k}\) topology on \([-1,1]\)) by real analytic maps \(f_{i}\in\mathcal{A}_{a_{i}}^{\underline{\nu}}\) which are not contained in \(\mathcal{A}_{a_{i}^{\prime}}^{\underline{\nu}}\), where \(0<a_{i}<a_{i}^{\prime}\) and \(a_{i}^{\prime}\to 0\). The space \(\mathcal{A}^{\underline{\nu}}\) can be viewed as the space of _germs_ of real analytic interval maps, in the usual sense, and \(\mathcal{A}^{\underline{\nu}}=\lim\mathcal{A}_{a}^{\underline{\nu}}\). In Appendix C we will elaborate on all this and discuss some further properties of the real analytic topology on \(\mathcal{A}\). This topology on \(\mathcal{A}^{\underline{\nu}}\) is useful as the spaces \(\mathcal{T}_{f}\) and \(\mathcal{H}_{f}\) do not specify the domains of the maps they contain. ### Real-analytic manifolds \(\mathcal{M}\subset\mathcal{A}^{\underline{\nu}}\) is called a _real analytic manifold modelled on a family of Banach spaces_ (or a _real-analytic manifold_ in short), if \(\mathcal{M}\) is the union of sets of the form \(j_{U}(\mathcal{O}_{U})\), where \(j_{U}\colon\mathcal{O}_{U}\to\mathcal{M}\) is a family of (the canonical) injections, \(U\in\mathcal{U}\) and \(\mathcal{O}_{U}\) is an open subset of the Banach space \(\mathcal{A}_{U}\). The set \(j_{U}(\mathcal{O}_{U})\) is called a _Banach slice_ of \(\mathcal{M}\). A set \(\mathcal{X}\subset\mathcal{A}^{\underline{\nu}}\) is called _an immersed submanifold_ if there exists an analytic manifold \(\mathcal{M}\) and an analytic map \(i\colon\mathcal{M}\to\mathcal{A}^{\underline{\nu}}\) so that \(Di(m)\) is a linear homeomorphism onto its range for each \(m\in\mathcal{M}\), and if \(\mathcal{X}=i(\mathcal{M})\). We say that \(\mathcal{X}\) is an _embedded manifold_ if \(i\colon\mathcal{M}\to\mathcal{X}\) is a homeomorphism with the topology on \(\mathcal{X}\) coming from the one on \(\mathcal{A}^{\underline{\nu}}\). ### Topology and analytic structure on the space of germs of pruned polynomial-like mappings Following [McM, §4] we will also endow the space of germs of pruned polynomial-like mappings with a topology: **Definition 17.1**.: \(F_{n}\colon U_{n}\to U_{n}^{\prime}\) converges to \(F\colon U\to U^{\prime}\) if 1. \((U_{n},u_{n})\to(U,u)\) in the _sense of Carathéodory_, i.e. (i) \(u_{n}\to u\), (ii) for each compact \(K\subset U\), \(K\subset U_{n}\) holds for \(n\) large and (iii) for any open connected set \(K^{\prime}\) containing \(u\), if \(K^{\prime}\subset U_{n}\) for infinitely many \(n\) then \(K^{\prime}\subset U\). Here we will take \(u_{n}=u=0\). 2. \(F_{n}\to F\) uniformly on compact subsets of \(U\). Given a pruned polynomial-like mapping \(F\colon U\to U^{\prime}\) so that all its periodic points are hyperbolic, and a neighbourhood \(\tilde{U}\) of \(\overline{U}\) on which \(F\) is holomorphic and extends continuously to its closure, there exists \(\epsilon>0\) so that each \(G\) in the neighbourhood \(B_{\tilde{U}}(F,\epsilon)\), consisting of all holomorphic \(G\colon\tilde{U}\to\mathbb{C}\) with \(||G-F||_{\tilde{U}}<\epsilon\), has a pruned polynomial-like restriction \(G\colon U_{G}\to U_{G}^{\prime}\). This is obtained via holomorphic motion, see Proposition 10.3. Let \[j\colon B_{\tilde{U}}(F,\epsilon)\to\mathcal{PPL}:=\{\text{space of pruned polynomial-like mappings}\}/\!\!\sim \tag{17.1}\] be the corresponding injection. Here \(\sim\) is the equivalence relation from Definition 14.3. From the properties of the holomorphic motion, \(j\) is continuous.
Because of Lemma 14.1 it follows that if \(j(B_{\tilde{U}}(F,\epsilon))\cap j(B_{\tilde{U}^{\prime}}(F^{\prime},\epsilon^ {\prime}))\neq\emptyset\) the composition map \((j|B_{\tilde{U}^{\prime}}(F^{\prime},\epsilon^{\prime}))^{-1}\circ(j|B_{ \tilde{U}}(F,\epsilon))\) is the identity map. It follows that the map \(j\) from (17.1) defines an analytic structure on the space of pruned polynomial-like mappings. In fact, if we denote by \(\mathcal{PPL}_{U}\) the space of pruned polynomial-like whose domain contains \(U\), then for each \(F\in\mathcal{PPL}_{U}\) there exists \(a>0\) so that \(F\in\mathcal{A}_{a}^{\underline{\nu}}\). Thus we have the map \[\mathcal{PPL}_{U}\to\mathcal{A}_{a}^{\underline{\nu}}\to\mathcal{A}^{ \underline{\nu}}.\] The image of \(B_{U}(F,\epsilon)\) under this map \[B_{U}(F,\epsilon)\to\mathcal{PPL}\to\mathcal{A}_{a}^{\underline{\nu}}\to \mathcal{A}^{\underline{\nu}}.\] is called a _slice_ in \(\mathcal{A}^{\underline{\nu}}\). ### Organisation of this part of the paper In Sections 18-19 we will show that the real-hybrid class of any special (hyperbolic) map \(f\in\mathcal{A}_{a}^{\underline{\nu}}\) forms a real analytic Banach manifold in \(\mathcal{A}_{a}^{\underline{\nu}}\). Using this and the so-called Mating Theorem, we will then obtain in Sections 20-22 an immersed manifold structure on the hybrid class of an arbitrary function \(f\in\mathcal{A}^{\underline{\nu}}\). In Sections 23-26 we will determine the codimension of this manifold. In Section 27 we will show that this manifold is even an embedded manifold. In Section 28 we then show that \(\mathcal{H}^{\mathbb{R}}\cap\mathcal{A}_{a}^{\underline{\nu}}\) is a real Banach space, which means that it is not necessary to work within the real analytic topology or within a germ setting. In Sections 29-31 we then show that it is contractible and that these manifolds form a partial lamination. ## 18. The space \(\mathcal{A}_{a}^{\underline{\nu}}\) is a Banach manifold **Lemma 18.1**.: _The space \(\mathcal{A}_{a}^{\underline{\nu}}\) is a real analytic submanifold of the linear space \(\mathcal{A}_{a}\) (of real analytic maps on \(\Omega_{a}\) without any conditions on critical points)._ Proof.: If \(\ell_{i}=2\) for all \(i=1,\dots,\nu\) then \(\mathcal{A}_{a}^{\nu}\) is an open subset of \(\mathcal{A}_{a}\) and so there is nothing to prove. Otherwise one can prove this lemma as was done in Theorem 2.1 in [11]. Indeed, since \(f\in\mathcal{A}_{a}^{\nu}\), there exist \(\nu\), \(\ell_{1},\dots,\ell_{\nu}\) so that for each \(i=1,2,\dots,\nu\), \[f^{\prime}(c_{i})=f^{\prime\prime}(c_{i})=\dots=f^{(\ell_{i}-1)}(c_{i})=0,f^{( \ell_{i})}(c_{i})\neq 0.\] Applying the Implicit Function Theorem to the maps \(\mathcal{A}_{a}\times\mathbb{C}\ni(g,\zeta_{i})\mapsto g^{(\ell_{i}-1)}(\zeta_ {i})\) for \((g,\zeta_{i})\) near \((f,c_{i})\), gives that there exists a neighborhood \(W\) of \(f\) in \(\mathcal{A}_{a}\) and uniquely defined functions \(\zeta_{i}\colon W\to\mathbb{C}\) which are holomorphic such that \(\zeta_{i}(f)=c_{i}\) and \(g^{(\ell_{i}-1)}(\zeta_{i}(g))=0,g^{(\ell_{i})}(\zeta_{i}(g))\neq 0\) for each \(g\in W\). Replacing \(W\) by a smaller neighborhood, for each \(g\in W\) the equation \(g^{\prime}(\zeta)=0\) has \(\ell_{i}-1\) solutions \(\zeta\) (counting multiplicity) near \(c_{i}\). 
It follows that for any \(g\in W\cap\mathcal{A}_{a}^{\nu},\ \ g^{\prime}(\zeta)=0\) has a unique solution near \(c_{i}\) (with multiplicity \(\ell_{i}\)); hence \(\zeta_{i}(g)\) is the only critical point of \(g\in W\cap\mathcal{A}_{a}^{\nu}\) near \(c_{i}\) and it has multiplicity \(\ell_{i}-1\). For \(g\in W\), write \[\zeta_{i}^{0}(g)=g(\zeta_{i}(g)),\zeta_{i}^{1}(g)=g^{\prime}(\zeta_{i}(g)), \zeta_{i}^{2}(g)=g^{\prime\prime}(\zeta_{i}(g)),\dots.\] Thus \(\zeta_{i}(g)\) is a critical point of \(g\) with multiplicity \(\ell_{i}-1\) if and only if \(\zeta_{i}^{j}(g)=0\) for all \(1\leq j\leq\ell_{i}-2\) (note that \(g^{(\ell_{i}-1)}(\zeta_{i}(g))=0,g^{(\ell_{i})}(\zeta_{i}(g))\neq 0\) holds automatically for all \(g\in W\)). Define \(G\colon W\to\mathbb{C}^{(\ell_{1}-2)+\dots+(\ell_{\nu}-2)}\) by \[g\to(\zeta_{1}^{1}(g),\dots,\zeta_{1}^{(\ell_{1}-2)}(g),\dots,\zeta_{\nu}^{1} (g),\dots,\zeta_{\nu}^{(\ell_{\nu}-2)}(g)).\] The map \((DG)_{f}\colon T_{f}W\to\mathbb{C}^{(\ell_{1}-2)+\dots+(\ell_{\nu}-2)}\) has maximal rank. Indeed, if we take \(1\leq i_{0}\leq\nu\), \(1\leq j\leq\ell_{i_{0}}-2\) and the family \(g_{t}=f+tv\) where \(v(z)=\prod_{i\neq i_{0}}(z-c_{i})^{\ell_{i}}(z-c_{i_{0}})^{j}\) then \((DG)_{f}(v)=se_{i}\neq 0\) where \(i=(\ell_{1}-2)+\dots+(\ell_{i_{0}-1}-2)+j\), \(e_{i}\) is the standard unit vector in the \(i\)-th coordinate and \(s=j!\prod_{i\neq i_{0}}(c_{i_{0}}-c_{i})^{\ell_{i}}\neq 0\). Hence the lemma follows from the implicit function theorem. ## 19. Conjugacy classes of semi-hyperbolic maps form Banach manifolds **Definition 19.1**.: We say that \(f\in\mathcal{A}_{a}^{\nu}\) is _hyperbolic_ if all its periodic points are hyperbolic and all critical points are in basins of periodic attractors. In this paper, we say that \(f\) is _semi-hyperbolic_ if all its periodic points are hyperbolic and so that each critical point of \(f\) is either * eventually mapped onto a hyperbolic repelling periodic point, or * periodic, or * is in the basin of a hyperbolic periodic attractor. In this section we will show that the hybrid classes of such, and similar, maps form Banach manifolds. A necessary and sufficient condition for a map to be topologically or hybrid conjugate to a semi-hyperbolic map The theorem below shows, amongst other things, when a map \(g\in\mathcal{A}_{a}^{\nu}\) is real-hybrid conjugate to a semi-hyperbolic map \(f\). **Theorem 19.1**.: _Let \(f\in\mathcal{A}_{a}^{\nu}\) be a real analytic interval map which is semi-hyperbolic. Then_ 1. _There exists a neighbourhood_ \(\mathcal{O}\) _of_ \(f\) _in_ \(\mathcal{A}_{a}^{\nu}\) _and a smooth function_ \(\Psi_{H}\colon\mathcal{O}\to\mathbb{R}^{\nu^{\prime}}\) _so that_ \[\Psi_{H}(g)=\Psi_{H}(f)\iff g\text{ is real-hybrid conjugate to }f.\] _Here_ \(\nu^{\prime}=\nu+\nu_{noness-att}\) _where_ \(\nu\) _is the number of critical points of_ \(f\) _and_ \(\nu_{noness-att}\) _is the number of periodic attractors of_ \(f\) _without critical points in their basin._ _._ 2. _There there exists a neighbourhood_ \(\mathcal{O}\) _of_ \(f\) _in_ \(\mathcal{A}_{a}^{\underline{\nu}}\) _and a smooth function_ \(\Psi_{T}\colon\mathcal{O}\to\mathbb{R}^{\nu_{T}}\) _so that_ \[\Psi_{T}(g)=\Psi_{T}(f)\iff g\text{ is topologically conjugate to }f.\] _Here_ \(\nu_{T}=\nu-\zeta(f)\)_, where_ \(\nu\) _is the number of critical points of_ \(f\) _and_ \(\zeta(f)\) _is the maximal number of critical points in the basins of periodic attractors with pairwise disjoint infinite orbits._ Proof.: Let us first define \(\Psi_{H}\) as in assertion (1). 
Choose a set \(Atr\) of periodic attracting points, so that each periodic attracting orbit intersects \(Atr\) in precisely one point. For each \(p\in Atr\) which contains critical points in its basin, choose one such critical point \(c(p)\) and let \(\mathrm{Cr}^{\prime}_{at}(p)\) be the remaining critical points in the basin of \(p\). Note that since \(f,g\) are real analytic, there may not be any critical point in the basin of \(p\). Also note that any real analytic map \(f\) has at most a finite number of periodic attractors, see [MMvS]. By assumption, for each \(c\in\mathrm{Cr}(f)\) exactly one of the following holds 1. there exists \(q_{c}>0\) so that \(c^{\prime}(c):=f^{q_{c}}(c)\in\mathrm{Cr}(f)\) and so that \(f^{k}(c)\notin\mathrm{Cr}(f)\) for \(1\leq k<q_{c}\); 2. there exists \(1\leq l_{c}<q_{c}\) so that \(f^{q_{c}}(c)=f^{l_{c}}(c)\) and \(f^{k}(c)\notin\mathrm{Cr}(f)\) for \(1\leq k\leq q_{c}\); 3. \(c\) has an infinite orbit and is in the basin of a hyperbolic periodic attractor \(p\in Atr\) so that \(f^{k}(c)\notin\mathrm{Cr}(f)\) for all \(k>0\). For each \(c\) as in case (ii), we have by assumption that \(Df^{q_{c}-l_{c}}(f^{l_{c}}(c))\neq 1\) where \(f^{l_{c}}(c)\) is a periodic point of period \(q_{c}-l_{c}\). Let us denote the set of critical points of \(f\) for which (i), (ii) and (iii) holds by \(\mathrm{Cr}_{ec}\), \(\mathrm{Cr}_{ep}\) and \(\mathrm{Cr}_{at}\). Note that \(\mathrm{Cr}_{at}=\cup_{p\in Atr}[\mathrm{Cr}^{\prime}_{at}(p)\cup\{c(p)\}]\). Let \(\mathcal{O}\) be a neighbourhood of \(f\) in \(\mathcal{A}_{a}^{\underline{\nu}}\) so that critical points \(c_{g}\) of \(g\) depends smoothly on \(g\in\mathcal{O}\). In particular there exists a critical point \(c_{g}^{\prime}(c)\) of \(g\) near \(f^{q_{c}}(c)\) for each \(c\in\mathrm{Cr}_{ec}\). For \(p\in Atr\), let \(r\) be the period of \(p\) and let \(\varphi_{p}\colon(\mathbb{R},p)\to(\mathbb{R},0)\) be the holomorphic map so that \(\varphi_{p}\circ f^{r}(z)=\lambda\varphi_{p}(z)\) near \(p\), see [Mil]. (The map \(\varphi_{p}\) is unique up to a multiplicative constant.) For each critical point \(c\) in the basin of \(p\), let \(n_{c}\) be so that \(f^{n_{c}}(c)\) is in the immediate basin of \(p\). Normalise \(\varphi_{p}\) so that \(\varphi_{p}(f^{n_{c}}(c(p)))=\pm 1\) where \(c(p)\) is a 'preferred' critical point in the basin of \(p\) and where the sign depends on whether \(f^{n_{c}}(c(p))\) is to the left or to the right of \(p\). Note that \(\varphi_{p}\) is defined by \[\varphi_{p}(z)=\lim_{n\to\infty}(f^{nr}(z)-p_{c})/\lambda^{n}\] where \(\lambda=Df^{r}(p)\) and that \(\varphi_{p}\circ f^{r}=\lambda\varphi_{p}\). Normalise \(\varphi_{g,p}\) as above by \(\varphi_{p,g}(g^{n_{c}}(c(p)))=\pm 1\). Now define \[\Psi_{H}\colon\mathcal{O}\to\mathbb{C}^{\nu^{\prime}}\] by \[\begin{split}\Psi_{H}(g)&=\big{(}(g^{q_{c}}(c_{g})- c_{g}^{\prime}(c))_{c\in\mathrm{Cr}_{ec}},(g^{q_{c}}(c_{g})-g^{l_{c}}(c_{g}))_{c \in\mathrm{Cr}_{ep}},\\ (Dg^{r_{c}}(p_{c}),(\varphi_{p,g}(g^{n(c(p))}(c(p)))-\varphi_{p,g} (g^{n(c^{\prime})}(c^{\prime})))_{c^{\prime}\in\mathrm{Cr}_{at}(p)\setminus\{c( p)\}})_{p\in Atr}\big{)}\in\mathbb{R}^{\nu^{\prime}}.\end{split} \tag{19.1}\] Here if \(p\in Atr\) has only at most one critical point in its basin, the term \[(\varphi_{p,g}(g^{n(c(p))}(c(p)))-\varphi_{p,g}(g^{n(c^{\prime})}(c^{\prime}) ))_{c^{\prime}\in\mathrm{Cr}_{at}(p)\setminus\{c(p)\}}\] should be ignored. 
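As an aside, the linearising coordinate \(\varphi_{p}\) and its normalisation used above can be illustrated numerically. The sketch below is a hypothetical example of ours (a toy quadratic map with \(r=1\) and \(n_{c}=1\), not taken from the proof); it approximates \(\varphi_{p}(z)=\lim_{n\to\infty}(f^{n}(z)-p)/\lambda^{n}\), imposes the normalisation \(\varphi_{p}(f^{n_{c}}(c))=\pm 1\), and checks the functional equation \(\varphi_{p}\circ f=\lambda\varphi_{p}\).

```python
# A minimal sketch, assuming the toy map f(x) = x^2 - 0.5 (not taken from the text):
# its attracting fixed point p attracts the critical point c = 0, so r = 1 and n_c = 1.
f = lambda x: x * x - 0.5
p = (1 - 3 ** 0.5) / 2          # attracting fixed point, solving x^2 - 0.5 = x
lam = 2 * p                     # multiplier Df(p); here 0 < |lam| < 1

def phi(z, n=60):
    """Approximate Koenigs coordinate phi_p(z) = lim (f^n(z) - p) / lam^n."""
    for _ in range(n):
        z = f(z)
    return (z - p) / lam ** n

c = 0.0
scale = abs(phi(f(c)))          # normalise so that phi_p(f^{n_c}(c)) = +-1
phi_norm = lambda z: phi(z) / scale

print(phi_norm(f(c)))             # +-1 by the normalisation
print(phi_norm(f(f(c))) / lam)    # same value up to truncation error: phi_p(f(z)) = lam*phi_p(z)
```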
That \(\Psi_{H}(f)\in\mathbb{R}^{\nu^{\prime}}\) follows from \(\mathrm{Cr}=\mathrm{Cr}_{ec}\cup\mathrm{Cr}_{ep}\cup\mathrm{Cr}_{at}\) and since \(\mathrm{Cr}_{at}=\cup_{p\in Atr}[\mathrm{Cr}^{\prime}_{at}(p)\cup\{c(p)\}]\). Let us now show that there exists a neighbourhood \(\mathcal{O}\) of \(f\) so that for any \(g\in\mathcal{O}\) one has that \(\Psi_{H}(f)=\Psi_{H}(g)\) if and only if \(f,g\) are hybrid equivalent. To see that \(\Psi_{H}\) is constant on hybrid classes, first note that by definition the coordinates corresponding to \(\mathrm{Cr}_{ec}\) and \(\mathrm{Cr}_{ep}\) of \(\Psi_{H}(f)\) are zero. If \(f\) has a (hyperbolic) periodic attractor \(p\) then denote by \(p_{g}\) the corresponding periodic attractor for \(g\). Then for \(f,g\) to be hybrid conjugate it is necessary that the conjugacy \(h\) is holomorphic map from the immediate basin of \(f\) at \(p\) to the immediate basin of \(g\) at \(p_{g}\). This implies that the multiplier of \(f\) at \(p\) is equal to the multiplier of \(g\) at \(p_{g}\). Let \(H\) be the map \(h\) in terms of the \(\varphi_{p,f}\) and \(\varphi_{p,g}\) linearising coordinates. It follows that \(H\colon(\mathbb{C},0)\to(\mathbb{C},0)\) is also univalent and since the only univalent map from \(\mathbb{C}\) to \(\mathbb{C}\) is linear, it follows that this (real) linear map is completely determined by the condition that \(h(f^{n(c(p))}(c(p)))=g^{n(c(p))}(c(p))\) which implies that for the lift \(H(\varphi_{g,c}(g^{n(c(p))}(c(p))))=\varphi_{g,c}(g^{n(c(p))}(c(p)))\). Here \(c(p)\) is the preferred critical point in the basin of \(p\). Because \(\varphi\) is unique up to a multiplicative constant, but is normalised so that \(\varphi_{p,g}(g^{n_{c}}(c(p)))=\pm 1\) it follows that \(H=id\). Hence \[\mathcal{O}\ni g\mapsto\Big{(}\varphi_{p,g}(g^{n(c(p))}(c(p)))-\varphi_{p,g}( g^{n(c^{\prime})}(c^{\prime}))_{c^{\prime}\in\mathrm{Cr}_{at}(p)\setminus\{c(p) \}})_{p\in Atr}\Big{)}\in\mathbb{C}^{\nu^{\prime}}\] is constant for all maps which are hybrid conjugate to \(f\). Similarly, if \(\Psi_{H}(f)=\Psi_{H}(g)\) and \(g\in\mathcal{O}\) then all the critical relations are preserved. Moreover, there exists a qc-conjugacy \(h\) between pruned-polynomial-like extensions \(F\colon U\to U^{\prime}\) and \(G\colon U_{g}\to U_{g}^{\prime}\) of \(f\) and \(g\), whose dilatation vanishes on the basins of the periodic attractors and so that \(\bar{\partial}h=0\) on \(K_{F}\). Notice that we may need to adjust the choice of boundaries of the basins of attraction of \(G\colon U_{g}:=E_{g}\cup B_{g}\to E_{g}^{\prime}\cup B_{g}^{\prime}=:U_{g}^{\prime}\) to obtain a conformal conjugacy on basins of attraction. Thus, \(F\colon U\to U^{\prime}\) and \(G\colon U_{g}\to U_{g}^{\prime}\) are hybrid conjugate. Let us now construct the analogous map \(\Psi_{T}\) for the topological conjugacy classes. In that case, the conjugacy is more flexible inside the basin of periodic attractors. However, \(\Psi_{T}\) now needs to take account of critical relations within the basin of periodic attractors. To do this, choose a maximal subset of \(\mathrm{Cr}_{at}^{*}\) of \(\mathrm{Cr}_{at}\) so that the forward orbits of the critical orbit in \(\hat{\mathrm{Cr}}_{at}\) are pairwise disjoint. Moreover, for each \(c\in\mathrm{Cr}_{at}^{*}\) choose the subset of \(\mathrm{Cr}_{at}(c)\) of \(\mathrm{Cr}_{at}\) of critical points \(c^{\prime}\neq c\) so that there exist \(l_{c},l_{c^{\prime}}\geq 1\) so that \(f^{l_{c}}(c)=f^{l_{c^{\prime}}}(c^{\prime})\). 
Now define \[\Psi_{T}\colon\mathcal{O}\to\mathbb{C}^{\nu-\zeta(t)}\] by \[\begin{split}\Psi_{T}(g)&=\big{(}(g^{q_{c}}(c_{g})- c_{g}^{\prime}(c))_{c\in\mathrm{Cr}_{ee}},(g^{q_{c}}(c_{g})-g^{l_{c}}(c_{g}))_{c \in\mathrm{Cr}_{ep}},\\ (g^{l_{c}}(c_{g})-g^{l_{c^{\prime}}}(c_{g}^{\prime}))_{c\in \mathrm{Cr}_{at}^{*},c\in\mathrm{Cr}_{at}(c)}\big{)}\in\mathbb{R}^{\nu-\zeta( f)}.\end{split} \tag{19.2}\] Here, if for some \(c\in\mathrm{Cr}_{at}^{*}\) the set \(\mathrm{Cr}_{at}(c)\) is empty then that term should be ignored. ### Conjugacy classes of semi-hyperbolic mappings are real analytic Banach manifolds Let \(\nu,\nu^{\prime},\zeta(f)\) be as in Theorem 19.1. **Theorem 19.2**.: _Let \(f\in\mathcal{A}_{a}^{\underline{\nu}}\) be semi-hyperbolic. Then_ 1. _the real-hybrid conjugacy class of_ \(f\) _is a real analytic Banach manifold in_ \(\mathcal{A}_{a}^{\underline{\nu}}\) _with real codimension_ \(\nu^{\prime}\)_._ 2. _the topological conjugacy class of_ \(f\) _is a real analytic Banach manifold in_ \(\mathcal{A}_{a}^{\underline{\nu}}\) _with real codimension_ \(\nu-\zeta(f)\) Proof.: To prove (1) it is enough to show that the map \(\Psi_{H}\colon\mathcal{O}\to\mathbb{R}^{\nu^{\prime}}\) defined in equation (19.1) (in the proof of the previous theorem) has full rank. So take a unit vector \(e_{j}\in\mathbb{C}^{\nu^{\prime}}\) and show how to pick a family \(f_{t}\) through \(f\) so that \(\dfrac{d}{dt}\Psi_{H}(f_{t})=e_{j}\). For convenience we will choose \(f_{t}(x)=f(x)+tv(x)\) where \(v\) is a polynomial function which vanishes at \(c\in\operatorname{Cr}(f)\) of order \(\ell_{c}\) and which, in order to ensure that \(f_{t}\in\mathcal{A}_{a}\), also vanishes at \(\pm 1\). This implies that the critical points \(c_{t}\) of \(f_{t}\) do not depend on \(t\) and remain of the same order, and so \(v\in T_{f}\mathcal{A}_{a}^{\underline{\nu}}\). Note that \(\dfrac{d}{dt}f_{t}(x)=v(x)\). Let us consider each coordinate of \(\Psi_{H}\) in turn and show that one can choose \(v\) so that coordinate of \(D\Psi_{H}(v)\) is non-zero while the other ones are zero. Define \(\operatorname{Cr}_{ec},\operatorname{Cr}_{ep},\operatorname{Cr}_{att}\) as in the proof of Theorem 19.1. **Case 1: \(c\in\operatorname{Cr}_{ec}\).** This corresponds to the first component of \(\Psi_{H}\), see equation (19.1), i.e. to critical points which are eventually mapped onto other critical points. By the choice of \(f_{t}\) we have \(\frac{d}{dt}c_{t}|_{t=0}=0\) and a simple calculation shows \[\frac{f_{t}^{q_{c}}(c)}{dt}\big{|}_{t=0}=v(f^{q_{c}-1}(c))+Df(f^{q_{c}-1}(c))v (f^{q_{c}-2}(c))+\cdots+Df^{q_{c}-1}(f(c))v(c).\] It follows that if we choose the function \(v\) so that \(v(c)\neq 0\) and \(v(f^{i}(c))=0\) for \(0<i\leq q_{c}-1\) then \[\frac{d}{dt}(f_{t}^{q_{c}}(c_{t})-c_{t})\bigg{|}_{t=0}\neq 0. \tag{19.3}\] Thus, with this choice of \(v\), the coordinate of \(D\Psi_{H}(v)\) corresponding to \(c\in\operatorname{Cr}_{ec}\) will be non-zero, and as we will see if we choose \(v(f^{i}(\tilde{c}))=0\) for all \(\tilde{c}\neq c\) in \(\operatorname{Cr}_{ec}\) and finitely many \(i\geq 0\), then the other coordinates of \(D\Psi_{H}(v)\) will be zero. The inequality (19.3) implies that \(v\) is transversal to the hybrid class. 
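As a brief aside (not part of the argument), the chain-rule formula for \(\frac{d}{dt}f_{t}^{q_{c}}(c)\big|_{t=0}\) displayed above can be compared with a finite difference. The map, the perturbation \(v\) and all constants below are hypothetical choices of ours; for simplicity \(v\) is only required to vanish to second order at the critical point, and the boundary condition \(v(\pm 1)=0\) is ignored in this toy check.

```python
# A minimal sketch, assuming the toy data f(x) = x^2 - 1.3, critical point c = 0 of
# order 2, and perturbation v(x) = x^2 (x - 0.7); v vanishes to second order at c,
# so the critical point of f_t = f + t*v stays at c.
f = lambda x: x * x - 1.3
df = lambda x: 2 * x
v = lambda x: x * x * (x - 0.7)

def chain_rule_derivative(x, q):
    """v(f^{q-1}(x)) + Df(f^{q-1}(x)) v(f^{q-2}(x)) + ... + Df^{q-1}(f(x)) v(x)."""
    orbit = [x]
    for _ in range(q - 1):
        orbit.append(f(orbit[-1]))
    total, prod = 0.0, 1.0
    for y in reversed(orbit):      # y = f^{q-1}(x), ..., f(x), x
        total += prod * v(y)      # current term: Df^k(f^{q-k}(x)) * v(y)
        prod *= df(y)             # update the accumulated derivative along the orbit
    return total

def iterate(g, x, q):
    for _ in range(q):
        x = g(x)
    return x

c, q, h = 0.0, 3, 1e-7
finite_difference = (iterate(lambda x: f(x) + h * v(x), c, q) - iterate(f, c, q)) / h
print(chain_rule_derivative(c, q), finite_difference)   # the two values agree up to O(h)
```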
**Case 2: \(c\in\operatorname{Cr}_{ep}\).** Next we consider the second component of \(\Psi_{H}\), corresponding to critical points which are eventually mapped to a hyperbolic periodic point:

\[\frac{d}{dt}(f_{t}^{q_{c}}(c)-f_{t}^{l_{c}}(c_{t}))\bigg|_{t=0}=\left(v(f^{q_{c}-1}(c))+\cdots+Df^{q_{c}-l_{c}-1}(f^{l_{c}+1}(c))v(f^{l_{c}}(c))\right)+\]

\[+\left(Df^{q_{c}-l_{c}}(f^{l_{c}}(c))-1\right)\frac{d}{dt}f_{t}^{l_{c}}(c_{t})\bigg|_{t=0}\]

where

\[\frac{d}{dt}f_{t}^{l_{c}}(c_{t})\bigg|_{t=0}=v(f^{l_{c}-1}(c))+Df(f^{l_{c}-1}(c))v(f^{l_{c}-2}(c))+\cdots+Df^{l_{c}-1}(f(c))v(c).\]

From this it follows that, as \(Df^{q_{c}-l_{c}}(f^{l_{c}}(c))\neq 1\), if we take \(v(c)\neq 0\), \(v(f^{i}(c))=0\) for all \(i>0\) and \(v(f^{i}(\tilde{c}))=0\) for all critical points \(\tilde{c}\neq c\) and all \(i\geq 0\), then the coordinate corresponding to \(c\in\operatorname{Cr}_{ep}\) in \(\dfrac{d}{dt}\Psi_{H}(f_{t})\) will be non-zero whereas the others will be zero (as we will see). Note that if \(c\in\operatorname{Cr}_{ep}\) then by assumption \(Df^{q_{c}-l_{c}}(f^{l_{c}}(c))\neq 1\). Hence there exists \(p_{c,t}\) depending smoothly on \(t\) so that \(p_{c,0}=f^{q_{c}-l_{c}}(f^{l_{c}}(c))\) and so that \(f_{t}^{q_{c}-l_{c}}(p_{c,t})=p_{c,t}\) for \(t\) near \(0\). Case 2 therefore follows from the following claim.

**Claim:** Let \(c\in\operatorname{Cr}_{ep}\). Then under the assumption that \(Df^{q_{c}-l_{c}}(f^{l_{c}}(c))\neq 1\) we have that \(\dfrac{d}{dt}(f_{t}^{l_{c}}(c)-p_{c,t})|_{t=0}\neq 0\) is equivalent to \(\dfrac{d}{dt}(f_{t}^{q_{c}}(c)-f_{t}^{l_{c}}(c))|_{t=0}\neq 0\).

**Proof of Claim:**

\[\frac{d}{dt}(f_{t}^{q_{c}}(c)-f_{t}^{l_{c}}(c))|_{t=0}=\frac{d}{dt}f_{t}^{q_{c}-l_{c}}(p_{c,0})|_{t=0}+\left(Df^{q_{c}-l_{c}}(f^{l_{c}}(c))-1\right)\frac{d}{dt}f_{t}^{l_{c}}(c)|_{t=0}\]

Differentiating the equality \(f_{t}^{q_{c}-l_{c}}(p_{c,t})=p_{c,t}\) gives

\[\frac{d}{dt}f_{t}^{q_{c}-l_{c}}(p_{c,0})|_{t=0}+Df^{q_{c}-l_{c}}(f^{l_{c}}(c))\frac{dp_{c,t}}{dt}\bigg|_{t=0}=\frac{dp_{c,t}}{dt}\bigg|_{t=0}\]

and so

\[(1-Df^{q_{c}-l_{c}}(f^{l_{c}}(c)))\frac{dp_{c,t}}{dt}\bigg|_{t=0}=\frac{d}{dt}f_{t}^{q_{c}-l_{c}}(p_{c,0})|_{t=0}=\frac{d}{dt}(f_{t}^{q_{c}}(c)-f_{t}^{l_{c}}(c))|_{t=0}-\left(Df^{q_{c}-l_{c}}(f^{l_{c}}(c))-1\right)\frac{d}{dt}f_{t}^{l_{c}}(c)|_{t=0}.\]

Hence

\[\left(Df^{q_{c}-l_{c}}(f^{l_{c}}(c))-1\right)\left(\frac{d}{dt}\left(f_{t}^{l_{c}}(c)-p_{c,t}\right)\bigg|_{t=0}\right)=\frac{d}{dt}(f_{t}^{q_{c}}(c)-f_{t}^{l_{c}}(c))|_{t=0}\]

This implies the claim. \(\checkmark\)

**Case 3: \(p\in Atr\).** Let us now consider \(p\in Atr\) and assume that \(Df^{r}(p)\neq 1\). Note that

\[\frac{d}{dt}f_{t}^{r}(p_{t})=v(f^{r-1}(p))+Df(f^{r-1}(p))v(f^{r-2}(p))+\cdots+Df^{r-1}(f(p))v(p)+Df^{r}(p)\frac{d}{dt}p_{t}\]

is equal to \(\frac{d}{dt}p_{t}\). It follows that \(v(p)=v(f(p))=\cdots=v(f^{r-1}(p))=0\) implies \(\frac{d}{dt}p_{t}=0\) and similarly \(\frac{d}{dt}f^{i}(p_{t})=0\) for \(i\geq 0\). Hence

\[\frac{d}{dt}Df_{t}^{r}(p_{t})=Df^{r}(p)\left(\frac{v^{\prime}(f^{r-1}(p))}{Df(f^{r-1}(p))}+\frac{v^{\prime}(f^{r-2}(p))}{Df(f^{r-2}(p))}+\cdots+\frac{v^{\prime}(p)}{Df(p)}\right).\]

Taking \(v^{\prime}(p)\neq 0\) and \(v^{\prime}(f^{i}(p))=0\) for \(0<i<r\) gives \(\frac{d}{dt}Df_{t}^{r}(p_{t})\neq 0\). This gives the third component of \(D\Psi_{H}(v)\), which is therefore non-zero.
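Another aside, in the same spirit: the multiplier formula for \(\frac{d}{dt}Df_{t}^{r}(p_{t})\) displayed above (valid once \(v\) vanishes along the orbit of \(p\), so that \(\frac{d}{dt}p_{t}=0\)) can be compared with a finite difference. The quadratic family, the \(2\)-cycle and the perturbation below are hypothetical choices of ours, and we check the full sum rather than the special choice of \(v\) made in the proof.

```python
# A minimal sketch, assuming the toy map f(x) = x^2 - 0.9, whose 2-cycle {p, q} is
# attracting with multiplier 4*(c+1) = 0.4, and a perturbation v vanishing at p and q.
import math

c_par = -0.9
p = (-1 + math.sqrt(1 - 4 * (c_par + 1))) / 2   # the 2-cycle solves x^2 + x + (c+1) = 0
q = p * p + c_par                               # the other point of the cycle, q = f(p)

v = lambda x: (x - p) * (x - q)                 # vanishes on the orbit of p
dv = lambda x: 2 * x - (p + q)
f = lambda x, t=0.0: x * x + c_par + t * v(x)
df = lambda x, t=0.0: 2 * x + t * dv(x)

# predicted value: Df^2(p) * ( v'(p)/Df(p) + v'(q)/Df(q) )
predicted = df(p) * df(q) * (dv(p) / df(p) + dv(q) / df(q))

def periodic_point(t, x0=p, steps=50):
    """Continuation p_t of the periodic point, by Newton's method on f_t^2(x) - x."""
    x = x0
    for _ in range(steps):
        y, dy = x, 1.0
        for _ in range(2):
            dy *= df(y, t)
            y = f(y, t)
        x -= (y - x) / (dy - 1.0)
    return x

def multiplier(t):
    x = periodic_point(t)
    return df(x, t) * df(f(x, t), t)            # Df_t^2 at the continued periodic point

h = 1e-6
print(predicted, (multiplier(h) - multiplier(-h)) / (2 * h))   # should agree closely
```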
**Case 4: \(p\in Atr\), \(c^{\prime}\in\operatorname{Cr}_{at}(p)\setminus\{c(p)\}\).** Let us finally consider the term \[A:=\left.\frac{d}{dt}[\varphi_{p_{t},f_{t}}(f_{t}^{n(c(p))}(c(p)))-\varphi_{p_{ t},f_{t}}(f_{t}^{n(c^{\prime}(p))}(c^{\prime}(p)))]\right|_{t=0} \tag{19.4}\] which is the final component of \(D\Phi_{H}(v)\), see definition (19.1). This term is related to the position of orbits of the critical points in the basin of the periodic attractor \(p\) (which contains \(c\) in its basin) and here \(\varphi_{p_{t},f_{t}}\) is the linearisation at \(p_{t}\) and so \(\varphi_{p_{t},f_{t}}(z)=\lim_{k\to\infty}f_{t}^{rk}(z)/a_{t}^{k}\) where \(a_{t}=Df_{t}^{r}(p_{t})\), and \(|a|<1\). Let us choose a real analytic function \(v\) so that \(v(x)=v^{\prime}(x)=0\) for each \(x\) in the forward orbit of \(p\). This then implies that \(\frac{d}{dt}p_{t}|_{t=0}=p\) and \(\frac{d}{dt}a_{t}|_{t=0}=a\). Note that \[\frac{d}{dt}f_{t}^{k}(z)=v(f^{k-1}(z))+Df(f^{k-1}(z))v(f^{k-2}(z))+\cdots+Df^{ k}(z)v(z). \tag{19.5}\] Let us take \(|v|<\epsilon\) along forward iterates of \(f_{t}^{n(c(p))}(c(p))\) and \(f_{t}^{n(c^{\prime}(p))}(c^{\prime}(p))\). This is possible as \(v(x)=v^{\prime}(x)=0\) for each \(x\) in the forward orbit of \(p\), by taking \(v\) zero in a finite number of forward iterates of \(f_{t}^{n(c(p))}(c(p))\) and \(f_{t}^{n(c^{\prime}(p))}(c^{\prime}(p))\). Since \(\varphi(z)=\lim_{k\to\infty}f^{rk}(z)/a^{k}\) and using equation (19.5) it follows that \[-\delta<\left.\frac{d}{dt}[\varphi_{p_{t},f_{t}}(f^{n(c(p))}(c(p)))-\varphi_{p_ {t},f_{t}}(f^{n(c^{\prime})}(c^{\prime}))]\right|_{t=0}<\delta \tag{19.6}\] where \(\delta>0\) is determined by \(\epsilon\) and the multiplier at the periodic point \(p\); in fact, \(\delta\approx\epsilon/(1-|a|)\). Hence (19.4) is, by the chain rule, equal to some \(s\in[-\delta,\delta]\) plus \[B:=\left[D\varphi_{p,f}(f^{n(c(p))}(c(p)))\left.\frac{d}{dt}f_{t}^{n(c(p))}(c( p))\right|_{t=0}-D\varphi_{p,f}(f^{n(c^{\prime})}(c^{\prime}))\left.\frac{d}{ dt}f_{t}^{n(c^{\prime})}(c^{\prime})\right|_{t=0}\right]\] Since \(D\varphi_{p,f}\neq 0\), one can choose \(A\neq 0\) and \(|B|>C\delta\) by choosing \(v(f(c))=\cdots=v(f^{n(c(p))}(c(p)))=0\) and \(v(f(c^{\prime}))=\cdots=v(f^{n(c^{\prime}(p))}(c^{\prime}(p)))=0\) and \(v(c),v(c^{\prime})\neq 0\) appropriately. Indeed, then \[B=D\varphi_{p,f}(f^{n(c(p))}(c(p)))Df^{n(c(p))}v(c)-D\varphi_{p,f}(f^{n(c^{ \prime})}(c^{\prime}(p)))Df^{n(c^{\prime})}v(c^{\prime}).\] Since the factors in front of \(v(c)\) and \(v(c^{\prime})\) are non-zero, we can choose \(v(c),v(c^{\prime})\) so that \[|A|=|B|-\delta>0.\] In particular, it follows that if \(v\) so that the values of \(v\) along the forward orbits of \(c\) and \(c^{\prime}\) as above, then the fourth component of \(\Psi_{H}(v)\) is non-zero. Combining the previous cases, we see that for each coordinate of \(\Psi_{H}\) one can choose \(v\) so that coordinate of \(D\Psi_{H}(v)\) is non-zero while the others are zero. Hence \(D\Psi\) has maximal rank and therefore Assertion (1) follows from the Implicit Function Theorem. The proof of Assertion (2) is entirely analogous. _Remark 19.1_.: Of course one could denote \[\frac{f_{t}^{n}(c)}{dt}|_{t=0}=v(f^{n-1}(c))+Df(f^{n-1}(c))v(f^{n-2}(c))+ \cdots+Df^{n-1}(f(c))v(c)\] by \(v^{n}(c)\) and obtain shorter expressions. The full expressions show how to choose \(v\) so that the required assumptions are satisfied. ### The simple parabolic case **Theorem 19.3**.: _Let \(a>0\) and \(f\in\mathcal{A}_{a}^{\underline{\nu}}\). 
Assume that \(f\) has a simple parabolic periodic point \(a_{0}\) of period \(n\). Then there exists a real analytic Banach manifold \(\mathcal{P}_{f,a}^{\underline{\nu}}\) in \(\mathcal{A}_{a}^{\underline{\nu}}\) consisting of maps in \(\mathcal{A}^{\underline{\nu}}\) so that each map \(g\in\mathcal{P}_{f,a}^{\underline{\nu}}\) has a unique parabolic periodic point of period \(n\) near \(a_{0}\) and of the same type as \(a_{0}\), and each \(g\in\mathcal{A}_{a}^{\underline{\nu}}\) with this property is in \(\mathcal{P}_{f,a}^{\underline{\nu}}\)._

_If \(a_{0}\) is of saddle-node or period-doubling type, then \(\mathcal{P}_{f,a}^{\underline{\nu}}\) has codimension one, and if it is of pitchfork type it has codimension two._

_If \(f\) has \(k_{0}\) parabolic periodic points of saddle-node or period-doubling type and \(k_{1}\) periodic points of pitchfork type, then the analogous statement holds and the codimension of \(\mathcal{P}_{f,a}^{\underline{\nu}}\) is equal to \(k_{0}+2k_{1}\)._

Proof.: For any vector field \(v\) define \(v^{n}=\frac{d}{dt}(f+tv)^{n}|_{t=0}\). Assume that \(a_{0}\) is a simple parabolic periodic point of period \(n\). If \(Df^{n}(a_{0})=1\) and \(a_{0}\) is of saddle-node type then \(f^{n}(x)=a_{0}+(x-a_{0})+\alpha(x-a_{0})^{2}+O(x-a_{0})^{3}\) with \(\alpha\neq 0\). In this case, choose a vector field \(v\) so that \(v^{n}(a_{0})\neq 0\) (it is easy to see that this is possible). Let \(B_{\epsilon}(a_{0})\) be a small neighbourhood of \(a_{0}\) and define \(\Psi\) on a neighbourhood \(\mathcal{O}\times(-\epsilon,\epsilon)\times B_{\epsilon}(a_{0})\) of \((f,0,a_{0})\) by

\[\Psi(g,t,x)=((g+tv)^{n}(x)-x,D(g+tv)^{n}(x)-1).\]

Here \(\mathcal{O}\) is an open subset of the real analytic Banach manifold \(\mathcal{A}_{a}^{\nu}\). Let us show that for each \(g\) near \(f\) there exist a unique \(t\) and a unique \(x\) so that \(\Psi(g,t,x)=(0,0)\). Note that this implies that \(x\) is a periodic point of \(g+tv\) with multiplier \(1\) which is of saddle-node type if \(g\) is close to \(f\). The partial derivative matrix of \(\Psi\) w.r.t. \(t\) and \(x\) at \((g,t,x)=(f,0,a_{0})\) is equal to

\[\left(\begin{array}{cc}v^{n}(a_{0})&0\\ 0&2\alpha\end{array}\right).\]

Since \(v^{n}(a_{0})\neq 0\) and \(\alpha\neq 0\), the Implicit Function Theorem gives for each \(g\in\mathcal{O}\) the existence of a unique solution \(t(g),x(g)\) of the equation \(\Psi(g,t(g),x(g))=(0,0)\). Here \(t(g)\) and \(x(g)\) depend smoothly on \(g\). Thus we obtain that there exists a codimension-one manifold with the desired properties, namely the space of maps of the form \(g+t(g)v\). In fact, if we let \(T\mathcal{P}_{f,a}\) be the linear space of vector fields \(\tilde{v}\) so that \(\tilde{v}^{n}(a_{0})=0\) then \(\mathcal{P}_{f,a}\) can be locally written as \(\{f+\tilde{v}+\tilde{t}(\tilde{v})v;\tilde{v}\in T\mathcal{P}_{f,a}\}\) where \(\tilde{t}\) is an analytic function. It is easy to see that \(D\tilde{t}(0)=0\) and so the tangent space of \(\mathcal{P}_{f,a}\) is equal to \(T\mathcal{P}_{f,a}\).

If \(Df^{n}(a_{0})=-1\) and \(a_{0}\) is of period-doubling type then \(f^{n}(x)=a_{0}-(x-a_{0})+\alpha(x-a_{0})^{2}+O(x-a_{0})^{3}\). In this case we choose a vector field \(v\) so that

\[2Dv^{n}(a_{0})+v^{n}(a_{0})D^{2}f^{n}(a_{0})\neq 0\]

and argue as in the saddle-node case. Here the above condition on \(v\) follows from the calculation in Remark 11.1.

If \(Df^{n}(a_{0})=1\) and \(f\) is attracting from both sides then \(f^{n}(x)=a_{0}+(x-a_{0})+\alpha(x-a_{0})^{3}+O(x-a_{0})^{4}\) with \(\alpha<0\).
In this case let \(v\) and \(w\) be independent vector fields so that \(v^{n}(a_{0})\neq 0\), \(Dv^{n}(a_{0})=0\), \(w^{n}(a_{0})=0\) and \(Dw^{n}(a_{0})\neq 0\). (It is easy to see that this is possible.) Next define \[\Psi(g,t,s,x)=((g+sv+tw)^{n}(x)-x,D(g+sv+tw)^{n}(x)-1,D^{2}(g+sv+tw)).\] The partial derivative matrix of \(\Psi\) w.r.t. \(s,t,x\) at \((g,s,t,x)=(f,0,0,a_{0})\) is equal to \[\left(\begin{array}{ccc}v^{n}(a_{0})&0&0\\ 0&Dw^{n}(a_{0})&0\\ 0&0&6\alpha\end{array}\right).\] By construction this matrix is invertible, and so there exists a unique \(s(g),t(g),x(g)\) so that \(\Psi(g,s(g),t(g),x(g))=(0,0,a_{0})\). It follows that each map of the form \(g+s(g)v+t(g)w\) has a pitchfork parabolic point at \(x(g)\). Thus, using the Implicit Function Theorem we obtain a codimension-two manifold of maps with a parabolic periodic point of pitchfork type. The tangent space of this manifold is the space of vector fields \(\tilde{v}\) so that \(\tilde{v}^{n}(a_{0})=D\tilde{v}^{n}(a_{0})=0\). _Remark 19.2_.: A similar analysis can be found on page 510 [ALM]. A more general result in this setting for arbitrary (non-degenerate) parabolic periodic points can be found in the Main Theorem in [LSvS2]. Note that if \(a_{0}\) is of pitchfork type then it is degenerate in the sense of [LSvS2]. _Remark 19.3_.: One has analogous results within the space of polynomials, rational maps, maps of finite type or more general families of maps, see [Eps, LSvS1, LSvS2, LSvS3]. However, here the situation is simpler as it is easier to show that the map \(\Psi\) from above is a submersion in the present case. This is because here we can construct the appropriate transversal vector fields to the manifold by hand because the space \(\mathcal{A}_{a}^{\underline{\nu}}\) is much larger than the space of rational maps. **Corollary 19.4**.: _The conclusion of Theorem 19.2 also holds if we replace hyperbolic periodic point throughout by hyperbolic or simple parabolic periodic point._ Proof.: This follows by combining the proofs of the previous theorem with that of Theorem 19.2. ## 20. Mating pruned polynomial-like mappings The following theorem is the analogue of the mating theorems [DH] and [Ly1] for polynomial-like mappings in our context of pruned polynomial-like mappings. **Theorem 20.1**.: _Let \(F:U_{F}\to U_{F}^{\prime}\) and \(G:U_{G}\to U_{G}^{\prime}\) be pruned polynomial-like mappings with \(Q(F)=Q(G)\). Then there exists a unique pruned polynomial-like mapping \(\widetilde{F}:U_{\tilde{F}}\to U_{\tilde{F}}^{\prime}\), so that_ 1. \(\widetilde{F}\) _is hybrid conjugate to_ \(F\)_._ 2. \(\widetilde{F}\) _and_ \(G\) _have the same external mappings._ We call \(\widetilde{F}\) a _mating_ of \(F\) and \(G\). Proof.: Since \(Q(F)=Q(G)\) we obtain by Proposition 13.2 a \(\mathbb{R}\)-symmetric quasiconformal map \(H\colon(U_{F}\cup U_{F}^{\prime})\setminus K_{F}\to(U_{G}\cup U_{G}^{\prime}) \setminus K_{G}\) which conjugates \(F\colon U_{F}\to U_{F}^{\prime}\) and \(G\colon U_{G}\to U_{G}^{\prime}\) wherever this is defined. Note that \(K_{F}\cap\partial U_{F}\) consists of a finite number of non-real preimages of the pruning points. Therefore we can extend \(H\) near each of these points \(K_{F}\cap\partial U_{F}\) by taking preimages under \(F\) thus obtaining a qc conjugacy \(H\colon U_{F}^{*}\setminus K_{F}\to U_{G}^{*}\setminus K_{G}\) between \(F\) and \(G\) (on these sets) where \(U_{F}^{*}\), \(U_{G}^{*}\) are neighbourhood of \(K_{F}\) resp. \(K_{G}\). 
Here we use that \(F,G\) extend holomorphically to neighbourhoods of the closures of their domains. Since \(\bar{\mathbb{C}}\setminus(\partial U_{F}\cup\partial U_{F}^{\prime})\) and \(\bar{\mathbb{C}}\setminus(\partial U_{G}\cup\partial U_{G}^{\prime})\) are quasi disks (and one choose \(U_{F}^{*},U_{G}^{*}\) conveniently), \(H\) has a qc extension \(H\colon\bar{\mathbb{C}}\setminus K_{F}\to\bar{\mathbb{C}}\setminus K_{G}\). Let us consider the standard conformal structure \(\sigma\) on \(\bar{\mathbb{C}}\). Then \(\nu=H^{*}\sigma\) is a conformal structure on \(\bar{\mathbb{C}}\setminus K_{F}\), which is invariant under \(F\) on its domain. We extend \(\nu\) to \(\mathbb{C}\) by the standard conformal structure \(\sigma\). Thus we obtain a conformal structure \(\mu\) on \(\bar{\mathbb{C}}\), which is invariant under \(F\) on \(U_{F}^{*}\setminus K_{F}\). We have that \(\mu=H^{*}\sigma\) on \(\mathbb{C}\setminus K_{F}\). Applying the Measurable Riemann Mapping Theorem, see [AB], we straighten \(\mu\), and obtain a qc homeomorphism \(H_{\mu}\colon\mathbb{C}\to\mathbb{C}\) which is \(\mathbb{R}\)-symmetric and sends the ellipse field defined by \(\mu\) to the standard complex structure. It follows that \(\widetilde{F}:=H_{\mu}\circ F\circ H_{\mu}^{-1}\) is \(\mathbb{R}\)-symmetric conformal pruned polynomial-like mapping on \(U_{\tilde{F}}=H_{\mu}U_{F}\). We can and will normalise \(H_{\mu}\) so that \(H_{\mu}(-1)=-1\), \(H_{\mu}(1)=1\) which implies that \(\widetilde{F}(\partial I)\subset\partial I\). Since \(\mu=0\) on \(K_{F}\), \(\bar{\partial}H_{\mu}=0\) a.e. on \(K_{F}\) and it follows that \(\tilde{F}\) is hybrid conjugate to \(F\). This proves property (1). Note that by construction \(H\circ H_{\mu}^{-1}\colon\bar{\mathbb{C}}\setminus K_{\tilde{F}}\to\bar{ \mathbb{C}}\setminus K_{G}\) is conformal. Denote by \[\phi_{F}\colon\overline{\mathbb{C}}\setminus K_{F,X_{F}}\to\overline{\mathbb{C }}\setminus\overline{\mathbb{D}}\text{ and }\phi_{G}\colon\overline{\mathbb{C}}\setminus K_{G,X_{G}}\to\overline{ \mathbb{C}}\setminus\overline{\mathbb{D}}\] the normalized Riemann mappings, then \(\hat{H}=\phi_{G}\circ H\circ\phi_{F}^{-1}\) is a qc-conjugation between \(\hat{F}_{X}\) and \(\hat{G}_{X}\) near \(\partial\mathbb{D}\), i.e. \(\hat{H}\circ\hat{F}_{X}=\hat{G}_{X}\circ\hat{H}\) since \(\hat{F}_{X}=\phi_{F}\circ F\circ\phi_{F}^{-1}\) and \(\hat{G}_{X}=\phi_{G}\circ G\circ\phi_{G}^{-1}\). Hence \(\Psi\colon\bar{\mathbb{C}}\setminus K_{\widetilde{F}}\to\bar{\mathbb{C}}\setminus \bar{\mathbb{D}}\) defined by \(\Psi:=\hat{H}\circ\phi_{F}\circ H_{\mu}^{-1}=\phi_{G}\circ H\circ\phi_{F}^{-1} \circ\phi_{F}\circ H_{\mu}^{-1}=\phi_{G}\circ H\circ H_{\mu}^{-1}\) is conformal. Due to the normalisations, \(\Psi\) therefore agrees with \(\phi_{\tilde{F}}\). Moreover \(\Psi\circ\tilde{F}\circ\Psi^{-1}=(\hat{H}\circ\phi_{F}\circ H_{\mu}^{-1}) \circ(H_{\mu}\circ F\circ H_{\mu}^{-1})\circ(H_{\mu}\circ\phi_{F}^{-1}\circ \hat{H}^{-1})=\hat{H}\circ(\phi_{F}\circ F\circ\phi_{F}^{-1})\circ\hat{H}^{- 1}=\hat{H}\circ\hat{F}_{X}\circ\hat{H}^{-1}=\hat{G}_{X}\), proving property (2). Let us now assume that \(\tilde{F}_{1},\tilde{F}_{2}\) are two such maps. Then \(\tilde{F}_{1},\tilde{F}_{2}\) are qc-conjugate and by the remark below this proof the conjugacy has the property that \(\bar{\partial}H=0\) on \(K(\tilde{F}_{1})\). At the same the external maps of \(\tilde{F}_{1},\tilde{F}_{2}\) are the same, and so \(\tilde{F}_{1},\tilde{F}_{2}\) are conformally conjugate outside their filled Julia sets. 
By construction these maps agree on their filled Julia sets, and so \(\tilde{F}_{1},\tilde{F}_{2}\) are conformally conjugate. Since the conjugacy send \(\pm 1\) to \(\pm 1\) it follows that \(\tilde{F}_{1}=\tilde{F}_{2}\). ### Hybrid classes are locally conformally equivalent Using the previous mating result we obtain: **Theorem 20.2**.: _Suppose that \(F_{0}:U_{F_{0}}\to U^{\prime}_{F_{0}}\) and \(G_{0}:U_{G_{0}}\to U^{\prime}_{G_{0}}\) are pruned polynomial-like mappings so that \(Q(F_{0})=Q(G_{0})\). Then_ * _the hybrid classes of_ \(F_{0}\) _and_ \(G_{0}\) _are homeomorphic;_ * _if the hybrid class of_ \(G_{0}\) _has an analytic structure, then the one for_ \(F_{0}\) _also has an analytic structure._ Proof.: Compare [1, Lemma 4.3]. We recall the following: **Fact.** If \(F_{\lambda}=H_{\lambda}\circ F_{0}\circ H_{\lambda}^{-1}\), \(\lambda\in\Lambda\subset\mathbb{C}\) is a family of holomorphic mappings and \(H_{\lambda}\) is a holomorphic family of qc mappings, then \(F_{\lambda}\) is holomorphic in \(\lambda\). By the Mating Theorem, Theorem 20.1, we have an invertible mapping \(\psi:\mathcal{H}_{G_{0}}\to\mathcal{H}_{F_{0}}\). Let us prove that it is continuous (the proof that \(\psi^{-1}\) is continuous is the same). Note that there exists a neighbourhood \(\mathcal{U}\) of \(G_{0}\) in the Caratheodory topology defined on the space of pruned polynomial-like mappings (see Definition 17.1), so that for each \(G\in\mathcal{U}\) in the hybrid class of \(G_{0}\) we can select holomorphically moving domains \(U_{G},U^{\prime}_{G}\) so that \(G:U_{G}\to U^{\prime}_{G}\) is a pruned polynomial-like mapping and \(Q(G)=Q(G_{0})\). This holomorphic motion defines a family of conformal structures on \(U^{\prime}_{G_{0}}\setminus U_{G_{0}}\). More precisely, we obtain a holomorphic motion \(H_{G}:(U_{G_{0}},U^{\prime}_{G_{0}})\to(U_{G},U^{\prime}_{G})\) of \(\partial U_{G_{0}}\cup\Gamma_{G_{0}}\) over \(\mathcal{U}\) conjugating \(G_{0}:U_{G_{0}}\to U^{\prime}_{G_{0}}\) with \(G:U_{G}\to U^{\prime}_{G}\) on \(\partial U_{G_{0}}\cup\Gamma_{G_{0}}\). Let us start with the standard conformal structure on \(\mathbb{C}\setminus U_{G}\). Pulling it back by \(H_{G}\), we obtain a conformal structure \(H_{G}^{*}\sigma\) on \(\mathbb{C}\setminus U^{\prime}_{G_{0}}\). Since the external mappings of \(F_{0}\) and \(G_{0}\) are qc conjugate, there exists a qc mapping \(H:U_{F_{0}}\to U_{G_{0}}\) that conjugates \(F_{0}\) with \(G_{0}\) on \(\partial U_{G_{0}}\cup\Gamma_{G_{0}}\). Now, \(H^{*}H_{G_{0}}^{*}\sigma\) is a conformal structure defined on \(U^{\prime}_{F_{0}}\setminus U_{F_{0}}\). Pulling it back by \(F_{0}^{n},n=0,1,2,\dots\), we obtain an invariant Beltrami differential defined on \(U^{\prime}_{F_{0}}\setminus K_{F_{0}}\), and extending this to \(\mathbb{C}\) by \(\sigma\), we obtain a Beltrami differential \(\nu_{G}\), which is invariant on a neighbourhood of \(K_{F_{0}}\), and which depends holomorphically on \(G\). By the Measurable Riemann Mapping Theorem and the fact above we obtain a family \(F_{G}:U_{F_{G}}\to U^{\prime}_{F_{G}}\) of pruned polynomial-like mappings in \(\mathcal{H}_{F_{0}}\), which depends continuously on \(G\). So if \(G_{\lambda}\) is a holomorphic family of mappings of pruned polynomial-like mappings then \(F_{G_{\lambda}}\) also depends analytically on \(\lambda\). Later on we will show that \(\mathcal{T}_{f}\) can be viewed as the product of \(\mathcal{H}_{f}\) and the Teichmuller spaces of some punctured tori, see Theorem 22.1. ## 21. 
Hybrid classes form immersed analytic manifold

Our goal now is to show that the real-hybrid class of a real analytic mapping \(f\) has the structure of an analytic manifold. To do this, we show that it can be exhausted by a union of "compatible" complex analytic manifolds. The difficulty that we need to overcome is that germs of real analytic maps of the interval are equivalence classes of pruned polynomial-like maps, and that we need to take prunings arbitrarily close to the real line.

**Lemma 21.1**.: _Suppose that \(f\) is a real analytic mapping of the interval, so that each periodic point of \(f\) is either hyperbolic or is a simple parabolic periodic point. Assume that \(f\in\mathcal{A}_{a}^{\underline{\nu}}\). Then there exist a sequence of real analytic interval maps \(g_{n}\in\mathcal{A}_{a}^{\underline{\nu}}\) and pruned polynomial-like maps \(F,G_{n}\) which are complex extensions of \(f,g_{n}\) so that_

1. \(Q(F)=Q(G_{n})\);
2. _the domains of_ \(F,G_{n}\) _are compactly contained in_ \(\Omega_{a}\)_, and_ \(g_{n}\to f\) _on_ \(\overline{\Omega_{a^{\prime}}}\) _for each_ \(0<a^{\prime}<a\)_;_
3. _if_ \(f\) _has only hyperbolic periodic points, then each map_ \(g_{n}\) _is hyperbolic;_
4. _if_ \(f\) _has parabolic periodic points, then we can choose_ \(g_{n}\in\mathcal{P}_{f}\) _(where_ \(\mathcal{P}_{f}\) _is as in Theorem 19.3) so that each critical point of_ \(g_{n}\) _is either in the basin of a parabolic periodic point or a hyperbolic periodic point._

Proof.: This follows from density of hyperbolicity, see [12]. Here, if \(f\) has simple parabolic points we restrict to the manifold of maps with corresponding parabolic periodic points from Theorem 19.3.

_Remark 21.1_.: Note that \(f\in\mathcal{A}^{\underline{\nu}}\) may have periodic attractors which do have critical points in their basin, and in this case \(g_{n}\) will have the same property for \(n\) large.

**Theorem 21.2** (Manifold structure of \(\mathcal{H}_{f}^{\mathbb{R}}\)).: _Assume that \(f_{0}\) is a real analytic mapping so that all its periodic points are hyperbolic or simple parabolic points. Then the real-hybrid class of the germ of \(f_{0}\) is an immersed real analytic submanifold of \(\mathcal{A}^{\underline{\nu}}\)._

Proof.: Choose a sequence of finite pruning data \(Q_{n}\subset\partial\mathbb{D}\), \(n\geq 1\), associated to \(f_{0}\), so that \(Q_{n+1}\supset Q_{n}\) and \(Q_{n}\) is an admissible pruning set for \(f_{0}\) for each \(n\geq 1\). In particular, \(f_{0}\) has pruned polynomial-like extensions \(F_{0,n}\colon U_{0,n}\to U_{0,n}^{\prime}\) for each \(n\) (corresponding to pruning data \(Q_{n}\)). We can choose \(Q_{n}\) so that each \(f\in\mathcal{H}_{f_{0}}^{\mathbb{R}}\) has a pruned polynomial-like extension \(F\) corresponding to some \(Q_{n}\). From the previous lemma, if \(f_{0}\) has no parabolic periodic points, there exists a sequence \(\{g_{n}\}\) of mappings converging to \(f_{0}\) on \(\overline{\Omega}_{a}\) with the properties that \(g_{n}\) is hyperbolic and so that \(Q_{n}\) is an admissible pruning set for \(g_{n}\). It follows that for each \(f\in\mathcal{H}_{f_{0}}^{\mathbb{R}}\) and all \(n\) sufficiently large, \(f\) and \(g_{n}\) have pruned polynomial-like extensions \(F_{n}\colon U_{n}\to U_{n}^{\prime}\) and \(G_{n}\colon U_{g_{n}}\to U_{g_{n}}^{\prime}\) so that \(Q_{n}:=Q(G_{n})=Q(F_{n})\) and so that the domains of \(F_{n},G_{n}\) are compactly contained in the domain \(\Omega_{a}\) of analyticity of \(f\) and \(g_{n}\).
If \(f_{0}\) has parabolic periodic points, but all of them of simple type, then we can assume that \(g_{n}\in\mathcal{P}_{f}\) and that all critical points of \(g_{n}\) are in the basins of hyperbolic or simple parabolic periodic points. Here \(n\) (and so \(Q_{n}\)) depends on \(f\) and in particular on the domain of analyticity of the map \(f\) and thus on the size of the pruning intervals \(J_{i}\supset J_{i}^{*}\ni f(c_{i})\). By Theorem 20.2, we can therefore parameterize the real-hybrid class of \(F_{0,n}\colon U_{0,n}\to U_{0,n}^{\prime}\) by the real-hybrid class of \(G_{n}\colon U_{G_{n}}\to U_{G_{n}}^{\prime}\). By the properties of \(g_{n}\), from Theorem 19.2 (or Corollary 19.4 if \(f_{0}\) has parabolic periodic points) it follows that \(\mathcal{H}_{g_{n}}^{\mathbb{R}}\cap\mathcal{A}_{a^{\prime}}\) is a real analytic Banach manifold for each \(a^{\prime}\in(0,a]\) sufficiently small. Therefore, Theorem 20.2 implies that the real-hybrid class of \(F_{0,n}\colon U_{0,n}\to U^{\prime}_{0,n}\) is an immersed analytic submanifold \(X_{n}\) of \(\mathcal{A}^{\underline{\nu}}\). Moreover, \(f_{0}\) is in this hybrid class for all \(n\) sufficiently large. Since \(Q_{n}\subset Q_{n+1}\) we have \(X_{n}\subset X_{n+1}\). Since any \(f\in\mathcal{H}_{f_{0}}\) is included in some \(X_{n}\) we have \(\cup X_{n}=\mathcal{H}_{f_{0}}\). ## 22. Topological conjugacy classes form immersed analytic manifold To discuss the conjugacy class of \(\mathcal{T}_{f_{0}}\) it will be useful to denote by \(\mathrm{T}(T_{\mathrm{g}}^{2})\) the _Teichmuller space_ of a torus \(T^{2}\) with \(\mathrm{g}\) marked points. This means that \(\mathrm{T}(T_{\mathrm{g}}^{2})\) is the space of all pairs \((X,\phi)\) where \(X\) is a \(2\)-torus with \(\mathrm{g}\) marked points and \(\phi\colon T^{2}\to X\) (mapping marked points to marked points) is a quasiconformal map so that \((X_{1},\phi_{1})\) is considered equivalent to \((X_{2},\phi_{2})\) if \(\phi_{2}\phi_{1}^{-1}\colon X_{1}\to X_{2}\) is isotopic to a holomorphic map. If \(g>0\) this space has real dimension \(2\mathrm{g}\). If \(\mathrm{g}=0\) then \(T_{0}^{2}\) is not a hyperbolic surface, and the Teichmuller space of \(T_{0}^{2}\) is equal to the the upper half-space \(\mathbb{H}\) and so again its real dimension is two. For more on this see [IT]. In the theorem below, we only count critical points whose orbits is infinite, so not critical points which are eventually mapped into the periodic orbit. **Theorem 22.1** (Manifold structure of \(\mathcal{T}_{f}\)).: _Assume that \(f\in\mathcal{A}^{\underline{\nu}}\) has only hyperbolic periodic points. Then \(\mathcal{T}_{f}\) is conformally equivalent to the product \(\mathcal{H}_{f}\) and sets of the form \(\mathrm{T}(T_{\mathrm{g}_{1}}^{2})\):_ \[\mathcal{T}_{f}\approx\mathcal{H}_{f}\times\mathrm{T}(T_{\mathrm{g}_{1}}^{2}) \times\ldots\mathrm{T}(T_{\mathrm{g}_{a}}^{2}) \tag{22.1}\] _and is a real analytic manifold. Here \(a\) is equal to the number of hyperbolic periodic attractors, and \(\mathrm{g}_{i}\) is equal to the number of critical points with disjoint infinite orbits in the basin of the \(i\)-th periodic attractor._ Proof.: If \(f_{0}\) has no periodic attractors or parabolic periodic points, then \(\mathcal{H}_{f_{0}}=\mathcal{T}_{f_{0}}\) and so this statement follows from the previous corollary. So assume that \(f_{0}\) has precisely \(a>0\) attracting periodic orbits \(O_{1},\ldots,O_{a}\) of period \(m_{1},\ldots,m_{a}\). 
Pick a periodic point \(p_{i}\in O_{i}\) and let \(\mathrm{g}_{i}\) be the number of critical points with disjoint orbits that are contained in \(B(O_{i})\setminus O_{i}\). Now take a fundamental annulus \(A_{i}\) around \(p_{i}\) so that \(f^{m_{i}}\) is a diffeomorphism on the disk bounded by \(A_{i}\) and so that each orbit of a critical point entering the basin of \(p_{i}\) (and which is not mapped to \(O_{i}\)) enters \(A_{i}\). The modulus of the annulus is equal to that of the annulus \(\{z:|\lambda_{1}|<|z|<1\}\) where \(\lambda_{i}=Df^{m_{i}}(p_{i})\in\mathbb{R}\). Identifying the inner and outer boundary of \(A_{i}\) via the identification \(z\mapsto f^{m_{i}}(z)\) on the boundary, we obtain a torus \(T_{i}^{2}\). Given a map \(f\in\mathcal{H}_{f_{0}}\), choose a fundamental annulus \(A_{i,f}\) near the periodic attractor \(p_{i,f}\) (corresponding to the periodic attractor \(p_{i}\) of \(f_{0}\)) which is the holomorphic image of the fundamental annulus \(A_{i}\) for \(f_{0}\) (mapping iterates of critical points of \(f_{0}\) in \(A_{i}\) to corresponding ones for \(f\)). By identifying the inner and outer boundary points of \(A_{i}\) by the dynamics of \(f_{0}\) we obtain a torus \(T_{i}^{2}\) with \(\mathrm{g}_{i}\) marked points and similarly \(A_{i,f}\) induces a marked torus \(T_{i,f}\) and we have a conformal map \(\phi_{f}\colon T_{i,f}\to T_{i}\). Now pick an element of \(\mathrm{T}(T_{\mathrm{g}_{i}}^{2})\), i.e. \(X\) and a quasiconformal map \(\phi\colon T_{i}\to X\). Thus we obtain a map \(\phi\circ\phi_{f}\colon T_{i,f}\to X\). Let \(\mu_{i}\) be the Beltrami coefficient of this map on \(A_{i,f}\). Using dynamics this induces an invariant Beltrami coefficient \(\mu_{i}\) on the basin of \(p_{i,f}\). Now do this for each periodic attractor \(p_{i,f}\) of \(f\). Now extend the Beltrami coefficient on the union of the basins to an invariant Beltrami coefficient \(\mu=\mu_{\phi_{1},\ldots,\phi_{a}}\) on all of \(\mathbb{C}\), so that it is zero outside the basins. Using the Riemann mapping theorem, we obtain a unique \(\mathrm{qc}\) map \(h_{\mu}\) normalised so that \(h_{\mu}(\pm 1)=\pm 1\). Because \(\mu\) is \(f\)-invariant, the map \(f_{\mu}=h_{\mu}\circ f\circ h_{\mu}^{-1}\) is again a holomorphic map. If we take \(\mu\) real-symmetric then \(f_{\mu}\) will again be an interval map. Thus we obtain a map \(\Psi\) defined by \[\mathcal{H}_{f_{0}}\times\mathrm{T}(T_{\mathrm{g}_{1}}^{2})\times\ldots\mathrm{ T}(T_{\mathrm{g}_{a}}^{2})\ni(f,\phi_{1},\ldots,\phi_{a})\mapsto f_{\mu(\phi_{1}, \ldots,\phi_{a})}\in\mathcal{T}_{f_{0}}.\] Since \(h_{\mu}\) depends holomorphically on all choices, this map is holomorphic as a map into \(\mathcal{A}^{\underline{\nu}}\). To see that the above map is injective, assume that \((f,\phi_{1},\dots,\phi_{a}),(\tilde{f},\tilde{\phi}_{1},\dots,\tilde{\phi}_{a}) \in\mathcal{H}_{f_{0}}\times\mathrm{T}(T^{2}_{\mathrm{g}_{1}})\times\dots \mathrm{T}(T^{2}_{\mathrm{g}_{a}})\) are so that \(\Psi(f,\phi_{1},\dots,\phi_{a})=\Psi(\tilde{f},\tilde{\phi}_{1},\dots,\tilde{ \phi}_{a})\). Then \(f,\tilde{f}\) are hybrid conjugate. In particular the multipliers are corresponding periodic attractors are the same, and the \(\bar{\partial}\) derivative of \(h^{-1}_{\mu(\tilde{f},\tilde{\phi}_{1},\dots,\tilde{\phi}_{a})}\circ h_{\mu(f, \phi_{1},\dots,\phi_{a})}\) vanishes a.e. on the basin of \(f\). By the Weyl lemma it follows that \(h^{-1}_{\mu(f,\tilde{\phi}_{1},\dots,\tilde{\phi}_{a})}\circ h_{\mu(f,\phi_{1},\dots,\phi_{a})}\) is conformal. 
Hence \(\phi_{i}=\tilde{\phi}_{i}\). To see that \(\Psi\) is surjective, note that any \(g\in\mathcal{T}_{f_{0}}\) can be obtained in this manner by quasiconformal surgery. Indeed, for each periodic attracting periodic orbit, choose a periodic point \(p\), and take a fundamental annulus \(A_{i,g}\) surrounding \(p\) and containing the \(\mathrm{g}_{i}\) forward iterates of all critical point in the basin of \(p\) (and define \(A_{i,f_{0}}\) similarly. Choose diffeomorphisms \(\phi_{i}\colon A_{i,f_{0}}\to A_{i,g}\) which preserve the real line, and so that the \(f_{0}\)-iterates points of \(f_{0}\) in \(A_{i,f_{0}}\) are mapped to the corresponding critical iterates of \(g\) in \(A_{i,f_{0}}\). Next let \(\mu\) be the Beltrami coefficient associated to \(\phi_{i}\) in \(A_{i,f_{0}}\), and extend \(\mu\) so that it is invariant and zero outside the basins of attracting periodic points. Let \(h_{\mu}\) be the the normalised qc map with Beltrami coefficient \(\mu\), and define \(g_{0}=h_{\mu}\circ f_{0}\circ h_{\mu}^{-1}\). As usual \(g\) is hybrid conjugate to \(f_{0}\) and \(g\in\mathcal{A}^{\underline{\nu}}_{a}\). Moreover, by construction \[(g,\phi_{1},\dots,\phi_{a})\mapsto g_{\mu(\phi_{1},\dots,\phi_{a})}=f.\] _Remark 22.1_.: In Theorem 26.2 we will determine the codimension of this manifold as a subset of \(\mathcal{A}^{\underline{\nu}}\). _Remark 22.2_.: The space \(\mathcal{T}_{f}\) consists of real maps, and so the marked points in the tori \(T_{i}\) are real. The Teichmuller space of \(T_{i}\) with \(\mathrm{g}_{i}\) real marked points can be considered as an ordered collection of \(\mathrm{g}_{i}\) real points in these tori. Thus we see that this real Teichmuller space is a simplex of real dimension \(\mathrm{g}_{i}\). If \(\mathrm{g}_{i}=0\) then the modulus of the fundamental annulus, and therefore the conformal structure on the torus, is determined by the multiplier \(\lambda_{i}\). _Remark 22.3_.: A similar discussion can also be found in Section 6 of [McSu]. _Remark 22.4_.: If \(f\) has parabolic periodic points, then instead of the annuli \(A_{i}\) the fundamental domains are the crescent shapes \(S\) from Figure 12. Identifying boundaries we obtain an infinite cylinder, i.e. \(\mathbb{C}_{*}\) with \(\mathrm{g}_{i}\) marked points. Denote its Teichmuller space by \(\mathcal{T}(\mathbb{C}_{*,\mathrm{g}_{i}})\). If all parabolic periodic points of \(f\) are simple, then we obtain that \[\mathcal{T}_{f}\approx\mathcal{H}_{f}\times\mathrm{T}(T^{2}_{\mathrm{g}_{1}}) \times\dots\mathrm{T}(T^{2}_{\mathrm{g}_{a}})\times\mathrm{T}(\mathbb{C}_{*,g ^{\prime}_{1}})\times\dots\times\mathrm{T}(\mathbb{C}_{*,g^{\prime}_{p}}) \tag{22.2}\] where \(p\) is the number of parabolic periodic points and \(\mathrm{g}^{\prime}_{1},\dots,\mathrm{g}^{\prime}_{p}\) is the number of infinite orbits in the basins of the \(i\)-th parabolic periodic point. ## 23. Infinitesimal theory, horizontal, vertical and transversal vector fields So far we have shown that \(\mathcal{H}_{f}\) and \(\mathcal{T}_{f}\) have manifold structures. In the next few sections we will show that they have the correct codimension. A continuous vector field \(\alpha\) on an open set \(U\subset\overline{\mathbb{C}}\) is called \(K\)-_quasiconformal_, abbreviated \(K\)-_qc_, if it has locally integrable distributional partial derivatives \(\partial\alpha\) and \(\bar{\partial}\alpha\), and \(\|\bar{\partial}\alpha\|_{\infty}\leq K\). A vector field is _quasiconformal_ if it is \(\kappa\)-qc for some \(\kappa\). 
We say that a qc vector field is normalized if it vanishes on \(\{-2,2,\infty\}\) or on \(\{-1,1,\infty\}\). If \(\alpha\) is a continuous vector field, defined on a closed set \(X\), we say that \(\alpha\) is _quasiconformal_ if it extends to a normalized qc vector field on \(\overline{\mathbb{C}}\). We define \[\left\|\alpha\right\|_{\text{qc}}=\inf\left\|\xi\right\|_{\infty},\] where the infimum is taken over all normalized qc-extensions \(\xi\) of \(\alpha\) to the Riemann sphere. **Definition 23.1** (Hybrid horizontal vector fields \(E_{f}^{h}\)).: Let \(f\in\mathcal{A}^{\underline{v}}\). Then \(E_{f}^{h}\) is the space of all holomorphic vector fields \(v\in T_{f}\mathcal{A}^{\underline{v}}\) defined in a neighbourhood of the interval such that there exists a pruned polynomial-like extension \(F\colon U\to U^{\prime}\) of \(f\) and a qc vector field \(\alpha\) on \(U\) so that \[v(z)=\alpha\circ F(z)-DF(z)\alpha(z)\text{ for }z\in U \tag{23.1}\] and so that \(\bar{\partial}\alpha=0\) on \(K_{F}\). We will call such a vector field \(v\)_hybrid-horizontal_. A vector fields \(v\) satisfying (23.1) is called _horizontal_ if \(\bar{\partial}\alpha=0\) on \(K_{F}\) does not necessarily hold. _Remark 23.1_.: Define \(v^{n}=\frac{d}{dt}(f+tv)^{n}|_{t=0}\) and assume that each critical point of \(F\) is in the basin of a periodic attractor and that \(v\) satisfies \[Dv^{n}(p)(Df^{n}(p)-1)=v^{n}(p)D^{2}f^{n}(p). \tag{23.2}\] for each attracting periodic orbit. Then a computation shows that infinitesimally the multipliers of these periodic orbit stays the same for maps of the form \(f+tv\). However, (23.2) and (23.1) are not enough to guarantee that \(v\in\mathcal{H}^{f}\) if there are several critical points in the basin of \(p\), see the discussion in Theorem 22.1. _Remark 23.2_.: Definition 23.1 of hybrid tangent does not depend on the choice of \(F\colon U\to U^{\prime}\) in the following sense: assume that \(F\colon U\to U^{\prime}\) and \(\tilde{F}\colon\tilde{U}\to\tilde{U}^{\prime}\) are pruned polynomial-like extension of \(f\) so that \(\tilde{U}\subset U\). If (23.1) holds on \(U\) then it also holds on \(\tilde{U}\), since then \(\tilde{F}|\tilde{U}=F|\tilde{U}\). Because of this, whether or not \(v\in E_{f}^{h}\) is a condition on \(v\) restricted to \(I\). This also become apparent from the next proposition. The next proposition motivates the above definition and shows that hybrid-horizontal vector fields \(v\) can be used to parametrise \(\mathcal{H}_{f}\) in the same way as the exponential map does in a Riemannian manifold. **Proposition 23.1** (\(T_{f}\mathcal{H}_{f}=E_{f}^{h}\)).: _Given each \(v\in E_{f}^{h}\) there exists a one-parameter family of maps \(f_{t,v}\in\mathcal{H}_{f}\) with \(f_{0,v}=f\), depending analytically on \(t\) and so that \(\left.\frac{d}{dt}f_{t,v}\right|_{t=0}=v\). Vice versa, for each \(g\in\mathcal{H}_{f}\) near \(f\) there exists \(v\in E_{f}^{h}\) so that \(f_{1,v}=g\)._ _Similarly, for each \(g\in\mathcal{T}_{f}\) near \(f\) there exists a horizontal vector field \(v\) so that \(f_{1,v}=g\), and each horizontal vector field \(v\) corresponds to a family \(f_{t,v}\in\mathcal{T}_{f}\)._ Proof.: Assume that \(v\in E_{f}^{h}\). Then \(v\) is holomorphic and there exists a pruned polynomial-like extension \(F\colon U\to U^{\prime}\) and qc vector field \(\alpha\) so that \(v(z)=\alpha(F(z))-F^{\prime}(z)\alpha(z)\) for \(z\in U\). 
Taking the \(\bar{\partial}\) derivative, we see that \(0=\bar{\partial}\alpha(F(z))\overline{F^{\prime}(z)}-F^{\prime}(z)\bar{ \partial}\alpha(z)\) for all \(z\in U\). i.e. \(F^{*}\mu=\mu\). Let \(H_{t}\) be the normalised qc homeomorphism associated to \(t\mu\). Since \(\mu\) is an invariant linefield, we obtain that there exists a real analytic map \(f_{t}\) with a pruned polynomial-like extension \(F_{t}\colon U_{t}\to U_{t}^{\prime}\) so that \(F_{t}\circ H_{t}=H_{t}\circ F\). Taking the \(\frac{d}{dt}\) derivative in \(F_{t}\circ H_{t}=H_{t}\circ F\), writing \(\alpha=\left.\frac{d}{dt}H_{t}\right|_{t=0}\), \(v=\left.\frac{d}{dt}F_{t}\right|_{t=0}\) we see that \(v(z)+DF(z)\alpha(z)=\alpha\circ F(z)\) as claimed. Vice versa, if \(g\in\mathcal{H}_{f}\) then from Theorem 15.1 the maps \(f,g\) have pruned polynomial-like extensions \(F\colon U_{F}\to U_{F}^{\prime}\) and \(G\colon U_{G}\to U_{G}^{\prime}\), which are quasiconformally conjugate and so that the \(\bar{\partial}\) of the conjugacy vanishes on \(K_{F}\). Let \(\mu\) be the Beltrami coefficient associated to this qc conjugacy. Letting \(h_{t}\) be the qc conjugacy associated to \(t\mu\) and \(f_{t}=h_{t}\circ f\circ h_{t}^{-1}\), we obtain the vector field \(v(z)=\frac{d}{dt}f_{t}(z)|_{t=0}\) which is in \(E_{f}^{h}\) so that \(f_{1,v}=g\). _Remark 23.3_.: Let \(\mu_{t}\) be the Beltrami coefficient of a holomorphic motion \(h_{t}\), and let \(\alpha=\frac{d}{dt}h_{t}|_{t=0}\). Then \(\mu_{t}\) depends holomorphically on \(t\), \(\alpha\) is a qc vector field and \(\bar{\partial}\alpha=\frac{d\mu_{t}}{dt}|_{t=0}\), see [1, Lemma 2.10] and [1, Lemma 19]. **Definition 23.2** (Hybrid-vertical vector fields \(E_{f}^{v}\)).: A holomorphic vector field \(v\in T_{f}\mathcal{A}^{\underline{\nu}}\) is _hybrid-vertical_ if there exists a pruned polynomial-like extension \(F:U_{F}\to U_{F}^{\prime}\) of \(f\) and a holomorphic vector field \(\alpha\) on \(\overline{\mathbb{C}}\setminus K_{F}\) vanishing at \(\infty\) such that \[v(z)=\alpha\circ F(z)-DF(z)\alpha(z)\text{ for }z\in U_{F}\setminus K_{F}. \tag{23.3}\] Let \(E_{F}^{v}\) denote the space of hybrid-vertical vector fields at \(f\) corresponding to \(F\). _Remark 23.4_.: The definition of \(E_{F}^{v}\) seems to depend on \(F\). Indeed, suppose that \(F\colon U\to U^{\prime}\) and \(\tilde{F}\colon\tilde{U}\to\tilde{U}^{\prime}\) are both pruned polynomial-like extensions of \(f\) and that \(\tilde{U}\subset U\). Then a vertical vector field \(v\) associated to \(F\) is not necessarily one for \(\tilde{F}\). This is because \(\tilde{U}\setminus K_{\tilde{F}}\) is not necessarily contained in \(U\setminus K_{F}\) and so \(\alpha\) satisfying (23.3) does not necessarily induces \(\tilde{\alpha}\) on \(\tilde{U}\setminus K_{\tilde{F}}\). **Definition 23.3** (Hybrid-transversal).: We say that a vector field \(v\in T_{f}\mathcal{A}_{a}^{\underline{\nu}}\) is _hybrid-transversal_ if any family \(f_{t}\in\mathcal{A}_{a}^{\underline{\nu}}\) for which \(\frac{d}{dt}f_{t}=v\) has the property that \(f_{t}\) is not hybrid conjugate to \(f\) for \(|t|\) small and \(t\neq 0\). _Remark 23.5_.: Assume that \(f\) has no parabolic periodic points. 
Then, taking into account the Teichmuller spaces \(\mathcal{T}^{\mathbb{g}_{t}}\) from Theorem 22.1, it is also possible to define the notion of a topological-vertical vector field, namely a hybrid-vertical vector field with the additional property that it infinitesimally preserves the positions of the infinite critical orbits in the basins of periodic attractors (in terms of the Teichmuller space \(\mathcal{T}^{\mathbb{g}_{t}}\)) and the multipliers of these periodic attractors. We will not develop or need this description. **Proposition 23.2**.: _For \(f\in\mathcal{A}_{a}^{\nu}\), then for each pruned polynomial-like extension \(F\) of \(f\) we have \(T_{f}\mathcal{A}^{\nu}=E_{f}^{h}\oplus E_{F}^{v}\)._ Proof.: Suppose that \(v\in T_{f}\mathcal{A}^{\underline{\nu}}\) is a holomorphic vector field defined in a neighbourhood \(\Omega_{a}\) of the interval. Select a pruned polynomial-like extension \(F:U\to U^{\prime}\) of \(f\) so that \(v\) is well-defined in a neighbourhood of \(U\cup U^{\prime}\). Let \(\Gamma\) be the set of curves associated to \(F\). We claim that there exists a smooth vector field \(w\) defined in \(\overline{\mathbb{C}}\setminus K_{F}\) vanishing near \(\infty\) and such that \[v(z)=w(F(z))-DF(z)w(z), \tag{23.4}\] for all \(z\in U\) where this is well-defined. To construct such a \(w\) we do the following. Let \(\gamma\) denote one of the curves in \(\Gamma\) landing at a periodic point \(\alpha\) of \(F\). First we define \(w\) in a neighbourhood, \(O\cap\gamma\), of the fundamental domain \(\gamma\cap(U^{\prime}\setminus U)\) so that (23.4) holds on \(O\cap\gamma\). Then we extend \(w\) to \(\gamma\cap U\) using the relation: \(v(F^{n-1}z)=w(F^{n}(z))-DF^{n}(z)w(z)\), and then to \(\cup_{n=0}^{N}F^{-n}(\gamma)\), where \(N\) is chosen so that the complex pullbacks of \(F^{-N+1}(\gamma)\) intersect \(\partial U^{\prime}\). This defines \(w\) on \(\partial U^{\prime}\cap F^{-(N+1)}(\gamma)\). Do this for all of such curves in \(\Gamma\). Next extend \(w\) smoothly to the rest of \(\partial U^{\prime}\), and define \(w\) on \(\partial U\) by \(v(z)=w(F(z))-DF(z)w(z)\). Notice that this definition agrees with the previous one on \(\partial U\cap\partial U^{\prime}\). Now extend \(w\) to \(\overline{\mathbb{C}}\setminus U\) smoothly and so that it vanishes at infinity, and to \(\mathbb{C}\setminus K_{F}\) by \(v(F^{n-1}z)=w(F^{n}(z))-DF^{n}(z)w(z)\). Thus we have constructed \(w\) as in equation (23.4). Let us consider the Beltrami differential \(\mu=\bar{\partial}w\) in \(\overline{\mathbb{C}}\setminus K_{F}\), extended by \(0\) to \(K_{F}\). Since \(v\) is holomorphic on \(U\), \(\mu\) is \(F\) invariant on \(U\): \[\bar{\partial}v=0=\bar{\partial}(w\circ F)-\bar{\partial}(DF\cdot w),\] \[0=(\bar{\partial}w\circ F)DF-DF\,\bar{\partial}w,\quad\text{so}\quad\bar{ \partial}w\circ F=\bar{\partial}w.\] Thus we have that \(\mu\) has bounded \(L^{\infty}\)-norm on the sphere. Now, we solve the \(\bar{\partial}\)-problem: \(\bar{\partial}u=\mu\) where \(u\) is a qc vector field on \(\overline{\mathbb{C}}\) vanishing at \(\infty\). The vector field \(v^{h}=u\circ F-DFu\) on \(U\) is holomorphic, since \(\mu\) is \(F\)-invariant. Since \(\bar{\partial}u=\mu=0\) on \(K_{F}\), \(v^{h}\) is hybrid-horizontal. Let \(\alpha=w-u\). Since \(\bar{\partial}\alpha=0\) on \(\overline{\mathbb{C}}\setminus K_{F}\), \(\alpha\) is holomorphic on this set \(\overline{\mathbb{C}}\setminus K_{F}\) and \(\alpha\) vanishes at \(\infty\). 
Moreover, \(v-v^{h}=\alpha\circ F-DF\alpha\) on \(U\), so \(v-v^{h}\) is a vertical vector field. _Uniqueness of the splitting:_ Assume that there exists a vector field \(v\in E^{h}_{f}\cap E^{v}_{F}\), \(v\neq 0\). Then there exist a pruned polynomial-like representative \(F:U\to U^{\prime}\) of \(f\), a qc vector field \(w\) defined in the neighbourhood \(U^{\prime}\) of \(K_{F}\) with \(v=w\circ F-DFw\) and \(\bar{\partial}w=0\) on \(K_{F}\), and a holomorphic vector field \(\alpha\) in \(\overline{\mathbb{C}}\setminus K_{F}\), vanishing at \(\infty\), so that \(v=\alpha\circ F-DF\alpha\). Let us consider the vector field \(u=w-\alpha\) defined on \(U^{\prime}\setminus K_{F}\). Since \(u\circ F^{n}=(F^{n})^{\prime}u\) it follows that \(|u(z)|\to 0\) as \(z\to K_{F}\), \(z\in U\). It is important to observe that we have the same estimate at the points in \(K_{F}\cap\partial U\). Thus \(u\) admits a continuous extension to \(\overline{U}\) which vanishes on \(K_{F}\). Hence \(w\) and \(\alpha\) agree on the pruned Julia set. Let \(\beta\) be the vector field which is equal to \(w\) on \(K_{F}\) and \(\alpha\) on \(\overline{\mathbb{C}}\setminus K_{F}\). Then \(\beta\) has distributional derivatives of class \(L^{2}\) and \(\bar{\partial}\beta=0\). Thus by Weyl's Lemma, \(\beta\) is a holomorphic vector field defined on \(\overline{\mathbb{C}}\), and since \(\beta\) vanishes at \(\infty\), it is linear. Hence we can write \(\beta(z)=(az+b)\,\partial/\partial z\). Thus \[v(z)=v_{0}(z)\,\partial/\partial z\text{ where }v_{0}(z)=af(z)+b-f^{\prime}(z)(az+b).\] From this it is easy to see that \(v=0\). Indeed, note that any family \(f_{t}\) with \(\frac{d}{dt}f_{t}\big|_{t=0}=v\) satisfies \(f_{t}(\partial I)\subset\partial I\). Hence \(v(-1)=v(1)=0\), i.e. \(v_{0}(-1)=v_{0}(1)=0\). Let us show that this implies \(a=b=0\). To do this we need to consider several possibilities. (i) If \(f(-1)=-1\) and \(f(1)=1\) then \(v_{0}(-1)=af(-1)+b-f^{\prime}(-1)(-a+b)=(b-a)(1-f^{\prime}(-1))=0\) and \(v_{0}(1)=af(1)+b-f^{\prime}(1)(a+b)=(a+b)(1-f^{\prime}(1))=0\). Since we assume that \(f\) has no parabolic points this implies \(a=b=0\). (ii) If \(f(-1)=-1\) and \(f(1)=-1\) then \(v_{0}(-1)=af(-1)+b-f^{\prime}(-1)(-a+b)=(b-a)(1-f^{\prime}(-1))=0\), which gives \(a=b\), and therefore \(v_{0}(1)=af(1)+b-f^{\prime}(1)(a+b)=(b-a)-f^{\prime}(1)(a+b)=0\) implies \(a+b=0\), because we assume \(f\) has no critical points in \(\partial I\); hence \(a=b=0\). (The remaining case \(f(-1)=f(1)=1\) is treated in the same way.) (iii) If \(f(-1)=1\) and \(f(1)=-1\) then we get \(v_{0}(-1)=af(-1)+b-f^{\prime}(-1)(-a+b)=(a+b)-f^{\prime}(-1)(b-a)=0\) and \(v_{0}(1)=af(1)+b-f^{\prime}(1)(a+b)=(b-a)-f^{\prime}(1)(a+b)=0\). Combining these gives \((b-a)(1-f^{\prime}(-1)f^{\prime}(1))=0\), which again implies \(b=a\) and therefore, using the previous expressions, \(a=b=0\).

## 24. Estimates for vertical vector fields

**Lemma 24.1**.: _Let \(F\colon E\to E^{\prime}\) be the part of the pruned polynomial-like mapping \(F\colon E\cup B\to E^{\prime}\cup B^{\prime}\) coming from an expanding Markov structure of the external map. Then there exist \(\lambda>1\) and \(N\in\mathbb{N}\) such that \(|DF^{N}(z)|>\lambda\) for all \(z\in F^{-N}(\partial E)\) such that \(z,\dots,F^{N-1}(z)\in E\)._ _Remark 24.1_.: The set \(E\) also contains the basins of periodic attractors, provided these all can be included in the set \(K_{X,O}\).
If \(f\) has periodic attractors with 'large' basins then its global pruned polynomial-like extension is of the form \(F\colon E\cup B\to E^{\prime}\cup B^{\prime}\), and then the above expansion holds on \(F^{-N}(\partial E)\), and not on \(F^{-N}(\partial B)\). Proof.: If \(F^{n}(z)\) is in the interior of \(E^{\prime}\) then take balls \(D\Subset D^{\prime}\subset E^{\prime}\) containing \(F^{n}(z)\). By the Koebe Distortion Theorem and since puzzle pieces shrink in diameter to zero, it follows that if \(n\) is large enough then \(|DF^{n}(z)|\geq 2\). If \(F^{n}(z)\in\partial E\cap\partial E^{\prime}\) then \(F^{n}(z)\in\Gamma\). If \(F^{n}(z)\) is contained in a curve \(\gamma\subset\Gamma\) which is part of the expanding Markov structure, then there exists \(n_{0}\) which is independent of \(z\) so that \(z,\dots,F^{n-n_{0}}(z)\in E\). Hence the proof goes as before in this case. If \(\gamma\) is part of the attracting structure, then \(\gamma\) is an invariant curve going through a repelling periodic point. In this case it is possible that \(F^{k}(z),\dots,F^{n-1}(z)\in\gamma\) and \(z,\dots,F^{k-1}(z)\in E\), and we also get expansion because the multiplier at the periodic point is repelling and because of the first part of the argument. Now we are in a position to prove the following proposition, which is a modification of [Ly1, Lemma 4.10]. To deal with the fact that pruned polynomial-like mappings do not have moduli bounds, we use the expansion along the boundaries of the puzzle pieces. This result is one of the key technical tools in obtaining a lower bound on the codimension of \(\mathcal{H}_{f}\). In the statement, we use one more pullback than in [Ly1, Lemma 4.10] since if \(F:U\to U^{\prime}\) is a pruned polynomial-like mapping, we need not have that \(U^{\prime}\supset U\); however, we do have that \(U\supset F^{-1}(U)\). **Proposition 24.2** (Control for vertical vector fields).: _Let \(F\colon U=E\cup B\to E^{\prime}\cup B^{\prime}=U^{\prime}\) be a global pruned polynomial-like mapping. Let \(W^{\prime\prime}=E\), \(W^{\prime}=F^{-1}(W^{\prime\prime})\) and \(W=F^{-1}(W^{\prime})\). Let \(v\) and \(\alpha\) be holomorphic vector fields satisfying_ \[v(z)=\alpha\circ F(z)-DF(z)\alpha(z),\quad z\in W^{\prime},\] _where \(v\) is holomorphic in \(W^{\prime\prime}\) and \(\alpha\) is holomorphic in \(\overline{\mathbb{C}}\setminus K_{F}\). Then there exists a constant \(C\), depending on \(\lambda\), the constant from Lemma 24.1, and the extremal widths of the rectangles comprising \(U^{\prime}\setminus U\), such that_ \[\|\alpha\|_{\overline{\mathbb{C}}\setminus W}\leq C\|v\|_{W^{\prime}}\text{ and }\|v\|_{W}\geq C^{-1}\|v\|_{W^{\prime}}.\] Proof.: Let \(\gamma=\partial W\).
By Lemma 24.1, there exist \(N\in\mathbb{N}\) and \(\lambda>1\) so that for \(z\in F^{-N}\gamma\), \(|DF^{N}(z)|>\lambda\). Iterating the relation \(v=\alpha\circ F-DF\,\alpha\) gives \[\alpha\circ F^{N}(z)-DF^{N}(z)\alpha(z)=DF^{N}(z)\sum_{k=0}^{N-1}\frac{v(F^{k}(z))}{DF^{k+1}(z)},\] so \[\alpha(z)=\frac{\alpha\circ F^{N}(z)}{DF^{N}(z)}-\sum_{k=0}^{N-1}\frac{v(F^{k}(z))}{DF^{k+1}(z)}.\] Since \(|DF^{k+1}(z)|\) is bounded away from zero for all \(z\in F^{-N}\gamma\) and all \(0\leq k\leq N-1\), this expression implies that there exists a constant \(A>0\) (which depends on \(N\) and on \(F\colon U\to U^{\prime}\)), so that \[\|\alpha\|_{F^{-N}\gamma}\leq\frac{\|\alpha\|_{\gamma}+\lambda A\|v\|_{W}}{\lambda}\leq\frac{\|\alpha\|_{\overline{\mathbb{C}}\setminus W}+\lambda A\|v\|_{W}}{\lambda}, \tag{24.1}\] where the last inequality follows from the Maximum Principle and since \(\partial W=\gamma\). By the Maximum Principle we also have \[\|\alpha\|_{\overline{\mathbb{C}}\setminus W}\leq\|\alpha\|_{F^{-N}\gamma}.\] Thus \[\|\alpha\|_{\overline{\mathbb{C}}\setminus W}\leq\frac{\|\alpha\|_{\overline{\mathbb{C}}\setminus W}+\lambda A\|v\|_{W}}{\lambda},\] and so \[\|\alpha\|_{\overline{\mathbb{C}}\setminus W}\leq\frac{\lambda A\|v\|_{W}}{\lambda-1}.\] This proves the first inequality. The second inequality follows from it and the Maximum Principle: \[\|v\|_{W^{\prime}}\leq\|\alpha\|_{\overline{\mathbb{C}}\setminus W^{\prime\prime}}+\|DF\|_{W^{\prime}}\|\alpha\|_{\overline{\mathbb{C}}\setminus W^{\prime}}\] \[\leq\|\alpha\|_{\overline{\mathbb{C}}\setminus W}(1+\|DF\|_{W^{\prime}})\leq\frac{\lambda A}{\lambda-1}(1+\|DF\|_{W^{\prime}})\|v\|_{W}.\] _Remark 24.2_.: The previous proposition is the main reason why we can deal with vertical vector fields more easily than in [ALM], where puzzle mappings are introduced which have infinitely many domains and which form a necklace neighbourhood of \(I\).

## 25. Estimates for horizontal vector fields

In this section we will prove the following result, which is called the Key Estimate in [ALM]. Here the proof follows easily from Part A of our paper. **Lemma 25.1** (Control for horizontal vector fields).: _Suppose that \(f\in\mathcal{A}_{a}^{\underline{\nu}}\) has only hyperbolic periodic orbits. Then there exist a neighbourhood \(\mathcal{W}\subset\mathcal{A}_{a}^{\underline{\nu}}\) of \(f\) and \(C>0\), so that for any \(g\in\mathcal{W}\) and any \(v\in T_{g}\mathcal{H}_{g}\), there exist a pruned polynomial-like map \(G\colon U_{g}\to U_{g}^{\prime}\) and a qc vector field \(\alpha\) so that_ \[v(z)=\alpha\circ G(z)-DG(z)\alpha(z)\quad\text{for}\;z\in U_{g},\] _and so that_ \[\|\bar{\partial}\alpha\|_{\mathrm{qc}}\leq C\|v\|_{\infty}.\] Proof.: Take \(v\in T_{g}\mathcal{H}_{g}\). Then by Proposition 23.1 there exists a family \(g_{t}\in\mathcal{H}_{g}\) with \(\frac{d}{dt}g_{t}\big{|}_{t=0}=v\). Let \(G_{t}\) and \(G\) be pruned polynomial-like extensions of \(g_{t}\) and \(g\). By Theorem 15.1 there exists a family of qc maps \(h_{t}\) so that \(G_{t}=h_{t}\circ G\circ h_{t}^{-1}\). By Corollary 15.2 we have \[\varkappa(h_{t})\leq 1+L||g_{t}-g||_{\infty},\] where \(\varkappa(h_{t})\) is the qc dilatation of \(h_{t}\) and as before \(||\cdot||_{\infty}\) stands for the supremum norm on \(\overline{\Omega}_{a}\). So if we write \(\alpha=\frac{dh_{t}}{dt}\big|_{t=0}\) we get \(||\bar{\partial}\alpha||_{qc}\leq L||v||_{\infty}\). As in the proof of Proposition 23.1 we also have \(v(z)=\alpha\circ G(z)-G^{\prime}(z)\alpha(z)\) for \(z\in U_{g}\).
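To make the last step of this proof explicit, the following sketch (our reconstruction, assuming that \(\|\cdot\|_{\mathrm{qc}}\) denotes the \(L^{\infty}\)-norm of \(\bar{\partial}\alpha\) and that \(t\mapsto g_{t}\) is differentiable in the supremum norm) combines Remark 23.3 with the dilatation bound above:
\[
\|\mu_{h_{t}}\|_{\infty}\leq\frac{\varkappa(h_{t})-1}{\varkappa(h_{t})+1}\leq\frac{L\,\|g_{t}-g\|_{\infty}}{2},\qquad\bar{\partial}\alpha=\left.\frac{d\mu_{h_{t}}}{dt}\right|_{t=0},
\]
so that
\[
\|\bar{\partial}\alpha\|_{\mathrm{qc}}\leq\limsup_{t\to 0}\frac{\|\mu_{h_{t}}\|_{\infty}}{|t|}\leq\frac{L}{2}\limsup_{t\to 0}\frac{\|g_{t}-g\|_{\infty}}{|t|}=\frac{L}{2}\,\|v\|_{\infty},
\]
which is consistent with the bound \(\|\bar{\partial}\alpha\|_{\mathrm{qc}}\leq L\|v\|_{\infty}\) claimed in the proof.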
## 26. The codimension of conjugacy classes

In Section 21 we showed that the conjugacy class of a map in \(\mathcal{A}^{\underline{\nu}}\) is a real analytic manifold. In this section, we show it has the expected codimension in \(\mathcal{A}^{\underline{\nu}}\). The lower bound for the codimension comes from Proposition 24.2. To obtain an upper bound for the codimension we shall use the following lemma, which follows from Lemma 25.1 (which is our version of the "Key Estimate" of [ALM]): **Lemma 26.1** (Continuity of tangent space).: _Suppose that \(f_{n}\in\mathcal{A}^{\nu}_{a}\) converge to \(f\in\mathcal{A}^{\nu}_{a}\), where \(f\) has only hyperbolic periodic points, and let \(F_{n}\colon U_{n}\to U^{\prime}_{n}\) be pruned polynomial-like extensions of \(f_{n}\) so that \(U^{\prime}_{n}\) contains a \(\delta\)-neighbourhood of \(I\) for each \(n\geq 0\). If \(v_{n}\in E^{h}_{f_{n}}\) is a sequence of horizontal vectors with \(||v_{n}||_{U^{\prime}_{n}}=1\), then \(v_{n}\) converges (along a subsequence) to a horizontal vector \(v\in E^{h}_{f}\) with \(||v||_{U^{\prime}}\geq 1\) for some \(U^{\prime}\supset I\)._ Proof.: Since \(v_{n}\in E^{h}_{f_{n}}\) there exists a qc vector field \(\alpha_{n}\) on \(U^{\prime}_{n}\) so that \(v_{n}(z)=\alpha_{n}(F_{n}(z))-DF_{n}(z)\alpha_{n}(z)\) for all \(z\in U_{n}\) and \(v_{n}\) is holomorphic on \(U_{n}\). By Lemma 25.1 (the Key Estimate) we have that \(\alpha_{n}\) is a sequence of quasiconformal vector fields with uniformly bounded dilatation; thus by the compactness lemma for qc vector fields, see for example [ALM], there exists a qc vector field \(\alpha\) so that \(\alpha_{n}\to\alpha\) along some subsequence and, as \(||v_{n}||_{U^{\prime}_{n}}=1\), there exists a holomorphic \(v\) so that \(v_{n}\to v\) on some definite neighbourhood of \(I\). Moreover, \(v(z)=\alpha(F(z))-DF(z)\alpha(z)\) for all \(z\) in a definite neighbourhood of \(I\) and therefore \(v\) is a horizontal vector field, \(v\in E^{h}_{f}\). We now show that the hybrid classes have the expected codimension, _cf._ [Ly1, Theorem 4.11] and also [Sm, Theorem 10.4]. **Theorem 26.2**.: _Assume that \(f\in\mathcal{A}^{\underline{\nu}}_{a}\) has only hyperbolic periodic points. Then_ 1. \(\mathcal{H}^{\mathbb{R}}_{f}\) _is a real analytic manifold whose codimension in_ \(\mathcal{A}^{\nu}_{a}\) _is equal to_ \(\nu_{H}=\nu+\xi_{\text{noness-att}}\)_, where_ \(\nu\) _is the number of critical points of_ \(f\) _and_ \(\xi_{\text{noness-att}}\) _is the number of periodic attractors without (real) critical points in their basins - we call these non-essential attractors._ 2. \(\mathcal{T}_{f}\) _is a real analytic manifold whose codimension in_ \(\mathcal{A}^{\nu}_{a}\) _is equal to_ \(\nu_{T}=\nu-\zeta(f)\)_, where_ \(\nu\) _is the number of critical points of_ \(f\) _and_ \(\zeta(f)\) _is the maximal number of critical points in the basins of periodic attractors with pairwise disjoint_ infinite _orbits._ Proof.: Let us only consider the space \(\mathcal{H}^{\mathbb{R}}_{f}\). The statement for the space \(\mathcal{T}_{f}\) follows from this using Theorem 22.1. _Proof of the lower bound on the codimension_. Recall that functions in \(\mathcal{A}^{\underline{\nu}}_{a}\) are holomorphic on \(\Omega_{a}=\{z\in\mathbb{C}:|z-I|<a\}\). Suppose that \(f_{n}\in\mathcal{A}^{\underline{\nu}}_{a}\) is a sequence of semi-hyperbolic mappings with \(f_{n}\to f\) on \(\overline{\Omega_{a}}\). That this is possible follows from the density of hyperbolicity, see [KSvS2].
To each \(f_{n}\) we associate a pruned polynomial-like mapping \(F_{n}:U_{n}:=E_{n}\cup B_{n}\to E^{\prime}_{n}\cup B^{\prime}_{n}=:U^{\prime}_{n}\) and to \(f\) we associate a pruned polynomial-like mapping \(F:U:=E\cup B\to E^{\prime}\cup B^{\prime}=:U^{\prime}\). We choose them so that \(U^{\prime}\subset\Omega_{a}\) and \(U^{\prime}_{n}\subset\Omega_{a}\) for all \(n\) sufficiently large, and so that \(F_{n}\colon U_{n}\to U^{\prime}_{n}\) converges to \(F\colon U\to U^{\prime}\) (so, in particular, \(U_{n}\) contains each compact subset of \(U\) for \(n\) large, and similarly for \(U^{\prime}_{n}\) and \(U^{\prime}\)). By Theorem 19.2, for each \(F_{n}\) the space of vectors vertical to \(\mathcal{H}_{F_{n}}\) has dimension \(\nu_{H}\); let \(\{v^{1}_{n},v^{2}_{n},\ldots,v^{\nu_{H}}_{n}\}\) be a basis. Moreover, we can assume that these vectors have unit length and, by a theorem of Riesz, that they are almost orthogonal in the sense that \[\operatorname{dist}(v^{i}_{n},\operatorname{span}\{v^{1}_{n},\ldots,v^{i-1}_{n}\})>\frac{1}{2},\text{ for }i=2,3,\ldots,\nu_{H}, \tag{26.1}\] where the distance is in the space of bounded holomorphic functions on \(U_{n}\) which extend continuously to \(\overline{U}_{n}\). Let us show that the unit ball in the vertical direction is compact, _cf._ [Sm, Corollary 10.1]: Suppose that \(w_{n}\) is a sequence of vertical vector fields at \(F_{n}\) of unit length. Then there exist \(\alpha_{n}\) holomorphic on \(\overline{\mathbb{C}}\setminus K_{F_{n}}\), vanishing at \(\infty\) and satisfying \[w_{n}=\alpha_{n}\circ F_{n}-DF_{n}\alpha_{n}\text{ on }E_{n}.\] Notice that because of the Maximum Principle and equation (24.1), for each \(i\) there exists \(C_{i}\) so that \[||\alpha_{n}||_{\overline{\mathbb{C}}\setminus F_{n}^{-i}(E_{n}^{\prime})}\leq C_{i}\text{ for all }n\geq 0.\] Hence, because the basins \(B_{n}\) are contained in the interior of \(K_{F_{n}}\), there exists a subsequence \(\alpha_{n_{j}}\) which converges on compact subsets of \(\mathbb{C}\setminus K_{F}\) to a holomorphic vector field \(\alpha\), so that the associated vector fields \(w_{n_{j}}\) converge to a vector field \(w\) and so that \(w=\alpha\circ F-DF\alpha\) on \(U\setminus K_{F}\). Thus the unit ball in the vertical direction is compact. It follows that we can assume that for each \(i\), \(v_{n}^{i}\to v^{i}\), uniformly on compact subsets of \(U\). By Proposition 24.2, we have that \(v^{i}\) cannot be zero. Finally, since the vectors \(v_{n}^{i}\) are almost orthogonal, see equation (26.1), the collection \(\{v^{1},\dots,v^{\nu_{H}}\}\) is linearly independent. _Proof of the upper bound on the codimension_. Assume by contradiction that for \(n\) sufficiently large we have that \(\operatorname{codim}(\mathcal{H}_{F})>\operatorname{codim}(\mathcal{H}_{F_{n}})\). Because of Proposition 23.2 it then follows that there exists a non-zero vertical vector field \(v\) in the tangent space to \(F\), which can be approximated by horizontal vector fields \(v_{n}\) tangent to \(F_{n}\). This contradicts Lemma 26.1.

## 27. Hybrid conjugacies are embedded manifolds

**Theorem 27.1**.: _Take \(f\in\mathcal{A}_{a}^{\underline{\nu}}\), assume that \(f\) has only hyperbolic periodic points and let \(v\) be a hybrid-vertical vector field. Then there exists \(\epsilon>0\) so that the family \(f_{t}=f+tv\), \(|t|<\epsilon\), intersects each hybrid class at most once._ Proof.: This follows immediately from the next lemma.
**Lemma 27.2**.: _Assume that \(f_{n},g_{n}\in\mathcal{A}_{a}^{\underline{\nu}}\) are real-hybrid conjugate, converge to \(f\in\mathcal{A}_{a}^{\underline{\nu}}\) and that \(f\) has only hyperbolic periodic orbits. Then any limit \(v\) of_ \[\frac{f_{n}-g_{n}}{||f_{n}-g_{n}||_{a}}\] _is a hybrid-tangent vector to \(f\), i.e. \(v\in T_{f}\mathcal{H}_{f}^{\mathbb{R}}\)._ Proof.: See [ALM, Lemma 8.1]. _Remark 27.1_.: The above Theorem and lemma do not hold unless we assume that \(f\) has only hyperbolic periodic points, see [ALM, page 453 - footnote]. In [CvS2] we will elaborate on this. ## 28. Hybrid classes are Banach manifolds In Theorem 26.2 we showed that \(\mathcal{H}_{f}\) is a real analytic manifold, in the germ sense. Let us now improve this statement by showing: **Theorem 28.1**.: _For each \(f_{0}\in\mathcal{A}_{a}^{\underline{\nu}}\),_ 1. \(\mathcal{H}_{f_{0}}^{\mathbb{R}}\cap\mathcal{A}_{a}^{\underline{\nu}}\) _is real Banach submanifold of_ \(\mathcal{A}_{a}^{\underline{\nu}}\) _of codimension_ \(\nu_{H}=\nu+\xi_{noness-att}\)_;_ 2. \(\mathcal{T}_{f_{0}}^{\mathbb{R}}\cap\mathcal{A}_{a}^{\underline{\nu}}\) _is real Banach submanifold of_ \(\mathcal{A}_{a}^{\underline{\nu}}\) _of codimension_ \(\nu_{T}=\nu-\zeta(f)\)_._ Proof.: In Section 19 it is shown that any \(f\in\mathcal{H}_{f_{0}}^{\mathbb{R}}\) has a pruned polynomial-like extension \(F\colon U\to U^{\prime}\) so that its hybrid conjugacy class is conformally equivalent to the hybrid conjugacy class of \(G\colon U_{G}\to U_{G}^{\prime}\) where \(G\) is the extension of a real analytic semi-hyperbolic map \(g\in\mathcal{A}_{a}^{\underline{\nu}}\). This implies that there exists \(a^{\prime}>0\) so that \(f\in\mathcal{A}_{a^{\prime}}^{\underline{\nu}}\) and so that \(\mathcal{H}_{f_{0}}\cap\mathcal{A}_{a^{\prime}}^{\underline{\nu}}\) is a Banach manifold of codimension-\(\nu_{H}\) near \(f\) where \(\nu_{H}=\nu+\xi_{noness-att}\). From this we obtain that there exists a real analytic function \[\Psi\colon\mathcal{A}_{a^{\prime}}^{\underline{\nu}}\to\mathbb{R}^{\nu_{H}}\] defined near \(f\), which has maximal rank at \(f\) and so that on some neighbourhood of \(f\) in \(\mathcal{A}_{a^{\prime}}^{\underline{\nu}}\) one has \[\mathcal{H}_{f_{0}}^{\mathbb{R}}\cap\mathcal{A}_{a^{\prime}}^{\underline{\nu} }=\Psi^{-1}(0).\] Since \(\mathcal{A}_{a}^{\underline{\nu}}\) is dense in \(\mathcal{A}_{a^{\prime}}^{\underline{\nu}}\) it follows that the restriction \(\tilde{\Psi}\) of \(\Psi\) to \(\mathcal{A}_{a}^{\underline{\nu}}\) also has maximal rank at each \(\tilde{f}\in\mathcal{A}_{a}^{\underline{\nu}}\) in a neighbourhood of \(f\) in \(\mathcal{A}_{a^{\prime}}^{\underline{\nu}}\). Moreover, whether \(\tilde{f}\) is in \(\mathcal{H}_{f_{0}}^{\mathbb{R}}\) only depends on \(\tilde{f}\) restricted to \(I\). It follows that \(\mathcal{H}_{f_{0}}^{\mathbb{R}}\cap\mathcal{A}_{a}^{\underline{\nu}}=\tilde {\Psi}^{-1}(0)\) and since \(\tilde{\Psi}\colon\mathcal{A}_{a^{\prime}}^{\underline{\nu}}\to\mathbb{R}^{\nu _{H}}\) has maximal rank, it follows that \(\mathcal{H}_{f_{0}}^{\mathbb{R}}\cap\mathcal{A}_{a}^{\underline{\nu}}\) is a Banach manifold. The second assertion follows similarly. ## 29. Conjugacy classes of real analytic maps are path connected In this section we obtain Theorem B by proving the following: **Theorem 29.1**.: _Let \(f,\tilde{f}\colon I\to I\) be two real analytic maps in \(\mathcal{A}^{\underline{\nu}}\) with all periodic points hyperbolic, which are real-hybrid conjugate (via an order preserving conjugacy). 
Then there exists a family of real maps \(f_{t}\in\mathcal{A}^{\underline{\nu}},\,t\in[0,1]\) depending real analytically on \(t\in[0,1]\) so that \(f_{0}=f\), \(f_{1}=\tilde{f}\) and so that \(f_{t}\) is real-hybrid conjugate to \(f\) and \(\tilde{f}\) for each \(t\in[0,1]\)._ _Similarly, if \(f,\tilde{f}\in\mathcal{A}^{\underline{\nu}}\) are topologically conjugate and only have hyperbolic periodic orbits, then there exists a real analytic path connecting \(f,\tilde{f}\) in \(\mathcal{T}(f)\)._ Proof.: From [CvS] there exists a quasisymmetric conjugacy between \(f\) and \(\tilde{f}\). Let \(F\colon U_{F}\to U_{F}^{\prime}\), \(\tilde{F}\colon U_{\tilde{F}}\to U_{\tilde{F}}^{\prime}\) be the pruned polynomial-like extensions of \(f,\tilde{f}\) from Theorem 3.1. Choose a quasiconformal map \(h\colon U_{F}^{\prime}\to U_{\tilde{F}}^{\prime}\) so that \(h\) maps \(\partial U_{F}\) to \(\partial U_{\tilde{F}}\) and \(\Gamma_{F}\) to \(\Gamma_{\tilde{F}}\), and so that \(h\) is a conjugacy on these sets and also so that \(h\) agrees with the qs-conjugacy between \(f\) and \(\tilde{f}\) on the real line. We can also ensure that \(h\) is a conformal conjugacy near hyperbolic periodic attractors. Now let \(H_{0}=h\) and define \(H_{n+1}\) by \(\tilde{F}\circ H_{n+1}=H_{n}\circ F\). Since \(F,\tilde{F}\) are conformal, critical values of \(F\) are mapped to critical values of \(\tilde{F}\), and the conjugacy relation holds, \(H_{n+1}\) is well-defined and has the same quasiconformal dilatation as \(H_{n}\). It follows that there exists a subsequence of \(H_{n}\) converging to some quasiconformal map \(H\). Let \(Y_{n}=U_{F}^{\prime}\setminus\cup_{i=0}^{n}(F^{-i}(U_{f}^{\prime}\setminus U_{F}))\). Then for each \(n\geq 0\), \(Y_{n+1}\subset Y_{n}\) and \(H_{n+1}\) agrees with \(H_{n}\) outside \(Y_{n}\) and also on \(\gamma_{f}\). Because of the last assertion of Proposition 6.3 it follows that \(\cap_{n\geq 0}Y_{n}\) has empty interior and so for each point \(z\) outside this set there exists \(n\) so that \(H_{n+i}(z)=H_{n}(z)\) for all \(i\geq 0\). It follows that any convergent subsequence of \(H_{n}\) converges to the same map \(H\colon U_{F}\cup U_{F}^{\prime}\to U_{\tilde{F}}\cup U_{\tilde{F}}^{\prime}\) and therefore that \(\tilde{F}\circ H=H\circ F\). Let \(\mu\) be the Beltrami-coefficient associated to \(H\) on \(U_{F}\cup U_{F}^{\prime}\) (describing the ellipse field obtained by the pullback of the standard circle field under \(H\)). Then \(F\) preserves the ellipse field defined by \(\mu\) since \(\tilde{F}\) is conformal and since \(F=H^{-1}\circ\tilde{F}\circ H\). Now extend \(\mu\) to \(\bar{\mathbb{C}}\) by setting \(\mu=0\) on \(\mathbb{C}\setminus(U_{F}\cup U_{F}^{\prime})\) and let \(H_{t\mu}\) be the corresponding quasiconformal map corresponding to the Beltrami coefficient \(t\mu\) normalised so that \(H_{t\mu}(I)=I\) and \(H_{t\mu}(\infty)=\infty\) (here we use the Measurable Riemann Mapping Theorem). Since \(F\) preserves the ellipse field defined by \(\mu\) and \(F\) is conformal, \(F\) also preserves the ellipse field defined by \(t\mu\) for each \(t\in[0,1]\). It follows that \(G_{t}:=H_{t\mu}\circ F\circ H_{t\mu}^{-1}\) is a conformal map on \(H_{t\mu}(U_{F})\). We can ensure that \(\mu\) is \(z\mapsto\bar{z}\) symmetric, and so we can assume that \(H_{t\mu}\) is real for \(t\) real. In particular \(g_{t}=F|_{I}\) is a family of analytic maps of the interval. We also have the following: **Proposition 29.2**.: _Assume that \(f_{0}\in\mathcal{A}_{a}^{\underline{\nu}}\). 
Then \(\mathcal{H}_{f_{0}}^{\mathbb{R}}\cap\mathcal{A}_{a}^{\underline{\nu}}\) is path connected._ Proof.: To see this, take \(f,\tilde{f}\in\mathcal{H}_{f_{0}}^{\mathbb{R}}\cap\mathcal{A}_{a}^{\underline {\nu}}\) and construct a path \(g_{t}\in\mathcal{H}_{f_{0}}^{\mathbb{R}}\) connecting these maps as in the proof of the previous theorem. It could happen that \(g_{t}\) is not analytic on \(\Omega_{a}\). To rectify this, we use the argument from the proof of [ALM, Theorem 9.2], which we include for completeness. This argument shows that the curve \(\{g_{t}\}\) can be approximated by a curve in \(\mathcal{H}_{f}\cap\mathcal{A}_{a}^{\underline{\nu}}\) connecting \(f_{0}=f\) and \(f_{1}=\tilde{f}\). Indeed, let \(a^{\prime}\in(0,a)\) be so that for every \(t\in[0,1]\), \(\{g_{t}\}\subset\mathcal{A}_{a^{\prime}}^{\underline{\nu}}\). Let \(\Pi_{a,a^{\prime}}:\mathcal{A}_{a}^{\underline{\nu}}\to\mathcal{A}_{a^{\prime}} ^{\underline{\nu}}\) be the inclusion mapping, given by the restriction of \(f\) to \(\Omega_{a^{\prime}}\). Next, consider a one parameter real analytic family of vector fields \(\{\hat{v}_{t}\}\) in \(T\mathcal{A}_{a^{\prime}}^{\nu}\) so that for each \(t\), \(\hat{v}_{t}=(v_{t}^{1},\ldots,v_{t}^{\nu^{\prime}})\) a basis for \(E_{g_{t}}^{v}\), the vector space vertical to \(T_{g_{t}}(\mathcal{H}_{f_{0}}\cap\mathcal{A}_{a^{\prime}}^{\underline{\nu}})\). Recall that \(\nu^{\prime}=\nu+\nu_{noness}\). (Here we use Theorem 28.1.) Define \[P(t,\hat{s})=g_{t}+\hat{s}\cdot\hat{v}_{t}\in\mathcal{A}_{a^{\prime}}^{ \underline{\nu}},\ (t,\hat{s})\in[0,1]\times[-1,1]^{\nu^{\prime}} \tag{29.1}\] and let \(i_{0}:[0,1]\to[0,1]\times[-1,1]^{\nu^{\prime}}\), where \(i_{0}(t)=(t,0,\ldots,0)\). Let \(\Phi:[0,1]\times[-1,1]^{\nu^{\prime}}\to\mathcal{A}_{a}^{\underline{\nu}}\) be a real analytic family so that \(\Phi(0,0,\ldots,0)=P(0,0,\ldots,0)=f_{0}\), \(\Phi(1,0,\ldots,0)=P(1,0,\ldots,0)=f_{1}\) and \(\Pi_{a,a^{\prime}}\circ\Phi\) is \(C^{1}\) close to \(P\). To see that such a \(\Phi\) exists we argue as follows: For each \(a^{*}\in(0,a^{\prime})\) and each \(\epsilon>0\) there exists an integer \(k>0\) so that the \(k\)-th order polynomial \(J^{k}g_{t}\) which agrees with the first \(k\) term of the Taylor expansion of \(g_{t}\) at, say, the critical point \(c_{1}\), has the property \(||J^{k}g_{t}-g_{t}||_{\Omega_{a^{*}}}<\epsilon\). (One can see this by considering the \(a^{\prime}\)-ball around each \(x\in I\) and looking at the power series expansion.) By the Implicit Function Theorem, there exists a real analytic curve \(\zeta:[0,1]\to[0,1]\times[-1,1]^{\nu^{\prime}}\), \(C^{1}\) close to \(i_{0}\), so that \(\Pi_{a,a^{\prime}}\circ\Phi\circ\zeta\) is contained in \(\mathcal{H}_{f}\cap\mathcal{A}_{a^{\prime}}^{\nu}\) and so that the first coordinate of \(\zeta(t)\) is equal to \(t\). Since \(\Pi_{a,a^{\prime}}\circ\Phi\) is \(C^{1}\) close to \(P\), we have that the curve \(\Pi_{a,a^{\prime}}\circ\Phi(\{0\}\times[-1,1])\) only intersects the real-hybrid class of \(f_{0}\) at \(f_{0}\), and \(\Pi_{a,a^{\prime}}\circ\Phi(\{1\}\times[-1,1])\) only intersects the hybrid class of \(f_{0}\) at \(f_{1}\). Thus \(\Phi(\zeta(0))=f_{0}\) and \(\Phi(\zeta(1))=f_{1}\). Since \(f_{0},f_{1}\in\mathcal{A}_{a}^{\nu}\), it follows that \(\Phi\circ\zeta:[0,1]\to\mathcal{A}_{a}^{\underline{\nu}}\) is a real-analytic path connecting \(f_{0}\) to \(f_{1}\) in \(\mathcal{H}_{f}\cap\mathcal{A}_{a}^{\nu}\). 
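We record here, since the same construction reappears in Section 31, the computation behind the invariance of the ellipse field in the proof of Theorem 29.1 above; this is only a sketch of the standard argument, in the notation of that proof. Since \(\tilde{F}\) is conformal and \(\tilde{F}\circ H=H\circ F\), the Beltrami coefficient \(\mu=\mu_{H}\) satisfies
\[
F^{*}\mu(z)\;=\;\mu(F(z))\,\frac{\overline{F^{\prime}(z)}}{F^{\prime}(z)}\;=\;\mu(z)\qquad\text{for a.e. }z\in U_{F},
\]
because \(\mu_{H\circ F}=F^{*}\mu_{H}\), while post-composition with the conformal map \(\tilde{F}\) leaves the Beltrami coefficient of \(H\) unchanged. As the pullback is linear in the coefficient, also \(F^{*}(t\mu)=t\,F^{*}\mu=t\mu\) for every \(t\in[0,1]\), which is exactly what is needed to conclude that \(G_{t}=H_{t\mu}\circ F\circ H_{t\mu}^{-1}\) is conformal.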
_Remark 29.1_.: In the previous theorem we cannot drop the assumption that \(f,\tilde{f}\) have only hyperbolic periodic orbits. This is because it is possible that \(f,\tilde{f}\) are topologically conjugate on the interval \(I\), so that \(f\) has only repelling periodic orbits and \(\tilde{f}\) has some periodic orbits which are topologically repelling and parabolic (up to a coordinate change, an iterate takes the form \(x\mapsto x+x^{2n+1}\)). On the other hand, the proof goes through if we consider maps for which each parabolic periodic point is simple.

## 30. Hybrid conjugacies form a partial lamination

In this section we will prove Theorem C: **Theorem 30.1** (Partial lamination).: _Every \(f\in\mathcal{A}^{\underline{\nu}}\) without parabolic points has a neighbourhood which is laminated by real-hybrid conjugacy classes. More precisely, for each neighbourhood \(\mathcal{V}_{2}\) of \(f\) there exists a neighbourhood \(\mathcal{V}_{1}\subset\mathcal{V}_{2}\) of \(f\) so that for each \(g_{0},g_{1}\in\mathcal{V}_{1}\cap\mathcal{H}_{f}\) there exists a path \(g_{t}\in\mathcal{A}^{\underline{\nu}}\), \(t\in[0,1]\), inside \(\mathcal{V}_{2}\cap\mathcal{H}_{f}\) connecting \(g_{0},g_{1}\)._ Proof.: Consider the pruned polynomial-like extension \(F\colon E\cup B\to E^{\prime}\cup B^{\prime}\) of \(f\). Since \(f\) has no parabolic periodic points, this pruned polynomial-like extension persists, by holomorphic motion, over a neighbourhood \(\mathcal{V}_{1}\) of \(f\) (consisting of not necessarily real pruned polynomial-like maps). It follows that there exist pruned polynomial-like extensions \(G_{i}\colon U_{G_{i}}\cup B_{G_{i}}\to U^{\prime}_{G_{i}}\cup B^{\prime}_{G_{i}}\) of \(g_{i}\) which are obtained by holomorphic motion from the pruned polynomial-like extension \(F\). By [BR, Theorem 2] it follows that for each \(\epsilon>0\) there exists a neighbourhood \(\mathcal{V}_{1}\) so that the dilatation \(K\) of the 'external' qc-conjugacy from Proposition 13.2 is at most \(1+\epsilon\). As in the proof of Theorem 15.1 and Corollary 15.2 it follows that there exists an arc \(g_{t}\) connecting \(g_{0}\) and \(g_{1}\) whose diameter is small if \(\epsilon>0\) is close to zero.

## 31. Conjugacy classes of real analytic maps are contractible

Take \(f_{0}\in\mathcal{A}^{\underline{\nu}}_{a}\). In order to show that the space \(\mathcal{T}_{f_{0}}\) is contractible, we will first need to show that the pruned polynomial-like structure persists on all of \(\mathcal{T}_{f_{0}}\) for some \(a^{\prime}\in(0,a)\). To do this on the entire infinite dimensional space \(\mathcal{H}_{f_{0}}\) we cannot use holomorphic motions, and will therefore use the notion of quasiconformal motions, see [ST]. **Definition 31.1**.: Let \(\mathcal{S}\) be a connected topological Hausdorff space and \(X\subset\mathbb{C}\). Then a map \(H\colon\mathcal{S}\times X\to\mathbb{C}\) is called a _quasiconformal motion_ if, writing \(H_{t}(z):=H(t,z)\), the following holds: 1. for some base point \(t_{0}\in\mathcal{S}\) we have \(H_{t_{0}}=id\); 2.
for any \(t\in\mathcal{S}\) and any \(\epsilon>0\) there exists a neighbourhood \(U\) of \(t\) such that for all \(t^{\prime},t^{\prime\prime}\in U\) and for all quadruples \(x_{1},x_{2},x_{3},x_{4}\) of distinct points in \(X\) the cross-ratios of \(H_{t^{\prime}}(x_{1}),H_{t^{\prime}}(x_{2}),H_{t^{\prime}}(x_{3}),H_{t^{ \prime}}(x_{4})\) and of \(H_{t^{\prime\prime}}(x_{1}),H_{t^{\prime\prime}}(x_{2}),H_{t^{\prime\prime}}( x_{3}),H_{t^{\prime\prime}}(x_{4})\) all lie within an \(\epsilon\)-ball in the Poincare metric of \(\mathbb{C}\setminus\{0,1\}\). In our setting we will take \(\mathcal{S}=\mathcal{H}_{f_{0}}\cap\mathcal{A}^{\underline{\nu}}_{\underline{a }}\). Since we cannot use holomorphic motions, will use the following result of Douady-Earle on extensions of qs maps on the boundary of a disk. To state it will be useful to associate to a qc map \(H\colon\mathbb{D}\to\mathbb{D}\) its dilatation: \[\mu_{H}=\bar{\partial}H/\partial H.\] **Theorem 31.1** (Douady-Earle extension).: _Let \(h\colon\partial\mathbb{D}\to\partial\mathbb{D}\) and define_ \[G(z,w)=\frac{1}{2\pi}\int_{\partial\mathbb{D}}\frac{h(\zeta)-w}{1-\bar{w}h( \zeta)}\frac{1-|z|^{2}}{|z-\zeta|^{2}}\,|d\zeta|.\] 1. _Given_ \(z\in\mathbb{D}\) _there exists a unique_ \(w\) _so that_ \(G(z,w)=0\)_. Define_ \(H\colon\mathbb{D}\to\mathbb{D}\) _by_ \(H_{h}(z)=w\)_. Then_ \(H_{h}\) _is a real analytic diffeomorphism on_ \(\mathbb{D}\)_._ 2. _The map_ \(H_{h}\colon\mathbb{D}\to\mathbb{D}\) _extends to a continuous map of_ \(\overline{\mathbb{D}}\) _to_ \(\overline{\mathbb{D}}\) _so that_ \(H_{h}|\partial\mathbb{D}=h\)_._ 3. _The quasiconformal dilatation of_ \(H_{h}\colon\mathbb{D}\to\mathbb{D}\) _is bounded if_ \(h\colon\partial\mathbb{D}\to\partial\mathbb{D}\) _is quasisymmetric._ _._ 4. _for any_ \(\epsilon>0\) _there exists_ \(\delta>0\) _so that if there exists a qc extension of_ \(h\) _whose dilation is_ \(\leq K\) _then the dilatation of_ \(H_{h}\) _is at most_ \(K^{3+\epsilon}\) _provided_ \(K\leq 1+\delta\)_._ 5. _for any_ \(\epsilon>0\) _and any qs map_ \(h_{1}\colon\partial\mathbb{D}\to\partial\mathbb{D}\) _there exists_ \(\delta>0\)_, so that if_ \(h_{2}\colon\partial\mathbb{D}\to\partial\mathbb{D}\) _is a qs maps so that the map_ \(h_{2}\circ h_{1}^{-1}\) _has a qc extension_ \(H\) _to_ \(\mathbb{D}\) _with_ \(||\mu_{H}||_{\infty}\leq\delta\) _then_ \(||\mu_{H_{h_{1}}}-\mu_{H_{h_{2}}}||_{\infty}\leq\epsilon\)_._ Proof.: Items (1)-(3) are proved in [DE], see also [Hub]. Item (4) is Corollary 2 on page 41 of [DE]. Item (5) generalises item (4) and follows similarly. Indeed, let \(M\) be the open unit ball in the Banach space \(L^{\infty}(\mathbb{D},\mathbb{C})\). Given \(\mu\in M\), by the Measurable Riemann Mapping Theorem, there exists a unique qc map \[\phi^{\mu}\colon\mathbb{D}\to\mathbb{D}\] fixing \(\pm 1\) and \(i\) so that \(\mu=\mu_{\phi^{\mu}}\) (i.e. \(\bar{\partial}\phi^{\mu}=\mu\cdot\partial\phi^{\mu}\)). Set \(h^{\mu}=\phi^{\mu}|\partial\mathbb{D}\). Now define \(\sigma\colon M\to M\) by \[\sigma(\mu)=\mu_{H_{h\mu}}.\] So \(\sigma\) assigns to the Beltrami coefficient of any extension of a qs map, the Beltrami coefficient of the Douady-Earle extension of this qs map. In particular, \[\sigma(\mu_{H_{h_{1}}})=\mu_{H_{h_{1}}}.\] By assumption there exists a map \(H\colon\mathbb{D}\to\mathbb{D}\) so that \(H\circ h_{1}=h_{2}\) on \(\partial\mathbb{D}\) and so that \(||\mu_{H}||_{\infty}\leq\epsilon\). Hence \[\sigma(\mu_{H\circ H_{h_{1}}})=\mu_{H_{h_{2}}}.\] By [AIM, p 182. 
Theorem 5.5.6] \[\mu_{\psi\circ\varphi^{-1}}(w)=\frac{\mu_{\psi}(z)-\mu_{\varphi}(z)}{1-\mu_{ \psi}(z)\overline{\mu_{\varphi}(z)}}\cdot\left(\frac{\varphi_{z}(z)}{|\varphi _{z}(z)|}\right)^{2}.\] and taking \(\psi=H\circ H_{h_{1}}\) and \(\varphi=H_{h_{1}}\) we get from this formula that \[||\mu_{H_{h_{1}}}-\mu_{H\circ H_{h_{1}}}||_{\infty}\leq||\mu_{H}||_{\infty} \leq\epsilon.\] Since \(\sigma\) is continuous, there exists \(\delta\) so that \[||\mu_{H_{h_{1}}}-\mu_{H_{h_{2}}}||_{\infty}=||\sigma(\mu_{H_{h_{1}}})-\sigma (\mu_{H\circ H_{h_{1}}})||_{\infty}<\delta.\] **Proposition 31.2**.: _Let \(F_{0}\colon U_{0}\to U_{0}^{\prime}\) be a pruned polynomial-like mapping with rays \(\Gamma_{0}\). For each \(\kappa>1\) one can redefine the domains of \(F_{0}\) by possibly shortening the rays \(\Gamma_{0}\) and lowering the roofs, and thus obtain an equivalent pruned polynomial-like map \(F_{0}\colon U_{F_{0}}\to U_{F_{0}}^{\prime}\) so that the following holds._ * _Let_ \(\mathcal{H}_{F_{0}}(\kappa)\) _be the set of maps in_ \(\mathcal{H}_{F_{0}}^{\mathbb{R}}\) _which are_ \(\kappa^{\prime}\)_-qc conjugate to_ \(F_{0}\colon U_{F_{0}}\to U_{F_{0}}^{\prime}\) _for some_ \(\kappa^{\prime}<\kappa\)_. Then_ \(\mathcal{H}_{F_{0}}^{\mathbb{R}}(\kappa)\) _is an open subset of_ \(\mathcal{H}_{F_{0}}^{\mathbb{R}}\)_,_ \(\mathcal{H}_{F_{0}}^{\mathbb{R}}(\kappa)\subset\mathcal{H}_{F_{0}}^{\mathbb{R }}(\kappa)\) _for_ \(\kappa<\kappa^{*}\) _and_ \(\cup_{\kappa>0}\mathcal{H}_{F_{0}}^{\mathbb{R}}(\kappa)=\mathcal{H}_{F_{0}}^{ \mathbb{R}}\)_._ * _For each_ \(G\in\mathcal{H}_{F_{0}}^{\mathbb{R}}(\kappa)\) _there exist a pruned polynomial-like mapping_ \[G:U_{G}\to U_{G}^{\prime}\] _with rays_ \(\Gamma_{G}\) _and a quasiconformal motion_ \[H\colon\mathcal{H}_{F_{0}}^{\mathbb{R}}(\kappa)\times X_{n}\to\mathbb{C}\] _of_ \(X_{n}=\overline{U_{F_{0}}\cup U_{F_{0}}^{\prime}\cup\Gamma_{F_{0}}}\) _over_ \(\mathcal{H}_{F_{0}}(\kappa)\) _so that_ 1. \(\partial U_{G}=H_{F}(\partial U_{G_{0}}),\partial U^{\prime}_{G}=H_{G}(\partial U_ {F_{0}})\) _where we write_ \(H_{G}(\cdot)=H(G,\cdot)\)_._ \[H_{G}\circ F_{0}(z)=G\circ H_{G}(z)\text{ for all }z\in\partial U_{F_{0}}\cup\Gamma_{F_{0}}\] _and so_ 2. _that the Beltrami coefficient of_ \(H_{G}\) _depends continuously on_ \(G\in\mathcal{H}_{F_{0}}(\kappa)\)_, i.e._ \(G\mapsto\mu_{H_{G}}\in L^{\infty}(U_{F_{0}}\cup U^{\prime}_{F_{0}},\mathbb{D})\) _is continuous._ Proof.: One can choose the smooth curves \(\Gamma_{\hat{f}_{X}}\) from Lemma 6.2 so that they are orthogonal to \(\partial\mathbb{D}\). Indeed, consider a linearisation at a repelling periodic point \(p\) of \(F_{X}|\partial\mathbb{D}\). The fact that \(\hat{f}_{X}\) preserves \(\partial\mathbb{D}\) implies that the multiplier at \(p\) is real. Using the linearisation we immediately see that there is a unique smooth invariant curve through \(p\) which is orthogonal to \(\partial\mathbb{D}\). This ensures that the curves \(\hat{\Gamma}_{X}\) are uniquely determined (by \(f\) and the intervals \(J\)) apart from their length. (In fact, a lower bound for the length of these smooth curves through \(\partial\mathbb{D}\) is determined by the multiplier at the repelling periodic points and the upper bounds of \(|D\hat{f}_{X}|\) and \(|D^{2}\hat{f}_{X}|\).) Note that \(\mathcal{H}_{F_{0}}^{\mathbb{R}}(\kappa)\) can be considered as a subset of \(\mathcal{A}_{a}^{\underline{\nu}}\) which is a metric space and therefore admits a partition of unity. 
Hence, using a partition of unity argument one can choose \(\Gamma_{\hat{f}_{X}}\) so that the arc length of each of these curves depends continuously on \(f\in T_{n}\subset\mathcal{H}_{f_{0}}^{\mathbb{R}}\cap\mathcal{A}_{a}^{\underline{\nu}}\). Similarly, one can choose the 'roof' of the sets \(\hat{V}_{\hat{f}_{X}}\) to depend continuously on \(f\in T_{n}\subset\mathcal{H}_{f_{0}}^{\mathbb{R}}\cap\mathcal{A}_{a}^{\underline{\nu}}\). Here we will choose these roof curves to be circles near \(\partial\mathbb{D}\). Note that the normalised Riemann mapping from \(\mathbb{C}\setminus K_{X}(f)\) to \(\mathbb{C}\setminus\mathbb{D}\) depends continuously on \(K_{X}(f)\) and therefore on \(f\), see Theorem B.1. It follows that the sets \(\partial U_{F}\), \(\partial U^{\prime}_{F}\) and \(\Gamma_{F}\), corresponding to \(\partial V_{\hat{f}_{X}}\), \(\partial\hat{V}_{\hat{f}_{X}}\) and the set \(\Gamma_{\hat{f}_{X}}\), also move continuously with \(F\). In other words, the pruned polynomial-like maps \(F\colon U_{F}\to U^{\prime}_{F}\) also move continuously with \(f\) (in the Carathéodory topology). Now parametrise the fundamental domains in \(\Gamma_{F}\) by arc length (and on all of \(\Gamma_{F}\) dynamically). Similarly parametrise \(\partial U_{F}\) and \(\partial U^{\prime}_{F}\) by arc length. Using this, one can define maps \(H_{F}\) from \(\partial U_{F_{0}}\cup\partial U^{\prime}_{F_{0}}\) to \(\partial U_{F}\cup\partial U^{\prime}_{F}\) so that \(H_{F}\) conjugates \(F\) and \(F_{0}\) on these sets. Similarly, if necessary, we do this for the attracting structures \(B_{f}\) associated to \(f\). By construction, the sets \(U_{F}\), \(U^{\prime}_{F}\), \(U^{\prime}_{F}\setminus U_{F}\), \(U_{F}\setminus U^{\prime}_{F}\) (and similarly \(B_{F}\) etc.) form quasidiscs for each \(f\). This means that we can use the Douady-Earle extension to extend \(H_{F}\) to a quasiconformal map \(H_{F}\colon U_{F_{0}}\cup U^{\prime}_{F_{0}}\to U_{F}\cup U^{\prime}_{F}\). Here we use that each of the finitely many components of \(\mathbb{C}\setminus(\partial U_{F}\cup\partial U^{\prime}_{F})\) is a quasidisc, and so we can use the Douady-Earle extension on each of these separately. Thus we obtain a quasiconformal map \(H_{F}\colon U_{F_{0}}\to U_{F}\) which depends continuously on \(f\) in the required sense, see item 5 in Theorem 31.1. _Remark 31.1_.: The reason we cannot use a holomorphic motion here is that it needs to be defined over all of the infinite dimensional space \(\mathcal{H}_{F_{0}}^{\mathbb{R}}(\kappa)\). This forces us to use a partition of unity argument. Thus we obtain deformations which are not holomorphic in \(t\in\mathcal{H}_{F_{0}}^{\mathbb{R}}(\kappa)\). **Theorem 31.3**.: _Let \(F_{0}\colon U_{0}\to U^{\prime}_{0}\) be a pruned polynomial-like mapping. Then \(\mathcal{H}_{F_{0}}^{\mathbb{R}}\) is contractible._ Proof.: Let \(n>1\) be an integer. Let us first show that \(\mathcal{H}_{F_{0}}^{\mathbb{R}}(n)\) is contractible. From the previous proposition and the pullback argument from Theorem 15.1 we obtain for each \(G\in\mathcal{H}_{F_{0}}^{\mathbb{R}}(n)\) a qc conjugacy \(H_{G}\) between \(G\colon U_{G}\to U^{\prime}_{G}\) and \(F_{0}\colon U_{0}\to U^{\prime}_{0}\) so that the map \(G\mapsto\mu_{H_{G}}\in L^{\infty}(U_{F_{0}}\cup U^{\prime}_{F_{0}},\mathbb{D})\) is continuous.
Let \(H_{t\mu_{H_{G}}}\) be the qc conjugacy associated to \(t\mu_{H_{G}}\) normalised so that \(H_{t\mu_{H_{G}}}(\pm 1)=\pm 1\) and \(H_{t\mu_{H_{G}}}(\infty)=\infty\). Thus we get a new map \[R_{t}(G)=H_{t\mu_{H_{G}}}\circ F_{0}\circ H_{t\mu_{H_{G}}}^{-1}\in\mathcal{H}_ {F_{0}}^{\mathbb{R}}\] depending analytically on \(t\) and so that \(R_{0}(G)=G\) and \(R_{1}(G)=F_{0}\). Since \(G\mapsto\mu_{H_{G}}\in L^{\infty}\) is continuous, the map \((t,G)\mapsto H_{t\mu_{G}}\) depends continuously on \(t\) and \(G\). Hence the retract \((t,G)\to R_{t}(G)\) is also continuous. Since \(\mathcal{H}_{F_{0}}^{\mathbb{R}}(n)\) can be viewed as a subset of \(\mathcal{A}_{a}^{\underline{\nu}}\) the result follows from the next theorem. **Theorem 31.4** ([Ae]).: _If a normal space \(\mathcal{S}\) is the union of a sequence of open subsets \(\mathcal{S}_{n}\) such that \(\overline{\mathcal{S}}_{n}\subset\mathcal{S}_{n+1}\) and \(\mathcal{S}_{n}\) contracts to a point in \(\mathcal{S}_{n+1}\) for each \(n\geq 1\), then \(\mathcal{S}\) is contractible._ _Remark 31.2_.: By Lemma C.3, for any open set \(O\) in the space \(\mathcal{A}^{\underline{\nu}}\) with the real analytic topology and any \(a>0\) there exists \(g\in O\setminus\mathcal{A}_{a}^{\underline{\nu}}\). That is why the above argument is insufficient to conclude that \(\mathcal{H}_{f}^{\underline{\nu}}\) is contractible. We do however have the following: **Theorem 31.5**.: _Let \(f_{0}\in\mathcal{A}_{a}^{\underline{\nu}}\). Then \(\mathcal{H}_{f_{0}}\cap\mathcal{A}_{a}^{\underline{\nu}}\) is contractible._ Proof.: Choose a sequence of pruning intervals \(J_{f_{0},n}\) for \(f_{0}\) and let \(X_{n},Q_{n}\) be the corresponding pruning data. Choose these pruning intervals so that \(Q_{n+1}\supset Q_{n}\). Let \(F_{0,n}\colon U_{F_{0},n}\to U^{\prime}_{F_{0},n}\) be a pruned polynomial-like mapping which extends \(f_{0}\colon I\to I\) corresponding to pruning data \(Q_{n}\), so that \(F_{0,n+1}\) is a restriction of \(F_{0,n}\). Let \(\mathcal{S}_{n}\) be the set of maps \(f\in\mathcal{S}:=\mathcal{H}_{f_{0}}\cap\mathcal{A}_{a}^{\underline{\nu}}\) so that \(f\) has a pruned polynomial-like mapping extension \(F\colon U_{F,n}\to U^{\prime}_{F,n}\) corresponding to pruning data \(Q_{n}\) and so that \(F_{n}\) and \(F_{0,n}\) are qc conjugate with a conjugacy which has dilatation \(<n\). In other words, one has a pruned Julia set \(K_{f,n}\) which is associated to \(f\) corresponding to the pruning data \(Q_{n}\) and the set \(X_{n}\). This means that one can associate an external map \(\hat{f}_{X_{n}}\) to \(f\) and \(X_{n}\). Let us first show that \(\mathcal{S}_{n}\) is contractible in \(\mathcal{S}_{n+1}\). From Proposition 31.2 and the Pull-back Argument (see Theorem 15.1) we obtain for each \(f\in\mathcal{S}_{n}\) a qc conjugacy \(H_{F}\) between \(F\colon U_{F,n}\to U^{\prime}_{F,n}\) and \(F_{0}\colon U_{0,n}\to U^{\prime}_{0,n}\) so that the map \(F\mapsto\mu_{H_{F}}\in L^{\infty}(U_{F_{0}}\cup U^{\prime}_{F_{0}},\mathbb{D})\) is continuous. Let \(H_{t\mu_{H_{F}}}\) be the qc conjugacy associated to \(t\mu_{H_{F}}\) normalised so that \(H_{t\mu_{H_{F}}}(\pm 1)=\pm 1\) and \(H_{t\mu_{H_{F}}}(\infty)=\infty\). Thus we get a new map \[R_{t}(F)=H_{t\mu_{H_{F}}}\circ F_{0}\circ H_{t\mu_{H_{F}}}^{-1}\in\mathcal{H}_ {F_{0}}\] depending analytically on \(t\) and so that \(R_{0}(F)=F_{0}\) and \(R_{1}(F)=F\). Since \(F\mapsto\mu_{f}\in L^{\infty}\) is continuous, the map \((t,F)\mapsto H_{t\mu_{F}}\) depends continuously on \(t\) and \(F\). 
Hence the retract \((t,F)\to R_{t}(F)\) is also continuous. Note that \(R_{t}(F)\) may not be in \(\mathcal{A}_{a}^{\underline{\nu}}\) for all \(t\in[0,1]\). However, \(R_{0}(F)\) and \(R_{1}(F)\) are in \(\mathcal{A}_{a}^{\underline{\nu}}\). Since \(H_{t\mu_{H_{F}}}\) is at most an \(n\)-qc homeomorphism for each \(t\in[0,1]\), it follows that there exists \(a^{\prime}\in(0,a)\) so that \(R_{t}(F)\in\mathcal{H}_{f_{0}}\cap\mathcal{A}_{a^{\prime}}^{\underline{\nu}}\) for all \(t\in[0,1]\). In other words, \(R_{t}(F)\) has an analytic extension to \(\Omega_{a^{\prime}}\). We do not claim that the domain of the pruned polynomial-like map \(R_{t}(F)\) is inside \(\Omega_{a}\). To obtain a family \(\tilde{R}_{t}(F)\) so that \(\tilde{R}_{t}(F)\in\mathcal{H}_{f_{0}}\cap\mathcal{A}_{a}^{\underline{\nu}}\), we argue as in Proposition 29.2. Indeed, choose vector fields \(\{\hat{v}_{f}\}\) in \(T_{f}\mathcal{A}_{a^{\prime}}^{\nu}\) depending smoothly on \(f\in\mathcal{H}_{f_{0}}\cap\mathcal{A}_{a^{\prime}}^{\underline{\nu}}\) and so that the vectors \(\hat{v}_{f}=(v_{f}^{1},\ldots,v_{f}^{\nu^{\prime}})\) together with \(T_{f}(\mathcal{H}_{f_{0}}\cap\mathcal{A}_{a^{\prime}}^{\underline{\nu}})\) span the tangent space \(T_{f}(\mathcal{A}_{a^{\prime}}^{\underline{\nu}})\). Next define \[P(f,t,\hat{s})=R_{t}(f)+\hat{s}\cdot\hat{v}_{R_{t}(f)}\in\mathcal{A}_{a^{\prime}}^{\underline{\nu}},\;(f,t,\hat{s})\in\mathcal{S}_{n}\times[0,1]\times[-1,1]^{\nu^{\prime}} \tag{31.1}\] and let \(i_{0}:\mathcal{S}_{n}\times[0,1]\to\mathcal{S}_{n}\times[0,1]\times[-1,1]^{\nu^{\prime}}\), where \(i_{0}(f,t)=(f,t,0,\dots,0)\) for all \(f\in\mathcal{S}_{n}\) and \(t\in[0,1]\). Let \(\Phi:\mathcal{S}_{n}\times[0,1]\times[-1,1]^{\nu^{\prime}}\to\mathcal{A}_{a}^{\underline{\nu}}\) be a real analytic family so that \(\Phi(f,0,\dots,0)=P(f,0,\dots,0)=f_{0}\) and \(\Phi(f,1,0,\dots,0)=P(f,1,0,\dots,0)=f\) for each \(f\), and so that \(\Pi_{a,a^{\prime}}\circ\Phi\) is \(C^{1}\) close to \(P\). This can be done, as in Proposition 29.2, by a polynomial approximation. By Theorem 28.1 the space \(\mathcal{S}_{n}\) is a Banach manifold. Therefore, using the Implicit Function Theorem (on Banach manifolds), we obtain \(\zeta:\mathcal{S}_{n}\times[0,1]\to\mathcal{S}_{n}\times[0,1]\times[-1,1]^{\nu^{\prime}}\), \(C^{1}\) close to \(i_{0}\), so that \(\Pi_{a,a^{\prime}}\circ\Phi\circ\zeta\) is contained in \(\mathcal{H}_{f_{0}}\cap\mathcal{A}_{a^{\prime}}^{\nu}\). As before, for each \(f\in\mathcal{S}_{n}\), \(t\mapsto\Phi\circ\zeta(f,t)\) is a smooth path connecting \(f\) to \(f_{0}\) in \(\mathcal{H}_{f_{0}}\cap\mathcal{A}_{a}^{\nu}\). Since \(\Phi\circ\zeta\) is \(C^{1}\) close to \(P\), it in fact lies in \(\mathcal{S}_{n+1}\). It follows that \(\mathcal{S}_{n}\) is contractible inside \(\mathcal{S}_{n+1}\). Theorem 31.4 therefore implies that \(\mathcal{S}\) is contractible.

### Part C: Open questions

Let \(Pol^{\mathbb{R},d}\) be the space of real polynomials of degree \(d\). **Question 1**.: _Let \(f\in\mathcal{A}_{a}^{\underline{\nu}}\cap Pol^{\mathbb{R},d}\). Is \(\mathcal{T}_{f,a}^{\underline{\nu}}\cap Pol^{\mathbb{R},d}\) a real analytic manifold?_ The answer to this question is affirmative if the real critical points of \(f\) have finite orbits, due to transversality, see [Eps, LSvS3]. On the other hand, it is obviously not enough that \(\mathcal{T}_{f,a}^{\underline{\nu}}\) is a real analytic manifold (see Theorem A) to conclude that \(\mathcal{T}_{f,a}^{\underline{\nu}}\cap Pol^{\mathbb{R},d}\) is a real analytic manifold.
In a similar vein, we have shown in Theorem B that two real polynomials which are real-hybrid conjugate can be connected by a one-parameter family of real analytic maps of the interval within the same real-hybrid class. It would be interesting to know whether one can find a one-parameter family of such polynomials. A partial answer to this question was given in [?] where the space \(\mathcal{U}^{2d}\) was considered for \(d=2\). Here \(\mathcal{U}^{2d}\) is the space of real polynomials of degree \(2d\) with a unique critical point on the real line. So these maps are unimodal maps of the real line. **Question 2**.: _Assume that the real critical points of \(f_{0},f_{1}\in\mathcal{U}^{4}\) are periodic and that \(f_{0},f_{1}\) are topologically conjugate on \(\mathbb{R}\). Does this imply that there exists a continuous family \(f_{t}\) so that every map \(f_{t}\) is also topologically conjugate to \(f_{0},f_{1}\) on \(\mathbb{R}\)?_ In [CvS] it was shown that for \(d=2\) the answer is affirmative. This implies that the sets of maps in \(\mathcal{U}^{4}\) with a given topological entropy form connected subsets, giving a partial generalisation of [MTr, BvS] to polynomials with non-real critical points.

## Appendices and References

## Appendix A. Local connectivity of the pruned Julia set and complex box mappings which include domains that do not intersect the real line

The aim of this appendix is to prove Theorem 4.2. In order to do this we need the following notation. Given an interval \(I\subset\mathbb{R}\) and \(\theta\in(0,\pi)\), we denote by \(D_{\theta}^{+}(I)\) (respectively \(D_{\theta}^{-}(I)\)) the region in the upper (respectively lower) half-plane bounded by \(I\) together with the circle arc subtending \(I\) that meets the real axis with external angle \(\theta\) at each boundary point of \(I\). We let \(D_{\theta}(I)=D_{\theta}^{+}(I)\cup D_{\theta}^{-}(I)\cup I\) and call this set a _Poincaré disc_. If \(\theta>\pi/2\) we shall call this a _lens domain_. This set corresponds to the set of points within a fixed distance (depending on \(\theta\)) of \(I\) in the Poincaré metric on \(\mathbb{C}_{I}\). Let \(\operatorname{Cr}_{na}(f)\) be the set of critical points which are not in the basins of periodic attractors. **Theorem A.1**.: _There exist complex neighbourhoods \(W\subset W^{\prime}\) of \(\operatorname{Cr}_{na}(f)\) and \(\delta>0\) so that if \(J_{i}\) are intervals containing \(f(c_{i})\) with \(|J_{i}|<\delta\) for all \(i\), and we set \(J=\cup J_{i}\), then the following holds. Let \(R_{K_{1}^{*}}\) be the first return map to \(K_{1}^{*}:=\operatorname{cc}_{\operatorname{Cr}}f^{-1}(J)\). Then \(R_{K_{1}^{*}}\) has an extension to a possibly multi-valued mapping \(F\colon W\to W^{\prime}\). More precisely,_ 1. \(W^{\prime}\supset K_{1}^{*}\) _and_ \(W^{\prime}\) _is a union of pairwise disjoint Poincaré domains, each based on a real interval;_ 2. \(W\subset W^{\prime}\)_, i.e. for each_ \(x\in K_{1}^{*}\)_, each_ \(c\in\operatorname{Cr}_{na}(f)\) _and each_ \(n>0\) _such that_ \(f^{n}(x)\in W^{\prime}_{c}\)_, where_ \(W^{\prime}_{c}\) _denotes the component of_ \(W^{\prime}\) _containing_ \(c\)_, we have_ \[\operatorname{cc}_{x}f^{-n}(W^{\prime}_{c})\subset W^{\prime}.\] Note that \(K_{1}^{*}:=\operatorname{cc}_{\operatorname{Cr}}f^{-1}(J)\) is not contained in \(\mathbb{R}\), but that \(f(K_{1}^{*})\subset\mathbb{R}\).
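For concreteness, we note the standard explicit description of the Poincaré discs introduced above; this is a sketch under the normalisation \(I=(-1,1)\) (the general case follows by an affine change of coordinates). For \(\theta\in(0,\pi)\),
\[
D_{\theta}^{+}\big((-1,1)\big)\;=\;\{z\in\mathbb{C}:\operatorname{Im}z>0\}\cap D\!\left(i\cot\theta,\,\tfrac{1}{\sin\theta}\right),
\]
where \(D(w,r)\) denotes the Euclidean disc with centre \(w\) and radius \(r\); the bounding circle passes through \(\pm 1\) and the region reaches height \(\cot(\theta/2)\) above the interval. In particular \(D_{\pi/2}^{+}((-1,1))\) is the upper half of the unit disc, and for \(\theta>\pi/2\) one obtains the thin lens domains mentioned above.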
_Remark A.1_.: It follows that if \(A\subset I\) is an attracting periodic orbit, so that the immediate basin of each \(a\in A\cap W^{\prime}\) is contained in \(W^{\prime}\) then for each \(x\in K_{1}^{*}\) which is eventually mapped into \(A\), then the component of the basin of \(A\) containing \(x\) is also contained in \(W^{\prime}\). ### Terminology We say that a puzzle piece is \(\omega(c)\)-_critical_ if it contains a critical point in \(\omega(c)\). Let \(P\) be an \(\omega(c)\)-critical puzzle piece. An \(\omega(c)\)-critical puzzle piece \(Q\) is a called a _child_ of \(P\) if it is a unicritical pullback of \(P\); that is, there exists a positive integer \(n\) such that \(Q\) is a component of \(f^{-n}(P)\) containing a critical point in \(\omega(c)\), and there exists a puzzle piece \(Q^{\prime}\supset f(Q)\) such that the map \(f^{n-1}\colon Q^{\prime}\to P\) is a diffeomorphism. A map \(f\) is called _persistently recurrent on \(\omega(c)\)_ if \(c\) is recurrent, non-periodic and each \(\omega(c)\)-critical puzzle piece has only finitely many children. If \(c\) is recurrent, but not persistently recurrent then we say that \(c\) is _reluctantly recurrent_. It will be convenient to define \[\mathcal{L}_{x}V:=\operatorname{cc}_{x}f^{-n}(V)\text{ and }\hat{\mathcal{L}}_{x }V:=\operatorname{cc}_{x}f^{-n^{\prime}}(V)\] where \(n>0\) and \(n^{\prime}\geq 0\) are the smallest integers so that \(f^{n}(x),f^{n^{\prime}}(x)\in V\). Let \(\rho>0\). A puzzle piece \(P\) around a persistently recurrent critical point is called \(\rho\)-_nice_ if for any \(x\in P\cap\omega(c)\) one has \(\operatorname{mod}(P\setminus\mathcal{L}_{x}(P))\geq\rho\), and \(\rho\)-_free_ if there are puzzle pieces \(P^{+}\supset P\supset P^{-}\) such that \((P^{+}\setminus P^{-})\cap\omega(c)=\emptyset\), \(\operatorname{mod}(P^{+}\setminus P)\geq\rho\) and \(\operatorname{mod}(P\setminus P^{-})\geq\rho\). We refer to the annulus \(P^{+}\setminus P^{-}\), which is disjoint from \(\omega(c)\), as _free space_. We say that a simply connected domain \(U\) has \(\rho\)-_bounded geometry with respect to \(x\in U\)_ if the Euclidian ball \(B(x,\rho\cdot\operatorname{diam}(U))\subset U\). A domain \(U\) is said to have \(\rho\)-_bounded geometry_ if there is an \(x\in U\) such that \(U\) has \(\rho\)-bounded geometry with respect to \(x\). We say that \(K^{\prime}\) is a \(\rho\)-scaled neighbourhood of \(K\) if each component of \(K^{\prime}\setminus K\) has length \(\geq\rho|K|\). ### Proof of Theorem a.1 Proof.: This result uses the complex bounds from [CvST, CvS] but is more general in two ways: we will consider mappings \(F\colon U\to V\) which could include domains that do not intersect the real line and also \(V\) contains a neighbourhood of all critical points simultaneously. To construct \(F\) decompose the set of critical points \(\operatorname{Cr}_{na}(f)\) into the following four sets: 1. \(\operatorname{Cr}_{nr}\): non-recurrent critical points; 2. \(\operatorname{Cr}_{rr}\): reluctantly recurrent critical points; 3. \(\mathrm{Cr}_{pr}\): persistently recurrent but not infinitely renormalizable critical points; 4. \(\mathrm{Cr}_{ren}\): infinitely renormalizable critical points. Defining \(\mathrm{Cr}^{\prime}\) to be \(\mathrm{Cr}_{pr}\) or \(\mathrm{Cr}_{nr}\), let \(\mathrm{Cr}^{\prime}_{2},\mathrm{Cr}^{\prime}_{1}\) be the set of critical points \(c\in\mathrm{Cr}^{\prime}\) so that \(\mathcal{L}_{c}I_{\mathrm{Cr}^{\prime}}=I_{\mathrm{Cr}^{\prime}}(c)\) resp. 
so that \(\mathcal{L}_{c}I_{\mathrm{Cr}^{\prime}}\Subset I_{\mathrm{Cr}^{\prime}}(c)\). To prove Theorem A.1 we need to combine the following complex bounds around each of these sets in such a way that the appropriate non-real domains are included. **Theorem A.2** (Complex bounds in the persistently recurrent case).: _Suppose that \(c\in\mathrm{Cr}_{pr}\cup\mathrm{Cr}_{ren}\). Then there exist \(\rho_{0}>0\) and combinatorially defined intervals (puzzle pieces) \(I\ni c\) of arbitrarily small diameter so that the following holds. Let_ \[\hat{I}:=\bigcup_{c^{\prime}\in\mathrm{Crit}(f)\cap\omega(c)}\hat{\mathcal{L} }_{c^{\prime}}(I).\] 1. _Suppose that_ \(f\) _is non-renormalizable. Then the first return map to_ \(\hat{I}\) _extends to a complex box mapping_ \[F\colon U\to V\text{ so that }V\cap\mathbb{R}=\hat{I}\text{ and }\] * _for each component_ \(U_{i}\) _of_ \(U\)_,_ \(F|U_{i}\) _has at most one critical point,_ * _each component of_ \(V\) _is_ \(\rho_{0}\)_-nice and_ \(\rho_{0}\)_-free,_ * _each component of_ \(V\) _has_ \(\rho_{0}\)_-bounded geometry._ 2. _Suppose that_ \(I\) _is a terminating interval for_ \(f\)_. Then the return map to_ \(I^{\infty}\) _extends to a polynomial-like map_ \(F\colon U\to V\) _such that_ \(\mathrm{mod}(V\setminus U)>\rho_{0}\)_._ Proof.: This is Theorem 1.1 in [30]. _Remark A.2_.: In the first case there exists \(\theta_{0}\in(0,\pi/2)\) so that \(V_{c}\subset D_{\theta_{0}}(I_{c}^{*})\) for each \(c\in\mathrm{Cr}_{pr}\cup\mathrm{Cr}_{ren}\), where \(I_{c}^{*}\) is a neighbourhood of \(I_{c}=V_{c}\cap\mathbb{R}\) for which \(I_{c}^{*}\cap\omega(c)\subset I_{c}\). Such a \(\theta_{0}\) exists, because \(V_{c}\) is \(\rho_{0}\)-free w.r.t. \(\omega(c)\) and since \(V_{c}\) has bounded geometry. Since \(V_{c}\) has \(\rho_{0}\)-bounded geometry, we can assume that \(I_{c}^{*}\) is small when \(I\) is small. _Remark A.3_.: In the 2nd case, by replacing \(V\) by the disc \(V^{\prime}\) bounded by the core curve in \(V\setminus U\) and replacing \(U\) by the pullback \(U^{\prime}\) of \(V^{\prime}\), we can assume that \(V\) is \(\rho\)-nice, \(\rho\)-free and has \(\rho\)-bounded geometry (all w.r.t. to \(\omega(c)\)). **Theorem A.3** (Complex bounds in the reluctantly recurrent and non-recurrent case).: _Let \(\mathrm{Cr}^{\prime}\) be either \(\mathrm{Cr}_{rr}\) or \(\mathrm{Cr}_{nr}\). Then there exists \(\theta_{0}\in(0,\pi/2)\) and arbitrarily small combinatorially defined real neighbourhoods \(I_{\mathrm{Cr}^{\prime}}\) of \(\mathrm{Cr}^{\prime}\) such that_ * _the return mapping to_ \(I_{\mathrm{Cr}^{\prime}}\) _extends to a complex box mapping_ \(F_{\mathrm{Cr}^{\prime}}\colon U_{I_{\mathrm{Cr}^{\prime}}}\to V_{I_{\mathrm{ Cr}^{\prime}}}\) _where_ \(V_{I_{\mathrm{Cr}^{\prime}}}\subset\cup_{c^{\prime}\in\mathrm{Cr}^{\prime}}D_{ \theta}(I_{\mathrm{Cr}^{\prime}})\) _where_ \(\theta\in(\theta_{0},\pi/2)\)_;_ * \(V_{c}=D_{\pi/2}(I_{c})\) _for each_ \(c\in\mathrm{Cr}^{\prime}_{1}\)_._ Proof.: In the setting that all critical points have even order this is Theorem 3 and the first line of the proof of Proposition 1 in [30]. In [30, Theorem 5.3] this is extended to the general case. We will also use that diffeomorphic pullback of Poincare discs remain under control: **Lemma A.4**.: _For any \(\theta\in(0,\pi/2),\) there exist \(\varepsilon>0\) and \(\tilde{\theta}\in(0,\pi/2)\) such that the following holds. Suppose that \(|J_{s}|<\varepsilon\), and \(f^{s}\colon J_{0}\to J_{s}\) is a diffeomorphism. 
Let \(\{J_{j}\}_{j=0}^{s}\) be the chain such that \(J_{j}=\mathit{cc}_{f^{j}(J_{0})}f^{-(s-j)}(J_{s})\). Let \(U_{s}=D_{\theta}(J_{s})\), and set \(U_{j}=\mathit{cc}_{J_{j}}(f^{-(s-j)}(U_{s}))\) for \(j=0,\ldots,s\). Then \(U_{j}\subset D_{\tilde{\theta}}(J_{j})\), where we can make the difference \(\theta-\tilde{\theta}\) as small as we like by taking \(\varepsilon>0\) sufficiently small._ Proof.: This result builds on [GSS] and [LS, Theorem B]. Under the additional assumption that \(J_{0}\cap\omega(c)\neq\emptyset\) this lemma is precisely Lemma 5.9 in [CvST]. If \(J_{0}\cap\omega(c)=\emptyset\) then it follows immediately from Lemmas 4.4 and 4.5 in [CvS]. The main issue in proving Theorem A.1 is to pullback ranges around one type of critical point back to the range near another type of critical point. Note that in the above theorems the intervals \(I\) are all nice intervals, whose boundaries are periodic or pre-periodic points. By definition the forward orbit of \(c\in\operatorname{Cr}_{pr}\) does not accumulate onto \(\operatorname{Cr}_{nr}\cup\operatorname{Cr}_{rr}\cup\operatorname{Cr}_{ren}\) and similarly the forward orbit of \(c\in\operatorname{Cr}_{ren}\) does not accumulate to \(\operatorname{Cr}_{nr}\cup\operatorname{Cr}_{rr}\cup\operatorname{Cr}_{pr}\). So we can choose the neighbourhoods \(I_{\operatorname{Cr}_{nr}}\), \(I_{\operatorname{Cr}_{rr}}\), \(I_{\operatorname{Cr}_{pr}}\) and \(I_{\operatorname{Cr}_{ren}}\) so small that forward iterates of \(c\in\operatorname{Cr}_{ren}\) avoid \(I_{\operatorname{Cr}_{nr}}\), \(I_{\operatorname{Cr}_{rr}}\), \(I_{\operatorname{Cr}_{pr}}\) and \(I_{c^{\prime}}\) for any \(c^{\prime}\in\operatorname{Cr}\) with \(c^{\prime}\notin\omega(c)\), and similarly so that forward iterates of \(c\in\operatorname{Cr}_{pr}\) avoid \(I_{\operatorname{Cr}_{nr}}\), \(I_{\operatorname{Cr}_{rr}}\), \(I_{\operatorname{Cr}_{ren}}\) and \(I_{c^{\prime}}\) for any \(c^{\prime}\in\operatorname{Cr}\) with \(c^{\prime}\notin\omega(c)\). Let \(\theta_{0}>0\) be as in Remark A.3. Let \(V_{c}\) be the corresponding ranges of the complex box mapping containing \(c\) constructed in Theorems A.2 and A.3. Choose for each \(c\in\operatorname{Cr}\) the interval \(J(c)\) around \(f(c)\) so small that \(\operatorname{cc}_{c}f^{-1}(J)\subset V_{c}\). The main issue now will be to deal with the case that a point \(x\) enters a component of \(V\) which does not contain \(x\). We claim that for each \(c\in\operatorname{Cr}_{pr}\cup\operatorname{Cr}_{ren}\) one can choose \(I_{c}\) so small that for each \(c^{\prime}\in\operatorname{Cr}^{\prime}:=\operatorname{Cr}_{rr}\cup \operatorname{Cr}_{nr}\) and each \(x\in\operatorname{cc}_{c^{\prime}}f^{-1}(J)\) with \(f^{n}(x)\in V_{I_{c}}\) for some \(n>0\), one has \(\mathcal{L}_{x}(V_{I_{c}})\subset V_{I_{c^{\prime}}}\). Similarly, we claim that if some forward iterate of \(x\in I_{c^{\prime}}\) for some \(c^{\prime}\in\operatorname{Cr}_{pr}\) enters \(V_{I_{c}}\) for some \(c\in\operatorname{Cr}_{ren}\), then \(\mathcal{L}_{x}(V_{I_{c}})\subset V_{I_{c^{\prime}}}\). Let us prove the first claim; the second claim goes in the same way. So take \(x\in\operatorname{cc}_{c^{\prime}}f^{-1}(J)\) and assume that \(f^{n}(x)\in V_{c}\) where \(c\in\operatorname{Cr}_{pr}\cup\operatorname{Cr}_{ren}\). 
Then take \(k<n\) maximal so that \(f^{k}(x)\in V_{I_{\operatorname{Cr}^{\prime}}}\) and let \(k<k^{\prime}\leq n\) be minimal so that whenever \(f^{i}(x)\), \(k^{\prime}\leq i\leq n\) is contained in \(V_{I_{c}}\) then it is contained in a domain of \(F\colon U_{I_{c}}\to V_{I_{c}}\) which intersects \(\omega(c)\). If such an integer \(k^{\prime}\) does not exist then set \(k^{\prime}=n\) anyway. By Theorem A.2, \(U_{I_{c}}\subset V_{I_{c}}\) and so we have that \(\mathcal{L}_{f^{k^{\prime}}(x)}(V_{I_{c}})\subset V_{I_{c}}\) if \(k^{\prime}<n\) and if \(k^{\prime}=n\) then we have by assumption \(f^{k^{\prime}}(x)\in V_{I_{c}}\). By Remark A.3 we have \[f^{k^{\prime}}(x)\in V_{I_{c}}\subset D_{\theta_{0}}(I_{c}^{*})\] where \(I_{c}^{*}\) is so that \((I_{c}^{*}\setminus I_{c})\cap\omega(c)=\emptyset\). Because of the choices of the intervals \(I_{c}\), since \((I_{c}^{*}\setminus I_{c})\cap\omega(c)=\emptyset\) and by the definition of \(k,k^{\prime}\) it follows that the pullback of \(D_{\theta_{0}}(I_{c}^{*})\) to \(f^{k}(x)\) by \(f^{-(k^{\prime}-k)}\) does not meet any critical points from \(\omega(c)\), \(c\in\operatorname{Cr}_{pr}\cup\operatorname{Cr}_{ren}\). So the map \(f^{k^{\prime}-k}\colon\mathcal{L}_{f^{k}(x)}(D_{\theta_{0}}(I_{c}^{*}))\to D_{ \theta_{0}}(I_{c}^{*})\) is a diffeomorphism. Hence, by Lemma A.4 and by possibly choosing the interval \(I_{c}\) smaller, we may assume that the set \(\mathcal{L}_{f^{k}(x)}(V_{I_{c}})\subset D_{\theta_{0}/2}(\mathcal{L}_{f^{k}(x )}(I_{c}^{*}))\). Let us show that for each \(\rho_{1}>0\) there exists \(\epsilon>0\) so that if \(|I_{c}|<\epsilon\) then the \(\rho_{1}\)-scaled neighbourhood of \(\mathcal{L}_{f^{k}(x)}(I_{c}^{*})\) is contained in the corresponding component of \(I_{\operatorname{Cr}^{\prime}}\). Indeed, if \(I_{c}\) is small, then \(I_{c}^{*}\) is small. It follows that all preimages of \(I_{c}^{*}\) are also small - this follows the Contraction Principle, see [dMvS, Section IV.5] (this principle is related to the absence of wandering intervals). So \(\mathcal{L}_{f^{k}(x)}(I_{c}^{*})\) is small compared to \(I_{c^{\prime}}\) if \(\epsilon>0\) is sufficiently small. If the assertion fails, then \(\mathcal{L}_{f^{k}(x)}(I_{c}^{*})\) is contained in a very small interval containing a boundary point of \(I_{\operatorname{Cr}^{\prime}}\). Note that boundary points of \(I_{\operatorname{Cr}^{\prime}}\) are pre-images of some periodic point \(p\). Hence some iterate \(K\) of \(\mathcal{L}_{f^{k}(x)}(I_{c}^{*})\) is near the periodic point \(p\) before it is mapped onto \(I_{c}^{*}\). Because of this, taking \(\epsilon>0\) small (and therefore \(I_{c}\) and \(I_{c}^{*}\) small) will ensure that \(K\) occupies only a very small part of a fundamental domain of the periodic point \(p\). This in turn implies that a \(\rho_{1}\)-scaled neighbourhood of \(\mathcal{L}_{f^{k}(x)}(I_{c}^{*})\) is contained in \(I_{\mathrm{Cr}^{\prime}}\), provided \(\epsilon>0\) is small enough. From the choice of \(k\) it follows that \(f^{k}(x)\) must be contained in \(I_{c^{\prime}}\) with \(c^{\prime}\in\mathrm{Cr}^{\prime}_{1}\) because the components \(I_{c^{\prime}}\) with \(c^{\prime}\in\mathrm{Cr}^{\prime}_{2}\) are mapped onto a component \(I_{c^{\prime}}\) with \(c^{\prime}\in\mathrm{Cr}^{\prime}_{1}\). Here we use that points can only escape the box mapping around \(\mathrm{Cr}^{\prime}\) (to the persistently recurrent \(c\)) via the components \(I_{c^{\prime}}\) with \(c^{\prime}\in\mathrm{Cr}_{1}\). 
Because a \(\rho_{1}\)-scaled neighbourhood of \(\mathcal{L}_{f^{k}(x)}(I_{c}^{*})\) is contained in \(I_{c^{\prime}}\), this and the 2nd part of Theorem A.3 implies that \(D_{\theta_{0}/2}(\mathcal{L}_{f^{k}(x)}(I_{c}^{*}))\subset V_{I_{c^{\prime}}}\) (provided \(\rho_{1}\) is sufficiently large). It follows that \(\mathcal{L}_{f^{k}(x)}(V_{c})\subset V_{I_{c^{\prime}}}\). Here, if \(k=0\), then we use that \(J\) and therefore \(J^{-1}\) can be taken arbitrarily small (this argument is needed and sufficient as \(x\) may not be real but \(f(x)\in J\) is). If \(k>0\) then we pullback using Theorem A.3 till we reach \(x\). This shows how to pullback the ranges of the complex box mappings around the critical points in \(\mathrm{Cr}_{pr},\mathrm{Cr}_{ren}\) back into range near \(\mathrm{Cr}_{nr},\mathrm{Cr}_{rr}\) (and also how to pullback range around \(\mathrm{Cr}_{ren}\) back into the range around near \(\mathrm{Cr}_{pr}\)). Now we show how to pullback the range near \(\mathrm{Cr}_{nr}\) and \(\mathrm{Cr}_{rr}\) back into the range near \(c\in\mathrm{Cr}_{pr}\). This will involve choosing the neighbourhoods \(J(c)\) of \(f(c)\) sufficiently small when \(c\in\mathrm{Cr}_{pr}\). So take \(x\in\mathrm{cc}_{c}f^{-1}(J)\) and a (minimal) \(n(x)\) so that \(f^{n(x)}(x)\in I_{c^{\prime}}\) for some \(c^{\prime}\in\mathrm{Cr}_{rr}\cup\mathrm{Cr}_{nr}\). By assumption the forward orbit of \(c\) does not enter \(I_{\mathrm{Cr}_{rr}}\cup I_{\mathrm{Cr}_{nr}}\). In particular, \(f^{n(x)-1}\colon\mathcal{L}_{f(x)}V_{I_{c^{\prime}}}\to V_{I_{c^{\prime}}}\) is a diffeomorphism and \(n(x)\) is very large if \(J(c)\) is small. It follows that the real trace of \(\mathcal{L}_{f(x)}V_{I_{c^{\prime}}}\) is small when \(J(c)\) is small, and so the components of \(f^{-1}(\mathcal{L}_{f(x)}V_{I_{c^{\prime}}})\) that are close to \(c\) are all contained in \(V_{I_{c}}\). If \(f^{n}(x)\in I_{c^{\prime}}\) for a non-minimal \(n\) then we apply Theorem A.3 to \(f^{n-n(x)}\) and again \(\mathrm{cc}_{x}f^{-n}(V_{I_{c^{\prime}}})\subset V_{I_{c}}\). This completes the proof of Theorem A.1. Note that although \(\mathcal{L}_{f(x)}V_{I_{c^{\prime}}}\) intersects the real line, some components of the form \(f^{-1}(\mathcal{L}_{f(x)}V_{I_{c^{\prime}}})\) may be disjoint from the real line. ### Proof of Theorem 4.2 Let \(F\colon W\to W^{\prime}\) be the map from Theorem A.1. This theorem shows that \(K_{X}(f)\cap\partial W\subset\mathbb{R}\). Using Lemma A.4 statements (1) and the first part of (2) follow. In particular, for each \(a^{\prime}<a\) we can arrange it so that \(K_{X}(f)\subset\Omega_{a^{\prime}}\). \(K_{X}(f)\) **has no interior.** To see this, assume by contradiction that \(K_{X}(f)\) contains an open set \(U\). Then \(U\) intersects \(K_{N}\) for some \(N\). Since \(K_{X}(f)\) is forward invariant, it follows that there exists an open set \(U^{\prime}\) which intersects \(I\) and which contains \(K_{X}(f)\). However, this is impossible: consider a periodic point \(p\) in \(U^{\prime}\cap I\) of period \(k\) and construct a periodic 'ray' \(\gamma\) through \(p\), i.e. so that \(f^{k}(\gamma)\supset\gamma\), and so that \(f^{k}(\gamma)\setminus\gamma\) is in the complement of \(\Omega_{a^{\prime}}\). In particular, each point in \(\gamma\setminus\{p\}\) is eventually mapped outside \(\Omega_{a^{\prime}}\). Since \(K_{X}(f)\subset\Omega_{a^{\prime}}\) and since \(K_{X}(f)\) is forward invariant this implies that no point in \(\gamma\setminus\{p\}\) can be contained in \(K_{X}(f)\). 
This contradicts that \(U^{\prime}\) contains \(K_{X}(f)\). \(K_{X}(f)\) **is full.** Suppose by contradiction that some \(\mathbb{C}\setminus K_{X}(f)\) contains a bounded component \(U\). Then the \(U\) is the interior of a topological disc \(D\) bounded by a set of the form \(\overline{\cup_{N\geq 0}\tau_{N}}\) where \(\tau_{N}\) is a sequence of subtrees \(\tau_{N}\) of the finite tree \(K_{N}\) so that \(\tau_{N}\subset\tau_{N+1}\) for all \(N\geq 0\) and so that some endpoints \(x_{N},y_{n}\) of \(\tau_{N}\) both converge to the same point \(x\). Then, using the same argument as in the previous paragraph, some iterate of \(D\) intersects the real line. In other words, the closure \(D^{\prime}\) of some component \(U^{\prime}\) of \(\mathbb{C}\setminus K_{X}(f)\) intersects \(I\) in an arc \(J\). Since \(K_{X}(f)\) is forward invariant, and since \(f\colon I\to I\) has no wandering intervals, it follows that \(U^{\prime}\) must be periodic. Hence \(D^{\prime}\cap I\) contains an attracting periodic point. However, by the choice of the pruning intervals \(J_{i}\), see the discussion in Example 4.1 of Figure 3, the set \(K_{N}\) inside the basin of periodic attractors does not grow with \(N\), and so the above situation cannot arise. \(K_{X}(f)\) **is locally connected.** To show this consider the map \(F\colon W\to W^{\prime}\) and let \(V\) be a component of \(W\) and let \(U\) be a pullback of \(V\) by \(f^{n}\). If \(U\) contains a critical point \(c\in\operatorname{Cr}_{pr}\cup\operatorname{Cr}_{ren}\) then \(f^{n}\) is an iterate of the first return map of \(F\colon W_{I_{c}}\to W^{\prime}_{I_{c}}\). Since \(K_{X}(f)\cap\partial W\subset\mathbb{R}\), it follows that \(U\cap K_{X}(f)\) is connected. In general, \(f^{n}\) is a composition of a diffeomorphic iterate of \(f\), an iterate of the box mapping \(F_{rr}\colon W_{I_{C^{\prime}}}\to W^{\prime}_{I_{C^{\prime}}}\) from Theorem A.3 (possibly the identity), a diffeomorphic iterate and an iterate of \(F\colon W_{I_{c}}\to W^{\prime}_{I_{c}}\) where \(c\in\operatorname{Cr}_{pr}\cup\operatorname{Cr}_{ren}\) (possibly the identity). By Theorems A.2 and A.3, puzzle pieces of \(F_{rr}\colon W_{I_{C^{\prime}}}\to W^{\prime}_{I_{C^{\prime}}}\) and of \(F\colon W_{I_{c}}\to W^{\prime}_{I_{c}}\) shrink in diameter as the depth increases. Therefore, and because of Lemma A.4, it follows that the diameter of \(U\) shrinks to zero as \(n\to\infty\) unless there are critical points at which \(f\) is infinitely renormalizable. In that case one can take a shrinking sequence of renormalization intervals and also obtain that there exist arbitrarily small puzzle pieces around each point. Since \(\partial W\) only intersect \(K_{X}(f)\) in the real line, it follows that each point \(x\in K_{X}(f)\) is either contained in preimages of some component of \(W\) of arbitrarily small diameter, or \(x\) is real and does not accumulate to \(\operatorname{Cr}\). In the latter case, by the (real) Mane theorem, see [dMvS], \(x\) is contained in a hyperbolic set and again \(x\) is contained in arbitrarily small neighbourhoods \(W_{n}(x)\) so that \(W_{n}(x)\cap K_{X}\) is connected. It follows that \(K_{X}\) is locally connected. Hence there are rays landing at each point \(z\in K_{X}(f)\). **At most finitely many rays lands at any point of \(K_{X}(f)\).** Notice that \(K_{X}(f)\) is the closure of a nested sequence of trees \(K_{n+1}\supset K_{n}\). Let \(z\in K_{X}(f)\) and assume that there are two rays \(\gamma_{1},\gamma_{2}\) landing on \(z\). 
Then both components of \(\mathbb{C}\setminus(\gamma_{1}\cup\{z\}\cup\gamma_{2})\) have a non-empty intersection with \(K_{X}\) and therefore with \(K_{n}\) for some \(n\). As \(K_{n}\) is a tree it follows that \(z\in K_{n}\). Moreover, \(K_{n+1}\setminus K_{n}\) does not contain arcs that are connected to endpoints of \(K_{n}\). It follows that if there are an infinite number of distinct rays \(\gamma_{i}\) landing on \(z\), then when \(i\neq j\) each component of \(\mathbb{C}\setminus(\gamma_{i}\cup\{z\}\cup\gamma_{j})\) has a non-empty intersection with \(K_{n}\) (where \(n\) does not depend on \(i,j\)). Since \(K_{n}\) is a finite tree with finite degree, this is impossible. This proves assertion (3). **Each non-real periodic point in \(K_{X}(f)\) is repelling**. Since each non-real point \(z\in K_{X}\) is contained in a sequence of puzzle pieces which lie nested, if \(z\) is periodic then there exist topological discs \(D^{\prime}\supsetneq D\ni z\) with \(D\cap\mathbb{R}=\emptyset\) and \(n\) so that \(f^{n}(z)=z\) and \(f^{n}\) maps \(D\) diffeomorphically to \(D^{\prime}\). Hence by the Schwarz Lemma \(z\) is a hyperbolic periodic point. ## Appendix B Continuity of the pruned Julia set **Definition B.1**.: Let \((U_{n},u_{n})\) and \((U,u)\) be pointed open discs. We say that \((U_{n},u_{n})\to(U,u)\) in the _sense of Caratheodory_ if (i) \(u_{n}\to u\), (ii) for each compact \(K\subset U\), \(K\subset U_{n}\) holds for \(n\) large and (iii) for any open connected set \(N\) containing \(u\), if \(N\subset U_{n}\) for infinitely many \(n\) then \(N\subset U\). Let \(f\in\mathcal{A}_{a}^{\underline{\nu}}\) and let \(f_{n}\in\mathcal{A}_{a}^{\underline{\nu}}\) be so that \(f_{n}\to f\) on \(\overline{\Omega_{a}}\). Moreover, take the intervals \(J_{n,i}\ni f_{n}(c_{i})\) so small that Theorem 4.2 holds and assume that \(J_{n,i}\to J_{i}\) (i.e. the boundary points of \(J_{n,i}\) converge to the boundary points of \(J_{i}\)). Let \(X,X_{n}\) correspond to \(\partial J^{-1}\) and \(\partial J_{n}^{-1}\) where \(J_{n}=f_{n}^{-1}(J)\setminus\mathbb{R}\). Similarly choose \(J_{n,i}^{*}\Subset J_{n,i}\) so that \(J_{n,i}^{*}\ni f_{n}(c_{i})\) and so that the boundary points of \(J_{n,i}^{*}\) are either (pre)periodic or contained in the basin of a periodic attractor. Pick a point \(x\in\mathbb{C}\setminus K_{X}(f)\) and \(x_{n}\in\mathbb{C}\setminus K_{X_{n}}(f_{n})\). As before let \(\psi_{X}\colon\overline{\mathbb{C}}\setminus\overline{\mathbb{D}}\to\overline{ \mathbb{C}}\setminus K_{X}\) be the uniformising map that fixes \(\infty\) and with real derivative at \(\infty\), and write \(\phi_{X}=\psi_{X}^{-1}\colon\overline{\mathbb{C}}\setminus K_{X}\to\overline{ \mathbb{C}}\setminus\overline{\mathbb{D}}\). **Theorem B.1**.: _We have the following:_ 1. \(K_{X_{n}}(f_{n})\to K_{X}(f_{n})\) _in the Hausdorff topology;_ 2. \((\mathbb{C}\setminus K_{X_{n}}(f_{n}),x_{n})\) _converges in the Caratheodory topology to_ \((\mathbb{C}\setminus K_{X}(f),x)\)_;_ 3. \(\phi_{X_{n}}\to\phi_{X}\)_;_ 4. \(\hat{F}_{X_{n}}\to\hat{F}_{X}\) _on a neighbourhood of_ \(\partial\mathbb{D}\setminus\hat{J}^{*}\) _where_ \(\hat{J}\) _is the set corresponding to_ \(J^{*}\)_._ Proof.: Statement (1) follows from the proof in the previous Appendix. Statement (2) follows from the definition of Caratheodory convergence. Statement (3) follows from Caratheodory's kernel theorem, see [Pom]. Statement (4) follows from (3). 
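As a quick sanity check on Definition B.1 — the following toy example is ours and is not taken from the text — take the decreasing family of round discs \(U_{n}=D(0,1+\tfrac{1}{n})\) with marked point \(u_{n}=0\). Then \((U_{n},0)\to(D(0,1),0)\) in the sense of Caratheodory: (i) holds trivially; (ii) any compact \(K\subset D(0,1)\) satisfies \(K\subset U_{n}\) for every \(n\); and for (iii), if an open connected set \(N\ni 0\) satisfies \(N\subset U_{n_{k}}\) for infinitely many \(n_{k}\), then \[N\subset\bigcap_{k}U_{n_{k}}=\overline{D(0,1)}\quad\Longrightarrow\quad N\subset D(0,1),\] since an open subset of the closed unit disc cannot contain a point of the unit circle.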
## Appendix C A topological and analytic structure on the space of real analytic functions Let \(\mathcal{A}\) be the space of real analytic functions on \(I=[-1,1]\). Following [Ly1, Appendix 2], we will define in this section the real analytic topology and an analytic structure on \(\mathcal{A}\). This space can be viewed as a space of germs in the following way: **Definition C.1** (\(\mathcal{A}\) as a space of germs).: Let \(\mathcal{U}\) be a collection of open sets in \(\mathbb{C}\) containing \([-1,1]\) so that for each \(a>0\) there exists an open set \(U\in\mathcal{U}\) so that \(I\subset U\subset\Omega_{a}:=\{z\in\mathbb{C}:\operatorname{dist}(z,I)<a\}\). For \(U\in\mathcal{U}\), let \(\mathcal{A}_{U}\) be the Banach space of holomorphic maps on \(U\) which extend continuously to \(\overline{U}\). If \(U\subset V\) then define \(i_{U,V}\colon\mathcal{A}_{V}\to\mathcal{A}_{U}\) as the restriction function (for \(f\in\mathcal{A}_{V}\) let \(i_{U,V}f=f|U\)). We say that \(f_{1}\in\mathcal{A}_{U}\) and \(f_{2}\in\mathcal{A}_{V}\) are _equivalent_ if there exists \(W\in\mathcal{U}\) with \(W\subset U\cap V\) so that \(i_{W,U}f_{1}=i_{W,V}f_{2}\). Equivalence classes are called _germs_ and the space of such germs is called the inductive limit of the Banach spaces \(\mathcal{A}_{U}\). We say that \(f\in\mathcal{A}\) is in \(\mathcal{A}_{U}\) if some representative of \(f\) is in \(\mathcal{A}_{U}\). _Remark C.1_.: \(i_{U,V}\colon\mathcal{A}_{V}\to\mathcal{A}_{U}\) is continuous (when \(U\subset V\)). **Definition C.2** (The inverse limit topology on \(\mathcal{A}\)).: In the \(C^{\omega}\)_topology_ (also called the real-analytic topology, or the inverse limit topology) on \(\mathcal{A}\) a set \(\mathcal{O}\) is defined to be open if and only if \(\mathcal{O}\cap\mathcal{A}_{U}\) is open for any \(U\in\mathcal{U}\) (in the Banach space topology on \(\mathcal{A}_{U}\)). Because of the assumption on \(\mathcal{U}\) and since \(i_{U,V}\) is continuous, the \(C^{\omega}\) topology on \(\mathcal{A}\) does _not depend_ on the choice of \(\mathcal{U}\). It is often convenient to consider the collection \(\mathcal{U}\) of open sets of the form \(\Omega_{a}\) and to define \(\mathcal{A}_{a}:=\mathcal{A}_{\Omega_{a}}\). For later use, let \(\mathcal{A}_{U}(f,R)\) be the \(R\)-ball around \(f\) in \(\mathcal{A}_{U}\), i.e., the set of functions \(g\in\mathcal{A}_{U}\) so that \(||g-f||_{U}:=\sup_{U}|g(z)-f(z)|\leq R\). The above topology is Hausdorff. The next lemma is part of [Ly1, Lemma 11.4] (and is a special case of general results for the inverse limit topology), and gives some desirable properties of this topology: **Lemma C.1**.: __ * _For_ \(f_{n},f\in\mathcal{A}\) _we have_ \(f_{n}\to f\) _in the_ \(C^{\omega}\) _topology if and only if there exists_ \(U\in\mathcal{U}\) _so that_ \(f_{n},f\in\mathcal{A}_{U}\) _and_ \(f_{n}\to f\) _on_ \(\overline{U}\)_._ * _If_ \(\mathcal{K}\subset\mathcal{A}\) _is compact then there exists_ \(U\in\mathcal{U}\) _so that_ \(\mathcal{K}\subset\mathcal{A}_{U}\)_._ Proof.: Let us prove this for the collection \(\mathcal{U}\) of open sets \(\Omega_{a}\), \(a>0\). Assume \(f_{n}\to f\) and assume (by contradiction) there exist \(a_{n}\downarrow 0\) so that \(f_{n}\in\mathcal{A}_{a_{n}}\setminus\mathcal{A}_{a_{n-1}}\). We may assume that \(f_{n}\neq f\) for all \(n\). Then \(\mathcal{O}=\mathcal{A}\setminus\{f_{1},f_{2},\dots\}\) is open neighbourhood of \(f\), which gives a contradiction. 
So let us prove the 2nd assertion: if \(\mathcal{K}\) is compact, each infinite sequence \(f_{n}\in\mathcal{K}\) must have a cluster point. However, if the claimed assertion does not hold, then there exists \(f_{n}\) and \(a_{n}\downarrow 0\) so that \(f_{n}\in\mathcal{A}\setminus\mathcal{A}_{a_{n}}\). By the first part of the lemma such a sequence can't have a cluster point. On the other hand, **Lemma C.2**.: * _Let_ \(U\subsetneq V\)_. Then_ \(i_{U,V}\mathcal{A}_{V}\) _is not an open subset of_ \(\mathcal{A}_{U}\)_._ * _For any_ \(f\in\mathcal{A}_{V}\)_, any_ \(U\subsetneq V\) _and any open set_ \(\mathcal{O}\ni f\) _in the real analytic topology, there exists_ \(g\in\mathcal{O}\setminus\mathcal{A}_{U}\)_._ Proof.: If \(f\in\mathcal{A}_{V}\) then \(\tilde{f}=i_{U,V}f\in\mathcal{A}_{U}\). Each open neighbourhood of \(\tilde{f}\) in \(\mathcal{A}_{U}\) contains maps which are not analytic on \(V\). Hence \(i_{U,V}\mathcal{A}_{V}\) is not open in \(\mathcal{A}_{U}\) (and so not open in \(\mathcal{A}\)). The 2nd statement follows from this: since \(f\in\mathcal{A}_{U}\) implies \(f\in\mathcal{A}_{W}\) for any \(W\subsetneq U\subsetneq V\), the previous assertion implies that there exists a map \(g\) in \(\mathcal{O}\) which is not contained in \(\mathcal{A}_{U}\). Obviously when \(U\subsetneq V\) are open subsets, then the following properties hold, cf. Properties C1-C2, P1-P3 in [Ly1, p412, p416 and p342]: C1. The image of \(i_{U,V}\mathcal{A}_{V}\) is dense in \(\mathcal{A}_{U}\). C2. the map \(i_{U,V}\) is compact, i.e. \(i_{U,V}\mathcal{A}_{V}(f,R)\) pre-compact in \(\mathcal{A}_{U}\) for any \(R>0\). The following result is implicitly contained in more general results on inverse limit topology, see [Eng]. **Lemma C.3**.: \(\mathcal{A}\) _is not locally compact, not metrizable and not Baire. However it is regular, paracompact, Lindelof and admits a partition of unity._ We recall that a topological space is called _locally compact_ if each point \(x\) has a compact neighbourhood, i.e. there exists an open set \(U\) and a compact set \(K\) so that \(x\in U\subset K\). It is _Baire_ if the countable intersection of open and dense sets is dense. We say that a topological space is _regular_ (or \(T_{3}\)) if any point and closed set can be separated by open sets. It is _paracompact_ if each cover contains a subcover which is locally finite. It is _Lindelof_ if each cover contains a countable subcover. Proof.: According to the previous lemma, for each open set \(\mathcal{O}\subset\mathcal{A}\) containing \(f\) and each \(a>0\) there exists \(f_{n}\in\mathcal{O}\setminus\mathcal{A}_{a}\). By Lemma C.1 this implies that no set \(K\supset U\) can be compact. To see that \(\mathcal{A}\) is not metrizable, assume that \(d\) is a metric on \(\mathcal{A}\). Take \(f\in\mathcal{A}_{a}\) and a sequence of open balls \(\mathcal{O}_{n}\ni f\) with diameter going to zero. Then, by the previous lemma, \(\mathcal{O}_{n}\) contains some \(f_{n}\in\mathcal{A}\setminus\mathcal{A}_{1/n}\). So \(d(f_{n},f)\to 0\) but there exists no \(\epsilon>0\) so that every \(f_{n}\) is contained in \(\mathcal{A}_{\epsilon}\), contradicting Lemma C.1. To see that \(\mathcal{A}\) is not Baire, choose \(a_{n}\downarrow 0\) and \(f_{n}\in\mathcal{A}_{a_{n}}\setminus\mathcal{A}_{a_{n-1}}\) with \(f_{n}\neq 0\). Then the set \(\mathcal{O}_{n}=\{f\in\mathcal{A};f\neq f_{n}\}\) is open and dense in the real analytic topology. 
On the other hand, \(\cap\mathcal{O}_{n}\) is not dense, because \(\mathcal{O}=\mathcal{A}\setminus\{f_{1},f_{2},\dots\}\) is an open set (containing \(f\equiv 0\)) which is disjoint from each \(\mathcal{O}_{n}\). To see that \(\mathcal{A}\) is regular, take \(f\in\mathcal{A}_{a_{0}}\) and let \(K_{1}=\{f\}\) and some closed set \(K_{2}\) not containing \(f\). Since \(f\notin K_{2}\), \(K_{2}\) is contained in a closed set of the form \(K_{2}^{\prime}=\cup_{0<a<a_{0}}\{g\in\mathcal{A}_{a};\sup_{z\in\Omega_{a}}|g(z) -f(x)|\geq t(a)\}\) where \(t(a)>0\). Then \(U_{1}=\cup_{0<a<a_{0}}\{g\in\mathcal{A}_{a};\sup|g(z)-f(z)|<(1/4)t(a)\}\) and \(U_{2}=\cup_{0<a<a_{0}}\{g\in\mathcal{A}_{a};\sup|g(z)-f(z)|\geq(3/4)t(a)\}\) are open disjoint sets containing \(K_{1}\) and \(K_{2}\), proving that \(\mathcal{A}\) is regular. Let us next prove that \(\mathcal{A}\) is the countable union of compact subsets. Indeed, for each \(f\in\mathcal{A}\) there exists \(n,k\in\mathbb{N}\) so that \(f\in\mathcal{A}_{1/n}(0,k)\) and so in particular \(f\in i_{\Omega_{1/2n},\Omega_{1/n}}\mathcal{A}_{1/n}(0,k)\). It follows that \(\mathcal{A}=\bigcup_{k,n}i_{\Omega_{1/2n},\Omega_{1/n}}\mathcal{A}_{1/n}(0,k)\). By Property C1 the set \(i_{\Omega_{1/2n},\Omega_{1/n}}\mathcal{A}_{1/n}(0,k)\) is a precompact subset of \(\mathcal{A}_{1/2n}\). So the closure \(C_{n,k}\) of \(i_{\Omega_{1/2n},\Omega_{1/n}}\mathcal{A}_{1/n}(0,k)\) in \(\mathcal{A}_{1/2n}\) is a compact subset of \(\mathcal{A}_{1/2n}\). Let \(\mathcal{W}\) be a union of open subsets of \(\mathcal{A}\) covering \(C_{n,k}\). Then by definition of the topology on \(\mathcal{A}\), the sets \(W\cap\mathcal{A}_{1/2n}\), \(W\in\mathcal{W}\) are open in \(\mathcal{A}_{1/2n}\) and cover \(C_{n,k}\). It follows that a finite collection in \(\mathcal{W}\) already covers \(C_{n,k}\subset\mathcal{A}\), and so \(C_{n,k}\) is also compact as a subset of \(\mathcal{A}\). It follows that \(\mathcal{A}\) is the countable union of compact subsets. Because of this and since \(\mathcal{A}\) is regular, it is a Lindelof space, [Eng, page 192]. Hence it is paracompact, see [Eng, Theorem 5.1.1 on page 300]. Therefore any open cover of \(\mathcal{A}\) has a partition of unity subordinate to it, see [Eng, Theorem 5.1.9 on page 301]. **Lemma C.4**.: _The \(C^{\omega}\) topology on \(\mathcal{A}\) is finer than the \(C^{\infty}\) topology on \(\mathcal{A}\)._ Proof.: It is sufficient to show that there exists an open set \(\mathcal{O}\) in the real analytic topology which is contained in the set \(\{g;\sup_{x\in I}|D^{k}g(x)|<\epsilon\}\). So choose \(\epsilon(a)>0\) so small that if \(g\in\mathcal{A}_{a}\) is so that \(|g(x)|<\epsilon(a)\) on \(\overline{\Omega}_{a}\) then \(\{g;\sup_{x\in I}|D^{k}g(x)|<\epsilon\}\). It follows that the open set \(\mathcal{O}=\cup_{a}\{g\in\mathcal{A}_{a};\sup_{x\in\Omega_{a}}|g(x)|<\epsilon (a)\}\) in the real analytic topology has the desired properties, **Definition C.3**.: \(\mathcal{M}\subset\mathcal{A}\) is a _real analytic manifold modelled on a family of Banach spaces_, or simply _real analytic manifold_, if \(\mathcal{M}\) is the union of the image of a family of injections \(j_{V}\colon\mathcal{O}_{U}\to\mathcal{M}\) where \(U\in\mathcal{U}\) and where \(\mathcal{O}_{U}\) is an open subset of the Banach space \(\mathcal{A}_{U}\). The set \(j_{V}(\mathcal{O}_{U})\) is called a _Banach slice_ of \(\mathcal{M}\). In this paper we will choose in this definition for \(\mathcal{U}\) one of the following collections of open sets: 1. 
all sets of the form \(\Omega_{a}\), or 2. domains \(U\) of pruned polynomial-like extensions of some real analytic \(f\colon I\to\mathbb{R}\). **Definition C.4**.: A set \(\mathcal{X}\subset\mathcal{A}\) is called _an immersed submanifold_ if there exists a real analytic manifold \(\mathcal{M}\) and a map \(i\colon\mathcal{M}\to\mathcal{A}\) so that \(Di(m)\) is a linear homeomorphism onto its range, and if \(\mathcal{X}=i(\mathcal{M})\). We say that \(\mathcal{X}\) is a _embedded manifold_ if \(i\colon\mathcal{M}\to\mathcal{X}\) is a homeomorphism with the topology on \(\mathcal{X}\) coming from the one on \(\mathcal{A}\). _Remark C.2_.: The inclusion map \(i\colon\mathcal{M}\to\mathcal{A}\) has the properties P1-P3 from [Ly1, Appendix 2]. This implies that \(\mathcal{X}\) can be given an intrinsic manifold structure and that the tangent space and codimension on \(\mathcal{X}\) is well-defined, see [Ly1]. _Remark C.3_.: Lemma C.1 implies that \(f_{\lambda}\), \(\lambda\in\mathbb{R}^{k}\) is a family of maps in \(\mathcal{A}\) which depending continuously on \(\lambda\) which is contained an immersed submanifold \(\mathcal{X}\) then for \(\lambda\approx\lambda_{0}\) we have that \(f_{\lambda}\) is contained in a real analytic Banach manifold. ## Appendix D Summary of notation and definitions In general, sets in \(\mathbb{C}\) are denoted by symbols such as \(U,U^{\prime},V,V,O\) whereas functions spaces are denoted by symbols such as \(\mathcal{A},\mathcal{B},\mathcal{T},\mathcal{H},\mathcal{U}\) etc. \begin{tabular}{|l|l|} \hline \(\mathcal{A}^{\underline{\nu}}\), \(\mathcal{A}^{\underline{\nu}}_{a}\), \(\Omega_{a}\) & p. 3 \\ \hline topological conjugacy class \(\mathcal{T}_{f}\) & p. 3 \\ \hline pruned polynomial-like & \\ mapping \(F\colon U\to U^{\prime}\), \(\Gamma_{f}\) & p. 7 \\ \hline pruned filled Julia set \(K_{f}\) & p. 7 \\ \hline notation \(f\colon I\to I\), \(F\colon U\to U^{\prime}\) & \\ vs. \(\hat{f}_{X}\colon\partial\mathbb{D}\to\partial\mathbb{D}\), \(\hat{F}_{X}\colon E\to E^{\prime}\) & p. 9 \\ \hline \(X_{0},X\) and pruning intervals \(J_{i}\) & p. 10 \\ \hline pruned Julia set \(K_{X}(f)\) & p. 10 \\ \hline external map \(\hat{f}_{X}\colon\mathbb{D}\to\mathbb{D}\), \(\hat{X}\) & p. 15 \\ \hline sign \(\epsilon(f)\) & p. 16 \\ \hline \(\hat{X}^{*}\), \(Y\), \(\hat{Y}\) and \(J_{i}^{*}\) & p. 19 \\ \hline \(\Lambda_{N}\), \(\Lambda^{\prime}_{\infty}\) & p. 20,20 \\ \hline expanding Markov structure & \\ \(\hat{F}_{X}\colon\hat{E}\to\hat{E}^{\prime}\) & p. 22 \\ \hline rays \& roofs/equipotential & p. 22 \\ \hline admissible pruning set \(Q\) & p. 24 \\ \hline pruning data and equivalence of & \\ pruned polynomial-like mappings & p. 24 \\ \hline \end{tabular} \begin{tabular}{|l|l|} \hline attracting structure \(F\colon B\to B^{\prime}\) & p. 25 \\ \hline global pruned polynomial-like maps & \\ with attractors & p. 26 \\ \hline \(\mathcal{B}^{\underline{\nu}}_{a}\) & p. 29 \\ \hline simple parabolic point & p. 30 \\ \hline hybrid conjugacy class \(\mathcal{H}^{\underline{k}}_{f}\) and \(\mathcal{H}_{F}\) & p. 34,34 \\ \hline Real analytic topology & p. 38 \\ \hline Caratheodory topology & p. 39 \\ \hline manifold structure on \(\mathcal{A}^{\underline{\nu}}\) & p. 38,71 \\ \hline hyperbolic, semi-hyperbolic map & p. 40 \\ \hline transversal vector field to & \\ topological conjugacy class & p. 40 \\ of a hyperbolic map & \\ \hline quasiconformal vector field & p. 51 \\ \hline horizontal vector field & p. 52 \\ \hline vertical and transversal vector field & p. 
53 \\ \hline quasiconformal motion & p. 61 \\ \hline Real-analytic manifolds & p. 38 \\ \hline \end{tabular}
2307.08757
First Results From Nanoindentation of Vapor Diffused Nb3Sn Films on Nb
The mechanical vulnerability of the Nb3Sn-coated cavities is identified as one of the significant technical hurdles toward deploying them in practical accelerator applications in the not-so-distant future. It is crucial to characterize the material's mechanical properties in ways to address such vulnerability. Nanoindentation is a widely used technique for measuring the mechanical properties of thin films that involves indenting the film with a small diamond tip and measuring the force-displacement response to calculate the film's elastic modulus, hardness, and other mechanical properties. The nanoindentation analysis was performed on multiple vapor-diffused Nb3Sn samples coated at Jefferson Lab and Fermilab coating facilities for the first time. This contribution will discuss the first results obtained from the nanoindentation of Nb3Sn-coated Nb samples prepared via the Sn vapor diffusion technique.
U. Pudasaini, G. V. Eremeev, S. Cheban
2023-07-17T18:01:47Z
http://arxiv.org/abs/2307.08757v1
# First Results From Nanoindentation of Vapor Diffused Nb\({}_{3}\)Sn Films on Nb ###### Abstract The mechanical vulnerability of the Nb\({}_{3}\)Sn-coated cavities is identified as one of the significant technical hurdles toward deploying them in practical accelerator applications in the not-so-distant future. It is crucial to characterize the material's mechanical properties in ways to address such vulnerability. Nanoindentation is a widely used technique for measuring the mechanical properties of thin films that involves indenting the film with a small diamond tip and measuring the force-displacement response to calculate the film's elastic modulus, hardness, and other mechanical properties. The nanoindentation analysis was performed on multiple vapor-diffused Nb3Sn samples coated at Jefferson Lab and Fermilab coating facilities for the first time. This contribution will discuss the first results obtained from the nanoindentation of Nb\({}_{3}\)Sn-coated Nb samples prepared via the Sn vapor diffusion technique. ## 1 Introduction Nb\({}_{3}\)Sn, with a superconducting transition temperature of \(\sim\)18.2 K and a superheating field of \(\sim\)400 mT, is a leading alternative material to replace niobium in SRF accelerator cavities [1]. Accordingly, it promises a higher accelerating gradient, quality factor, and operation temperature than traditional bulk Nb. Operating Nb\({}_{3}\)Sn SRF cavities at 4.3 K can deliver similar performance to Nb cavities at 2 K, resulting in enormous cost savings for SRF accelerators. That means these cavities can be operated with atmospheric liquid helium or cryocoolers, simplifying and reducing the cost of cryogenic facilities. The successful deployment of Nb\({}_{3}\)Sn technology will be transformational, significantly benefiting numerous SRF accelerators and enabling new classes of SRF accelerator applications. Since Nb\({}_{3}\)Sn is a very brittle material with a significantly lower thermal conductivity than Nb, it should be grown as a thin film for application. Several alternate coating techniques are being pursued at multiple labs to grow and optimize Nb\({}_{3}\)Sn thin film on metallic structures. Still, the Sn vapor diffusion process is yet the more mature technique for conformality and the only one thus far that has produced rf results for Nb\({}_{3}\)Sn-coated Nb cavities. The state-of-the-art single-cell Nb\({}_{3}\)Sn cavity frequently attains accelerating gradients of \(\geq\) 15MV/m with a quality factor \(\geq\) 10\({}^{10}\)[2-5]. Several Nb\({}_{3}\)Sn-coated multi-cell cavities have reached \(\sim\)15 MV with a quality factor of \(\sim\)10\({}^{10}\)[4, 6]. A significant improvement has been made in the performance of Nb\({}_{3}\)Sn-coated cavities over the last decade; these cavities are already suitable for some accelerator applications. Several projects in different laboratories are considering Nb\({}_{3}\)Sn-coated cavities for small accelerator applications. The construction of a quarter module using two CEBAF-style C75 cavities is in the final stage at Jefferson Lab. The quarter cryomodule will be installed in the upgraded injector test facility (UITF) to accelerate an electron beam up to 10 MeV [7]. If successful, the facility can use a cryomodule with Nb\({}_{3}\)Sn-coated cavities to run low-energy nuclear physics experiments at 4 K. Nb\({}_{3}\)Sn cavities have the potential to enable further and significantly simplify widespread use of SRF technology in light-source storage rings, FELs, and other compact accelerators. 
There have been successful tests of Nb\({}_{3}\)Sn cavities operating in conduction-cooled setups as demonstrations suitable for industrial accelerator applications at Fermilab (650 MHz single cell cavity), JLab (1.5 GHz and 952 MHz single cell), and Cornell (2.6 GHz) [8-10]. Detailed plans have been published for a medium-energy, high average-power superconducting e-beam accelerator for environmental applications at Fermilab [11] and a CW, low-energy, high-power superconducting linac for environmental applications by researchers at JLab [12]. Because of the material's brittleness, the mechanical vulnerability is identified as a significant technical challenge in deploying the Nb\({}_{3}\)Sn-coated cavities in practical accelerators. The performance degradation of a Nb\({}_{3}\)Sn-coated cavity resulting from the tuning of \(\sim\)300 KHz at room temperatures has been demonstrated [13]. To address this challenge, it is essential to understand the mechanical properties and behavior of vapor-diffused Nb\({}_{3}\)Sn thin film. So far, per the authors' knowledge, no such studies have been reported before; we used the nanoindentation technique to obtain fundamental mechanical properties such as elastic modulus, hardness, and yield stress. In this contribution, the first results from nanoindentation of vapor-diffused Nb\({}_{3}\)Sn coatings on differently prepared Nb substrates coated in Fermilab and Jefferson Lab coating facilities. ## 2 Experimental ### Sample Preparation The substrate samples used here were 30 mm \(\times\) 30 mm niobium coupons produced by electro-discharge machining (EDM) cutting 3 mm thick, RRR\(>\)300 sheet material of the type used for cavity fabrication. These samples received 100-150 um bulk material removal using buffer chemical polishing (BCP) or electropolishing (EP) to remove the damaged layers from the surface. Each sample was treated at 800 \({}^{\circ}\)C for 2-3 hours. Samples then received the final removal of 25 \(\upmu\)m via EP or BCP. One sample was mechanically polished for the smoothest surface that followed 15 \(\upmu\)m EP removal. Nb\({}_{3}\)Sn thin films were then grown on these samples following a typical coating procedure at Jefferson Lab or Fermilab following typical coating procedures. In this study, we used five samples: * MC01 (BCP'ed substrate, coated in FNAL) * MC07 (Mechanical polishing (MP) \(>\) EP'ed substrate, coated in JLab) * GE70 (EP'ed substrate, coated in FNAL) * GE71 (EP'ed substrate, coated in JLab) ### Nanoindentation Nanoindentation is a widely used technique for measuring the mechanical properties of thin films [14-16]. This technique typically involves indenting the film with a small diamond tip and continuously recording the displacement and load. Nanoindentation equipment allows precise load or displacement control during measurement with small applied forces in nN scales. Fig. 1 illustrates a typical nanoindentation measurement that consists of a three-step process; loading, holding, and unloading. During loading, the load increases with indentation depth consisting of deformation and plastic deformation. In the unloading stage, elastic deformation can be recovered during the unloading that can be used to obtain the film's elastic modulus, hardness, and other mechanical properties. Nanoindentation measurements were performed on each sample using a Nano Test Vantage instrument (Micro Materials, Wrexham, UK) equipped with a Berkovich diamond indenter at MechAction Lab. 
The instrument was calibrated before conducting measurements on the Nb\({}_{3}\)Sn/Nb samples to ensure the lowest noise floor and thermal drift rate. The system and the indenter tip were also validated using fused silica and tungsten reference samples per the ISO 14577 standard. The applied maximum load for each indentation was set to 10 mN to keep the maximum indent depth below 1/10\({}^{\text{th}}\) of the Nb\({}_{3}\)Sn coating thickness and avoid severe substrate effects. A total of 30-50 indentations were performed on each sample, with an indent spacing of 10 \(\upmu\)m between adjacent indentations. The loading, holding, and unloading times were set to 5, 2, and 5 s, respectively. The testing parameters and methods followed the ASTM E2546 and ISO 14577 standards to ensure the accuracy and reliability of the measurements. Because of the surface roughness of the Nb\({}_{3}\)Sn surface (see Fig. 2), we only report the 40-60% of the total indentations that gave consistent results. In the first batch of testing, MC01 and GE70, both coated at the FNAL facility, were tested with \(\sim\)30 indentations in each sample, out of which \(\sim\)12 were used for the analysis. The other two samples were indented in \(>\)50 spots for better statistics, where \(\sim\)25 indentations were considered for analysis. During the P-h curve measurement, the indenter is driven into the material, producing an impression with a projected area (A\({}_{\text{p}}\)). The indentation hardness, which measures resistance to plastic deformation, can be estimated as H\({}_{\text{IT}}\) = P\({}_{\text{max}}\)/A\({}_{\text{p}}\), where P\({}_{\text{max}}\) is the maximal load. The Vickers hardness is defined as H\({}_{\text{v}}\) = 94.5 \(\times\) H\({}_{\text{IT}}\), where H\({}_{\text{IT}}\) and H\({}_{\text{v}}\) are in GPa and Vickers, respectively. The estimation of Young's modulus, E, is obtained from the Hertzian theory of contact mechanics [17], which uses the slope of the unloading curve at the maximum displacement point h\({}_{\text{max}}\) (S), A\({}_{\text{p}}\), the modulus and Poisson ratio of the indenter, and the Poisson ratio of the sample. Our calculation is based on the assumption of a Poisson's ratio (\(\nu\)) of 0.4 for typical Nb\({}_{3}\)Sn material. Please see the reference for more details on estimating the modulus value. It should be noted that Young's modulus may vary slightly depending on the assumed Poisson ratio value. Like most metallic materials, yield stress (\(\sigma_{y}\)) values are estimated as 1/3\({}^{\text{rd}}\) of the indentation hardness H\({}_{\text{IT}}\). Figure 1: Schematic of the load-displacement curve during a typical nanoindentation measurement. Figure 2: Topography of vapor diffused Nb\({}_{3}\)Sn from the sample MC07. Note that the roughness is about 1 \(\upmu\)m. ## 4 Results Ensembles of P-h curves obtained from the indentations of each sample are shown in Fig. 3. Almost all the curves of each sample show "pop-in" events on the loading side. Only occasionally were "pop-outs" or "elbows" observed during the unloading. The mechanical properties estimated for each sample are tabulated in Table 1. The averages estimated over all the samples for H\({}_{\text{IT}}\), H\({}_{\text{v}}\), E, and \(\sigma_{y}\) are 11.98\(\pm\)1.98 GPa, 1135.63\(\pm\)183.83 Vickers, 169\(\pm\)22.49 GPa, and 4.00\(\pm\)0.65 GPa, respectively, where the errors are the standard deviations of the average values estimated for each sample in Table 1. 
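For reference, the reduction of a single P-h curve to the quantities just described and tabulated in Table 1 below (H\({}_{\text{IT}}\) = P\({}_{\text{max}}\)/A\({}_{\text{p}}\), H\({}_{\text{v}}\) = 94.5 H\({}_{\text{IT}}\), \(\sigma_{y}\approx\) H\({}_{\text{IT}}\)/3, and a modulus estimate from the unloading stiffness) can be collected into a short script. The sketch below is a generic Oliver-Pharr-style reduction written for illustration only: the Berkovich area function, the \(\epsilon\) and \(\beta\) factors, and the diamond-indenter constants are standard textbook assumptions rather than values taken from this work, and the numbers in the usage example are made up to be of the right order of magnitude.

```python
import numpy as np

BETA, EPS = 1.05, 0.75            # Berkovich geometry / Oliver-Pharr factors (assumed)
E_IND, NU_IND = 1141.0, 0.07      # diamond indenter modulus [GPa] and Poisson ratio (assumed)

def berkovich_area(h_c_nm):
    """Ideal Berkovich projected contact area A_p [nm^2] for a contact depth h_c [nm]."""
    return 24.5 * h_c_nm ** 2

def analyze_ph_curve(p_max_mN, h_max_nm, s_mN_per_nm, nu_sample=0.4):
    """Reduce one P-h curve to (H_IT [GPa], H_v [Vickers], E [GPa], sigma_y [GPa]).

    p_max_mN    -- maximum load P_max
    h_max_nm    -- displacement at P_max
    s_mN_per_nm -- unloading stiffness S = dP/dh evaluated at h_max
    nu_sample   -- sample Poisson ratio (0.4 assumed for Nb3Sn, as in the text)
    """
    h_c = h_max_nm - EPS * p_max_mN / s_mN_per_nm          # contact depth
    a_p = berkovich_area(h_c)
    h_it = p_max_mN / a_p * 1.0e6                          # mN/nm^2 -> GPa
    h_v = 94.5 * h_it                                      # GPa -> Vickers
    e_red = np.sqrt(np.pi) / (2.0 * BETA) * s_mN_per_nm / np.sqrt(a_p) * 1.0e6  # reduced modulus [GPa]
    e_sample = (1.0 - nu_sample ** 2) / (1.0 / e_red - (1.0 - NU_IND ** 2) / E_IND)
    sigma_y = h_it / 3.0
    return h_it, h_v, e_sample, sigma_y

# Usage with illustrative (made-up) inputs: 10 mN peak load, ~230 nm maximum depth.
print(analyze_ph_curve(p_max_mN=10.0, h_max_nm=230.0, s_mN_per_nm=0.20))
```

With these illustrative inputs the script returns roughly 11 GPa, 1040 Vickers, 176 GPa, and 3.7 GPa, i.e. values of the same order as those reported for the coated samples.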
An SRF cavity grade Nb sample was also characterized using the same measurement instrument to validate the technique; see the P-h indentation curves in Fig. 4. Unlike Nb\({}_{3}\)Sn, the P-h curves for Nb are more consistent and show no "pop-in" event, as expected for the soft material. The estimated values for each mechanical parameter are also tabulated in Table 1. \begin{table} \begin{tabular}{|p{56.9pt}|p{56.9pt}|p{56.9pt}|p{56.9pt}|p{56.9pt}|p{56.9pt}|p{56.9pt}|} \hline Sample & Substrate Preparation & Indentation Hardness H\({}_{\text{IT}}\) (GPa) & Vickers Hardness H\({}_{\text{v}}\) (Vickers) & Young’s Modulus E (GPa) & Yield Stress \(\sigma_{y}\) (GPa) & Coating Facility \\ \hline MC01 & BCP & 10.36 \(\pm\)1.65 & 979.10\(\pm\)155.80 & 150.06\(\pm\)14.22 & 3.45\(\pm\)0.55 & Fermilab \\ \hline MC07 & MP \(>\) EP & 10.50 \(\pm\)2.28 & 991.9 \(\pm\) 215.3 & 164.99\(\pm\)25.71 & 3.50\(\pm\)0.76 & JLab \\ \hline GE070 & EP & 14.40\(\pm\)3.29 & 1360.4 \(\pm\) 310.9 & 161.2 \(\pm\) 27.70 & 4.80\(\pm\)1.10 & Fermilab \\ \hline GE071 & EP & 12.82 \(\pm\)4.55 & 1211.1\(\pm\) 430.0 & 201.92\(\pm\)56.91 & 4.27\(\pm\)1.52 & JLab \\ \hline C-29 (Nb) & BCP & 1.2 \(\pm\)0.09 & 114.9\(\pm\)8.1 & 116.02\(\pm\)7.35 & 0.41\(\pm\)0.03 & - \\ \hline \end{tabular} \end{table} Table 1: Mechanical properties of vapor diffused Nb\({}_{3}\)Sn on Nb. Figure 4: Load-displacement (P-h) curves obtained from nanoindentation of a Nb sample. Figure 3: Load-displacement (P-h) curves obtained from nanoindentation of each Nb\({}_{3}\)Sn-coated sample. Note the "pop-in" events in each sample, characterized by a distinct drop in the indentation load and an associated discontinuity in the depth of the indenter. ## 5 Discussion The nanoindentation technique differs from the usual tensile tests used to analyze the mechanical characteristics of SRF cavity materials. The measurement was done on a Nb sample to validate the technique. The obtained values for the hardness (1.2\(\pm\)0.09 GPa) and Young's modulus (116.02\(\pm\)7.35 GPa) are within the ranges typically found in the literature, 0.87-1.3 GPa and 105-124 GPa, respectively [18]. The yield strength ranges from as low as 35-70 MPa for well-annealed Nb to some 100s of MPa for heavily deformed samples [19]. Since the measured Nb sample was not annealed and was not subjected to bulk removal, the indentation was performed on the deformed/damaged surface layer, likely resulting in a higher value of yield strength. The observed data show the distinction between soft Nb and hard Nb\({}_{3}\)Sn. The 'pop-in' events observed in Nb\({}_{3}\)Sn are most likely caused by the generation of micro-cracks during loading. Similar 'pop-ins' have been observed experimentally and linked to the fracture of the brittle film in several studies. Note that we have not observed multiple 'pop-ins' in our experiments but only single 'pop-ins', as shown in Figure 5. A comparison of the number of 'pop-in' events relative to different loading forces for two samples coated in identical conditions is shown in a histogram in Figure 6, and does not indicate a common correlation between the loading and the pop-ins in different samples. Note that the measured hardness values from MC01 and MC07, coated in the two different facilities, are very similar to each other; the same holds for GE070 and GE071. Since each pair of these samples was fabricated from a different batch of materials, more studies are required to see whether the material batch has any correlation with the resulting mechanical parameters. 
## 6 Summary and Outlook Vapor-diffused Nb\({}_{3}\)Sn thin films coated on Nb at the JLab and Fermilab coating facilities were examined with the nanoindentation technique, and preliminary data were presented. The 'pop-in' events show the hard and brittle nature of the material, as these events likely resulted from the initiation and propagation of micro-cracks. Despite the surface roughness, we have estimated the average mechanical parameters among all the Nb\({}_{3}\)Sn samples: H\({}_{\rm IT}\), H\({}_{\rm v}\), E, and \(\sigma_{y}\) are 11.98\(\pm\)1.98 GPa, 1135.63\(\pm\)183.83 Vickers, 169\(\pm\)22.49 GPa, and 4.00\(\pm\)0.65 GPa, respectively. These preliminary values are expected to be valuable in understanding the mechanical limitations for tuning Nb\({}_{3}\)Sn-coated cavities. These values will be used to simulate the tuning of the Nb\({}_{3}\)Sn-coated Nb cavity in the near future. We look forward to using the nanoindentation technique to study the effect of different coating characteristics, such as thickness, grain size, orientation, and grain boundaries, while improving the accuracy of the measurement. ## 7 Acknowledgments We thank Bo Zhou from MechAction, Inc. for measuring our samples and for helpful discussions. Thanks to Eric Lechner and Carrie Baxley for the help with polishing some substrates. We are grateful to Olga Trifimova for her help with the AFM analysis. AFM measurement was done at the Applied Research Center Core Labs, College of William & Mary.
2304.11756
Introducing the Perturbative Solution of the Inter-Channel Stimulated Raman Scattering in Single-Mode Optical Fibers
The continuously increasing IP data traffic demand, with geometrical growth rate exceeding 26%, requires a large transmission capacity increment from the fiber optical infrastructure. As the deploy of new fiber cables requires extensive investments, the development of multi-band amplifiers and transceivers, already available as prototypes, is progressively considered towards the entire low-loss single-mode bandwidth beyond the 5 THz C-band. In this perspective, an adequate handling of the variations along the frequency of the fiber physical features becomes crucial for the fiber propagation modeling in multi-band wavelength division multiplexing (WDM) channel comb transmission scenarios. In particular, the inter-channel stimulated Raman scattering (SRS) is the fundamental inter-band effect in this context. The SRS effect on the WDM comb propagated through a single-mode optical fiber is described by a set of ordinary differential equations (ODEs). To date, an exact solution of the SRS ODEs has not been proposed, and in the literature numerical solutions or approximations have been considered in order to take into account this effect. In this work, a perturbative solution of the SRS ODEs is presented enabling an efficient trade-off between the target accuracy and the computational time. Considering a C+L+S transmission scenario, the perturbative expansion up to the 2nd order ensures an excellent accuracy. Whereas, in an U-to-E transmission scenario, the 3rd order is required in order to reach an equivalent accuracy.
Andrea D'Amico, Giacomo Borraccini, Vittorio Curri
2023-04-23T21:56:44Z
http://arxiv.org/abs/2304.11756v1
Introducing the Perturbative Solution of the Inter-Channel Stimulated Raman Scattering in Single-Mode Optical Fibers ###### Abstract The continuously increasing IP data traffic demand, with geometrical growth rate exceeding 26%, requires a large transmission capacity increment from the fiber optical infrastructure. As the deploy of new fiber cables requires extensive investments, the development of multi-band amplifiers and transceivers, already available as prototypes, is progressively considered towards the entire low-loss single-mode bandwidth beyond the 5 THz C-band. In this perspective, an adequate handling of the variations along the frequency of the fiber physical features becomes crucial for the fiber propagation modeling in multi-band wavelength division multiplexing (WDM) channel comb transmission scenarios. In particular, the inter-channel stimulated Raman scattering (SRS) is the fundamental inter-band effect in this context. The SRS effect on the WDM comb propagated through a single-mode optical fiber is described by a set of ordinary differential equations (ODEs). To date, an exact solution of the SRS ODEs has not been proposed, and in the literature numerical solutions or approximations have been considered in order to take into account this effect. In this work, a perturbative solution of the SRS ODEs is presented enabling an efficient trade-off between the target accuracy and the computational time. Considering a C+L+S transmission scenario, the perturbative expansion up to the 2\({}^{nd}\) order ensures an excellent accuracy. Whereas, in an U-to-E transmission scenario, the 3\({}^{rd}\) order is required in order to reach an equivalent accuracy. Inter-channel stimulated Raman scattering single-mode optical fiber perturbation theory ## 1 Introduction The demand for IP data traffic is ever increasing and medium term authoritative forecasts envision a geometrical growth exceeding 26% as compound annual growth rate (CAGR) on the average, with a much larger figure for some network segments [1]. To support data transport, Wavelength Division Multiplexed (WDM) fiber optics transmission and networking using dual-polarization coherent optical technologies is expanding from core- and metro-networks to the access, 5G \(x\)-hauling [2] and inter- and intra-datacenter connections. In this perspective, the fiber optical infrastructure must progressively support the continuously increasing data transport. In the telecommunication framework, the deployment of new infrastructures requires large CAPEX investments, and in optical networks installing new cables is particularly expensive [3]. The largest portion of installed and under deployment fiber variety is the standard single mode fiber (SSMF) made of purified glass (ITU-T G.652D fiber) [4], that is characterized by low loss profile, below 0.4 dB/km, in the single-mode spectral region, in absence of the water absorption peaks. In particular, the overall available transmission bandwidth of already installed cables exceeds 50 THz: the U, L, C, S, E and O bands. Consequently, the exploitation of the entire transmission bandwidth represents an interesting solution for the total capacity increasing, maximizing returns from CAPEX investments [5, 6]. Currently, most of commercial systems are based exclusively on the use of the C-band, occupying a bandwidth of roughly 5 THz corresponding both to the minimum of the fiber loss profile, and to the amplification bandwidth of the erbium-doped fiber amplifiers (EDFA). 
C+L multi-band transmission is already present in commercial systems [7], both including Raman amplification and recently commercially developed EDFAs extended to the L-band, so enabling the exploitation of an additional 5 THz transmission bandwidth. Using other rare-earths than Erbium, prototype amplifier implementations have been proposed for the amplification of other bands, potentially assisted by Raman amplification, enabling the full exploitation of the entire U-to-O overall available transmission bandwidth [8]. Planning and control of multi-band network require the extension of fiber transmission model for WDM lightpaths enabling full optimization and possible infrastructure sharing by software defined control based on the open physical layer abstraction [9]. The modeling extension needs to include the variation with frequency of the fiber parameters, i.e., fiber loss, chromatic dispersion and the effective area that modifies the strengths of fiber nonlinear effects [10]: the Kerr effect, which generates the nonlinear interference (NLI) noise, and the stimulated Raman scattering (SRS), which induces a power transfer from higher to lower frequencies. In particular, the SRS is the principal effect introducing multi-band interactions as the inter-band SRS-induced power transfer has a higher impact on the transmission performance than the Kerr effect [11]. Therefore, the SRS effect must be accurately evaluated in order to properly optimize the working point of the amplifiers in the optical line system (OLS) in each utilized transmission band [12]. An accurate evaluation of the SRS effect on the WDM channel comb is required with respect to both the frequency and the propagation axis, \(z\) as the SRS-induced modification of the fiber loss/gain profile vs. \(z\) for each frequency, \(f\), with respect to the intrinsic fiber loss profile \(\exp[-\alpha(f)z]\), significantly affects the amount of NLI noise. It is lower in case of SRS-_depleted_ channels - higher frequencies - and stronger in SRS-_pumped_ channels - lower frequencies [13, 14, 6]. Finally, it has been experimentally demonstrated that the approximation of transparent fiber propagation impairment on dual polarization coherent optical technologies as additive Gaussian noise is accurate also for low dispersion values [15, 16]. Therefore, the perturbative models evaluating the accumulated NLI noise can be extended to the entire U-to-E band and to a portion of the O-band [5]. Limiting the analysis to the C-band, the SRS effect can be accurately modeled as a spectral tilt [17], nevertheless, this approximation is less accurate extending the transmission bandwidth, and becomes totally inaccurate when the spectral occupation exceeds the SRS efficiency peak, roughly at 13 THz [6]. Thus, in general, the set of ordinary differential equations (ODEs), which are the accurate mathematical model of the SRS [18, 19], must be solved numerically [20]. This solution requires a non-negligible computational overhead that can be an issue due to the computation required by the transmission model for the quality of transmission estimation within the optical controller. In this work, a perturbative solution of the SRS ODEs is proposed and validated in the optimized C+L+S and U-to-E-band transmission scenarios, where the perturbative expansions up to the 2-nd and 3-rd order, respectively, provide a high level of accuracy for the evaluation of the overall loss/gain profile along \(z\) and \(f\), with a maximum absolute error lower than 0.1 dB. 
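For context on the reference numerical solution mentioned above: the specific ODE system considered in this work is not written out in this introduction, so the sketch below integrates a commonly quoted generic form of the inter-channel SRS power-evolution equations (each channel is pumped by higher-frequency channels and depleted, with the photon-energy correction factor, toward lower-frequency ones) with a standard ODE solver. The ramp-shaped Raman-efficiency profile, the effective area, the launch powers, and the channel grid are placeholder assumptions for illustration, not the parameters of the simulations reported here.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Placeholder comb and fiber parameters (assumptions, not the paper's values)
f = np.linspace(186.0e12, 196.0e12, 41)       # channel center frequencies [Hz]
alpha = np.full_like(f, 0.2 / 4.343 / 1e3)    # 0.2 dB/km converted to [1/m]
a_eff = 80e-12                                 # effective area [m^2]
p0 = np.full_like(f, 1e-3)                     # 1 mW (0 dBm) per channel [W]
span = 80e3                                    # span length [m]

def raman_efficiency(df):
    """Crude linear ramp for g_R(df)/A_eff [1/(W m)] up to the ~13 THz gain peak;
    the tail beyond the peak is neglected in this toy model."""
    peak = 6e-14 / a_eff
    return np.where((df > 0.0) & (df <= 13e12), peak * df / 13e12, 0.0)

def srs_rhs(z, p):
    dp = -alpha * p
    for i in range(f.size):
        gain = raman_efficiency(f - f[i])                # pumped by higher frequencies
        loss = raman_efficiency(f[i] - f) * (f[i] / f)   # depleted toward lower frequencies
        dp[i] += p[i] * np.sum((gain - loss) * p)
    return dp

sol = solve_ivp(srs_rhs, (0.0, span), p0)
srs_tilt_db = 10.0 * np.log10(sol.y[:, -1] / (p0 * np.exp(-alpha * span)))
print(np.round(srs_tilt_db, 2))   # SRS gain/depletion on top of the intrinsic fiber loss
```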
In conclusion, the numerical and perturbative solutions are compared in terms of accuracy and computational time with a variable launch power and an increasing total transmission bandwidth. The article is divided into the following sections. In Sec. 2, the physical layer parameters involved in single-mode optical fiber propagation in a multi-band context are described. In Sec. 3, the perturbative solution of the inter-channel SRS problem is reported, also introducing metrics to evaluate its accuracy. In Sec. 4, the entire simulation system is described, including: the architecture of the network elements that make up an optical line system, the choices made in terms of launch power optimization and the different scenarios considered according to the parameters of the physical layer. In Sec. 5, the results obtained in terms of accuracy and computational time of the proposed perturbative solution with respect to the reference numerical method are reported and commented on for each simulation. In Sec. 6, the conclusions of this work are summarized. ## 2 Physical Layer Parameters For the sake of completeness and ease of reading, this section is dedicated to the presentation of the physical layer parameters involved in the SRS in single-mode optical fibers, highlighting in particular their dependency on frequency. The latter aspect is essential to obtain a model that accurately represents the phenomenon during optical propagation in a generic wideband transmission scenario. In order to provide a reference for the fiber parameter descriptions, Tab. 1 reports the frequency bounds of each band composing the entire multi-band scenario considered in this work. A complete description of each physical layer parameter is provided in [18] and a summary focused on a wideband transmission scenario is given in [10]. ### Loss Coefficient Function The power loss impairing the optical signal propagation through a fiber is taken into account by the fiber loss coefficient, \(\alpha\). The fiber attenuation depends on the propagating signal wavelength [21], as well as on the fiber composition and manufacturing process. From a phenomenological perspective, the contributions in the wavelength range between 1.2 and 1.7 \(\mu\)m are the Rayleigh scattering, the ultraviolet and infrared absorption, the maxima of the OH-ion absorption at around 1.25 and 1.39 \(\mu\)m, and the absorption caused by phosphorus in the fiber core. In [22], a parametric model of the loss coefficient function is proposed with regard to each phenomenological component. 
The loss coefficient profile may be written as follows with regard to the optical signal wavelength, \(\lambda\), and all terms written in logarithmic units (dB/km): \[\alpha(\lambda)\simeq\alpha_{\mathrm{S}}(\lambda)+\alpha_{\mathrm{UV}}( \lambda)+\alpha_{\mathrm{IR}}(\lambda)+\alpha_{13}(\lambda)+\alpha_{12}( \lambda)+\alpha_{\mathrm{POH}}(\lambda)\:, \tag{1}\] where: \[\alpha_{\mathrm{S}}(\lambda) = A\lambda^{-4}+B\:,\] \[\alpha_{\mathrm{UV}}(\lambda) = K_{\mathrm{UV}}e^{C_{\mathrm{UV}}/\lambda}\:,\] \[\alpha_{\mathrm{IR}}(\lambda) = K_{\mathrm{IR}}e^{-C_{\mathrm{IR}}/\lambda}\:,\] \begin{table} \begin{tabular}{|c|c|c|c|} \hline **BAND** & **Lowest Central Frequency [THz]** & **Highest Central Frequency [THz]** & **Bandwidth [THz]** \\ \hline **U** & **180.710** & **185.510** & **4.800** \\ \hline **L** & **186.010** & **190.810** & **4.800** \\ \hline **C** & **191.310** & **196.110** & **4.800** \\ \hline **S** & **196.610** & **206.210** & **9.600** \\ \hline **E** & **206.810** & **221.210** & **14.400** \\ \hline \end{tabular} \end{table} Table 1: Definition of the frequency bounds of the each band composing the considered wideband scenario, 75 GHz fixed grid. Figure 1: SSMF wideband loss coefficient profile, \(\alpha(f)\). \[\alpha_{13}(\lambda) = A_{1}\left(\frac{A_{\rm a}}{A_{1}}e^{\frac{-(\lambda-\lambda_{\rm a })^{2}}{2\sigma_{\rm PO}^{2}}}+\frac{1}{A_{1}}\sum_{i=1}^{3}A_{i}e^{\frac{-( \lambda-\lambda_{\rm i})^{2}}{2\sigma_{\rm PO}^{2}}}\right),\] \[\alpha_{12}(\lambda) = A_{1}\left(\frac{1}{A_{1}}\sum_{i=4}^{5}A_{i}e^{\frac{-(\lambda- \lambda_{\rm i})^{2}}{2\sigma_{\rm PO}^{2}}}\right)\,,\] \[\alpha_{\rm POH}(\lambda) = A_{\rm POH}e^{\frac{-(\lambda-\lambda_{\rm POH})^{2}}{2\sigma_{ \rm PO}^{2}}}\,,\] in turn, stand for the contributions from the Rayleigh scattering, ultraviolet, infrared, OH\(-\) peak absorption, and (P)OH. By taking into account the important elements in the C, L, and S bands, the overall model may be made simpler while ignoring the contributions from the OH-ion absorption peak at 1.25 \(\mu m\) and phosphorus. Additionally, within the interest band, the UV absorption exhibits consistent broadband behavior. With these presumptions, it is possible to define 5 parameters: \(A\), \(B\), \(K_{\rm IR}\), \(A_{1}\) and \(K_{\rm UV}\) which take into account the effects of each phenomenological contribution. In this work, a loss coefficient function retrieved from experimental measurements upon a standard single-mode fiber (SSMF) has been used (see Fig. 1). Leaving unchanged the other parameters with respect to [22], a fitting procedure can be applied to the measured loss coefficient function, obtaining the following set of parameters \(A=0.9192\) dB \(\cdot\mu\)m\({}^{4}\)/ km, \(B=0.0147\) dB / km, \(K_{\rm IR}=5.0\cdot 10^{11}\) dB / km, \(A_{1}=0.0043\cdot 10^{-3}\), \(K_{\rm UV}=1.4655\cdot 10^{-16}\) dB / km. ### Effective Area The effective area may be calculated as \(A_{eff}=\pi\,w^{2}\), where \(w\) is the mode radius, which depends on the central pulse wavelength and the fiber geometry, when the mode profile of the pulse is well approximated by a Gaussian function. In more details, the mode radius is denoted by \(w=a\,/\sqrt{\ln V}\), where \(a\) represents the fiber core radius and \(V\) is the normalized frequency. 
In the event of a minor relative index step at the core-cladding interface, \(\Delta\approx(n_{1}-n_{c})\,/\,n_{1}\), this may be stated as: \[V(\lambda)=\frac{2\pi}{\lambda}\,a\,n_{1}\sqrt{2\Delta}\, \tag{2}\] where \(n_{1}\) is the refractive index of the core and \(n_{c}\) is the refractive index of the cladding. In this work, the manufacturing fiber parameters of common SSMF values are assumed, as \(a=4.2\)\(\mu\)m and \(n_{2}=2.6\cdot 10^{-20}\) m\({}^{2}\)/ W. In addition, the cladding refractive index and the refractive index difference with respect to the core are fixed at 1.45 and \(0.31\%\), respectively. ### Raman Gain Coefficient The SRS is the prominent broadband nonlinear phenomena that occurs during the transmission of a WDM channel comb [19]. The propagating electromagnetic field and the fiber's dielectric medium interact to create the SRS. Since the interaction in this scenario is exclusively caused by the various channels within the spectrum, the SRS caused by the transmission of a WDM comb is sometimes referred to as Raman cross-talk in optical fiber communications. The Raman gain coefficient, \(g_{R}\), which quantifies the coupling between a specific pair of channels with a frequency shift of \(\Delta f=f_{p}-f_{s}\), where \(p\) and \(s\) are the indexes of the channel at higher (pump) and lower (Stokes wave) frequencies, respectively, is the fundamental parameter that describes the regulation of the power transfer between channels during fiber propagation. The kind and concentration of dopants in the fiber core, the reciprocal polarization state, the mode overlap between the pump and the Stokes wave, the absolute frequency of the pump, and other characteristics of the fiber and propagating channel modes all affect this coefficient. Utilizing a reference pump at the frequency \(f_{ref}\), it is feasible to determine the Raman gain coefficient profile for a single fiber [23]. In terms of optical power, the following curve may be described: \[g_{0}(\Delta f,f_{ref})=\frac{\gamma_{R}(\Delta f,f_{ref})}{A_{eff}^{ov}( \Delta f,f_{ref})}\, \tag{3}\] where \(\gamma_{R}\) is the Raman gain coefficient in terms of mode intensity (expressed in \(\rm m\,/\,W\)) and \(A_{eff}^{ov}(\Delta f,f_{ref})\) is the effective area considering the effective area overlap between the pump and the Stokes wave. By averaging the effective areas at the single pump and Stokes wave frequencies and assuming a Gaussian mode intensity distribution, the effective area can be calculated [24]. The whole Raman gain coefficient may be modeled using the following equation in order to completely mimic optical fiber propagation and take SRS effects into account: \[g_{R}(\Delta f,f_{p})=k_{pol}^{ps}\,g_{0}(\Delta f,f_{ref})\frac{f_{p}}{f_{ ref}}\,\frac{A_{eff}^{ov}(\Delta f,f_{ref})}{A_{eff}^{ov}(\Delta f,f_{p})}\, \tag{4}\] where the ratios between the frequencies and effective areas take into consideration the scaling of the pump and the effective area, whereas \(k_{pol}^{ps}\) accounts for the reciprocal polarization state between the pump and the Stokes wave. With regard to germanosilicate fibers, namely SSMF, the concentration of germanium in the core fiber is remarkably low, resulting in a refractive index variation of just a few hundredths of a percentage point. In this work, the fused silica Raman gain coefficient curve reported in Fig. 2 is used, with a reference frequency of 206.185 THz. 
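The effective-area and frequency scaling of Eq. 4 lends itself to a compact numerical implementation. The following Python sketch is only a minimal illustration under the Gaussian-mode approximation described above and a unitary polarization coefficient; the fiber parameters are the SSMF values quoted in the text, while `g0_ref`, standing for the measured fused-silica gain curve of Fig. 2, is a placeholder to be supplied from data.

```python
import numpy as np

# SSMF parameters quoted in the text (assumed here in SI units).
A_CORE = 4.2e-6        # core radius a [m]
N_CLAD = 1.45          # cladding refractive index n_c
DELTA = 0.0031         # relative index difference (n1 - nc) / n1
N_CORE = N_CLAD / (1.0 - DELTA)   # core refractive index n_1
C0 = 299792458.0       # speed of light [m/s]
F_REF = 206.185e12     # reference pump frequency [Hz]

def effective_area(f):
    """Gaussian-mode effective area A_eff = pi * w^2, with w = a / sqrt(ln V), Eq. (2)."""
    lam = C0 / f
    V = (2.0 * np.pi / lam) * A_CORE * N_CORE * np.sqrt(2.0 * DELTA)
    return np.pi * (A_CORE / np.sqrt(np.log(V))) ** 2

def overlap_area(f_pump, f_stokes):
    """Pump/Stokes overlap area, approximated as the average of the two mode areas."""
    return 0.5 * (effective_area(f_pump) + effective_area(f_stokes))

def raman_gain(delta_f, f_pump, g0_ref, k_pol=1.0):
    """Full Raman gain coefficient g_R(delta_f, f_pump) of Eq. (4).

    g0_ref(delta_f) is a placeholder callable returning the measured reference
    curve g_0(delta_f, f_ref) of Fig. 2 [1/(W m)]; it must be supplied from data.
    """
    scaling = (f_pump / F_REF) * (overlap_area(F_REF, F_REF - delta_f)
                                  / overlap_area(f_pump, f_pump - delta_f))
    return k_pol * g0_ref(delta_f) * scaling
```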
By also adding the contribution of the vibrational (phonon) loss for higher-frequency channels with respect to the considered channel (the negative half of the curve) [19], the Raman efficiency profile experienced by each frequency within the wideband scenario is represented, considering all the effective area scaling contributions described in Eq. 4. Furthermore, as the propagating channels within the WDM comb are generally depolarized, a unitary polarization coefficient \(k_{pol}\) is assumed. In the following, for simplicity, the Raman gain coefficient is denoted as: \[g_{R}(\Delta f,f_{ref})=g_{R}(f,f^{\prime}) \tag{5}\] where \(f\) is the frequency of the channel under investigation and \(f^{\prime}\) is the frequency of the interfering channel. ## 3 SRS Perturbative Solution The first-order differential equation describing the Raman effect is defined on the spectral power density, \(\mathcal{G}(z,f)\), as follows: \[\frac{\mathrm{d}}{\mathrm{d}z}\mathcal{G}(z,f)=\left[-\alpha(f)+\int\mathrm{d}f^{\prime}g_{R}(f,f^{\prime})\mathcal{G}(z,f^{\prime})\right]\mathcal{G}(z,f)\;. \tag{6}\] The general solution of Eq. 6 can be decomposed as the product of the solution of the linear operator, \(\mathcal{L}(z,f)\), and a nonlinear term, \(\chi(z,f)\): \[\mathcal{G}(z,f)=\mathcal{L}(z,f)\chi(z,f)\;, \tag{7}\] given the boundary condition: \[\mathcal{G}(z,f)\big{|}_{z=0}=\mathcal{G}_{0}(f)\Rightarrow\left.\chi(z,f)\right|_{z=0}=1\;. \tag{8}\] In particular, the solution of the linear operator is defined by the following expression: \[\left(\frac{\mathrm{d}}{\mathrm{d}z}+\alpha(f)\right)\mathcal{L}(z,f)=0\quad\Rightarrow\quad\mathcal{L}(z,f)=\mathcal{G}(z=0,f)e^{-\alpha(f)z}=\mathcal{G}_{0}(f)e^{-\alpha(f)z}=\mathcal{G}_{0}(f)\frac{\mathrm{d}}{\mathrm{d}z}\Lambda(z,f)\;, \tag{9}\] where the effective length, \(\Lambda(z,f)\), is defined as the integral along \(z\) of the intrinsic fiber loss: \[\Lambda(z,f)=\frac{1-e^{-\alpha(f)z}}{\alpha(f)}\;. \tag{10}\] On the other hand, the nonlinear term must satisfy Eq. 11: \[\frac{\mathrm{d}}{\mathrm{d}z}\chi(z,f)=\chi(z,f)\int\mathrm{d}f^{\prime}g_{R}(f,f^{\prime})\mathcal{L}(z,f^{\prime})\chi(z,f^{\prime})\;. \tag{11}\] Figure 2: SSMF Raman gain coefficient profile, \(g_{R}(\Delta f,f_{ref})\), for each frequency of the considered wideband scenario. A well-known exact solution of Eq. 11 (see [25, 26] for the details) can be derived considering a flat intrinsic loss coefficient, \(\alpha(f)=\alpha\Rightarrow\Lambda(z,f)=\Lambda(z)\), and a linear Raman gain coefficient, \(g_{R}(f,f^{\prime})=-(f-f^{\prime})K_{R}\). By means of these simplifications, the solution of Eq. 11 is: \[\chi(z,f)=\frac{P\,e^{-f\,K_{R}P\Lambda(z)}}{\int\mathrm{d}f^{\prime}\mathcal{G}_{0}(f^{\prime})e^{-f^{\prime}\,K_{R}P\Lambda(z)}}, \tag{12}\] where \(P=\int\mathrm{d}f\,\mathcal{G}_{0}(f)\) is the total launch power. In general, as shown in Fig. 1 and Fig. 2, both assumptions, a flat loss coefficient and a linear Raman gain coefficient, become increasingly inaccurate when the total bandwidth exceeds roughly 15 THz. In [27], a correction of Eq. 12 is proposed considering the triangular approximation of the Raman coefficient profile [28] and an interpolation of the intrinsic fiber loss coefficient. In this work, a perturbative approach is defined, validated and analysed. 
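For reference, the closed-form solution of Eq. 12 can be evaluated in a few lines of code. The sketch below is a minimal Python illustration on a discrete channel grid, where the launch powers per channel play the role of the integrated spectral density and the triangular-approximation slope \(K_R\) is an assumed input.

```python
import numpy as np

def srs_triangular_solution(f, G0, alpha, K_R, z):
    """Closed-form SRS solution of Eq. (12) under a flat loss coefficient and a
    linear (triangular) Raman gain g_R(f, f') = -(f - f') * K_R.

    f     : channel center frequencies [Hz], shape (Nch,)
    G0    : launch power per channel [W], shape (Nch,) (discrete version of G_0(f))
    alpha : flat loss coefficient [1/m]
    K_R   : slope of the triangular Raman gain approximation [1/(W m Hz)]
    z     : position along the fiber [m]
    """
    P_tot = np.sum(G0)                                  # total launch power P
    Lz = (1.0 - np.exp(-alpha * z)) / alpha             # effective length, Eq. (10)
    tilt = np.exp(-f * K_R * P_tot * Lz)
    chi = P_tot * tilt / np.sum(G0 * tilt)              # Eq. (12)
    return G0 * np.exp(-alpha * z) * chi                # G(z, f) = L(z, f) chi(z, f), Eq. (7)
```

Note that this solution conserves the total propagated power apart from the common loss term, consistently with SRS redistributing power among the channels. The perturbative approach developed in the remainder of this section avoids both of the above simplifying assumptions.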
The advantage of this approach is that, when the numerical series defined by the perturbative expansion converges, a truncated solution can be defined with an arbitrary level of accuracy, depending on the order of the truncation. Moreover, the solution of the perturbative expansion provides a straightforward expression of the correlation between the system parameters and the final result. First, by means of the substitution \(\Gamma(z,f)=\ln\left(\chi(z,f)\right)\), Eq. 11 can be written as follows: \[\frac{\mathrm{d}\Gamma(z,f)}{\mathrm{d}z} = \int\mathrm{d}f^{\prime}g_{R}(f,f^{\prime})\mathcal{L}(z,f^{ \prime})e^{\Gamma(z,f^{\prime})} \tag{13}\] \[\Rightarrow\Gamma(z,f) = \int_{0}^{z}\mathrm{d}z^{\prime}\int\mathrm{d}f^{\prime}g_{R}(f, f^{\prime})\mathcal{L}(z^{\prime},f^{\prime})e^{\Gamma(z^{\prime},f^{\prime})}\;. \tag{14}\] In terms of the perturbative expansion, \(\Gamma(z,f)\) can be formally defined as an infinite sum: \[\Gamma(z,f)=\sum_{k=1}^{\infty}\Gamma^{(k)}(z,f)=\Gamma^{(1)}(z,f)+\Gamma^{(2) }(z,f)+\Gamma^{(3)}(z,f)+\cdots\;, \tag{15}\] where the \(k\)-th order term \(\Gamma^{k}(z,f)\) is proportional to the \(k\)-th power of the perturbative parameter. By mean of this expansion, Eq.14 becomes: \[\Gamma(z,f) = \int_{0}^{z}\mathrm{d}z^{\prime}\int\mathrm{d}f^{\prime}g_{R}(f, f^{\prime})\mathcal{L}(z^{\prime},f^{\prime})\prod_{k=1}^{\infty}e^{\Gamma^{(k)} (z,f)} \tag{16}\] \[= \int_{0}^{z}\mathrm{d}z^{\prime}\int\mathrm{d}f^{\prime}g_{R}(f, f^{\prime})\mathcal{L}_{0}(z^{\prime},f^{\prime})\prod_{k=1}^{\infty}\sum_{n=0}^{ \infty}\frac{1}{n!}\left(\Gamma^{(k)}(z^{\prime},f^{\prime})\right)^{n}\;,\] and the \(k\)-th can be expressed as follows: \[\Gamma^{(k)}(z,f) = \int_{0}^{z}\mathrm{d}z^{\prime}\int\mathrm{d}f^{\prime}g_{R}(f, f^{\prime})\mathcal{L}(z^{\prime},f^{\prime})\sum_{\{n_{j}\}}\prod_{j=1}^{k-1} \frac{1}{n_{j}!}\left(\Gamma^{(k_{j})}(z^{\prime},f^{\prime})\right)^{n_{j}}\;,\] \[\forall\{n_{j}\}\quad\mathrm{such\;that}\quad\sum_{j=1}^{k-1}k_{ j}\,n_{j}=k-1\;.\] Given Eq. 17, successive orders can be evaluated knowing previous orders. In particular, the first four orders are: \[\Gamma^{(1)}(z,f) = \int_{0}^{z}\!\mathrm{d}z^{\prime}\!\!\int\mathrm{d}f^{\prime}g_{ R}(f,f^{\prime})\mathcal{L}(z^{\prime},f^{\prime})=\int\mathrm{d}f^{\prime}g_{R}(f, f^{\prime})P_{0}(f^{\prime})\Lambda(z,f^{\prime})\;, \tag{18}\] \[\Gamma^{(2)}(z,f) = \int_{0}^{z}\!\mathrm{d}z^{\prime}\!\!\int\mathrm{d}f^{\prime}g_{ R}(f,f^{\prime})\mathcal{L}(z^{\prime},f^{\prime})\left[\Gamma^{(1)}(z^{\prime},f^{ \prime})\right]\;,\] (19) \[\Gamma^{(3)}(z,f) = \int_{0}^{z}\!\mathrm{d}z^{\prime}\!\!\int\mathrm{d}f^{\prime}g_{ R}(f,f^{\prime})\mathcal{L}(z^{\prime},f^{\prime})\left[\Gamma^{(2)}(z^{\prime},f^{ \prime})+\frac{1}{2}\left(\Gamma^{(1)}(z^{\prime},f^{\prime})\right)^{2} \right]\;,\] (20) \[\Gamma^{(4)}(z,f) = \int_{0}^{z}\!\mathrm{d}z^{\prime}\!\!\int\mathrm{d}f^{\prime}g_{ R}(f,f^{\prime})\mathcal{L}(z^{\prime},f^{\prime})\left[\Gamma^{(3)}(z^{\prime},f^{ \prime})+\Gamma^{(1)}(z^{\prime},f^{\prime})\Gamma^{(2)}(z^{\prime},f^{\prime})+ \frac{1}{3!}\left(\Gamma^{(1)}(z^{\prime},f^{\prime})\right)^{3}\right]\!. \tag{21}\] Beyond the first order, the integration in \(z^{\prime}\) can be analytically solved for any other orders, obtaining an expression that depends only on the system parameters and input. 
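As discussed in the following, the orders of Eqs. 18-21 can equivalently be evaluated by numerical integration in \(z^{\prime}\), keeping an explicit expression of the previous order. The Python sketch below is only a minimal illustration of this recursive evaluation on a discrete channel grid; the Raman gain matrix between channel frequencies and the per-channel launch powers and loss coefficients are assumed inputs.

```python
import numpy as np

def perturbative_orders(z, P0, alpha, g_R, k_max=4):
    """Numerically evaluate Gamma^(k)(z, f) of Eqs. (18)-(21), k_max <= 4, by
    trapezoidal integration in z' on a discrete channel grid.

    z     : positions along the fiber [m], increasing, z[0] = 0, shape (Nz,)
    P0    : launch power per channel [W], shape (Nch,)
    alpha : intrinsic loss per channel [1/m], shape (Nch,)
    g_R   : Raman gain matrix, g_R[i, j] = g_R(f_i, f_j) [1/(W m)], shape (Nch, Nch)
    Returns [Gamma^(1), ..., Gamma^(k_max)], each of shape (Nz, Nch).
    """
    dz = np.diff(z)
    L = P0[None, :] * np.exp(-alpha[None, :] * z[:, None])               # L(z', f')
    Lam = (1.0 - np.exp(-alpha[None, :] * z[:, None])) / alpha[None, :]  # Eq. (10)

    def cumtrapz(y):                       # cumulative integral along z'
        out = np.zeros_like(y)
        out[1:] = np.cumsum(0.5 * (y[1:] + y[:-1]) * dz[:, None], axis=0)
        return out

    orders = [(Lam * P0[None, :]) @ g_R.T]                               # Eq. (18)
    brackets = [lambda o: o[0],                                          # Eq. (19)
                lambda o: o[1] + 0.5 * o[0] ** 2,                        # Eq. (20)
                lambda o: o[2] + o[0] * o[1] + o[0] ** 3 / 6.0]          # Eq. (21)
    for k in range(2, k_max + 1):
        orders.append(cumtrapz((L * brackets[k - 2](orders)) @ g_R.T))
    return orders
```

Alternatively, the \(z^{\prime}\)-integration can be carried out analytically at each order; this route is illustrated next.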
As an example, the integrated solution for the second order is: \[\Gamma^{(2)}(z,f) = \int\mathrm{d}f^{\prime}g_{R}(f,f^{\prime})P_{0}(f^{\prime})\int\mathrm{d}f^{\prime\prime}g_{R}(f^{\prime},f^{\prime\prime})P_{0}(f^{\prime\prime}) \tag{22}\] \[\times\,\frac{1}{2}\left[\Lambda(z,f^{\prime})\Lambda(z,f^{\prime\prime})+\left(\frac{\alpha(f^{\prime})-\alpha(f^{\prime\prime})}{\alpha(f^{\prime})\alpha(f^{\prime\prime})}\right)\frac{1-e^{-\left[\alpha(f^{\prime})+\alpha(f^{\prime\prime})\right]z}}{\alpha(f^{\prime})+\alpha(f^{\prime\prime})}\right]\;.\] It is worth noting that, depending on the system characteristics and the specific software implementation, it may be convenient, in terms of computational cost, to perform the analytical integration in \(z^{\prime}\) or, instead, to perform a numerical integration, maintaining an explicit expression of the previous order. As a matter of fact, looking at Eq. 22 it can be observed that the analytical integration removes the dependency of the solution on the multiple distances, \(z^{\prime}\), required for a numerical integration, but it implies additional integrals in the frequency space. In conclusion, considering the perturbative expansion Eq. 15 up to the \(k\)-th order, the truncated solution of Eq. 7 is: \[\mathcal{G}^{(k)}(z,f)=\mathcal{L}(z,f)\exp\left[\sum_{j=1}^{k}\Gamma^{(j)}(z,f)\right]\;. \tag{23}\] Considering a total number of channels, \(N_{ch}\), combined in a WDM spectrum propagating through a single fiber span, Eq. 23 can be used to evaluate the corresponding \(k\)-th order truncated solution of the power profile: \[\mathcal{P}^{(k)}_{ch}(z)=\int_{B_{ch}}\mathrm{d}f\;\mathcal{G}^{(k)}(z,f)=P_{ch}\;e^{-\alpha_{ch}z}\exp\left[\sum_{j=1}^{k}\Gamma^{(j)}_{ch}(z)\right]\;, \tag{24}\] where \(ch\in[1,\cdots,N_{ch}]\) and \(B_{ch}\) is the \(ch\)-th channel bandwidth. \(\alpha_{ch}\) and \(\Gamma^{(j)}_{ch}(z)\) are evaluated at the channel central frequency and considered flat within \(B_{ch}\), and \(P_{ch}=\int_{B_{ch}}\mathrm{d}f\;\mathcal{G}_{0}(f)\) is the \(ch\)-th channel launch power. In order to quantify the accuracy of the proposed methodology, the \(k\)-th order relative error can be defined in logarithmic units as follows: \[\mathcal{E}^{(k)}_{ch}(z)=10\log_{10}\left(\frac{\mathcal{P}_{ch}(z)}{\mathcal{P}^{(k)}_{ch}(z)}\right)=\frac{10}{\ln{(10)}}\sum_{j=k+1}^{\infty}\Gamma^{(j)}_{ch}(z)\;. \tag{25}\] Eq. 25 provides an explicit expression of the accuracy achieved with the different orders considered in the perturbative expansion, as the estimation error is the remainder left out of the truncated solution of Eq. 6. ## 4 Simulation Setup ### Optical Line System Architecture In order to validate and analyze the results of the proposed methodology, in this section a realistic multi-band optical transmission system is defined and a few spectrum configurations are considered. In particular, the truncated solution of the perturbative expansion is verified at the optimum launch power profile for each considered multi-band transmission scenario. In a partially disaggregated optical network context, in which the independent routing of WDM signals is enabled by the deployment of reconfigurable optical add & drop multiplexers (ROADMs) [29], a ROADM-to-ROADM multi-band transmission OLS is investigated, where each band is amplified by a separate and independent optical amplifier (Fig. 3). Figure 3: Sketch of the considered optical line system architecture. 
The commonly used metric for estimating the quality of transmission (QoT) associated with a certain lightpath (LP) is the generalized signal-to-noise ratio, \(\mathrm{GSNR}\). The nonlinear signal-to-noise ratio, \(\mathrm{SNR}_{\mathrm{NL}}\), which includes the effect of the nonlinear interference noise, and the optical signal-to-noise ratio, \(\mathrm{OSNR}\), which includes the amplified spontaneous emission noise generated by the optical amplifiers, are the two major contributions to the GSNR. Namely, modeling each LP as an additive white Gaussian noise (AWGN) channel, the GSNR for a specific wavelength can be expressed as: \[\mathrm{GSNR}=\left(\mathrm{OSNR}^{-1}+\mathrm{SNR}_{\mathrm{NL}}^{-1}\right)^{-1}\, \tag{26}\] where \(\mathrm{OSNR}^{-1}\) and \(\mathrm{SNR}_{\mathrm{NL}}^{-1}\) can be written as the sum of the separate noise contributions generated in each crossed span. ### Launch Power Optimization The optical line controller responsible for the OLS operation defines the working points of each amplifier to optimize the GSNR of the OLS. Specifically, a convenient optimization strategy can be defined by enforcing a maximized and uniform GSNR spectral distribution on each transmission band. As a result, the optimization criterion chosen in this study is based on the definition of the launch power profile that simultaneously provides the highest average per-band GSNR value and is sufficiently flat on each single transmission band and, in general, throughout the entire transmitted spectrum. The proposed optimization approach does not need any extra hardware because the optimal launch power is achieved by adjusting the gain (or output power) and tilt settings of each optical amplifier. In conclusion, the number of variables to be optimized is twice the number of bands, \(N_{B}\), of the considered multi-band transmission scenario (the pair of gain/tilt values for each \(n\)-th amplifier along the OLS), and the objective function to be maximized is: \[\max\left(\frac{1}{N_{B}}\left[\,\sum_{n=1}^{N_{B}}\left(\overline{\mathrm{GSNR}_{n}}-\sigma_{\mathrm{GSNR}_{n}}\right)\,\right]-\sigma_{\{\overline{\mathrm{GSNR}_{1}},\ldots\overline{\mathrm{GSNR}_{N_{B}}}\}}\,\right) \tag{27}\] where \(\overline{\mathrm{GSNR}_{n}}\) is the average GSNR value of the \(n\)-th band, \(\sigma_{\mathrm{GSNR}_{n}}\) is the GSNR standard deviation of the \(n\)-th band and \(\sigma_{\{\overline{\mathrm{GSNR}_{1}},\ldots\overline{\mathrm{GSNR}_{N_{B}}}\}}\) is the standard deviation computed on the set of all GSNR average values. A stochastic heuristic optimization method based on an evolutionary approach has been leveraged to solve this optimization problem; the covariance matrix adaptation evolution strategy (CMA-ES) is applied as the optimization algorithm. ### Analysed Scenarios For the purpose of this work, a periodic multi-band OLS of 10 spans is considered. The assumed values of the noise figure, \(\mathrm{NF}\), for each optical amplifier type are reported in Tab. 2 according to the corresponding band [5]. The fiber spans are 70 km long and are characterized by the realistic wideband parameter description reported in Sec. 2. The transmitted signal is implemented according to the 400G standard: each channel carries a dual-polarization (DP) 16-QAM (quadrature amplitude modulated) signal with a symbol rate of 64 GBaud and a slot width of 75 GHz. 
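As a small illustration of how Eqs. 26 and 27 are combined, the Python sketch below evaluates the per-channel GSNR and the optimization objective from assumed per-band noise contributions; whether the statistics are taken in dB or in linear units is an implementation choice, here assumed in dB.

```python
import numpy as np

def gsnr_db(osnr_lin, snr_nl_lin):
    """Per-channel GSNR of Eq. (26); inputs in linear units, output in dB."""
    gsnr_lin = 1.0 / (1.0 / osnr_lin + 1.0 / snr_nl_lin)
    return 10.0 * np.log10(gsnr_lin)

def objective(gsnr_db_per_band):
    """Objective of Eq. (27) from a list of per-band arrays of GSNR values [dB]."""
    means = np.array([np.mean(g) for g in gsnr_db_per_band])
    stds = np.array([np.std(g) for g in gsnr_db_per_band])
    return np.mean(means - stds) - np.std(means)
```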
In this framework, a cutting-edge C+L+S-band transmission scenario is considered along with a more forward-looking U-to-E-band transmission scenario in order to perform a solid validation of the proposed methodology. In both cases, the optimal launch power has been evaluated with the procedure described in Sec. 4.2. Additionally, an ideal flat loss coefficient profile at 0.2 dB / km is considered in order to analyze separately the effect of the SRS on the power profile along the fiber. In this case, a flat launch power of -1 dBm per channel is considered. \begin{table} \begin{tabular}{|c|c|c|c|c|c|} \hline & **U** & **L** & **C** & **S** & **E** \\ \hline **NF [dB]** & **6.0** & **6.0** & **5.5** & **7.0** & **7.0** \\ \hline \end{tabular} \end{table} Table 2: Optical amplifier noise figure values used in the considered wideband scenario. The impact of the real and ideal fiber loss coefficient profiles in terms of total attenuation along a single fiber span is shown in Figs. 3(a) and 3(b). In these figures, the effect of the effective area scaling is also shown; its contribution produces variations within each transmission band in a range of 0.5 dB, suggesting that it is necessary to consider this phenomenon mainly for the OSNR estimation, especially after the propagation of a WDM comb through a considerable number of sections. Even if this effect turns out to be of secondary importance, in a wideband context its accumulation across the various spans can generate additional inaccuracy. ## 5 Results First, for all the transmission scenarios, Eq. 6 has been solved using the numerical integration defined in [20], which provides an accurate reference if the position increments are small enough; for validation purposes, a constant step of 0.8 m has been chosen, providing an accuracy of the evaluated power profile along the fiber of 0.001 dB, for all the frequencies of the transmitted spectra. Then, the reference is compared with a truncated solution of the perturbative expansion presented in Sec. 3. Given the optimal launch powers of all the considered transmission scenarios, the perturbative expansion Eq. 15 is convergent. In particular, the successive orders are monotonically decreasing and, moreover, considering the \(k\)-th order truncated solution, the infinite sum of the remainder orders converges. Therefore, an arbitrarily small relative error, \(\mathcal{E}_{ch}^{(k)}(z)\), for all the channels of the propagated spectrum can be achieved considering the proper \(k\)-th order of the solution. In Fig. 5-b, the \(4\)-th order truncated solution of Eq. 24 is compared with the reference evaluation in the case of a flat fiber loss profile and flat \(P_{ch}\) (Fig. 5-a), at the fiber termination, \(L_{s}\). In Fig. 5-c, it is shown that, by increasing the order of the solution, an increasing accuracy is obtained, until the arbitrary tolerance of 0.1 dB is achieved. Finally, in Fig. 5-d, the normalized error \(\mathcal{E}_{ch}^{(k)}(L_{s})/\max\left(|\mathcal{E}_{ch}^{(k)}(L_{s})|\right)\) up to the 4-th order is shown. As expected, it can be observed that the \(k\)-th relative error has exactly the frequency symmetry of the \(k+1\)-th order, which is its most significant term. Therefore, odd orders have an even error function in frequency and _vice versa_, demonstrating that the proposed perturbative expansion is exact at every order. Further analyses of the formal expansion in Eq. 15 are out of the scope of this study and will be investigated in future publications. 
An effective and conservative heuristic estimation of the proper order required to achieve at least a given tolerance, \(\tau\), has been validated for a set of increasing U-to-E transmission bandwidths and an increasing flat launch power per channel, \(P_{ch}\in[-4,2]\) dBm, resulting in a total power at the fiber input for the full U-to-E scenario of 23.1 and 29.1 dBm, respectively; this evaluation provides the correct order or, at most, the successive one, guaranteeing the required accuracy. Due to the complex spectral shape of the Raman coefficient profile, distinct orders have different interactions with the power profile along the fiber; therefore, the proposed estimation procedure has to be performed after the calculation of each order and it is expressed by the following inequality: \[\left|\mathcal{E}_{ch}^{(k)}(z)\right|\leq\frac{10}{\ln(10)}\left[\exp\left(\theta^{(k)}\right)-\sum_{j=0}^{k}\frac{\left(\theta^{(k)}\right)^{j}}{j!}\right]\leq\tau\;,\quad\mathrm{with}\quad\theta^{(k)}=\sqrt[k]{k!\,\max\left(|\Gamma_{ch}^{(k)}(z)|\right)}\;. \tag{28}\] Using Eq. 28, with a defined tolerance of 0.1 dB, the truncated solution of Eq. 24 at the proper order has been evaluated for all the investigated realistic transmission scenarios; for both cases the required tolerance is achieved at the exact order evaluated by means of Eq. 28. In Fig. 6 and Fig. 7, the launch power profile, the reference and evaluated power profiles at the fiber termination and the relative error are reported for the C+L+S-band and U-to-E transmission scenarios, respectively. Finally, the proposed methodology enables a faster implementation of the SRS solver for all the transmission scenarios with respect to the numerical integration method, given a certain tolerance value. As an example, the integral solution of each order, as in Eqs. 18-21, can be integrated numerically in order to obtain the perturbative solution without evaluating an explicit form, as in Eq. 22, for each perturbative order. Remarkably, the position increment required to achieve a given accuracy when spatially integrating the perturbative orders, roughly tens of km if \(\tau=0.1\) dB, is significantly larger than the position increment required by the numerical solution to achieve the very same accuracy, roughly 0.1-1 km. Therefore, the perturbative solution requires a significantly lower computational effort to achieve the same result as the numerical solution. Regarding the explicit expression of each order, as anticipated in Sec. 3, it is not always convenient in terms of computational cost. This is due to the high number of channels involved in a wideband scenario and can be overcome by considering a lower number of equivalent macro channels in place of the real propagated channels, assuming that the variations of the intrinsic fiber loss, the Raman coefficient profile and the power spectral density are negligible within the macro channel bandwidths. Further considerations on these aspects will be addressed in a future publication. In this work, an increasing total transmission bandwidth is considered from 2.5 to 40 THz, starting from the first portion of the U-band to the last portion of the E-band, with a step of 2.5 THz. For all these spectra, a fixed power per channel of -1 dBm has been set and the proper order and the position increment for the spatial integration have been evaluated for both the perturbative and numerical solutions in order to obtain a fixed 0.1 dB tolerance. The resulting computational times are reported in Fig. 8, 
where it can be observed that the perturbative solution performs at least one order of magnitude better than the numerical solution. Figure 5: Ideal flat fiber loss profile simulations. In particular, (a) flat launch power at -1 dBm per channel; (b) comparison of the numerical reference and the 4-th order perturbative solution; (c) relative error up to the 4-th order; (d) normalized relative error up to the 4-th order. Figure 6: Realistic fiber parameters simulations, C+L+S-band transmission scenario. In particular, (a) optimal launch power; (b) comparison of the numerical reference and the 2-nd order perturbative solution; (c) relative error up to the 2-nd order. Figure 7: Realistic fiber parameters simulations, U-to-E-band transmission scenario. In particular, (a) optimal launch power; (b) comparison of the numerical reference and the 3-rd order perturbative solution; (c) relative error up to the 3-rd order. ## 6 Conclusion In this study, a perturbative expansion describing the inter-channel SRS is presented and analysed. The proposed methodology enables the estimation of an arbitrarily accurate solution when the proper perturbative order is evaluated, and it provides direct insight into the relation between the SRS effect and the fiber or spectrum parameters. The proposed perturbative solution is validated in wideband transmission scenarios including the C+L+S-band and U-to-E-band transmission scenarios. Additionally, an effective and conservative heuristic procedure for the estimation of the proper perturbative order required to achieve a given tolerance is provided and discussed. Finally, the benefit in terms of computational time of the proposed methodology is demonstrated on transmission scenarios of increasing total bandwidth. In future publications, the convergence of the perturbative expansion will be investigated further, along with an optimal software implementation of the proposed methodology. Also, the explicit expressions of the first orders of the solution of Eq. 6 will be considered for an effective definition of the generalized Gaussian noise model estimating the nonlinear interference noise in wideband transmission scenarios.
2306.16898
Whole-Body Ergodic Exploration with a Manipulator Using Diffusion
This paper presents a whole-body robot control method for exploring and probing a given region of interest. The ergodic control formalism behind such an exploration behavior consists of matching the time-averaged statistics of a robot trajectory with the spatial statistics of the target distribution. Most existing ergodic control approaches assume the robots/sensors as individual point agents moving in space. We introduce an approach that decomposes the whole-body of a robotic manipulator into multiple kinematically constrained agents. Then, we generate control actions by calculating a consensus among the agents. To do so, we use an ergodic control formulation called heat equation-driven area coverage (HEDAC) and slow the diffusion using the non-stationary heat equation. Our approach extends HEDAC to applications where robots have multiple sensors on the whole-body (such as tactile skin) and use all sensors to optimally explore the given region. We show that our approach increases the exploration performance in terms of ergodicity and scales well to real-world problems. We compare our method in kinematic simulations with the state-of-the-art and demonstrate the applicability of an online exploration task with a 7-axis Franka Emika robot. Additional material available at https://sites.google.com/view/w-ee-d/
Cem Bilaloglu, Tobias Löw, Sylvain Calinon
2023-06-29T12:39:25Z
http://arxiv.org/abs/2306.16898v2
# Whole-Body Exploration with a Manipulator Using Heat Equation ###### Abstract This paper presents a whole-body robot control method for exploring and probing a given region of interest. The ergodic control formalism behind such an exploration behavior consists of matching the time-averaged statistics of a robot trajectory with the spatial statistics of the target distribution. Most existing ergodic control approaches assume the robots/sensors as individual point agents moving in space. We introduce an approach exploiting multiple kinematically constrained agents on the whole-body of a robotic manipulator, where a consensus among the agents is found for generating control actions. To do so, we exploit an existing ergodic control formulation called heat equation-driven area coverage (HEDAC), combining local and global exploration on a potential field resulting from heat diffusion. Our approach extends HEDAC to applications where robots have multiple sensors on the whole-body (such as tactile skin) and use all sensors to optimally explore the given region. We show that our approach increases the exploration performance in terms of ergodicity and scales well to real-world problems using agents distributed on multiple robot links. We compare our method with HEDAC in kinematic simulation and demonstrate the applicability of an online exploration task with a 7-axis Franka Emika robot. ## I Introduction Exploration is an indefinite terminal time, information-gathering behavior aimed at reducing uncertainty [20]. A variety of autonomous exploration tasks require physical interaction to collect information due to contact requirements or sensory occlusion. Existing work in contact-based exploration ranges from object shape reconstruction with tactile skins [5], and probing for stiffness mapping [4], to cleaning residues of a part by pressurized air [8] and exploring wrench space of an articulated object [14]. Although it is possible to formulate these tasks as entropy minimization from an information theory perspective, setting up the optimization objective and computing the entropy update is challenging. Fortunately, for a subset of problems, one can formulate the autonomous exploration as a coverage of a region (target distribution). Existing work in spatial coverage focuses on multi-agent systems with high-range sensors mounted on drones and ground robots (camera, LiDAR, time of flight sensor arrays [6]) for increasing coverage speed. Applications of these methods explore vast regions such as crop fields [10] or bridges [11]. Robotic manipulators can also be used for exploring small but complex target distributions with physical interaction. Existing methods focusing on multi-agent systems do not consider intricate sensor/tool geometries, which might span multiple links in the case of a whole-body tactile skin. Accordingly, available approaches hinder potential performance gains of using whole-body for exploration. We argue that, as the accessibility of anthropomorphic robotic hands [22, 25], and arms equipped with joint force-torque sensors and tactile skins [2, 9] increases, so will the need for whole-body exploration. Therefore, in this letter, we propose a whole-body exploration method for robotic manipulators formulated as spatial coverage control. We summarize our approach in Figure 1. 
Our contributions are as follows: * proposing an approach for whole-body exploration using multiple links of the manipulator; * introducing kinematically constrained virtual agents to consider intricate tool/sensor footprints for exploration; * proposing weighting strategies to control the robot with a consensus among virtual agents and links. Fig. 1: Whole-body exploration of a target distribution using the last three links of the robot manipulator. In the kinematic simulation, the exploration target is given in red. In the real-world experiment, the robot explores the cube region in dashed lines for localizing a target object (a tennis ball whose location is unknown). Blue, turquoise, and purple spheres are the virtual agents constrained to the 5-th, 6-th, and 7-th links, respectively. The green and yellow arrows show the net virtual force and torque acting on each link's center of mass, calculated by our agent weighting strategy. We further weight the net wrenches acting on the active links by their manipulability index to generate the consensus control action of the robot. ## II Related Work The common challenge in whole-body exploration with a robotic manipulator is the inherent curse of dimensionality. Indeed, a naive exploration based on random actions quickly becomes infeasible and instead requires intelligent strategies leveraging prior information [3]. In settings with minimal uncertainty about the task priors, the best approach is to use coverage path planning (CPP) [7, 23] or informative path planning (IPP) if the information is unevenly distributed [19, 26]. Nevertheless, planning methods boil down to solving a trajectory optimization problem that is intractable in the most general case. Control methods are robust to uncertainty compared to planning approaches and can be used when the terminal time for planning is unknown [18]. In [17], researchers proposed to use ergodicity for control, a concept that originated from statistical physics and provides a metric measuring the difference between the target distribution and the time-averaged statistics of agent trajectories [1]. This particular method, called _spectral multiscale coverage_ (SMC), minimizes the ergodic metric by matching the Fourier series weights of the target distribution and of the distribution reconstructed from the agent trajectories (coverage). Although the original formulation uses the Dirac delta function as the agent's footprint to simplify the computation of coverage, Ayvali _et al._ [3] proposed using the KL-divergence for arbitrary agent footprints, which also alleviates the need for approximating the target distribution by a finite number of basis functions. Later, researchers used ergodic exploration based on the KL-divergence for active learning of equilibrium policies for dynamical systems [1]. However, these techniques based on the KL-divergence are sampling-based planners, and they were not tested in online control settings. In [21], Shetty _et al._ proposed an online ergodic exploration technique for peg-in-hole tasks. They extended the SMC algorithm by a low-rank approximation called tensor train factorization to scale the computation of the Fourier series weights in the 6-D pose space describing the end-effector location. Despite these various extensions, the approaches based on the SMC method are still impractical for online tasks with dynamic target distributions. A recent alternative to the SMC method is _heat equation-driven area coverage_ (HEDAC) [12]. 
HEDAC encodes the target distribution as a virtual heat source and calculates the potential field resulting from diffusion, modeled as heat conduction in a uniform medium. Virtual heat conduction provides a model to smooth the gradient field and to propagate information about unexplored regions to the agents. Since the method is based on a discrete temperature field, it is possible to include arbitrary target distributions and sensor footprints and to change them on-the-fly, since the coverage values can be imposed on the new distribution. This technique is based on the heat equation (HE), a fundamental partial differential equation (PDE). Solving PDEs on different domains, such as mesh surfaces or point clouds, with explicit or implicit time stepping schemes [16] is a well-studied subject in various fields and has readily available tools for efficient computation. Additionally, it is possible to introduce internal domain boundaries where no heat conduction is allowed, encoding exploration with an embedded obstacle avoidance behavior. For instance, Ivic _et al._ adopted a finite element method to solve the HE on a planar domain with obstacles modeled as internal boundaries [13] using Neumann boundary conditions (BC). They later extended this approach to a three-dimensional setting [11]. To the best of our knowledge, existing work on HEDAC mostly focuses on multi-agent systems, with the only exception being drozBot [15], a robot manipulator drawing artistic portraits. Nevertheless, drozBot only considers the tip of the pen for the coverage problem, whereas we consider the whole body of the manipulator, corresponding to a sensor/tool spanning multiple links, and we take the intricate geometry of each link into account. To do so, we exploit the unique property of HEDAC of performing first local and then global exploration. This property enables discretizing a sensor/tool first into individual links and then into individual agents composing the links, to compute a consensus control action for the whole body of the robot manipulator. If we used any other method that performs first global and then local exploration, such as SMC, even neighboring agents would move in different directions, and we would not be able to find a consensus direction. ## III Whole-Body Consensus Control using Kinematically Constrained Virtual Agents ### _Potential Field as the Solution of the Heat Equation_ We extend the state-of-the-art ergodic control technique HEDAC [12] to obtain the potential field guiding the exploration behavior. As the first step, HEDAC introduces a virtual heat source term encoding the target of the coverage task \[\tilde{s}(\mathbf{x},t)=\max\left(e(\mathbf{x},t),0\right)^{2}, \tag{1}\] where the scalar field \(e(\mathbf{x},t)\) is the spatial distribution of the coverage residual, computed as the difference between the target distribution \(p(\mathbf{x})\) and the time-dependent coverage \(c(\mathbf{x},t)\), namely \[e(\mathbf{x},t)=p(\mathbf{x})-c(\mathbf{x},t). \tag{2}\] We further normalize the source over the domain for proper scaling, using \[s(\mathbf{x},t)=\frac{\tilde{s}(\mathbf{x},t)}{\int_{\Omega}\tilde{s}(\mathbf{x},t)d\mathbf{x}}. \tag{3}\] Next, we diffuse the source term over the domain to propagate the information of unexplored regions to the agents. 
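On a discrete grid, the residual and source computations of Eqs. 1-3 reduce to a few array operations. The following Python sketch is only a minimal illustration of this step, assuming that the target distribution and the coverage are sampled on the same grid; the variable names are ours and not part of the original implementation.

```python
import numpy as np

def source_term(p, c, cell_volume):
    """Virtual heat source of Eqs. (1)-(3) on a discrete grid.

    p           : target distribution p(x) sampled on the grid
    c           : normalized coverage c(x, t) on the same grid
    cell_volume : measure of one grid cell, used for the domain integral
    """
    e = p - c                                  # coverage residual, Eq. (2)
    s_tilde = np.maximum(e, 0.0) ** 2          # unnormalized source, Eq. (1)
    total = np.sum(s_tilde) * cell_volume      # integral over the domain
    return s_tilde / total if total > 0.0 else s_tilde   # normalization, Eq. (3)
```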
To diffuse the source term, we use the heat equation: a second-order partial differential equation that models heat conduction by relating the spatial and time derivatives of a scalar field with \[\frac{du(\mathbf{x},t)}{dt}=\alpha\Delta u(\mathbf{x},t)+\beta s(\mathbf{x},t), \tag{4}\] where \(u(\mathbf{x},t)\) corresponds to the temperature field, \(\alpha\) is the thermal diffusivity, \(\beta\) is the source strength and \(\Delta\) is the Laplacian. As we will further explain in the next section, we omit the sink term responsible for collision avoidance between the agents in the original HEDAC formulation because we constrain the agents to move together. Next, we impose Neumann boundary conditions (BC) corresponding to thermal insulation, so that no information propagates over the domain boundary \[\frac{\partial u}{\partial\mathbf{n}}=0,\quad\text{on}\quad\partial\Omega, \tag{5}\] where \(\Omega\) is an \(n\)-dimensional domain with Lipschitz continuous boundary. ### _Kinematically Constrained Virtual Agents_ We define _virtual agents_ as atomic particles that compose a rigid body and interact with the potential field. Depending on the task, virtual agents abstract a sensor/tool used for physical interaction during exploration. For instance, if we use a tactile sensor array, each virtual agent abstracts an individual tactile sensor. Additionally, we use the term _whole-body_ if the sensor/tool spans multiple bodies on different links of the manipulator. Keeping the same analogy, we call a tactile skin composed of multiple tactile sensor arrays on different links of the manipulator a whole-body sensor. By this construction, we can locate each agent of the whole-body in the potential field using the forward kinematics function \(\mathbf{f}_{\text{kin}}\) of the robot \[\mathbf{x}_{i}=\mathbf{f}_{\text{kin}}(\mathbf{q},i,j)\quad\forall i=1,\dots,n\quad\forall j=1,\dots,m, \tag{6}\] where \(n\) is the number of virtual agents, \(\mathbf{x}_{i}\) is the position of the \(i\)-th agent on the \(j\)-th link and \(\mathbf{q}\) is the vector of joint variables. Notably, all the virtual agents are kinematically constrained on the same robot and share the joint variables. Next, in each iteration of the algorithm, we compute the coverage of the agents. Virtual agents cool the temperature field and mark the regions they cover by updating the coverage distribution \(c_{i}(\mathbf{x},t)\). For the \(i\)-th agent, the coverage is computed by the relation \[c_{i}(\mathbf{x},t)=\int_{0}^{t}\phi(\mathbf{x}-\mathbf{x}_{i}(\tau))d\tau, \tag{7}\] where \(\phi\) is the footprint or shape of the virtual agent. Although it is possible to choose arbitrary shapes, in this work we preferred to use Gaussian radial basis functions (RBF) to comply with the implementations given in [12, 15]. The Gaussian RBF with the adjustable shape parameter \(\epsilon\) is given by \[\phi(\mathbf{x})=e^{-(\epsilon\|\mathbf{x}\|)^{2}}. \tag{8}\] Next, we compute the total coverage containing the exploration effort of all the agents \[\tilde{c}(\mathbf{x},t)=\frac{1}{Nt}\sum_{i=1}^{N}c_{i}(\mathbf{x},t), \tag{9}\] and we further normalize the total coverage \(\tilde{c}(\mathbf{x},t)\) over the domain using (3). Instead of decomposing a body into kinematically constrained virtual agents, we could equivalently use a single agent with the shape of the body. 
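A minimal 2-D sketch of these steps is given below in Python: the Gaussian footprint of Eq. 8, the accumulation of the agent coverage of Eq. 7, and one explicit finite-difference step of the heat equation of Eq. 4 with the insulated boundaries of Eq. 5. The grid, the shape parameter and the time step are assumed values, and the explicit step is only one possible discretization; it is stable only for a time step small enough with respect to \(\Delta x^{2}/\alpha\).

```python
import numpy as np

def rbf_footprint(X, Y, x_i, eps):
    """Gaussian RBF footprint of Eq. (8), phi(x) = exp(-(eps * ||x - x_i||)^2)."""
    return np.exp(-(eps ** 2) * ((X - x_i[0]) ** 2 + (Y - x_i[1]) ** 2))

def accumulate_coverage(C, agent_positions, X, Y, eps, dt):
    """One time step of the coverage integral of Eq. (7), summed over all agents."""
    for x_i in agent_positions:       # positions from the forward kinematics, Eq. (6)
        C += rbf_footprint(X, Y, x_i, eps) * dt
    return C

def heat_step(u, s, alpha, beta, dx, dt):
    """One explicit Euler step of Eq. (4) with Neumann (insulated) boundaries, Eq. (5)."""
    up = np.pad(u, 1, mode='edge')    # edge replication enforces a zero normal derivative
    lap = (up[:-2, 1:-1] + up[2:, 1:-1] + up[1:-1, :-2] + up[1:-1, 2:] - 4.0 * u) / dx ** 2
    return u + dt * (alpha * lap + beta * s)
```

In a full control loop, the accumulated coverage is averaged and normalized as in Eqs. 9 and 3 before recomputing the source term for the next diffusion step.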
However, such a single-agent approach, although equivalent for encoding coverage, is computationally more expensive (it includes many zero entries to match matrix dimensions), and it would not extend to the active agent concept that we introduce next. ### _Active Agents and Local Weighting_ We call _passive_ agents the virtual agents that do not contribute to the control action and whose only effect is to cool down the temperature field. Our motivation for introducing passive agents over the robot body is to include their exploration effort in the total coverage in order to represent the whole body. We use the term _passive_ because these agents explore regions indirectly, as a secondary effect of the robot's primary goal. _Active_ agents, on the other hand, contribute to the control command of the robot with their local information regarding the exploration. According to our model, the potential field exerts a fictitious force on each active agent based on the gradient of the temperature field, and we multiply this force by a weight \(w_{i}\) \[\mathbf{f}_{i}=w_{i}\nabla u(\mathbf{x}_{i}(t),t). \tag{10}\] The naive method of computing the agent weights is to use a uniform weighting strategy, thus assigning equal importance to every agent. However, this is suboptimal since the significance of the agents differs depending on their position in the potential field and on the current state of the potential field itself. The value of the potential field at a given point (the temperature) encodes how much this particular region is underexplored. Accordingly, we embed this information by using the local temperature sensed by the agent as its weight. Hence, agents that are on the frontier of exploration (the ones closer to the underexplored regions) will have a higher weight compared to the agents that are in the overexplored regions. Thus, we set the weight of the \(i\)-th agent as \[\tilde{w}_{i}=u(\mathbf{x}_{i}(t),t), \tag{11}\] which we call the _local weighting strategy_. Note that the local weight is a function of the potential field, i.e. of both space and time, and is therefore computed online. Next, we normalize the weights to make them independent of the number of agents \[w_{i}=\frac{\tilde{w}_{i}}{\sum_{j=1}^{N}\tilde{w}_{j}}. \tag{12}\] We show the difference between the local and uniform weighting strategies in Figure 2. ### _Active Links and Manipulability Weighting_ Similarly to active agents, we call a rigid body composed of active agents an _active link_ and compute the net force and moment exerted on it by all the agents \[\mathbf{F}_{\text{net}}=\sum_{i=1}^{N}\mathbf{f}_{i},\quad\mathbf{M}_{\text{net}}=\sum_{i=1}^{N}\mathbf{r}_{i}\times\mathbf{f}_{i}, \tag{13}\] where \(\mathbf{r}_{i}\) is the displacement vector connecting the active agent and the body's center of mass. We stack the force and moment into a net wrench acting on the \(j\)-th link of the manipulator. For the simplest kinematic control strategy, we set the desired twist of the body \(\mathbf{V}_{\text{des}}\) equal to the net wrench acting on the body \[\mathbf{V}_{\text{des}}=\left[\begin{array}{cc}\mathbf{F}_{\text{net}}&\mathbf{M}_{\text{net}}\end{array}\right]^{\top}, \tag{14}\] corresponding to having an identity inertia. We show the active and passive agents with the corresponding net wrenches/desired twists in Figure 3. Here we generate meaningful consensus twist commands for the active link, since the gradient field exerts similar forces on neighboring agents. 
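The per-link computation of Eqs. 10-14 is compact enough to be summarized in a short sketch. The Python function below is only a minimal illustration for a single active link; the temperature values, the temperature gradients and the agent positions relative to the link's center of mass are assumed to be available from the potential field and from the forward kinematics.

```python
import numpy as np

def link_desired_twist(u_agents, grad_u_agents, r_agents):
    """Desired twist of one active link from its active agents, Eqs. (10)-(14).

    u_agents      : temperature sampled at the agent positions, shape (N,)
    grad_u_agents : temperature gradient at the agent positions, shape (N, 3)
    r_agents      : agent positions relative to the link center of mass, shape (N, 3)
    """
    w = u_agents / np.sum(u_agents)                  # local weights, Eqs. (11)-(12)
    f = w[:, None] * grad_u_agents                   # fictitious agent forces, Eq. (10)
    F_net = f.sum(axis=0)                            # net force, Eq. (13)
    M_net = np.cross(r_agents, f).sum(axis=0)        # net moment, Eq. (13)
    return np.concatenate([F_net, M_net])            # desired twist, Eq. (14)
```

The twists of all the active links are then stacked and weighted by the link weights before solving for the joint velocities, as described next.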
Generating such a consensus is possible because the gradient (force) field resulting from spatial diffusion by virtual heat conduction is smooth, and we perform first local exploration (moving based on the force field) and then global exploration (by the propagation of information as diffusion). Moreover, we ensure that the potential field exerts similar forces on neighboring agents by setting the local cooling term used for collision avoidance in HEDAC to zero. We propose to weight each active link contribution because different links do not have the same _manipulability_ [24] and volume \(\nu\), thus they do not have the same volumetric coverage rate. For that purpose, we compute the link weights using the scalar manipulability index \(\mu\) \[w =\nu\mu, \tag{15}\] \[\text{with}\quad\mu =\sqrt{\det(\mathbf{J}(\mathbf{q})\mathbf{J}(\mathbf{q})^{\top})}, \tag{16}\] where \(\mathbf{J}(\mathbf{q})\) is the Jacobian of the active link computed at its center of mass. ### _Consensus Control for Whole-Body_ Our setting consists of multiple weight-prioritized tasks, encoded as task velocities with corresponding Jacobians, and we would like to perform them in the least-squares optimal sense by exploiting the redundancy of our robot, namely \[\dot{\mathbf{q}}_{\text{des}}=\operatorname*{arg\,min}_{\dot{\mathbf{q}}}\left\|\mathbf{W}^{1/2}\left(\bar{\mathbf{V}}_{\text{des}}-\bar{\mathbf{J}}\dot{\mathbf{q}}\right)\right\|_{2} \tag{17}\] where \(\bar{\cdot}\) corresponds to either horizontally or vertically stacked vectors to match dimensions for matrix multiplication and \(\mathbf{W}=\text{diag}(w_{1},w_{2},\dots,w_{m})\) is the diagonal matrix of active link weights computed with (15) and (16). We enforce the task priorities by using the weighted pseudoinverse matrix \[\bar{\mathbf{J}}^{\dagger\mathbf{W}}=\left(\bar{\mathbf{J}}^{\top}\mathbf{W}\bar{\mathbf{J}}\right)^{-1}\bar{\mathbf{J}}^{\top}\mathbf{W}, \tag{18}\] and then we compute the desired joint velocities as \[\dot{\mathbf{q}}_{\text{des}}=\bar{\mathbf{J}}^{\dagger\mathbf{W}}\bar{\mathbf{V}}_{\text{des}}. \tag{19}\] Although in this work we have used weight-prioritized tasks, in future work we plan to investigate exploiting nullspace projections for hierarchical prioritization. Next, we use the desired joint velocity either to kinematically simulate the robot, \(\mathbf{q}_{t+1}=\mathbf{q}_{t}+\dot{\mathbf{q}}_{\text{des}}\Delta t\), or as a desired joint velocity for an impedance controller used with a torque-controlled robot. Then, we clamp the desired joint positions as the simplest strategy to comply with the joint limits. We give the full procedure for robot control in Algorithm 1. ## IV Whole-Body Exploration ### _Simulated Experiments_ We performed kinematic simulations in order to measure the exploration performance. We used the normalized ergodicity over the target distribution as the exploration metric \[\varepsilon=\frac{\|\max\left(e(\mathbf{x},t),0\right)\|_{2}}{\int_{\Omega}p(\mathbf{x})d\mathbf{x}}, \tag{20}\] where \(p(\mathbf{x})\) is the target distribution and \(e(\mathbf{x},t)\) is the residual given by (2). Here, lower values show higher coverage. Fig. 3: Comparison of using active and passive agents to compose a body for exploration. The grid is the potential field where small arrows are the temperature gradients guiding the agents. The exploration target is the green square, where blue dots are the virtual agents and large red arrows are the net wrench acting on the body. The top left figure shows the initial setup both for active and passive agents. 
Passive agents move to the center of the target, whereas active agents also align themselves with the target. Fig. 2: Comparison of uniform and local temperature weighting. The green square is the exploration target and small arrows show the temperature gradient. Blue dots and arrows show the active agents and the force exerted on each agent after weighting. #### IV-B1 Planar Experiments We first present the results of the planar simulations since it is possible to qualitatively show the exploration performance using image colormaps. We tested the coverage performance in three different virtual agent configurations: (i) a _single_ virtual agent corresponding to an independent HEDAC agent, (ii) a _passive_ configuration with agents distributed on the final link of the serial manipulator and (iii) an _active_ configuration, using locally weighted agents distributed on the final link. We plotted the trajectories and the target distribution in Figure 4. Figure 4 shows the effect of the different agent configurations on the exploration trajectory. Although the passive configuration increases the explored area compared to the single agent, most of this exploration is not in line with the target, making it irrelevant in terms of exploration performance. Active agents align themselves with the target distribution for most of their trajectories, and they only get misaligned when rotating for re-alignment, thus increasing the exploration performance. Next, we performed experiments using two different target distributions: (i) a discretized Gaussian mixture model (GMM) and (ii) a hand-sketched 'X' shape (shown in the top right corners of Figure 5). For this experiment, we randomly sampled 100 valid initial joint configurations from a uniform distribution. Then, we simulated each virtual agent configuration and target distribution starting from these initial configurations. We stopped the simulation after 800 timesteps, although the exploration behavior would continue indefinitely, and we calculated the mean and standard deviation of the normalized ergodic metric for each setup and plotted the results in Figure 5. We chose a GMM as the first target distribution in order to simulate settings where we only have a rough prior on the exploration target. Figure 4(a) shows an evident performance increase in this scenario when using multiple agents (passive, active) instead of a single agent, in line with our motivation to use the whole body for exploration. Next, we designed a much more restrictive task using an 'X' shape as the target distribution, which better showcases the performance difference between the passive and active agent configurations. Figure 4 shows that, when using passive agents, only a few agents stay on the target distribution. This comes from the agents not being aligned with the shape, whereas locally weighted active agents align themselves with the target and use all the agents effectively. Accordingly, we observe a significant performance increase for the active agent configuration in Figure 4(b). Moreover, after a certain timestep (\(\approx 400\)), even the single agent configuration surpasses the performance of the passive configuration because the passive agents, in spite of not helping with the exploration, also disturb the diffused heat coming from the top left of the 'X'. As a result, the passive agents do not cover the top left of the distribution, as seen in Figure 4(b). #### IV-B2 Three-Dimensional Experiments In the planar experiments, we presented the performance gain of using active agents compared to the original HEDAC. 
Fig. 4: Planar exploration by virtual agents in single, passive, and active configurations. The black shape is the target distribution, colored lines are agent trajectories, and dashed lines on the right figure show the configuration of the planar manipulator at equally spaced timesteps moving from red to blue.

In the 3D experiments, we measure the additional benefit of introducing multiple active links on the exploration performance. We assumed no prior for the exploration target, hence we used a uniform distribution. We placed the target cube in front of a 7-axis Franka Emika robot and sampled active agents on links 5, 6, and 7 using Poisson-disk sampling. Similarly to the planar experiments, we used pre-sampled joint configurations, ran the experiments starting from the same initial configuration for each virtual agent configuration, and plotted the results in Figure 6. During the experiments, we attained a control frequency of \(200\) Hz for a single agent and \(20\) Hz for a configuration with \(800\) active agents on links \(5\), \(6\), and \(7\) on a laptop processor. As Figure 6 clearly shows, using a single point to explore a volume is extremely time-inefficient. On the other hand, using multiple agents on multiple links drastically increases the exploration performance, as expected. Using the last three links performs significantly better than using a single link. However, this additional gain diminishes as we move on to links with less manipulability. Consequently, we observed a negligible performance gain beyond the \(5\)-th link in our experiments.

### _Real-world Experiment_

For the real-world experiments, we used a 7-axis Franka Emika robot in an object localization task. We placed a tennis ball inside the target distribution as the target object (whose location is unknown to the robot) and used the same target distribution and initial configurations as in the three-dimensional kinematic simulations. We ran the experiment until one of the links made contact with the target object. We registered contact events using the joint torque sensors of the robot. We provide the experiment setup in Figure 7, and the full video is available as supplementary material. The real-world experiment demonstrates the applicability of the method in a realistic scenario and confirms the validity of the results given in Figure 6.

Fig. 5: Coverage performance given by the normalized ergodic metric for different virtual agent configurations. Target distributions for the coverage task are given in the top right corners.

Fig. 6: Coverage performance given by the normalized ergodic metric for different virtual agent configurations. The target distribution is a cube discretized on a \(50\times 50\times 50\) grid where each point corresponds to \(1\) cm.

Fig. 7: Real-world experiment of the robot exploring the cube in dashed lines using its last three links, until either b) link \(5\), c) link \(6\), or d) link \(7\) contacts the target object.

## V Discussion and Conclusion

In this letter, we presented a robot control method for efficiently exploring a target distribution using a robotic manipulator's whole body. Our method extends heat equation-driven area coverage, a state-of-the-art coverage method that uses spatial diffusion of a virtual heat source to encode coverage tasks.
Unlike existing HEDAC approaches that use independent agents to abstract a multi-agent system, we used kinematically constrained agents on the links of a robotic manipulator, and we introduced active agents, enabling us to consider the shape and kinematic relations of the whole-body sensor/tool during exploration. We used active agents to compose active links through a weighting strategy that exploits the exploration information embedded in the potential field. Next, we composed active links into a whole body by incorporating the volumetric coverage rate of each link via the active link's manipulability index and volume. Lastly, we measured the performance of our method in terms of ergodicity in kinematic simulations and demonstrated its applicability in a physical scenario using the 7-axis Franka Emika robot in an object localization task. A potential limitation of our approach is that it does not explicitly consider joint limits during exploration and only clamps the desired joint positions. Although we showcased the method in this paper using a robot arm with joint torque sensors, other potential use cases include multi-finger anthropomorphic hands and manipulators equipped with tactile sensors and/or a mobile base. In future work, we aim to extend the method to informative planning tasks by considering joint limits and recursive target distribution updates. By doing so, we intend to use the manipulator's whole body or sensor geometry to reconstruct or localize physical properties of target objects that can only be measured through contact, such as deformability, articulation, or friction.
2304.07134
Pool Inference Attacks on Local Differential Privacy: Quantifying the Privacy Guarantees of Apple's Count Mean Sketch in Practice
Behavioral data generated by users' devices, ranging from emoji use to pages visited, are collected at scale to improve apps and services. These data, however, contain fine-grained records and can reveal sensitive information about individual users. Local differential privacy has been used by companies as a solution to collect data from users while preserving privacy. We here first introduce pool inference attacks, where an adversary has access to a user's obfuscated data, defines pools of objects, and exploits the user's polarized behavior in multiple data collections to infer the user's preferred pool. Second, we instantiate this attack against Count Mean Sketch, a local differential privacy mechanism proposed by Apple and deployed in iOS and Mac OS devices, using a Bayesian model. Using Apple's parameters for the privacy loss $\varepsilon$, we then consider two specific attacks: one in the emojis setting -- where an adversary aims at inferring a user's preferred skin tone for emojis -- and one against visited websites -- where an adversary wants to learn the political orientation of a user from the news websites they visit. In both cases, we show the attack to be much more effective than a random guess when the adversary collects enough data. We find that users with high polarization and relevant interest are significantly more vulnerable, and we show that our attack is well-calibrated, allowing the adversary to target such vulnerable users. We finally validate our results for the emojis setting using user data from Twitter. Taken together, our results show that pool inference attacks are a concern for data protected by local differential privacy mechanisms with a large $\varepsilon$, emphasizing the need for additional technical safeguards and the need for more research on how to apply local differential privacy for multiple collections.
Andrea Gadotti, Florimond Houssiau, Meenatchi Sundaram Muthu Selva Annamalai, Yves-Alexandre de Montjoye
2023-04-14T13:52:09Z
http://arxiv.org/abs/2304.07134v1
Pool Inference Attacks on Local Differential Privacy: Quantifying the Privacy Guarantees of Apple's Count Mean Sketch in Practice ###### Abstract Behavioral data generated by users' devices, ranging from emoji use to pages visited, are collected at scale to improve apps and services. These data, however, contain fine-grained records and can reveal sensitive information about individual users. Local differential privacy has been used by companies as a solution to collect data from users while preserving privacy. We here first introduce pool inference attacks, where an adversary has access to a user's obfuscated data, defines pools of objects, and exploits the user's polarized behavior in multiple data collections to infer the user's preferred pool. Second, we instantiate this attack against Count Mean Sketch, a local differential privacy mechanism proposed by Apple and deployed in iOS and Mac OS devices, using a Bayesian model. Using Apple's parameters for the privacy loss \(\epsilon\), we then consider two specific attacks: one in the emojis setting -- where an adversary aims at inferring a user's preferred skin tone for emojis -- and one against visited websites -- where an adversary wants to learn the political orientation of a user from the news websites they visit. In both cases, we show the attack to be much more effective than a random guess when the adversary collects enough data. We find that users with high polarization and relevant interest are significantly more vulnerable, and we show that our attack is well-calibrated, allowing the adversary to target such vulnerable users. We finally validate our results for the emojis setting using user data from Twitter. Taken together, our results show that pool inference attacks are a concern for data protected by local differential privacy mechanisms with a large \(\epsilon\), emphasizing the need for additional technical safeguards and the need for more research on how to apply local differential privacy for multiple collections. ## 1 Introduction User's behavioral data, ranging from words typed to processes running on the phone, are collected by operating systems, apps, and services. This data allows companies to better understand user behavior, detect issues, and ultimately improve services. For instance, iOS and Mac OS devices keep track of websites that the user visits using the Safari browser, together with the user's preferences on videos that play automatically when the page is loaded [4]. Aggregated over millions of users, this data allows Apple to learn on which websites the users generally want videos to play automatically and to set default auto-play policies in Safari [4]. Local differential privacy, a variation of differential privacy, is among the main solutions for such data collection. Mechanisms satisfying local differential privacy avoid users having to trust anyone, including the data curator. The mechanism takes as input the original data recorded on the user device Figure 1: Example of pools defined on a universe \(\Omega\) consisting of emojis, when the adversary is interested in determining the skin tone that is most often selected by the user. In this case, Usr’s preferred pool is the one containing medium-light skin tone emojis. (original objects) and shares with the curator a randomized version (obfuscated objects) which should not reveal (almost) anything about the original information [13, 21, 41]. 
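To make the local obfuscation step concrete, the snippet below shows a classical \(\epsilon\)-local differential privacy mechanism, k-ary randomized response, applied on-device before anything is shared with the curator. This is only an illustration of the general principle described above, under our own naming; it is not Apple's Count Mean Sketch mechanism, which is defined in Section 2.

```python
import math
import random

def k_randomized_response(x, universe, epsilon):
    """Report the true object with probability e^eps / (e^eps + |universe| - 1),
    otherwise report one of the other objects uniformly at random.
    This mechanism satisfies eps-local differential privacy."""
    k = len(universe)
    p_truth = math.exp(epsilon) / (math.exp(epsilon) + k - 1)
    if random.random() < p_truth:
        return x
    return random.choice([y for y in universe if y != x])

# Example (hypothetical domains): the device obfuscates a visited website
# locally; only the obfuscated value ever leaves the device.
domains = ["news-a.example", "news-b.example", "news-c.example"]
obfuscated = k_randomized_response("news-a.example", domains, epsilon=1.0)
```

CMS, described in Section 2, follows the same client-side principle but first compresses the object with a randomly chosen hash function before flipping bits.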
A large literature exists on mechanisms satisfying local differential privacy [41] and some mechanisms have been deployed at scale by Google [15], Microsoft [11], and Apple [4]. One of these mechanisms is Count Mean Sketch (CMS). CMS is used on iOS and Mac OS devices to report both emojis used and websites visited to Apple. Featured in Apple's keynote, local differential privacy allows the company to "help discover the usage patterns of a large number of users without compromising individual privacy" [17]. This implementation, and in particular Apple's choice of the parameter \(\epsilon\), has come under criticism from privacy researchers. It is generally believed that \(\epsilon\) -- which controls the _privacy loss_ incurred by the user -- should typically not exceed \(\ln(3)\) (\(\approx 1.10\)) [14]. Soon after the technology was deployed, it was found that Apple's implementation uses \(\epsilon=4\) when collecting emoji usage data and \(\epsilon=8\) when collecting web domain data [34]. Apple's choice to only consider the privacy loss per submission -- once a day for both the web domain and emoji data -- instead of a total privacy loss \(\epsilon_{\text{tot}}\) (after which objects would no longer be collected from the user [2]) has similarly raised concerns on theoretical ground. While Apple states that they remove user identifiers and IP addresses after the obfuscated objects are received by their server [4], this is a measure that relies on trust and hence conflicts with local differential privacy's purpose of protecting against an untrusted curator1. It is indeed well-known that the mathematical guarantees offered by local differential privacy degrade as multiple objects are collected from the same user, something that can be quantified with an upper bound using the Composition Theorem [13] (\(\epsilon_{\text{tot}}\leq\epsilon_{1}+\ldots+\epsilon_{n}\)). Regardless of how revealing the user's original data may be, a low \(\epsilon_{\text{tot}}\) would guarantee that the obfuscated data will never leak much information. However, \(\epsilon_{\text{tot}}\) is a worst-case theoretical measure: it is unclear the extent to which collecting multiple objects and using a large \(\epsilon\) for each object open the door to attacks in practice. Footnote 1: If one assumes that Apple removes any identifier — so that objects from the same user are not linked together and cannot be linked back to individual users —, then local differential privacy would be mostly unnecessary in the first place. Collecting the original non-obfuscated objects and removing any identifier would already preserve privacy in most settings. **Pool inference attack.** In this paper we propose the first -- to the best of our knowledge -- quantification of the practical privacy guarantees provided by a deployed local differential privacy mechanism. We design a novel attack against CMS -- which we call _pool inference attack_ -- that works as follows: first, the adversary receives a sequence of obfuscated objects from a user; second, the adversary defines pools of interest for the attack (i.e. disjoint groups of objects); third, the adversary runs the attack to determine the user's preferred pool -- i.e. the pool whose objects are most likely to be selected by the user -- along with a confidence score for the inference. 
In our first use case, the adversary defines the pools to be groups of emojis divided by skin tone (see Figure 1), the goal of the attack being then to infer which is the emoji skin tone used most frequently by the user. **Contributions.** We make the following contributions: _(i)_ We propose pool inference attacks, a new class of attacks aiming at quantifying the sensitive information leaked by local differential privacy mechanisms in practice. We formalize the attack model as a game which can be applied to any mechanism that obfuscates objects independently. _(ii)_ We propose a general Bayesian model for pool inference attacks that can be adapted to most local differential privacy mechanisms. The attack uses a hierarchical probability model that simultaneously encodes properties of the user's behavior, the obfuscation of the mechanism, and auxiliary information that may be available to the adversary. _(iii)_ We instantiate the attack against synthetic users in two practical settings where the adversary's goal is to infer user preferences (1) for emoji skin tone or (2) political news website. We study the impact that properties of user behavior -- such as polarization -- have on the attack's effectiveness, and show that our attack can estimate the probability that its output is correct. We also show that, in some cases, CMS provides little protection compared to a scenario where the user simply submits the true object without any local differential privacy. _(iv)_ We simulate the attack in the emojis setting using data from Twitter, and find it to be very effective on users who frequently select emojis supporting skin tones. _(v)_ We discuss potential solutions and mitigation strategies that may prevent our attack or make it less effective. ## 2 Background We now define local differential privacy and the CMS algorithm, introducing the notation that will be used in the paper. **Local differential privacy [22].** A local differential privacy mechanism is a randomized algorithm that takes as input an _original object_ from a set \(\Omega\) and returns an _obfuscated object_ from a set \(\mathcal{Y}\). For example, \(\Omega\) could be the set of all emojis and \(\mathcal{Y}\) could be the set of binary vectors of a fixed length. We call \(\Omega\) the _universe of (original) objects_ and \(\mathcal{Y}\) the _space of obfuscated objects_. Intuitively, the algorithm enforces local differential privacy if the probability that an input produces a certain output is roughly equal for all inputs. Formally: _Let \(\mathcal{A}\colon\Omega\to\mathcal{Y}\) be a randomized mechanism. \(\mathcal{A}\) satisfies \(\epsilon\)-local differential privacy if \(e^{-\epsilon}\Pr[\mathcal{A}(x^{\prime})=y]\leq\Pr[\mathcal{A}(x)=y]\leq e^{ \epsilon}\Pr[\mathcal{A}(x^{\prime})=y]\) for any inputs \(x,x^{\prime}\in\Omega\) and output \(y\in\mathcal{Y}\)._ We abbreviate the obfuscated object \(\mathcal{A}(x)\) with \(\widetilde{x}\). **Count Mean Sketch [4].** CMS takes as input objects in the universe \(\Omega\) that the user has selected (e.g. emojis inserted while typing a message) and returns a binary vector of length \(m\) (together with an index), where \(m\) is typically much smaller than \(|\Omega|\). 
It uses a family \(\mathcal{H}=\{h_{1},\ldots,h_{|\mathcal{H}|}\}\) of hash functions that map each object in \(\Omega\) to an integer in \(\{1,\ldots,m\}\) Given an original object \(x\in\Omega\), CMS samples uniformly at random a hash function \(h_{j}\in\mathcal{H}\) and produces the one-hot vector \(v_{x}^{h_{j}}\) of size \(m\) which is \(1\) at position \(h_{j}(x)\) and \(0\) in all other entries. The vector \(v_{x}^{h_{j}}\) can be seen as a compressed version of \(x\). Each bit of \(v_{x}^{h_{j}}\) is then randomly flipped with probability \(1/(1+e^{\epsilon/2})\) or left unchanged with the remaining probability \(e^{\epsilon/2}/(1+e^{\epsilon/2})\), obtaining the obfuscated vector \(\tilde{v}_{x}^{h_{j}}\). The output of CMS consists of the obfuscated vector and the index of the hash function used to compute it: \[\widetilde{x}=\text{CMS}(x;\,\epsilon,m,\mathcal{H})=(\tilde{v}_{x}^{h_{j}},j)\] CMS satisfies \(\epsilon\)-local differential privacy for any \(\epsilon>0\)[4]. The parameters \(\epsilon\), \(m\) and \(\mathcal{H}\) used by CMS on users' devices are typically set by the data curator. In particular, smaller \(\epsilon\) yield lower accuracy, but give better privacy guarantees. Moreover, the hash functions satisfy some technical properties that ensure their behavior is tractable with probabilistic methods -- see Appendix A.1 for this and other details on CMS. We note that the use of hash functions is not necessary to achieve local differential privacy, but they make CMS more space-efficient and offer additional privacy protection due to hash collisions2. In fact, even if no bits are flipped, there are often many original objects producing the same one-hot vector, with the exact number depending on \(m\) and on the hash function. Collisions make it impossible to infer the original object from the obfuscated object. However, if the user is likely to select most objects from a specific set (pool), after multiple observations this fact can be inferred despite hash collisions. This is the intuition behind our attack. Footnote 2: We note that the additional protection coming from collisions is not captured by the privacy loss \(\epsilon\), and hence requires practical attacks like ours to be quantified. ## 3 Pool inference attacks against local differential privacy We define a new general attack model against local differential privacy mechanisms, that we call _pool inference attack model_. We then propose an attack for this attack model, which we call the Bayesian Pool Inference Attack (BPIA). ### Formalizing the pool inference attack model We consider an attack where objects are semantically grouped in _pools_ (e.g. skin tone of emojis, political orientation of news websites), and the adversary tries to infer which pool a target user samples from most frequently (their _preferred pool_). Formally, we define the pool inference attack model as a game between an adversary Adv and a target user Usr who obfuscates their data with a mechanism \(\mathcal{A}\). We model the user behavior as a probability distribution \(\Phi_{\text{Usr}}\) over the universe \(\Omega\), reflecting the target user's preferences for the objects in \(\Omega\) -- i.e. the probability that Usr selects a certain object. **Pool Inference Game.** * _Step 1._ Usr samples \(n\) original objects \(x_{1},\dots,x_{n}\) independently according to \(\Phi_{\text{Usr}}\). Then, Usr runs \(\mathcal{A}(x_{t})\) independently on each \(x_{t}\), producing the obfuscated objects \(\widetilde{x}_{1},\dots,\widetilde{x}_{n}\). 
* _Step 2._ Adv selects \(k\)_pools of interest_\(P_{1},\dots,P_{k}\subseteq\Omega\), which are pairwise disjoint subsets of \(\Omega\) that can have arbitrary and different sizes. * _Step 3._ Usr sends \(\widetilde{x}_{1},\dots,\widetilde{x}_{n}\) to Adv. * _Step 4._ Adv runs an attack that returns one pool \(\widehat{P}_{\text{Usr}}\in\{P_{1},\dots,P_{k}\}\), which we call Adv's _estimated preferred pool_. Adv wins the game if \(\widehat{P}_{\text{Usr}}=P_{\text{Usr}}\), where \[P_{\text{Usr}}\stackrel{{\text{def}}}{{=}}\operatorname*{arg\,max} _{P_{1},\dots,P_{k}}\Phi_{\text{Usr}}(P_{i})\] is the user's (true) _preferred pool among_\(P_{1},\dots,P_{k}\). Without loss of generality, we always assume that the preferred pool \(P_{\text{Usr}}\) is unique, i.e. \(\Phi_{\text{Usr}}(P_{i})<\Phi_{\text{Usr}}(P_{\text{Usr}})\) for all \(P_{i}\neq P_{\text{Usr}}\). We refer to all the pools in \(\{P_{i}\colon\,P_{i}\neq P_{\text{Usr}}\}\) as _al \begin{table} \begin{tabular}{r l l} \hline \hline **Symbol** & **Description** & **Known to Adv** \\ \hline Adv & Adversary (runs the attack) & \\ Usr & User (target of the attack) & \\ \hline \(\Omega\) & Universe of (original) objects & Yes \\ \(\mathcal{Y}\) & Space of obfuscated objects & Yes \\ \(\mathcal{A}\) & Mechanism & Yes \\ CMS & Count Mean Sketch mechanism & Yes \\ \(\epsilon\) & Privacy loss parameter & Yes \\ \(\mathcal{H}\) & Family of hash functions & Yes \\ \(m\) & Length of obfuscated vector & Yes \\ \hline \(n\) & Number of observations & Yes \\ \(x_{1},\dots,x_{n}\) & Original objects & No \\ \(\Phi_{\text{Usr}}\) & Usr’s behavior & No \\ \(P_{\text{Usr}}\) & Usr’s preferred pool & No \\ \(\{P_{i}\colon P_{i}\neq P_{\text{Usr}}\}\) & Usr’s alternative pools & No \\ \(\widetilde{\Phi}_{\text{Usr}}\) & Usr’s relevant interest & No \\ \(\widetilde{\Phi}_{\text{Usr}}\) & Usr’s polarization & No \\ \(p_{0}\) & True object popularity & No \\ \hline \(\widetilde{x}_{1},\dots,\widetilde{x}_{n}\) & Obtained objects (or observations) & Yes \\ \(P_{1},\dots,P_{k}\) & Adv’s pools of interest & Yes \\ \(\Omega\setminus\cup_{i=1}^{k}P_{2}\) & Neutral pool & Yes \\ \(\mathcal{Z}\) & Adv’s auxiliary information & Yes \\ score(\(P_{i}\)) & Adv’s score for pool \(P_{i}\) & Yes \\ \(\widehat{P}_{\text{Usr}}\) & Adv’s estimated preferred pool & Yes \\ \(\text{conf}(\widehat{P}_{\text{Usr}})\) & Adv’s confidence value & Yes \\ \(\widehat{\Phi}_{\text{Usr}}\) & Adv’s user representation & Yes \\ \(\widehat{\rho}_{\text{Usr}}\) & Adv’s estimated object popularity & Yes\({}^{3}\) \\ \hline \hline \end{tabular} \end{table} Table 1: Notation and definitions. We indicate which elements are known to the adversary according to the pool inference attack model. ternative pools_. We also define the _neutral pool_ as the set \(\Omega\setminus\cup_{i=1}^{t}P_{i}\) of all objects not in any pool, and call its elements _neutral objects_. Figure 1 provides an illustration of these definitions where the pools are defined in a universe of emojis grouped by skin tone. We note that, in practice, Adv could repeat the attack in Step 4 using the same obfuscated objects received in Step 3, but using different pools. For example, the attack could be first run using pools for skin tone, and then again using pools grouped by gender. While this may be a likely case in a real-world setting where the adversary may try to infer as much sensitive information as possible, in this paper we limit our analysis to the case when the attack is run only once for each user (i.e. 
for each instance of the game). **Adversary's knowledge.** No information is shared from Usr to Adv or vice versa, except in Step 3, where Usr sends the obfuscated objects to Adv. The pools defined by the adversary do not depend on the objects sampled by the user (and vice versa). The only information that Adv knows about Usr are the obfuscated objects \(\widetilde{x}_{1},\ldots,\widetilde{x}_{n}\), which we call _observations_. We also admit the possibility that Adv has access to some auxiliary information \(\mathcal{I}\), which represents general knowledge about the population (not about Usr specifically) that can be used in the attack in Step 4. Finally, we assume that Adv knows the universe \(\Omega\), the privacy loss \(\epsilon\), and any other internal parameter used when applying \(\mathcal{A}\) -- a standard assumption for attacks, where the specifications of the system are assumed to be public. Table 1 summarizes the notation and what is known to the adversary. **Behavioral parameters.** Usr's behavior determines how vulnerable they are to a pool inference attack: Usr might mostly use objects in the neutral pool, or their preference for their preferred pool might not be strong. For example, Usr might use skin-toned emojis only very rarely; moreover, regardless of the relevant interest, it might be that Usr selects the medium-light skin tone more frequently, but actually selects other skin tones often as well. To capture these properties of Usr's behavior, we define two _behavioral parameters_: the _relevant interest_\(\gamma_{\text{Usr}}\) (how often Usr samples from pools of interest) and the _polarization_\(\delta_{\text{Usr}}\) (among pools of interest, how often Usr samples from their preferred pool). Formally: \[\gamma_{\text{Usr}}\overset{\text{def}}{=}\Phi_{\text{Usr}}\left(\bigcup_{i= 1}^{k}P_{i}\right)\quad\text{and}\quad\delta_{\text{Usr}}\overset{\text{def}} {=}\frac{1}{\gamma_{\text{Usr}}}\Phi_{\text{Usr}}(P_{\text{Usr}})\] which satisfy \(0<\gamma_{\text{Usr}}\leq 1\) and \(\frac{1}{k}<\delta_{\text{Usr}}\leq 1\) (since \(P_{\text{Usr}}\) is maximal with respect to \(\Phi_{\text{Usr}}\)). While these parameters are unknown to Adv, they are useful to describe each game instance and characterize the user's behavior. In section 4, we show that these parameters capture how vulnerable the target user is _with respect to the specific set of pools chosen by Adv_. ### BPIA: A Bayesian pool inference attack We propose an attack using Bayesian inference for the pool inference attack model, that we call BPIA (_Bayesian Pool Inference Attack_). We first summarize the intuition behind the attack. Given Usr's obfuscated objects \(\widetilde{x}_{1},\ldots,\widetilde{x}_{n}\), BPIA uses Bayesian inference to compute, for each pool \(P_{i}\), the a posteriori probability that \(P_{i}\) is Usr's preferred pool: \[\Pr[P_{\text{Usr}}=P_{i}\mid\widetilde{x}_{1},\ldots,\widetilde{x}_{n}] \tag{1}\] To compute this probability, BPIA must take into account (1) the uncertainty on Usr's preferred pool and behavioral parameters, (2) the randomness of Usr's behavior, and (3) the randomness of the mechanism \(\mathcal{A}\). To do this, BPIA uses a hierarchical model that combines the three types of uncertainty. In particular, for (2), BPIA would ideally use the user behavior \(\Phi_{\text{Usr}}\), but this is unknown to Adv. 
Instead, BPIA uses a function \(\overline{\Phi}\) -- that we call _user representation_ -- parameterized by three parameters \(\gamma,\delta\) and \(\iota\), which models a simple user behavior. We now give the details of the hierarchical model, the user representation and BPIA's output. **Hierarchical model.** We propose a general hierarchical model \(\mathcal{M}=(\mathcal{A},\overline{\Phi},\mathcal{I})\), where \(\mathcal{A}\) is the obfuscation mechanism, \(\overline{\Phi}\) is a _user representation_ of the (unknown) user behavior, and \(\mathcal{I}\) is some additional auxiliary information that contains general facts about the population (see below). Figure 2: Diagram summarizing the Bayesian Pool Inference Attack (BPIA). The user representation is a distribution \(\Phi(x\mid\mathtt{t},\gamma,\delta,\mathcal{I})\), parameterized by \(\mathtt{t}\in\{1,\ldots,k\}\) (the preferred pool), \(\gamma\in(0,1]\), \(\delta\in(1/k,1]\) (behavioral parameters), and the auxiliary information \(\mathcal{I}\). The function \(\overline{\Phi}(x\mid\mathtt{t},\gamma,\delta,\mathcal{I})\) gives the (assumed) probability of choosing an original object \(x\) if the user has \(P_{\mathtt{t}}\) as their preferred pool, behavioral parameters \(\gamma\) and \(\delta\), and subject to additional auxiliary information \(\mathcal{I}\). Intuitively, \(\mathcal{M}\) models a user who is first assigned the preferred pool \(P_{\mathtt{t}}\), the relevant interest \(\gamma\) and the polarization \(\delta\) uniformly at random; then, the user samples \(n\) original objects independently according to \(\overline{\Phi}(\cdot\mid\mathtt{t},\gamma,\delta,\mathcal{I})\); and finally obfuscates them using \(\mathcal{A}\). Formally, \(\mathcal{M}\) is given by three hyperparameters \(\mathtt{t}\), \(\gamma\), \(\delta\), the random variable \((X_{1},\ldots,X_{n})\) representing the sampling of the original objects, and the random variable \((\widetilde{X}_{1},\ldots,\widetilde{X}_{n})\) denoting its randomly obfuscated version, with the auxiliary information \(\mathcal{I}\) being treated as a fixed parameter: \[\mathtt{t} \sim\text{Uniform}(\{1,\ldots,k\})\] \[\gamma \sim\text{Uniform}((0,1])\] \[\delta \sim\text{Uniform}((1/k,1])\] \[X_{t}\mid\mathtt{t},\gamma,\delta \sim\overline{\Phi}(\cdot\mid\mathtt{t},\gamma,\delta,\mathcal{I })\quad\forall t\in\{1,\ldots,n\}\] \[\widetilde{X}_{1},\ldots,\widetilde{X}_{n}\mid X_{1},\ldots,X_{n} \sim\mathcal{A}(X_{1}),\ldots,\mathcal{A}(X_{n})\] Using this model, the adversary is able to compute the probabilities in eq. 1: \(\Pr_{\mathcal{M}}[P_{\mathtt{Usr}}=P_{\mathtt{t}}\mid\widetilde{x}_{1}, \ldots,\widetilde{x}_{n}]\). While in this model the hyperparameters \(\mathtt{t}\), \(\gamma\), and \(\delta\) are uniformly distributed -- reflecting an adversary who has no informative prior on \(\Pr_{\mathtt{Usr}}\), \(\gamma_{\mathtt{Usr}}\), and \(\delta_{\mathtt{Usr}}\) -- this could likely be improved in practical settings where the adversary may have access to additional sources of information (see Appendix A.10). 
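To fix ideas, the generative structure of \(\mathcal{M}\) can be written down directly as a forward sampler; the concrete form of the user representation \(\overline{\Phi}\) is given in the next paragraph, so it is passed in here as a function. All names in this minimal sketch are our own illustrative choices, not part of the paper's implementation.

```python
import numpy as np

def sample_from_model(n, k, user_repr, mechanism, objects, rng):
    """One draw from the hierarchical model M: sample the hyperparameters
    (preferred-pool index ~ Uniform({1,...,k}), gamma ~ Uniform((0,1]),
    delta ~ Uniform((1/k,1])), then n original objects from the user
    representation, and their obfuscated versions from the mechanism A.
    (Open vs. closed interval endpoints are immaterial for continuous sampling.)"""
    iota = int(rng.integers(1, k + 1))      # index of the preferred pool
    gamma = rng.uniform(0.0, 1.0)           # relevant interest
    delta = rng.uniform(1.0 / k, 1.0)       # polarization
    probs = np.array([user_repr(x, iota, gamma, delta) for x in objects], dtype=float)
    originals = rng.choice(np.array(objects, dtype=object), size=n, p=probs / probs.sum())
    obfuscated = [mechanism(x) for x in originals]
    return iota, gamma, delta, list(originals), obfuscated
```

In the attack, this generative process is not sampled from but inverted: the scores of eq. 3 are posterior probabilities of the preferred-pool index under exactly this model.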
User representation.Our user representation \(\overline{\Phi}(x\mid\mathtt{t},\gamma,\delta,\hat{p}_{\Omega})\) models the user assuming the following behavior: the user first chooses a pool (the neutral pool with probability \(1-\gamma\), their preferred pool \(P_{\mathtt{t}}\) with probability \(\gamma\delta\), or any of the alternative pools with equal probability \(\frac{1}{k-1}\gamma(1-\delta)\)), then samples an object from the selected pool according to some _estimated object popularity_\(\hat{p}_{\Omega}\). This object popularity is a distribution over \(\Omega\) that -- intuitively -- captures the differences in likelihood for objects _within the same pool_. For example, \(\hat{p}_{\Omega}\) can capture the fact that, among emojis with the same skin tone, the thumb-up emoji is much more popular across users than most of the others. We assume that the adversary has access to this estimated object popularity as additional auxiliary information: \(\mathcal{I}=\hat{p}_{\Omega}\). In section 6 we discuss how an adversary could acquire the object popularity from external sources or even estimate it from obfuscated objects collected from other users. Furthermore, when the adversary does not have any auxiliary information, Adv can use an _uninformative_ object popularity \(\hat{p}_{\Omega}\), such as the uniform distribution on \(\Omega\). Formally, the representation \(\overline{\Phi}\) is defined as follows: \[\overline{\Phi}(x\mid\mathtt{t},\gamma,\delta,\hat{p}_{\Omega})=\begin{cases} \gamma\delta\frac{\hat{p}_{\Omega}(x)}{\hat{p}_{\Omega}(R)}&\text{if }x\in P_{ \mathtt{t}}\\ \frac{1}{k-1}\gamma(1-\delta)\frac{\hat{p}_{\Omega}(x)}{\hat{p}_{\Omega}(P_{ \mathtt{t}})}&\text{if }x\in P_{\mathtt{t}},i\neq 1\\ (1-\gamma)\frac{\hat{p}_{\Omega}(x)}{\hat{p}_{\Omega}(\Omega\setminus\bigcup_{i =1}^{k}P_{\mathtt{t}})}&\text{if }x\in\Omega\setminus\bigcup_{i=1}^{k}P_{ \mathtt{t}}\end{cases} \tag{2}\] We note that in the equation, \(\hat{p}_{\Omega}(x)\) is always normalized by the total popularity of the pool that \(x\) belongs to. In other words, \(\hat{p}_{\Omega}(x)\) is used exclusively to differentiate the probability of different objects within the same pool -- it has no effect on the overall probability that \(\overline{\Phi}\) assigns to a pool (and hence to the pool's score, see next paragraph). We emphasize that the user representation \(\overline{\Phi}(x\mid\mathtt{t},\gamma,\delta,\hat{p}_{\Omega})\) is a simple _model_ for the user's behavior: Adv does not know whether the representation correctly describes the actual user behavior \(\Phi_{\mathtt{Usr}}\), and does not know the exact value of \(P_{\mathtt{Usr}}\), \(\gamma_{\mathtt{Usr}}\), and \(\delta_{\mathtt{Usr}}\). In particular, our user representation does not account for (1) individual preferences within pools differing from \(\hat{p}_{\Omega}\), and (2) preferences between non-preferred pools (since our model assumes that the user selects among alternative pools uniformly at random). In Appendix A.4 we present some results that quantify how the correctness of the user representation affects the effectiveness of the attack. **Maximum a posteriori estimate.** The attack attempts to find the user's preferred pool from their obfuscated objects by computing the posterior probability of each pool. 
For each pool \(P_{\mathtt{t}}\), the adversary computes a _score_ proportional to the conditional probability that \(P_{\mathtt{Usr}}=P_{\mathtt{t}}\) under the model \(\mathcal{M}\): \[\text{score}(P_{\mathtt{t}})\propto\Pr_{\mathcal{M}}[P_{\mathtt{Usr}}=P_{ \mathtt{t}}\mid\widetilde{x}_{1},\ldots,\widetilde{x}_{n}] \tag{3}\] The adversary then selects the _maximum a posteriori estimate_ for the user's preferred pool, as the pool with maximal score: \[\widehat{P}_{\mathtt{Usr}}=\operatorname*{arg\,max}_{P_{1},\ldots,P_{\mathtt{ k}}}\text{score}(P_{\mathtt{t}})\] If several pools have maximal score, the estimate is selected uniformly at random from these. The attack also computes a confidence value \(\text{conf}(\widehat{P}_{\mathtt{Usr}})\) quantifying the probability (under the model \(\mathcal{M}\)) that the estimate is correct: \[\text{conf}(\widehat{P}_{\mathtt{Usr}})\stackrel{{\text{def}}}{{=}} \Pr_{\mathcal{M}}[\widehat{P}_{\mathtt{Usr}}=P_{\mathtt{Usr}}\mid\widetilde{x}_{1},\ldots,\widetilde{x}_{n}]=\frac{\text{score}(\widehat{P}_{\mathtt{Usr}})}{ \sum_{i=1}^{k}\text{score}(P_{\mathtt{t}})}\] For an arbitrary confidence threshold \(\mathtt{\tau}\) defined by the adversary, the attack outputs \(\widehat{P}_{\mathtt{Usr}}\) if \(\text{conf}(\widehat{P}_{\mathtt{Usr}})\geq\mathtt{\tau}\) and _null_ otherwise. The threshold \(\mathtt{\tau}\) hence allows the adversary to set the minimum level of confidence that they require to trust the attack's estimate \(\widehat{P}_{\mathtt{Usr}}\). The attack is successful if the estimate is correct, i.e. \(\widehat{P}_{\mathtt{Usr}}=P_{\mathtt{Usr}}\). **Score computation.** Under the model \(\mathcal{M}\), the scores defined in eq. 3 are computed as the probability that \(P_{\mathtt{Usr}}=P_{\mathtt{t}}\) after observing \(\widetilde{x}_{1},\ldots,\widetilde{x}_{n}\), obtained by integrating the conditional distribution over \(\gamma\) and \(\delta\) and applying Bayes's law: \[\text{score}(P_{i})\propto\int_{0}^{1}\int_{1}^{1}\prod_{t=1}^{n}\sum_{z\in \Omega}\Pr_{\mathcal{A}}[\widetilde{x_{t}}\mid z]\;\overline{\Phi}(z\mid i, \gamma,\delta,\hat{p}_{\Omega})\;d\delta\;d\gamma \tag{4}\] The term \(\Pr_{\mathcal{A}}[\widetilde{x_{t}}\mid z]\) is the probability that the output of \(\mathcal{A}(z)\) is the observation \(\widetilde{x_{t}}\). We give a formal proof of correctness in Appendix A.9. We next show how to compute this for CMS. **Attacking CMS.** To execute BPIA against the mechanism \(\mathcal{A}=\text{CMS}\), we need to determine the value of \(\Pr_{\text{CMS}}[\widetilde{x}\mid z]\) for any \(\widetilde{x}\) and any \(z\). First of all, we note that \[\Pr_{\text{CMS}}[\widetilde{x}\mid z]=\Pr_{\text{CMS}}[(\tilde{v}_{x}^{h_{j} },j)\mid z]=\Pr[\tilde{v}_{x}^{h_{j}}\mid j,z]\Pr[j\mid z]\] Since \(j\) is selected uniformly at random, we have that \(\Pr[j\mid z]=\Pr[j]\) is constant for any \(z\) and can be moved outside of the integral in eq. 4. Hence, this is a multiplicative value that is constant across pools and can be ignored. \(\Pr[\tilde{v}_{x}^{h_{j}}\mid j,z]\) is the probability of obtaining the obfuscated vector \(\tilde{v}_{x}^{h_{j}}\) when the original object is \(z\) and the selected hash function is \(h_{j}\). Since Adv knows all CMS parameters -- including the hash functions in \(\mathcal{H}\) -- they can compute the one-hot vector \(v_{z}^{h_{j}}\). The probability is then derived by observing how many bits need to be flipped in order to obtain \(\tilde{v}_{x}^{h_{j}}\) from \(v_{z}^{h_{j}}\), i.e. their Hamming distance. 
Let \(\xi=1/(1+e^{\epsilon/2})\) be the probability of flipping one bit and let \(\|\cdot\|_{1}\) denote the \(L_{1}\) norm. We obtain: \[\Pr[\tilde{v}_{x}^{h_{j}}\mid j,z]=\xi^{\|v_{z}^{h_{j}}-\tilde{v}_{x}^{h_{j}}\|_{1}}(1-\xi)^{m-\|v_{z}^{h_{j}}-\tilde{v}_{x}^{h_{j}}\|_{1}} \tag{5}\] We note that eq. 5, when used to compute the score in eq. 4, automatically captures both the uncertainty coming from the random flipping of bits and the uncertainty coming from hash collisions. For example, if two objects in different pools share the same hash value, it is impossible to tell which of them (if any) was Usr's original object. The attack takes this fact into account when computing the scores for those pools.

## 4 Experiments on synthetic users

In this section we empirically validate our BPIA attack against CMS for synthetic users. For each user, we define the behavior \(\Phi_{\text{Usr}}\) and then use it to sample the original objects. This allows us to evaluate the attack for different user profiles (relevant interest and polarization) and compare the results across different settings.

### Experiment design

We simulate BPIA in various experiment scenarios. Each _experiment scenario_ is defined by the following parameters:

1. the universe \(\Omega\);
2. the CMS parameters \(\epsilon\), \(m\), \(\mathcal{H}\) (see section 2);
3. the pools of interest \(P_{1},\ldots,P_{k}\subseteq\Omega\) picked by Adv for the attack;
4. the _true object popularity_ \(p_{\Omega}\), a distribution on \(\Omega\) (not known to Adv);
5. the _estimated object popularity_ \(\hat{p}_{\Omega}\) (known to Adv);
6. the number of observations \(n\) that Adv has access to.

Using these parameters, we run 150,000 independent instances of the pool inference game, with one (independent) synthetic user per instance. For each user Usr, we sample the user's relevant interest \(\gamma_{\text{Usr}}\) and polarization \(\delta_{\text{Usr}}\) uniformly at random from \((0,1]\) and \((1/k,1]\), respectively. As will become clear from the results, these behavioral parameters strongly impact the success rate of BPIA. Sampling the parameters uniformly allows us to study the effectiveness of the attack on users with different levels of vulnerability.

**User behavior.** We select Usr's preferred pool \(P_{\text{Usr}}\) uniformly at random from \(\{P_{1},\ldots,P_{k}\}\). For each instance of the game, we use the randomly sampled \(\gamma_{\text{Usr}}\), \(\delta_{\text{Usr}}\), and \(P_{\text{Usr}}\) to define Usr's behavior \(\Phi_{\text{Usr}}\), as follows: \[\Phi_{\text{Usr}}(x)\stackrel{{\text{def}}}{{=}}\begin{cases}\gamma_{\text{Usr}}\delta_{\text{Usr}}\frac{p_{\Omega}(x)}{p_{\Omega}(P_{\text{Usr}})}&\text{if }x\in P_{\text{Usr}}\\ \frac{1}{k-1}\gamma_{\text{Usr}}(1-\delta_{\text{Usr}})\frac{p_{\Omega}(x)}{p_{\Omega}(P_{i})}&\text{if }x\in P_{i}\neq P_{\text{Usr}}\\ (1-\gamma_{\text{Usr}})\frac{p_{\Omega}(x)}{p_{\Omega}(\Omega\setminus\cup_{i=1}^{k}P_{i})}&\text{if }x\in\Omega\setminus\cup_{i=1}^{k}P_{i}\end{cases} \tag{6}\] This means that to sample each original object, the user first selects \(P_{\text{Usr}}\) with probability \(\gamma_{\text{Usr}}\delta_{\text{Usr}}\), any other pool of interest with probability \(\frac{1}{k-1}\gamma_{\text{Usr}}(1-\delta_{\text{Usr}})\), and the neutral pool with probability \(1-\gamma_{\text{Usr}}\). Once a pool has been selected, the original object is sampled within that pool according to the object popularity \(p_{\Omega}\).
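The sketch below illustrates, under our own naming conventions, the two computational steps that every game instance involves: the client-side CMS obfuscation of Section 2 and the BPIA scoring of eqs. (4)-(5). It is a minimal illustration, not the implementation used for the 150,000 game instances reported here. We assume `hashes` is a list of hash functions mapping objects to \(\{0,\ldots,m-1\}\), `p_hat` is a dictionary holding the estimated object popularity, `pools` is a list of at least two lists of objects, `neutral` is the list of neutral objects, and `observations` are (vector, hash index) pairs as produced by the obfuscation function. The common constants \(\Pr[j]\) and \((1-\xi)^{m}\) are dropped, since scores are only defined up to proportionality, and the double integral of eq. (4) is approximated by a naive grid over \(\gamma\in(0,1]\) and \(\delta\in(1/k,1]\).

```python
import numpy as np

def cms_obfuscate(x, hashes, m, epsilon, rng):
    """Client-side CMS (Section 2): pick a hash function uniformly at random,
    one-hot encode h_j(x), and flip each bit with probability 1/(1+e^(eps/2))."""
    j = int(rng.integers(len(hashes)))
    v = np.zeros(m)
    v[hashes[j](x)] = 1.0
    xi = 1.0 / (1.0 + np.exp(epsilon / 2.0))
    flips = rng.random(m) < xi
    return np.where(flips, 1.0 - v, v), j

def bpia_scores(observations, pools, neutral, p_hat, hashes, m, epsilon, grid=25):
    """BPIA scores of eq. (4), up to a common constant, with the CMS channel
    probabilities of eq. (5). Returns log-scores and normalized confidences."""
    k = len(pools)                           # assumes k >= 2 pools of interest
    xi = 1.0 / (1.0 + np.exp(epsilon / 2.0))
    r = xi / (1.0 - xi)                      # Pr[obs | z] is proportional to r^distance
    groups = pools + [neutral]
    # lik[t, g] = sum over z in group g of (p_hat(z)/p_hat(group)) * Pr[obs_t | z]
    lik = np.empty((len(observations), k + 1))
    for t, (v_tilde, j) in enumerate(observations):
        s = v_tilde.sum()
        for g, group in enumerate(groups):
            mass = sum(p_hat[z] for z in group)
            lik[t, g] = sum(p_hat[z] / mass * r ** (1 + s - 2 * v_tilde[hashes[j](z)])
                            for z in group)
    gammas = np.linspace(1e-3, 1.0, grid)
    deltas = np.linspace(1.0 / k + 1e-3, 1.0, grid)
    log_scores = np.empty(k)
    for i in range(k):
        logliks = []
        for gamma in gammas:
            for delta in deltas:
                coeff = np.full(k + 1, gamma * (1.0 - delta) / (k - 1))
                coeff[i] = gamma * delta      # candidate preferred pool P_i
                coeff[k] = 1.0 - gamma        # neutral pool
                mixture = np.maximum(lik @ coeff, 1e-300)  # guard against underflow
                logliks.append(np.log(mixture).sum())
        logliks = np.array(logliks)
        log_scores[i] = logliks.max() + np.log(np.exp(logliks - logliks.max()).mean())
    conf = np.exp(log_scores - log_scores.max())
    return log_scores, conf / conf.sum()
```

The estimated preferred pool is the argmax of the returned scores, and the normalized scores correspond to the confidence value \(\text{conf}(\widehat{P}_{\text{Usr}})\) defined above.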
For an instance of the game, we sample \(n\) objects from \(\Phi_{\text{Usr}}\) and obfuscate them with \(\text{CMS}(\cdot\,;\epsilon,m,\mathcal{H})\). Note here that \(\Phi_{\text{Usr}}\) corresponds to Adv's user representation \(\overline{\Phi}\) in eq. 2 but using \(p_{\Omega}\) (as Adv does not know the true popularity \(p_{\Omega}\)). The robustness results we report in Appendix A.4 quantify the effectiveness of the attack when Usr uses a noisy version of \(p_{\Omega}\) instead of the exact one. **Non-private scenario.** To understand the protection provided by CMS against BPIA, we also report results for an idealized scenario where the mechanism \(\mathcal{A}\) simply reveals the original object \(x\) (i.e. \(\mathcal{A}\) is the identity function), and hence Adv has access to the original objects \(x_{1},\ldots,x_{n}\). We refer to this as the _non-private_ scenario. In the non-private scenario, BPIA works in the same way as for CMS but the score in eq. 3 is computed by setting \(\Pr_{\mathcal{A}}[\widetilde{x_{t}}\mid z]=1\) if \(x_{t}=z\) and \(\Pr_{\mathcal{A}}[\widetilde{x_{t}}\mid z]=0\) if \(x_{t}\neq z\). **Baseline.** For each scenario, we report as baseline the attack that always makes a guess (i.e. has fixed confidence score \(\text{conf}=1\)) and returns one of the pools \(P_{1},\ldots,P_{k}\) uniformly at random. Since we select the user's preferred pool uniformly at random in the experiments, the baseline attack is correct with probability \(1/k\). **Types of adversary.** We simulate two types of adversaries: \(\text{Adv}_{weak}\) and \(\text{Adv}_{strong}\). \(\text{Adv}_{strong}\) has access to auxiliary information on objects' popularity \(\hat{p}_{\Omega}\) that approximates \(p_{\Omega}\), while \(\text{Adv}_{weak}\) uses a uniform \(\hat{p}_{\Omega}\). We consider \(\text{Adv}_{strong}\) to represent a realistic scenario for a typical deployment of local differential privacy (see Discussion). Indeed \(\hat{p}_{\Omega}\) can be estimated from auxiliary information derived from an external dataset \(\widetilde{\mathcal{D}}_{ext}\) of CMS-obfuscated objects collected from other users. We here simulate \(\text{Adv}_{strong}\)'s estimation of \(\hat{p}_{\Omega}\) by independently sampling \(N=10^{6}\) original objects from \(p_{\Omega}\) obtaining \(\mathcal{D}_{ext}=\{y_{1},\ldots,y_{N}\}\). We then obfuscate them with CMS, obtaining the external dataset \(\widetilde{\mathcal{D}}_{\text{ext}}=\{\tilde{y}_{1},\ldots,\tilde{y}_{N}\}\) which would typically be available to \(\text{Adv}_{strong}\). Using Apple's algorithm [4] the adversary approximates the frequencies of objects of the original dataset \(\mathcal{D}_{ext}\), then projects these frequencies to the probability simplex using alternating projection [6] in order to obtain the estimated object popularity \(\hat{p}_{\Omega}\) (which approximates \(p_{\Omega}\) well when the number of users \(N\) is sufficiently large). The estimated object popularity \(\hat{p}_{\Omega}\) is the only difference between \(\text{Adv}_{weak}\) and \(\text{Adv}_{strong}\). Both adversaries use the same hierarchical model \(\mathcal{M}\) with the same hyperparameters (in particular, they both always integrate over uniformly distributed \(\gamma\) and \(\delta\) when computing the pools' scores). Importantly, we note that the effectiveness of the attack in the non-private scenario is the same for \(\text{Adv}_{strong}\) and \(\text{Adv}_{weak}\). 
In the non-private scenario, there is no uncertainty regarding the original input -- as the output and the input objects coincide -- and hence knowing the object popularity does not bring any advantage.4 Footnote 4: This fact can be proved formally by noticing that in the non-private scenario the sum in eq. 4 reduces to one single term, so that the object popularity for each observation can be moved outside of the integral and is constant across each pool’s score. **Metrics.** For a given threshold \(\tau\), we call _null users_ all the users for which the attack does not make a guess (\(\text{conf}(\widehat{\mathcal{H}}_{\text{I}\text{N}})<\tau\)). We then use the following three metrics to measure the effectiveness of our attack in a given scenario: 1. The _null rate_ is the fraction of null users (out of all the 150,000 users) for a given value of \(\tau\); 2. The _precision_ is the success rate of the attack for all non-null users, i.e. the fraction of non-null users such that \(\widehat{\mathcal{H}}_{\text{I}\text{N}}=P_{\text{I}\text{N}}\). That is, the fraction of users for which the attack's guess is correct, out of the users for which a guess is made (which depends on the threshold \(\tau\)); 3. The _area under the precision-null rate curve_ (AUC-PN) is the area under the curve obtained by plotting the precision vs the null rate for all possible threshold values between 0 and 1. Since the threshold \(\tau\) can be adapted by the adversary to adjust the tradeoff between precision and null rate, the AUC-PN captures the overall effectiveness of the attack (in the specific scenario). **Settings.** In this paper, we focus on two specific use cases of CMS implemented by Apple in iOS and Mac OS [4]: **Setting 1: Emojis.** In this use case, the device keeps track of which emojis -- the original objects -- are selected by the user when typing. These are obfuscated by CMS and submitted to Apple. The universe of objects contains 2600 emojis, i.e. \(|\Omega|=2600\). **Setting 2: Web domains.** For this setting, the original objects are the web domains that the user visits using the built-in browser (together with preferences regarding videos autoplay). The implementation of CMS keeps track of 250,000 web domains, i.e. \(|\Omega|=250000\). Apple's implementation sets \(m=1024\) and \(|\mathcal{H}|=65536\), with \(\epsilon=8\) for web domains and \(\epsilon=4\) for emojis. ## Setting 1: Emojis We consider an adversary Adv that runs our BPIA attack with the goal of inferring Usr's preferred emoji skin tone (see Figure 1). To this end, Adv defines six pools of size 228, corresponding to the six skin tones supported for 228 emojis in the Unicode Emoji v11.0 standard [36]. We define the true object popularity \(p_{\Omega}\) as a mixture of Zipfian distributions -- reflecting the fact that a few emojis are much more popular than others [15] (we discuss this choice in more detail in Appendix A.7). Formally, we consider the partition of \(\Omega\) given by the pools \(P_{1},\ldots,P_{k}\) and the neutral pool \(Q=\Omega\setminus\cup_{i=1}^{k}P_{i}\). For each \(P_{i}=\{x_{1}^{i},\ldots,x_{|P_{i}|}^{i}\}\), we take the Zipfian probability mass function given by: \[f_{P_{i}}(x_{j}^{i})=\frac{1/j^{1.2}}{\sum_{c=1}^{|P_{i}|}1/c^{1.2}} \tag{7}\] and similarly for the neutral pool. 
Finally we define: \[p_{\Omega}(x)\overset{\text{def}}{\propto}\begin{cases}f_{P_{i}}(x)&\text{if }x\in P_{i}, \quad i=1,\ldots,k\\ f_{Q}(x)&\text{if }x\in Q\end{cases} \tag{8}\] **Results for \(\text{Adv}_{weak}\) and \(\text{Adv}_{strong}\).** We simulate the attack with \(n=7,30,90,180\) observations. Since Apple collects one obfuscated object per day [2], these correspond to about 1 week, 1 month, 3 months, and 6 months, respectively. While 6 months may seem a long time, most users are likely to keep their iOS and Mac OS devices --and submit obfuscated objects -- for much longer than that [32]. Table 2 shows that our attack performs well for \(\text{Adv}_{strong}\), already reaching an AUC-PN of 0.8 after \(n=90\) observations. Figure 3 shows the attack's full precision-null rate curves. Here again, we see that \(\text{Adv}_{strong}\) performs much better than the baseline, reaching a precision of 0.29 after only 7 observations, and 0.64 after 180 observations when making a guess for all users (null rate \(=0\)). Restricting the attack to only users for which the attack is more confident (higher thresholds) allows the adversary to considerably increase the precision while making predictions on a significant number of users. For instance, for \(n=90\), the attack reaches a precision of 1 for a null rate of 0.95. This means that the attack makes no mistake when executed on the top 5% users (i.e. the users whose confidence score is in the top 5%). Even with a week of observations (\(n=7\)), the attack reaches 48% precision (2.9 times better than the baseline) when focusing on the top 10% of users. The results for \(\mathrm{Adv}_{weak}\), while significantly better than the baseline, are not as good. When making a guess for all the users, \(\mathrm{Adv}_{weak}\) only reaches a precision of 0.19 for \(n=7\) (as opposed to 0.29 for \(\mathrm{Adv}_{strong}\)). Even after \(n=180\) observations, the precision only increases to 0.31 for null rate \(=0\), and to 0.53 when focusing on the top 10% users. These results emphasize the importance of the adversary using auxiliary information during the attack. The reason \(\mathrm{Adv}_{strong}\) achieves much better results compared to \(\mathrm{Adv}_{weak}\) can be intuitively explained as follows: BPIA uses the object popularity to reduce the indistinguishability of the obfuscated objects. In principle, each obfuscated object may be the output of CMS run on any original object. However, if the attack knows that some of these objects are less likely to be picked (compared to others _in the same pool_), the posterior probability that one of them was the actual input can be reduced accordingly. The score defined in eq. 4 captures this fact to compute each pool's posterior probability. We provide additional results on this point in Appendix A.3. **Results in the non-private scenario.** In order to context \begin{table} \begin{tabular}{c c c c c} \hline \hline & \(n=7\) & \(n=30\) & \(n=90\) & \(n=180\) \\ \hline \(\mathrm{Adv}_{weak}\) & 0.20 & 0.24 & 0.32 & 0.40 \\ \(\mathrm{Adv}_{strong}\) & 0.37 & 0.61 & 0.80 & 0.88 \\ Non-private & 0.86 & 0.96 & 0.99 & 0.99 \\ \hline \hline \end{tabular} \end{table} Table 2: AUC-PN values in the emojis setting. Figure 4: Precision depending on \(\gamma_{\mathrm{Usr}}\) and \(\delta_{\mathrm{Usr}}\) for \(\mathrm{Adv}_{strong}\) in the emojis setting when the attack always makes a guess (null rate \(=0\)). The attack is more efficient when the user’s relevant interest and polarization are higher. 
The figure is generated by computing, for each value of \(\gamma_{\mathrm{Usr}}\) and \(\delta_{\mathrm{Usr}}\), the precision of the attack on users with (approximately) those relevant interest and polarization values. We note that \(\delta_{\mathrm{Usr}}\) is always greater than \(1/(6-1)=0.2\) by definition (see section 3). Figure 3: Precision-null rate curves in the emojis setting for \(\mathrm{Adv}_{weak}\) and \(\mathrm{Adv}_{strong}\). The results for the non-private scenario are the same for \(\mathrm{Adv}_{weak}\) and \(\mathrm{Adv}_{strong}\). alize our results, we also measure the accuracy of BPIA in the non-private scenario, when the adversary has access to the user's original objects \(x_{1},\ldots,x_{n}\) (i.e. without hashing nor obfuscation). This gives an upper bound to the attack: even when the adversary observes the original objects, they can still make mistakes when estimating the user's preferred pool. This is due to the stochastic nature of the user behavior \(\Phi_{\text{Usr}}\). For example, a user might use emojis with a certain skin tone most of the times but, for most users, there is a non-zero and possibly significant probability that the user selects emojis with a different skin tone (alternative pool) or even an emoji with no skin tone (neutral pool). Hence, even in the non-private scenario the attack might not be 100% effective. Figure 3 shows the attack to be highly effective in the non-private scenario, although not perfect. While the difference in effectiveness between Adv\({}_{weak}\) and non-private remains large for any number of observations, the protection offered by CMS decreases as \(n\) increases. **Impact of the behavioral parameters.** Figure 4 shows how the precision of the attack increases with both behavioral parameters \(\delta_{\text{Usr}}\) and \(\gamma_{\text{Usr}}\) for Adv\({}_{strong}\) when the null rate is 0, i.e. when the attack makes a guess for all users. Users with larger polarization and relevant interest tend to be, on average, much more vulnerable than other users. For instance for \(n=90\) and for Adv\({}_{strong}\), the precision of the attack on a user with \(\gamma_{\text{Usr}}=0.2\) and \(\delta_{\text{Usr}}=0.17\) is lower than 20%, while it already increases to more than 80% for a user with \(\gamma_{\text{Usr}}=0.6\) and \(\delta_{\text{Usr}}=0.67\). Overall, Adv\({}_{strong}\) performs well over a large range of values of \(\gamma_{\text{Usr}}\) and \(\delta_{\text{Usr}}\) for \(n\geq 90\). ## Setting 2: Web domains We here consider the case of an adversary attempting to infer the target user's potential political orientation from news sites that they visit. In this hypothetical setting, the adversary assumes that users are more likely to visit news websites whose political orientation is aligned with their own political views [19, 33]. The adversary hence defines the pools as sets of news websites grouped by political orientation. We here use AllSides's Media Bias rating for 60 major English-language news websites [1]. The Chart divides media into five groups: left, lean left, center, lean right, right, which contain respectively 14, 13, 13, 10, and 10 unique news websites5 (see Figure 5). In this experiment we randomly assign a popularity to all websites in the universe. For each object \(x\in\Omega\), \(p_{\Omega}(x)\) is sampled uniformly at random from \([0,1]\) (and then rescaled to ensure that \(p_{\Omega}\) has total mass adding up to 1). 
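As a small illustration of this setup (with hypothetical domain names; the actual pool contents come from the AllSides chart), the random true popularity can be generated as follows:

```python
import numpy as np

def random_popularity(universe, seed=0):
    # p_Omega(x) sampled uniformly at random from [0, 1] for every object,
    # then rescaled so that the probabilities sum to 1.
    rng = np.random.default_rng(seed)
    weights = rng.uniform(0.0, 1.0, size=len(universe))
    return dict(zip(universe, weights / weights.sum()))

p_omega = random_popularity([f"domain-{i}.example" for i in range(2000)])
```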
To reduce the computational time required to run the attack on 150,000 users, we run the experiments with a universe of size \(|\Omega|=2000\) (instead of the original 250,000). We show in Appendix A.5 that this has no impact on the estimated effectiveness of the attack. Footnote 5: In a few cases, the chart by AllSides has two entries for the same website – e.g., for The Wall Street Journal, the _news only_ section is rated center and the _opinion_ section is rated lean right. As these share the same web domain, for simplicity we include just the _news only_ entries in the pools of interest. Here again, we simulate two adversaries: Adv\({}_{weak}\) who uses an uninformative (uniform) object popularity \(\hat{p}_{\Omega}\), and Adv\({}_{strong}\) who uses \(N=10^{6}\) obfuscated objects from an external population to derive the estimated popularity \(\hat{p}_{\Omega}\). **Results for Adv\({}_{weak}\) and Adv\({}_{strong}\).** Table 3 reports the AUC-PN of the attack (computed on all users, for any relevant interest and polarization), and shows that both adversaries are very effective. Adv\({}_{weak}\) and Adv\({}_{strong}\) reach high AUC-PN with few observations. For example, they both obtain AUC-PN \(\geq 0.95\) with \(n=90\) observations. Interestingly, the effectiveness of Adv\({}_{strong}\) in this scenario is very similar to the one of Adv\({}_{weak}\) -- a stark difference from the emojis setting (Table 3 and Figure 6). This can be explained by the comparatively much smaller pools in this use case compared to the emojis setting (average pool size of 12, compared to 228 in the emoji setting). Indeed as pools get smaller, both the risk of hash collisions between two objects of different pools and the uncertainty introduced by the randomized obfuscation increase (see Appendix A.3). The small difference in both AUC-PN and precision, between both adversaries and the non-private scenario further confirms that, when pools are small, CMS provides little ad Figure 5: Pools for the web domains setting. Each pool groups together websites for 60 major news outlets according to their political orientation from the 2021 AllSides Media Bias Chart (left, lean left, center, lean right, right). In this case, Usr visits most frequently news websites in the left pool. \begin{table} \begin{tabular}{c c c c c} \hline \hline & \(n=7\) & \(n=30\) & \(n=90\) & \(n=180\) \\ \hline Adv\({}_{weak}\) & 0.72 & 0.89 & 0.95 & 0.97 \\ Adv\({}_{strong}\) & 0.74 & 0.90 & 0.96 & 0.98 \\ Non-private & 0.87 & 0.96 & 0.99 & 0.99 \\ \hline \hline \end{tabular} \end{table} Table 3: AUC-PN values in the web domains setting. ditional protection. **Impact of the behavioral parameters.** Figure 7 shows the precision for \(\tau=0\) as a function of the relevant interest \(\gamma_{\text{Usr}}\) -- the fraction of the time a user visits one of the 60 news websites -- and the polarization \(\delta_{\text{Usr}}\) for \(\text{Adv}_{strong}\). We omit the results for \(\text{Adv}_{weak}\) as they are almost identical. Similarly to the emojis setting (Figure 4), we find that a user's behavioral parameters strongly affect how vulnerable they are. 
For instance, for \(\text{Adv}_{strong}\) and \(n=90\), the attack will be correct 91% of the time on a user who visits news websites 20% of the time (\(\gamma_{\text{Usr}}=0.2\)) and is strongly polarized (\(\delta_{\text{Usr}}=0.83\)) but would only reach 40% if instead they read diverse sources (\(\delta_{\text{Usr}}=0.33\)) or 25% if instead they only visit news websites less than 1% of the time (\(\gamma_{\text{Usr}}\leq 0.01\)). **Reliability of the confidence score.** We have shown that while our attack gives good results overall, it is particularly effective for certain users, in particular users with a high degree of polarization and relevant interest. Figure 8 shows that our attack's confidence score is well calibrated: for both adversaries, use cases, and number of observations. This makes the attack a concern in practice as it allows an adversary to estimate the probability of the attack to be successful against a specific target user Usr by looking only at Usr's obfuscated objects. ## 5 Experiments on Twitter data We now simulate the attack in the emojis setting using data collected from Twitter. Our experiments serve two purposes: first, they validate the hierarchical model \(\mathcal{M}\) (including the user representation \(\overline{\Phi}\)) in practice; second, they prove that the attack _can_ be very effective on real-world users (see the discussion in section 6). **Dataset.** We use the dataset of tweets collected by Robertson et al. [30], which contains about 18M tweets from 42K Twitter users, and derive a dataset \(\mathcal{D}\) containing only the emojis sent by each users. We then apply a random 80-20 split to \(\mathcal{D}\) and obtain: \(\mathcal{D}_{att}\), containing the users who used at least one emoji supporting skin tones, on which we simulate the attack; and \(\mathcal{D}_{ext}\), containing the external population that \(\text{Adv}_{strong}\) uses to compute the emojis estimated popularity \(\hat{p}_{\Omega}\). We simulate the BPIA attack instantiating the Pool Inference Game on each user in \(\mathcal{D}_{att}\), treating each emoji as an original object to which we apply CMS. The full details on how we produce the datasets and run the attack are given in Appendix A.2. **Results for \(\text{Adv}_{strong}\).** We instantiate the game only for \(\text{Adv}_{strong}\), since our experiments on synthetic data already showed that, in most cases, \(\text{Adv}_{weak}\) is not very effective in the emojis setting. Figure 9 (left) shows that the attack is overall very effective on the users in \(\mathcal{D}_{att}\). For any number of observations, the precision is \(>0.5\) (2.5 times better than the baseline) when the attack is run on all the users in \(\mathcal{D}_{att}\). After only \(n=7\) observations, the precision on the top 20% of the users (the 20% of the users with the highest confidence score) is above 0.61, going up to 0.825 after 180 observations. The attack however struggles to reach perfect precision: with \(n=180\), to achieve a precision of 0.95 the adversary needs to restrict the attack to the top 10% of the users. This is mostly because, contrary to the synthetic data, \(\mathcal{D}_{att}\) contains very few users who have both high \(\gamma_{\text{Usr}}\) and \(\delta_{\text{Usr}}\) (see Appendix A.2). Figure 10 shows the precision of BPIA depending on \(\gamma_{\text{Usr}}\) and \(\delta_{\text{Usr}}\) when making a guess on every user. These results are mostly consistent with those computed using the synthetic data (Figure 4). 
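Precision-null rate curves like those in Figures 3, 6 and 9 can be computed by sweeping a threshold on the confidence score: the adversary only attacks users above the threshold, the null rate is the fraction of users left out, and the precision is the share of correct guesses among the attacked users. The following is a hedged sketch of that computation on synthetic inputs; reading AUC-PN as the area under this curve is our assumption, as the formal definition appears earlier in the paper.

```python
import numpy as np

def precision_null_rate_curve(confidence, correct):
    """Sweep a confidence threshold: attack only the users above it.

    Returns (null_rates, precisions): the null rate is the fraction of users
    on which the adversary abstains, the precision is the fraction of correct
    guesses among the remaining (most confident) users."""
    confidence = np.asarray(confidence, dtype=float)
    correct = np.asarray(correct, dtype=bool)[np.argsort(-confidence)]  # most confident first
    n = len(correct)
    null_rates, precisions = [], []
    for k in range(n, 0, -1):                 # attack the top-k users
        null_rates.append(1.0 - k / n)
        precisions.append(correct[:k].mean())
    return np.array(null_rates), np.array(precisions)

# toy usage with synthetic, roughly calibrated confidence scores
rng = np.random.default_rng(0)
conf = rng.uniform(size=1000)
right = rng.uniform(size=1000) < conf          # higher confidence -> more often correct
nr, pr = precision_null_rate_curve(conf, right)
print(f"precision at null rate 0: {pr[0]:.2f}, area under the curve: {np.trapz(pr, nr):.2f}")
```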
As expected, the attack is not very effective on users with low \(\gamma_{\text{Usr}}\) and \(\delta_{\text{Usr}}\), but works remarkably well on high-polarization users who have medium to high relevant interest. For example, after 90 observations, the attack achieves 0.82 to 0.9 precision on the users with polarization over 0.8 and relevant interest at least 0.4. Overall, these results validate the applicability of BPIA's model \(\mathcal{M}\). **Reliability of the confidence score.** Figure 9 (right) confirms that the confidence score computed by BPIA can be used to accurately estimate the probability that the estimated preferred pool is correct. This validates the fact that BPIA can be used to distinguish and target the most vulnerable users.

Figure 6: Precision-null rate curves in the web domains setting for \(\text{Adv}_{weak}\) and \(\text{Adv}_{strong}\). The results for the non-private scenario are the same for \(\text{Adv}_{weak}\) and \(\text{Adv}_{strong}\).

## 6 Discussion In this paper we propose pool inference, a new attack model that quantifies some practical privacy risks that may affect implementations of local differential privacy mechanisms. We formalize the attack model as a game and propose a Bayesian pool inference attack (BPIA) that applies to any local differential privacy mechanism that processes each object independently. We simulate BPIA against Apple's CMS mechanism for emojis and web domains and study its effectiveness in different scenarios. We show that the attack can successfully allow an adversary to infer sensitive properties of a user's behavior. We further show that BPIA works best on users who are more polarized -- and may hence require the strongest privacy protections. To the best of our knowledge, this is the first attack designed against a real-world implementation of local differential privacy. Taken together, our results show that the BPIA attack is a practical threat for Apple's devices, where CMS is implemented with large \(\epsilon\) parameters and without limiting the cumulative privacy loss after multiple observations. **Previous criticism of Apple's implementation.** In September 2017, Tang et al. reverse-engineered and analyzed Apple's implementation, for which no technical description was yet available. In particular, they found that the choice of the privacy loss \(\epsilon\) was not in line with what is deemed mathematically secure [34]. Tang et al. provided a detailed analysis of Apple's system, but did not propose attacks showing how the weakness of the theoretical guarantees could be exploited in practice. Apple disputed the findings by Tang et al., claiming that the system provides far more protection than acknowledged by the researchers [18]. Our experimental evaluation shows that, with Apple's choice of parameters, our BPIA attack could potentially lead to the disclosure of a user's preference for news websites or emoji skin tone. According to their white paper, Apple discards any user identifier when obfuscated objects are ingested by their servers, making it impossible to later link multiple observations from the same user [4]. While this would limit the attack to a single observation, this is an organizational measure that relies on trust, which is what local differential privacy is designed to avoid [13, 21, 41] (see also the discussion on mitigation strategies below). **Representativeness of the experiments.** In this paper we validate our attack using both synthetically generated data and Twitter data.
The goal of our experiments is to study how the attack performs in several scenarios and to validate the user model \(\mathcal{M}\), showing that the attack works on a significant number of real-world users. On the other hand, the aim of our experiments is not to measure the fraction of users in the population who are vulnerable. Firstly, while we use Twitter data as a representation of users' usage of emojis, we do not have access to datasets that record such usage across apps or web browsing data. Secondly, our work focuses only on two attack goals (i.e. sets of pools): determining the preferred emoji skin tone and the political orientation of the most visited news website. As mentioned in section 3, the adversary can run BPIA as many times as they want _on the same data_ using any pools they wish. Users who are not vulnerable to the attack with certain pools might be vulnerable with respect to another set of pools. Moreover, the confidence score can be used to reliably estimate which inferences are likely to be correct. Future work may select other privacy-sensitive pools and use our attack to assess different privacy risks.

Figure 8: Success rate as a function of the confidence score for both Adv\({}_{weak}\) and Adv\({}_{strong}\) and in both the emojis and web domains settings. The confidence score computed by the attack accurately estimates the probability that the attack is correct.

Figure 7: Precision depending on \(\gamma_{\text{User}}\) and \(\delta_{\text{User}}\) for Adv\({}_{strong}\) in the web domains setting when the attack always makes a guess.

**Using auxiliary population-level knowledge to estimate the object popularity.** Our experiments with \(\text{Adv}_{strong}\) show that an adversary with access to auxiliary information on the overall popularity of objects among the population may be much more effective. The adversary may obtain access to such auxiliary information from a variety of sources, such as social media or studies that report summary statistics on popularity of emojis aggregated over many users. Furthermore, such statistics can typically be estimated by the data curator. In fact, CMS is designed precisely for this purpose: estimating the popularity of objects across many users. Hence, if the adversary is the curator themselves, users' privacy is even more at risk.6 In particular, the method used to estimate the popularity (see section 3) does not require that the adversary knows which objects are collected from which user. \(\text{Adv}_{strong}\) could be a curator who has never acted maliciously before, always discarding the identifiers that would allow linking objects coming from the same user, but who at some point decides to keep together the observations from the same target user. Footnote 6: We note that assuming that the curator is also the adversary reflects the standard attack model applied to local differential privacy. We believe an external adversary to be less realistic in Apple’s case as the obfuscated records are transmitted from the device to Apple through an encrypted connection. **Extending the attack to other mechanisms.** While in this paper we focus on the CMS mechanism, our BPIA attack can be used against any local differential privacy mechanisms where \(\text{Pr}_{A}[\vec{x}_{i}\mid z]\) can be computed analytically or estimated empirically.
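At this level of generality, the scoring step can be sketched as a plain Bayesian update: each candidate preferred pool is scored by the likelihood of the observed obfuscated objects, marginalizing over the unknown original object. The code below is only a schematic of this idea, with the user model collapsed into a per-pool distribution over original objects; it is not the paper's model \(\mathcal{M}\), and `log_pr_obs_given_obj` stands in for whatever mechanism-specific \(\text{Pr}_{A}[\vec{x}\mid z]\) is available analytically or empirically.

```python
import numpy as np

def logsumexp(vals):
    vals = np.asarray(vals, dtype=float)
    m = vals.max()
    return m + np.log(np.exp(vals - m).sum())

def pool_posterior(obs, pools, prior, obj_dist_given_pool, log_pr_obs_given_obj):
    """Schematic Bayesian scoring of candidate preferred pools.

    obs                  : obfuscated objects observed from the target user
    pools                : names of the candidate preferred pools
    prior                : prior probability that each pool is the preferred one
    obj_dist_given_pool  : obj_dist_given_pool[pool][z] = assumed probability that the
                           user sends original object z when `pool` is preferred
                           (a stand-in for the paper's user model)
    log_pr_obs_given_obj : function (x_obf, z) -> log Pr_A[x_obf | z] for mechanism A
    """
    log_post = np.log(np.asarray(prior, dtype=float))
    for x_obf in obs:
        for k, pool in enumerate(pools):
            terms = [np.log(p_z) + log_pr_obs_given_obj(x_obf, z)
                     for z, p_z in obj_dist_given_pool[pool].items() if p_z > 0]
            log_post[k] += logsumexp(terms)    # marginalize the unknown original object
    log_post -= logsumexp(log_post)
    return np.exp(log_post)    # argmax -> guessed pool, max -> confidence score
```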
In Appendix A.8 we show how to adapt the attack to run against HCMS, another mechanism proposed and deployed by Apple to identify websites that cause high usage of hardware resources (CPU and memory) [4]. HCMS is similar to CMS, but uses the Hadamard transform to reduce the size of obfuscated objects to a single bit. Despite this, the way to compute \(\text{Pr}_{\text{HCMS}}[\vec{x}_{i}\mid z]\) is similar to the one for CMS. **Solutions and mitigation strategies.** There are several possible solutions to protect against BPIA, or at least mitigate it. However, to our knowledge, these all come at a cost in terms of utility, or require significant resources to be deployed. Figure 10: Precision depending on \(\gamma_{\text{User}}\) and \(\delta_{\text{User}}\) for \(\text{Adv}_{strong}\) on Twitter data when the attack always makes a guess. Figure 9: Precision-null rate curves (_left_) and success rate depending on the confidence score (_right_) for \(\text{Adv}_{strong}\) on Twitter data. _First:_ Using a smaller \(\epsilon\) and limiting the total number of observations per user. We show in Appendix A.6 that using a smaller value of \(\epsilon\) reduces the effectiveness of the attack, but it also has a direct impact on utility. Similarly, our results in sections 4 and 5 show that BPIA is less effective when the number of observed obfuscated objects from the target user is lower, but reducing the total number of observations affects utility as well (see Appendix A.6). Moreover, limiting the number of observations might make it impossible to learn how users' preferences evolve over time. _Second:_ Using a local differential privacy mechanism that addresses the privacy loss over multiple observations. These typically use some form of heuristic memoization -- such as Google's RAPPOR [15] -- or techniques to reduce the number of observations that are collected [21]. These may offer a better (theoretical) privacy-utility tradeoff when the population-level distribution that needs to be estimated over time does not change frequently. Extending the pool inference attack model and BPIA to these mechanisms could be used to measure this tradeoff in a practical setting and compare it to the tradeoff provided by CMS. _Third:_ Adopting a different privacy model. In recent work, researchers have proposed techniques that typically go under the name of _shuffled differential privacy_[16, 5, 7, 9, 10]. This is a hybrid privacy model where the obfuscated objects are routed through an intermediary (the _shuffler_) that in turn sends them to the curator. The role of the intermediary is to shuffle the obfuscated objects to anonymize them and make them unlinkable. Shuffled differential privacy has been deployed by Apple and Google in the context of the Exposure Notification System for COVID-19 [3]. While adopting this model for CMS would protect against BPIA, it effectively moves the requirement of users' trust from the curator to the shuffler: if the two collude, the curator would be able to link the objects again [7, 10]. The technical guarantees of the model would be greatly enhanced by using a mix network as the shuffler, but these are extremely hard to deploy in practice [35]. Nevertheless, we believe that the shuffled model is a promising avenue to apply local differential privacy in practice, and we hope this paper will provide evidence of the need for its further development and adoption. 
**Source code.** The code to reproduce the results is available at [https://github.com/computationalprivacy/pool-inference](https://github.com/computationalprivacy/pool-inference). ## 7 Related work Our work is part of the line of research studying the guarantees of differential privacy against specific attacks. Previous research has studied the privacy protections of specific differential privacy mechanisms with respect to attacks that simulate real-world adversaries, but this line of work has so far focused on mechanisms for _central_ differential privacy -- the main variant of differential privacy which assumes a trusted curator and one or more untrusted analysts. Examples include attacks against differential privacy mechanisms to release aggregate location time-series [25, 26, 27], synthetic data [31], and machine learning models [29, 20, 24]. To the best of our knowledge, only two papers have empirically investigated the privacy guarantees of a _local_ differential privacy mechanism. Pyrgelis et al. [26] propose several attacks on aggregated location data that aim to recover individual users' locations or mobility patterns. They evaluate their attacks against SpotMe [28], a mechanism to obfuscate location data that satisfies local differential privacy [40]. Pyrgelis et al.'s work however considers a different adversarial setting than ours: their attacks apply to location time-series obtained by aggregating the obfuscated objects over multiple users, while in our pool inference attack the adversary has access to the individual obfuscated objects. Our attack could be simply adapted to the SpotMe mechanism7 in order to infer the user's preferred pool of locations among some pools of interest -- an interesting application that we leave to future work. Footnote 7: The SpotMe mechanism is quite similar to CMS, but without hashing. Hence, the probabilities \(\Pr_{\mathcal{A}}[\vec{x}\,|\,z]\) that are used by the attack (eq. 4) can be computed similarly to the ones for CMS. Chatzikokolakis et al. [8] propose the Bayes security measure, a general metric that quantifies the expected advantage over random guessing of an adversary that observes the output of a mechanism. They then apply their metric to randomized response [39] -- a simple local differential privacy mechanism originally conceived to protect privacy in survey responses. They apply randomized response to the US 1990 Census dataset and find that it gives good protection even for values of \(\epsilon\) as high as 4.8. However, their evaluation focuses on object indistinguishability -- i.e. it considers an adversary that collects an obfuscated object and whose goal is to infer the original object. This is a significantly harder objective compared to pool inference and, in fact, CMS's use of hash functions prevents this even for arbitrarily large values of \(\epsilon\). Our work shows that enforcing object indistinguishability is not enough to protect privacy in a practical setting where the adversary has access to multiple obfuscated objects from the same user. ## 8 Conclusion Apple's implementation of local differential privacy in iOS and Mac OS devices has been presented as a "technology to help discover the usage patterns of a large number of users without compromising individual privacy" [17]. Although researchers have criticized Apple's choice of \(\epsilon\) and unlimited theoretical privacy loss over multiple observations, to our knowledge no practical attacks have been proposed against the mechanisms deployed by Apple. 
In this paper, we proposed a Bayesian pool inference attack and we empirically evaluated it on Apple's Count Mean Sketch mechanism as configured on Apple's devices. We showed that, especially on the most vulnerable users, the attack could be used to successfully infer (1) the emoji skin tone that the user selects more frequently and (2) the political orientation of the news websites that the user is more likely to visit. Finally, we discussed how the technical privacy guarantees against our attack could be improved, and indicated where further research is necessary to evaluate the privacy/utility tradeoff of these mitigation strategies.
2305.10761
Noise-Aware Speech Separation with Contrastive Learning
Recently, speech separation (SS) task has achieved remarkable progress driven by deep learning technique. However, it is still challenging to separate target speech from noisy mixture, as the neural model is vulnerable to assign background noise to each speaker. In this paper, we propose a noise-aware SS (NASS) method, which aims to improve the speech quality for separated signals under noisy conditions. Specifically, NASS views background noise as an additional output and predicts it along with other speakers in a mask-based manner. To effectively denoise, we introduce patch-wise contrastive learning (PCL) between noise and speaker representations from the decoder input and encoder output. PCL loss aims to minimize the mutual information between predicted noise and other speakers at multiple-patch level to suppress the noise information in separated signals. Experimental results show that NASS achieves 1 to 2dB SI-SNRi or SDRi over DPRNN and Sepformer on WHAM! and LibriMix noisy datasets, with less than 0.1M parameter increase.
Zizheng Zhang, Chen Chen, Hsin-Hung Chen, Xiang Liu, Yuchen Hu, Eng Siong Chng
2023-05-18T07:06:15Z
http://arxiv.org/abs/2305.10761v3
# Noise-aware Speech Separation with Contrastive Learning ###### Abstract Recently, speech separation (SS) task has achieved remarkable progress driven by deep learning technique. However, it is still challenging to separate target signals from noisy mixture, as neural model is vulnerable to assign background noise to each speaker. In this paper, we propose a noise-aware SS method called NASS, which aims to improve the speech quality of separated signals in noisy conditions. Specifically, NASS views background noise as an independent speaker and predicts it with other speakers in a mask-based manner. Then we conduct patch-wise contrastive learning on feature level to minimize the mutual information between the predicted noise-speaker and other speakers, which suppresses the noise information in separated signals. The experimental results show that NASS effectively improves the noise-robustness for different mask-based separation backbones with less than 0.1M parameter increase. Furthermore, SI-SNRi results demonstrate that NASS achieves state-of-the-art performance on WHAM! dataset. Zizheng Zhang\({}^{1}\), Chen Chen\({}^{2}\), Xiang Liu\({}^{1}\), Yuchen Hu\({}^{2}\), Eng Siong Chng\({}^{2}\)\({}^{1}\)School of Software and Microelectronics, Peking University, China \({}^{2}\)School of Computer Science and Engineering, Nanyang Technological University, Singapore [email protected] **Index Terms**: noisy speech separation, contrastive learning ## 1 Introduction Speech separation (SS) aims to separate speech signals from the overlapping speech mixture [1], which can serve as a pre-processor for downstream speech applications [2, 3, 4, 5]. Recently, deep learning-based methods have developed various neural networks for SS [6, 7, 8, 9], and achieved remarkable performances on public datasets [10, 11] mixed by clean speech. However, it is still challenging to separate target speech from noisy mixture, e.g., _noisy speech separation_, since noise signal usually has a wide distribution on frequency domain to interfere with the human voice. For noisy SS, the mainstream mask-based method [6, 7, 8, 9] is vulnerable to assign background noise to the target speaker [12]. We also validate this phenomenon with a related experiment. One intuitive solution is utilizing the speech enhancement (SE) [13, 14, 15] technique as a pre-processor to remove noise information from the mixture in a multi-task learning manner [16, 17]. Despite the slight improvement, this method may lead to an over-suppression problem [18] -- SE module would inevitably remove some helpful information when it attempts to denoise, thus resulting in a sub-optimal performance for SS. To alleviate the influence of noise, our basic idea is to view background noise as an independent speaker, which can be simultaneously predicted along with other speakers. In addition to avoiding the over-suppression problem, the estimated noise signal can benefit the separated speech from a mutual information [19] perspective: we aim to minimize the mutual information between predicted noise and separated speech, which can prevent the noise from existing in the separated signal. In this paper, we propose a noise-aware speech separation method called NASS, which follows a typical encoder-separator-decoder pipeline [6]. Unlike previous works, NASS learns to predict the noise signal and leverage it to improve the speech quality of each speaker. 
Specifically, we conduct patch-wise contrastive learning [20, 21, 22] on different representations: 1) We first sample hundreds of patches from the speaker's representations. 2) The corresponding patches from ground-truth representations are viewed as positive examples, while 3) Other patches from noise representation are all viewed as negative examples. Reshaped by a two-layer MLP, the positive and negative training examples are calculated with cosine similarities. By optimizing the cross-entropy loss, we minimize the mutual information between the representation of each speaker and noise representation, which significantly suppresses the noise information from the separated speech signals. To evaluate the effectiveness of NASS, we conduct intensive experiments on noisy WHAM! [23] and LibriMix [11] datasets, and select three milestone works Conv-TasNet [6], DPRNN [7], and Sepformer [8] as the separator's backbone. The experimental results show that NASS can effectively improve the noise-robustness for all these separation models with less than 0.1M parameter increase. Furthermore, NASS achieves the state-of-the-art (SOTA) performance on WHAM! in terms of SI-SNRi [24] and SDRi [25]. ## 2 NASS Method We now introduce our proposed NASS method, as shown in Figure 1, which consists of mask-based architecture and the patch-wise contrastive learning strategy. ### Mask-based Architecture In this work, we follow the encoder-separator-decoder pipeline, where the mask of each speaker is predicted as shown in Figure 1. Since NASS is theoretically applicable to any mask-based method, we select Sepformer as an example in this part due to its remarkable performance in SS. #### 2.1.1 Encoder and Decoder The encoder takes the input noisy mixture \(x_{n}\in\mathbb{R}^{1\times T}\) in time domain and then generates a STFT-like [27] representation \(h_{x_{n}}\in\mathbb{R}^{N\times L}\), where \(T\) is the signal length, \(N\) is the number of filters and \(L\) is the number of vectors: \[h_{x_{n}}{=}\mathrm{ReLU}(\mathrm{Conv}\mathrm{1d}(x_{n})) \tag{1}\] The decoder acts as an inverse operation of the encoder, which takes all predicted representations \(h_{k}\in\mathbb{R}^{N\times L}\) and re constructs the separated signals \(\hat{y}_{k}\in\mathbb{R}^{1\times T}\) in time domain: \[\hat{y}_{k}=\mathrm{Conv1dTranspose}(h_{k}) \tag{2}\] In our work, additionally, the ground-truth speech signal \(s_{t}\in\mathbb{R}^{1\times T}\) is encoded as \(h_{s_{t}}\in\mathbb{R}^{N\times L}\), which is used together with \(h_{k}\) in the subsequent contrastive learning, where \(t\in\{1,2,\dots,C\}\) and \(C\) is the number of human speakers. #### 2.1.2 Masking Network and Noise-speaker The masking network takes \(h_{x_{n}}\) and learns a mask \(m_{k}\in\mathbb{R}^{N\times L}\) for each of \(G\) sources, then yielding \(h_{k}\): \[m_{k}=\mathrm{MaskingNet}(h_{x_{n}}) \tag{3}\] \[h_{k}=m_{k}\cdot h_{x_{n}} \tag{4}\] As shown in Figure 2. First \(h_{x_{n}}\) is processed by layer normalization [28] and a linear layer, then chunked on the time axis with 50% overlap, resulting in an output \(h^{\prime}\in\mathbb{R}^{N\times K\times S}\), where \(K\) is the chunk size and \(S\) is the number of chunks. Next \(h^{\prime}\) is sent into the processing block, which learns the local and global information by permuting. 
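Before turning to the remaining masking-network details, Equations (1)-(4) and the noise-speaker idea can be condensed into a small PyTorch sketch. The kernel size, stride, number of filters and the one-layer stand-in for the masking network are illustrative placeholders, not the paper's configuration; a real system would plug Sepformer or DPRNN in place of the stand-in.

```python
import torch
import torch.nn as nn

class TinyMaskSep(nn.Module):
    """Minimal encoder-separator-decoder sketch with a noise-speaker (C + 1 sources)."""

    def __init__(self, n_filters=256, kernel_size=16, stride=8, num_speakers=2):
        super().__init__()
        self.num_sources = num_speakers + 1             # C human speakers + 1 noise-speaker
        self.encoder = nn.Conv1d(1, n_filters, kernel_size, stride=stride, bias=False)
        self.decoder = nn.ConvTranspose1d(n_filters, 1, kernel_size, stride=stride, bias=False)
        # stand-in for the masking network of Eq. (3); Sepformer/DPRNN would go here
        self.masking_net = nn.Sequential(
            nn.Conv1d(n_filters, n_filters * self.num_sources, 1), nn.ReLU())

    def forward(self, x_n):                              # x_n: (batch, 1, T)
        h_x = torch.relu(self.encoder(x_n))              # Eq. (1): (batch, N, L)
        masks = self.masking_net(h_x)                    # Eq. (3)
        masks = masks.view(x_n.size(0), self.num_sources, -1, h_x.size(-1))
        h_k = masks * h_x.unsqueeze(1)                   # Eq. (4): one h_k per source
        # Eq. (2): decode each source; the last source is the predicted noise
        y_hat = torch.stack([self.decoder(h_k[:, k]) for k in range(self.num_sources)], dim=1)
        return y_hat, h_k

model = TinyMaskSep()
mixture = torch.randn(2, 1, 32000)                       # two 4-second 8 kHz mixtures
separated, reps = model(mixture)
print(separated.shape)                                   # (2, 3, 1, 32000): 2 speakers + noise
```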
Then the output of the processing block \(h^{\prime\prime}\in\mathbb{R}^{N\times K\times S}\) is processed by a PReLU [29] activation and a linear layer, denoted as \(h^{\prime\prime\prime}\in\mathbb{R}^{(N\times G)\times K\times S}\). The overlap-add operation, acting as the inverse of chunking, is employed to obtain \(h^{\prime\prime\prime\prime}\in\mathbb{R}^{(N\times G)\times L}\) from \(h^{\prime\prime\prime}\). Finally, \(h^{\prime\prime\prime\prime}\) goes through a two-layer MLP and a ReLU [30] activation, generating all \(m_{k}\) at once. As mentioned before, we count noise as an additional speaker, which can make use of the noise information within the existing framework. The noise-speaker has its own supervision and prediction like a human speaker. From Equations 2, 3 and 4, we have the predicted noise signal \(\hat{n}\in\mathbb{R}^{1\times T}\), the noise mask \(m_{n}\in\mathbb{R}^{N\times L}\) and the predicted noise representation \(h_{\hat{n}}\in\mathbb{R}^{N\times L}\), respectively. Thus far we have \(m_{k}\in\{m_{1},m_{2},\dots,m_{C},m_{n}\}\), \(h_{k}\in\{h_{s_{1}},h_{s_{2}},\dots,h_{s_{C}},h_{\hat{n}}\}\), etc., for a total of \(C+1\) sources. ### 2.2 Patch-wise Contrastive Learning Based on the noise-speaker, we can further denoise the separated speech utilizing the predicted noise. As shown in Figure 1, patch-wise contrastive learning is conducted on \(h_{s_{t}}\) and \(h_{k}\), where details can be found in Figure 3.

Figure 1: The overall pipeline (2-speaker version) of NASS. \(x_{n}\) and \(\hat{n}\) denote the input noisy mixture and predicted noise. \(\hat{s}_{1}\) and \(\hat{s}_{2}\) are the separated speech signals while \(s_{1}\) and \(s_{2}\) are the ground-truth. \(h_{\hat{s}_{1}}\), \(h_{\hat{s}_{2}}\) and \(h_{\hat{n}}\) in dashed boxes are trainable predicted representations, while \(h_{s_{1}}\) and \(h_{s_{2}}\) in solid boxes are the ground-truth. “+” denotes that the mutual information between separated and ground-truth speech is maximized while “-” denotes that the mutual information between separated speech and noise is minimized. Utterance-level permutation-invariant training (uPIT) [26] is employed to determine the order of speakers for corresponding comparisons.

Figure 3: The diagram of patch-wise contrastive learning. The query example \(r_{q}^{i}\), positive example \(r_{p}^{i}\) and negative examples \(r_{\hat{n}}^{j}\) are sampled from the predicted speech representation \(h_{\hat{s}_{t}}\), ground-truth speech representation \(h_{s_{t}}\) and predicted noise representation \(h_{\hat{n}}\), respectively. The classification is based on the results of their cosine similarity.

#### 2.2.1 Patch Sampler We perform 256 comparisons in the contrastive learning. In each comparison, we first randomly sample a small patch on the predicted speech representation \(h_{\hat{s}_{t}}\), which serves as a query example \(r_{q}^{i}\). Then the patch of the corresponding position on its ground-truth representation \(h_{s_{t}}\) is sampled, which serves as a positive example \(r_{p}^{i}\). Finally, other \(M\) patches randomly sampled on the predicted noise representation \(h_{\hat{n}}\) serve as negative examples \(r_{\hat{n}}^{j}\).
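The sampling and projection just described (and formalized in Equation (5) below) can be sketched as follows, treating a "patch" as a single time frame (\(P=1\)) and, for brevity, sharing one set of \(M\) negative patches across comparisons; the module name and widths are illustrative, not the authors' code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PatchSampler(nn.Module):
    """Sketch of the patch sampler: pick co-located query/positive patches and
    random negative patches, project them with a shared two-layer MLP and
    L2-normalize onto the unit sphere."""

    def __init__(self, n_filters=256, proj_dim=256, num_comparisons=256):
        super().__init__()
        self.num_comparisons = num_comparisons
        self.mlp = nn.Sequential(nn.Linear(n_filters, proj_dim), nn.ReLU(),
                                 nn.Linear(proj_dim, proj_dim))

    def forward(self, h_pred, h_true, h_noise):               # each: (N, L)
        L = h_pred.size(-1)
        q_idx = torch.randint(L, (self.num_comparisons,))      # query/positive positions
        n_idx = torch.randint(L, (self.num_comparisons,))      # negative positions
        r_q = F.normalize(self.mlp(h_pred[:, q_idx].T), dim=-1)   # queries from predicted speech
        r_p = F.normalize(self.mlp(h_true[:, q_idx].T), dim=-1)   # co-located positives from ground truth
        r_n = F.normalize(self.mlp(h_noise[:, n_idx].T), dim=-1)  # negatives from predicted noise
        return r_q, r_p, r_n

sampler = PatchSampler()
h = [torch.randn(256, 999) for _ in range(3)]                  # toy (N, L) representations
r_q, r_p, r_n = sampler(*h)
print(r_q.shape)                                               # torch.Size([256, 256])
```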
The whole process is implemented by patch sampler, which learns to project query, positive and negative patches into examples in a shared 3-D embedding unit space, which prevents the space from collapsing or expanding: \[r_{q}^{i},r_{q}^{j}=\mathrm{PatchSampler}(h_{s_{t}},h_{\hat{s}_{t}},h_{\hat{n}}) \tag{5}\] where \(r_{q}^{i},r_{p}^{i},r_{\hat{n}}^{j}\in\mathbb{R}^{P\times P\times Q}\) and \(i,j\in\{1,2,\ldots,M\}\). \(P\) denotes the patch size and \(Q\) denotes the features of MLP in patch sampler. In this work, \(M\), \(P\) and \(Q\) is set to 256, 1 and 256, respectively. #### 2.2.2 Contrastive Loss From the perspective of mutual information, query examples should be similar to corresponding positive examples but dissimilar to all negative examples, which provides an \(M+1\) classification problem. By calculating their cosine similarities and optimizing the cross-entropy loss, the mutual information between predicted speech and noise would be minimized, thus suppressing the noise from separated speech. The contrastive loss is conducted from each of \(C\) human speakers, which can be formulated as: \[\mathcal{L}_{PCL}=\frac{1}{C}\sum_{t=1}^{C}\sum_{i=1}^{M}-\ln\left[\frac{e^{r_{ q}^{i}\cdot r_{p}^{i}/\tau}}{e^{r_{q}^{i}\cdot r_{p}^{i}/\tau}+\sum_{j=1}^{M}e^{r_{ q}^{i}\cdot r_{\hat{n}}^{j}/\tau}}\right] \tag{6}\] where \(\tau\) denotes the temperature parameter [31], and is set to 0.07 in this work. ### Training Objective The main separation loss \(\mathcal{L}_{si-snr}\) is to maximize SI-SNR between separated signals \(\hat{y}_{k}\) and ground-truth signals \(y_{k}\) for \(G\) sources: \[\mathcal{L}_{si-snr}=\frac{1}{G}\sum_{k=1}^{G}-10\mathrm{log}_{10}\left(\frac{ \|\frac{\hat{y}_{k}^{T}y_{k}}{\|y_{k}\|^{2}}y_{k}\|^{2}}{\|\frac{\hat{y}_{k}^{ T}y_{k}}{\|y_{k}\|^{2}}y_{k}-\hat{y}_{k}\|^{2}}\right) \tag{7}\] Thus far, the total loss of proposed NASS method is formulated as: \[\mathcal{L}_{Total}=\mathcal{L}_{si-snr}+\lambda\mathcal{L}_{PCL} \tag{8}\] where \(\lambda\) is the parameter to balance SS loss \(\mathcal{L}_{si-snr}\) and contrastive loss \(\mathcal{L}_{PCL}\), which is set to 2 or 3 in this work. The model is trained with uPIT. It is worthy noting that \(\mathcal{L}_{PCL}\) utilizes the permutation result of \(\mathcal{L}_{si-snr}\), which reduces double-counting and ensures the correct comparisons. ## 3 Experiments ### Datasets We evaluate NASS and existing methods on two common noisy datasets: WHAM! and LibriMix. **WHAM** is a noisy version of WSJ0-2Mix [10], which is added noise samples recorded in coffee shops, restaurants and bars. SNR between the loudest speaker and noise varies from -6 to +3 dB. WHAM! follows the same structure as WSJ0-2Mix, which has 119 speakers and 43 hours of speech (30h, 8h, 5h for training, validation, test, respectively). **LibriMix** contains Libri2Mix and Libri3Mix for noisy multi-speaker SS tasks. In our chosen version of LibriMix, the clean mixture is selected from LibriSpeech train-100 [32] and mixed between -25 and -33 dB of LUFS [33]. Noise samples from WHAM! are added to the mixture between -38 and -30 dB of LUFS. With 331 speakers, Libri2Mix has 80 hours of speech (58h, 11h, 11h for training, validation, test, respectively) and Libri3Mix has 62 hours (40h, 11h, 11h for training, validation, test, respectively). ### Experimental Setup To ensure reproducibility, we conduct the experiments on SpeechBrain [34], an open-source AI toolkit for SS tasks. 
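For concreteness, Equations (6)-(8) translate almost directly into code. The sketch below assumes \((M,Q)\)-shaped example tensors as produced by a patch sampler like the one above, uses the conventional zero-mean form of SI-SNR, and averages rather than sums over queries; it is an illustration of the training objective, not the SpeechBrain recipe.

```python
import torch
import torch.nn.functional as F

def si_snr_loss(y_hat, y, eps=1e-8):
    """Negative SI-SNR of Eq. (7) for one source; both tensors have shape (batch, T)."""
    y_hat = y_hat - y_hat.mean(dim=-1, keepdim=True)   # zero-mean, as is conventional
    y = y - y.mean(dim=-1, keepdim=True)
    s_target = (torch.sum(y_hat * y, dim=-1, keepdim=True) /
                (torch.sum(y ** 2, dim=-1, keepdim=True) + eps)) * y
    e_noise = y_hat - s_target
    si_snr = 10 * torch.log10(torch.sum(s_target ** 2, dim=-1) /
                              (torch.sum(e_noise ** 2, dim=-1) + eps) + eps)
    return -si_snr.mean()

def pcl_loss(r_q, r_p, r_n, tau=0.07):
    """Patch-wise contrastive loss of Eq. (6) for one speaker.
    r_q, r_p: (M, Q) query/positive examples; r_n: (M, Q) negatives from noise."""
    pos = torch.sum(r_q * r_p, dim=-1, keepdim=True) / tau      # (M, 1)
    neg = r_q @ r_n.T / tau                                      # (M, M)
    logits = torch.cat([pos, neg], dim=-1)                       # positive is class 0
    labels = torch.zeros(r_q.size(0), dtype=torch.long)
    return F.cross_entropy(logits, labels)

# total objective, Eq. (8):  L_total = L_si_snr + lambda * L_pcl   (lambda = 2 or 3)
```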
The network configurations of existing methods are the same as WSJ0-Mix recipes on SpeechBrain, which follows the prior works where more details can be found. We train 200 epochs for all methods on NVIDIA V100 GPUs, using Adam optimizer [35] with initial learning rate of \(1.5\times 10^{-4}\) and automatic mixed precision [36]. After training 85 epochs (5 epochs for Conv-TasNet), the learning rate is halved if with no improvement of validation for 3 epochs. Speed perturbation [37] is applied for data augmentation. There's no dynamic mixing [38] in our experiments. The batch size and number of workers are set to 1 and 4. The training signal length is 4 seconds long and loss threshold is set to -30. Gradient clipping is applied to limit the \(L_{2}\) norm of gradients to 5. All the hyperparameters are adjusted on the validation set. ### Metrics and Baselines We use the results of SI-SNRi and SDRi in the test set to evaluate all experiments, which measure the degree of improvement in clarity and fidelity of these separated audios. For all metrics, higher score indicates better performance. To assess the effectiveness of proposed NASS method, we set three baselines from mask-based methods for comparisons: * ConvTasNet [6]: A fully-convolutional network without dual path but good performance in clean SS tasks. * DPRNN [7]: An RNN-based dual-path model with relatively \begin{table} \begin{tabular}{c|c|c|c|c|c|c} \hline \hline Method & Spk & NS & SI-SNRi (dB) & SDRi (dB) & Params (M) \\ \hline \multirow{3}{*}{ConvTasNet\({}^{*}\)} & 2 & \(\times\) & 8.8 & 9.4 & 6.8 \\ & & ✓ & **9.2** & **9.8** & 6.8 \\ \cline{2-6} & 3 & \(\times\) & 6.2 & 6.7 & 6.9 \\ & & ✓ & **7.3** & **7.9** & 7.0 \\ \hline \multirow{3}{*}{Sepformer\({}^{*}\)} & 2 & \(\times\) & 12.9 & 13.5 & 25.9 \\ & & ✓ & **13.3** & **13.8** & 25.9 \\ \cline{1-1} \cline{2-6} & 3 & \(\times\) & 10.0 & 10.5 & 26.0 \\ \cline{1-1} \cline{2-6} & 3 & ✓ & **10.4** & **11.0** & 26.1 \\ \hline \hline \end{tabular} \end{table} Table 1: The ablation study of noise-speaker on LibriMix. “NS” denotes the setting option of noise-speaker. “\({}^{*}\)” denotes the baseline results is self-reproduced. \begin{table} \begin{tabular}{c|c|c|c|c|c|c|c} \hline \hline S\(\rightarrow\)N & N\(\rightarrow\)S & \(\lambda\) & M & SI-SNRi (dB) & SDRi (dB) & Params (M) \\ \hline \(\times\) & \(\times\) & - & - & 13.3 & 13.8 & 25.9 \\ ✓ & ✓ & 1 & 256 & 13.4 & 14.0 & 25.9 \\ \(\times\) & ✓ & 1 & 256 & 13.5 & 14.0 & 25.9 \\ ✓ & \(\times\) & 1 & 256 & 13.5 & 14.1 & 25.9 \\ ✓ & \(\times\) & 1 & 1 & 13.4 & 14.0 & 25.9 \\ ✓ & \(\times\) & 2 & 256 & **13.6** & **14.1** & 25.9 \\ ✓ & \(\times\) & 2 & 512 & 13.5 & 14.1 & 25.9 \\ ✓ & \(\times\) & 3 & 256 & 13.4 & 14.0 & 25.9 \\ \hline \hline \end{tabular} \end{table} Table 2: The ablation study of patch-wise contrastive learning on Libri2Mix with September. 
“S\(\rightarrow\)N” denotes \(\mathcal{L}_{PCL}^{S\rightarrow N}\) and “N\(\rightarrow\)S” denotes \(\mathcal{L}_{PCL}^{N\rightarrow S}\). “\(\lambda\)” indicates the balancing parameter. “\(M\)” indicates the number of negative patches in each comparison. large receptive field for long sequence modeling. * Sepformer [8]: A Transformer-based dual-path network with relatively large size but excellent performance, allowing parallel computations. ## 4 Results ### Effect of Noise-speaker The effect of noise-speaker is shown in Table 1. Note that all the following test results of SI-SNR include human speakers only. Compared to the baselines, benefiting from noise supervision, the two models with noise-speaker perform better in the separation of either 2 or 3 speakers, especially in the case of 3 speakers, and within 0.1M additional parameters. ### Effect of Patch-wise Contrastive Learning Based on noise-speaker, we study the effect of patch-wise contrastive learning in Table 2. As mentioned in Equation 6, the query example \(r_{q}^{i}\) and negative examples \(r_{\hat{n}}^{j}\) come from speech and noise, respectively, denoting the loss as \(\mathcal{L}_{PCL}^{S\rightarrow N}\). However, the query example can be sampled from the noise representation and compared to negative examples on the speech representations, denoting the loss as \(\mathcal{L}_{PCL}^{N\rightarrow S}\). The results show that the two choices conflict with each other to a degree, and the best is with \(\mathcal{L}_{PCL}^{S\rightarrow N}\) only. We also conduct an ablation study on the balancing parameter \(\lambda\) and the number of negative patches \(M\) in each comparison. The results show that the best settings are as described previously. ### Benchmark against Competitive Methods Table 3 and Table 4 show the comparison results on noisy LibriMix and WHAM!, respectively. Both the self-reproduced and the originally reported results indicate that NASS can effectively improve the noise-robustness of prior works on the LibriMix and WHAM! datasets. It is worth pointing out that NASS enables the previous DPRNN and Sepformer to surpass the current SOTA model TDANet on WHAM!, within 0.1M additional parameters. In addition, to further illustrate the effectiveness of NASS, Figure 4 visualizes the comparison results of the spectrum for 2-speaker SS. We can see some spectrum of noise in (b), (c) from the Sepformer baseline.
While from (d), (e) with noise-speaker, and (f), (g) with additional patch-wise contrastive learning, we can see the spectrum of noise gradually fades away in a degree, thus separating the two sources better. Besides, NASS yields the prediction of noise, which can be used in other speech tasks. In (h), (i), we can also see an improvement from the patch-wise contrastive learning to noise-speaker. ## 5 Conclusions In this paper, we propose NASS method to improve the noise-robustness of previous SS works. Specifically, noise-speaker is set up to make use of the noise supervision, and the prediction of noise can be used by other speech tasks. Patch-wise contrastive learning is conducted on predicted speech and noise representations, which minimize the mutual information between them, thus separating to each other in detail. Experimental results show that NASS can significantly suppress the noise from separated speech, with less than 0.1M additional parameters and achieves the current SOTA on WHAM! dataset. \begin{table} \begin{tabular}{c|c|c|c} \hline \hline Method & SI-SNRi (dB) & SDRi (dB) & Params (M) \\ \hline DPRNN & 13.7 & 14.1 & 2.7 \\ Sepformer & 14.4 & 15.0 & 26.0 \\ TDANet & 15.2 & 15.4 & 2.3 \\ \hline DPRNN (NASS) & **15.6** & **15.9** & 14.9 \\ Sepformer (NASS) & **15.9** & **16.2** & 25.9 \\ \hline \hline \end{tabular} \end{table} Table 4: The competitive results of proposed NASS method on WHAM!. The baseline results are reported in the original paper [9], where all can be found. \begin{table} \begin{tabular}{c|c|c|c|c} \hline \hline Method & Spk & SI-SNRi (dB) & SDRi (dB) & Params (M) \\ \hline \multirow{3}{*}{Conv/TasNet\({}^{*}\)} & 2 & 8.8 & 9.4 & 6.8 \\ & 3 & 6.2 & 6.7 & 6.9 \\ \cline{2-5} & 2 & 12.8 & 13.4 & 14.8 \\ \cline{2-5} & 3 & 11.1 & 11.6 & 14.9 \\ \cline{2-5} & 2 & 12.9 & 13.5 & 25.9 \\ \cline{2-5} & 3 & 10.0 & 10.5 & 26.0 \\ \hline Conv/TasNet (NASS) & 2 & **9.6** & **10.2** & 6.8 \\ & 3 & **7.9** & **8.5** & 7.0 \\ \cline{2-5} & 2 & **13.1** & **13.6** & 14.9 \\ \cline{2-5} & 3 & **11.5** & **12.0** & 15.0 \\ \cline{2-5} & 2 & **13.6** & **14.1** & 25.9 \\ \cline{2-5} & 3 & **10.9** & **11.4** & 26.1 \\ \hline \hline \end{tabular} \end{table} Table 3: The competitive results of proposed NASS method on LibriMix. “\({}^{*}\) ” denotes the baseline results is self-reproduced. Figure 4: The comparison results of spectrum on Libri2mix with Sepformer. Subplot (a) denotes the mixture; (b), (d), (f) denotes the separated result of speaker 1 in Sepformer baseline, noise-speaker, patch-wise contrastive learning, respectively; (c), (e), (g) denotes the same as (b), (d), (f) in speaker 2, respectively; (h), (i) denotes the result of predicted noise in noise-speaker, patch-wise contrastive learning, respectively.
2301.02692
Isotonic Recalibration under a Low Signal-to-Noise Ratio
Insurance pricing systems should fulfill the auto-calibration property to ensure that there is no systematic cross-financing between different price cohorts. Often, regression models are not auto-calibrated. We propose to apply isotonic recalibration to a given regression model to ensure auto-calibration. Our main result proves that under a low signal-to-noise ratio, this isotonic recalibration step leads to explainable pricing systems because the resulting isotonically recalibrated regression functions have a low complexity.
Mario V. Wüthrich, Johanna Ziegel
2023-01-06T19:18:46Z
http://arxiv.org/abs/2301.02692v1
# Isotonic Recalibration under a Low Signal-to-Noise Ratio ###### Abstract Insurance pricing systems should fulfill the auto-calibration property to ensure that there is no systematic cross-financing between different price cohorts. Often, regression models are not auto-calibrated. We propose to apply isotonic recalibration to a given regression model to ensure auto-calibration. Our main result proves that under a low signal-to-noise ratio, this isotonic recalibration step leads to explainable pricing systems because the resulting isotonically recalibrated regression functions have a low complexity. **Keywords.** Auto-calibration, isotonic regression, isotonic recalibration, low signal-to-noise ratio, cross-financing, algorithmic solution, deep neural network, explainability. ## 1 Introduction There are two seemingly unrelated problems in insurance pricing that we are going to tackle in this paper. First, an insurance pricing system should not have any systematic cross-financing between different price cohorts. Systematic cross-financing implicitly means that some parts of the portfolio are under-priced, and this is compensated by other parts of the portfolio that are over-priced. We can prevent systematic cross-financing between price cohorts by ensuring that the pricing system is _auto-calibrated_. We propose to apply _isotonic recalibration_ which turns any regression function into an auto-calibrated pricing system. The second problem that we tackle is the explainability of complex algorithmic models for insurance pricing. In a first step, one may use any complex regression model to design an insurance pricing system such as, e.g., a deep neural network. Such complex regression models typically lack explainability and rather act as black boxes. For this reason, there are several tools deployed to explain such complex solutions, we mention, for instance, SHAP by Lundberg-Lee [22]. Since algorithmic solutions do not generally fulfill the aforementioned auto-calibration property, we propose to apply isotonic recalibration to the algorithmic solution. If the signal-to-noise ratio is low in the data, then the isotonic recalibration step leads to a coarse partition of the covariate space and, as a consequence, it leads to an explainable version of the algorithmic model used in the first place. Thus, explainability is a nice side result of applying isotonic recalibration in low signal-to-noise ratio problems, which is typically the case in insurance pricing settings. There are other methods for obtaining auto-calibration through a recalibration step; we mention Lindholm et al. [21] and Denuit et al. [8]. These other methods often require tuning of hyperparameters, e.g., using cross-validation. Isotonic recalibration does not involve any hyperparameters as it solves a constraint regression problem (ensuring monotonicity). As such, isotonic recalibration is universal because it also does not depend on the specific choice of the loss function within the family of Bregman losses. We formalize our proposal. Throughout, we assume that all considered random variables have finite means. Consider a response variable \(Y\) that is equipped with covariate information \(\mathbf{X}\in\mathcal{X}\subseteq\mathbb{R}^{q}\). The general goal is to determine the (true) regression function \(\mathbf{x}\mapsto\mathbb{E}[Y|\mathbf{X}=\mathbf{x}]\) that describes the conditional mean of \(Y\), given \(\mathbf{X}\). Typically, this true regression function is unknown, and it needs to be determined from i.i.d. 
data \((y_{i},\mathbf{x}_{i})_{i=1}^{n}\), that is, a sample from \((Y,\mathbf{X})\). For this purpose, we try to select a regression function \(\mathbf{x}\mapsto\mu(\mathbf{x})\) from a (pre-chosen) function class on \(\mathcal{X}\) that approximates the conditional mean \(\mathbb{E}[Y|\mathbf{X}=\cdot]\) as well as possible. Often, it is not possible to capture all features of the regression function from data. In financial applications, a minimal important requirement for a well-selected regression function \(\mu(\cdot)\) is that it fulfills the auto-calibration property. **Definition 1.1**: _The regression function \(\mu\) is auto-calibrated for \((Y,\mathbf{X})\) if_ \[\mu(\mathbf{X})=\mathbb{E}\left[\,Y|\,\mu(\mathbf{X})\right],\qquad\mathbb{P}\text{-a.s.}\] Auto-calibration is an important property in actuarial and financial applications because it implies that, on average, the (price) cohorts \(\mu(\mathbf{X})\) are self-financing for the corresponding claims \(Y\), i.e., there is no systematic cross-financing within the portfolio, if the structure of this portfolio is described by the covariates \(\mathbf{X}\sim\mathbb{P}\) and the price cohorts \(\mu(\mathbf{X})\), respectively. In a Bernoulli context, an early version of auto-calibration (called well-calibrated) has been introduced by Schervish [28] to the community in statistics, and recently, it has been considered in detail by Gneiting-Resin [12]. In an actuarial and financial context, the importance of auto-calibration has been emphasized in Kruger-Ziegel [17], Denuit et al. [8], Wuthrich [30] and Lindholm et al. [21]. Many regression models do not satisfy the auto-calibration property. However, there is a simple and powerful method, which we call _isotonic recalibration_, to obtain an (in-sample) auto-calibrated regression function starting from any candidate function \(\pi:\mathcal{X}\to\mathbb{R}\). We apply isotonic recalibration to the pseudo-sample \((y_{i},\pi(\mathbf{x}_{i}))_{i=1}^{n}\) to obtain an isotonic regression function \(\widehat{\mu}\). Then, \[\widehat{\mu}(\mathbf{X}^{\prime})=\mathbb{E}\left[Y^{\prime}|\widehat{\mu}(\mathbf{ X}^{\prime})\right],\quad\mathbb{P}_{n}\text{-a.s.}, \tag{1.1}\] where \((Y^{\prime},\mathbf{X}^{\prime})\) is distributed according to the empirical distribution \(\mathbb{P}_{n}\) of \((y_{i},\mathbf{x}_{i})_{i=1}^{n}\); see Section 2.1 for details. Isotonic regression determines an adaptive partition of the covariate space \(\mathcal{X}\), and \(\widehat{\mu}\) is determined by averaging \(y\)-values over the partition elements. Clearly, other binning approaches can also be used on the pseudo-sample \((y_{i},\pi(\mathbf{x}_{i}))_{i=1}^{n}\) to enforce (1.1), but we argue that isotonic regression is preferable since it avoids subjective choices of tuning parameters and leads to sensible regression functions under reasonable and verifiable assumptions. The only assumption for isotonic recalibration to be informative is that the function \(\pi\) gets the rankings of the conditional means right, that is, whenever \(\mathbb{E}\left[Y|\mathbf{X}=\mathbf{x}_{i}\right]\leq\mathbb{E}\left[Y|\mathbf{X}=\mathbf{x} _{j}\right]\), we would like to have \(\pi(\mathbf{x}_{i})\leq\pi(\mathbf{x}_{j})\). Using isotonic regression for recalibration is not new in the literature. In the case of binary outcomes, it as already been proposed by Zadrozny-Elkan [32], Menon et al. [23] and recently by Tasche [29, Section 5.3]. The monotone single index models of Balabdaoui et al. 
[2] follow the same strategy as described above but the focus of their work is different from ours. They specifically consider a linear regression model for the candidate function \(\pi\), which is called the index. In the case of distributional regression, that is, when interest is in determining the whole conditional distribution of \(Y\) given covariate information \(\mathbf{X}\), Henzi et al. [13] have suggested to first estimate an index function \(\pi\) that determines the ordering of the conditional distributions w.r.t. first order stochastic dominance and then estimate conditional distributions using isotonic distributional regression; see Henzi et al. [14]. As a new contribution, we show that the size of the partition of the isotonic recalibration may give insight concerning the information content of the recalibrated regression function \(\widehat{\mu}\). Furthermore, the partition of the isotonic recalibration allows to explain connections between covariates and outcomes, in particular, when the signal-to-noise ratio is small which typically is the case for insurance claims data. In order to come up with a candidate function \(\pi:\mathcal{X}\to\mathbb{R}\), one may consider any regression model such as, e.g., a generalized linear model, a regression tree, a tree boosting regression model or a deep neural network regression model. The aim is that \(\pi(\cdot)\) provides us with the correct rankings of the conditional means \(\mathbb{E}[Y|\mathbf{X}=\mathbf{x}_{i}]\), \(i=1,\ldots,n\). The details are discussed in Section 3. **Organization.** In Section 2, we formally introduce isotonic regression which is a constraint optimization problem. This constraint optimization problem is usually solved with the pool adjacent violators (PAV) algorithm, which is described in Appendix A.1. Our main result is stated in Section 2.2. It relates the complexity of the isotonic recalibration solution to the signal-to-noise ratio in the data. Section 3 gives practical guidance on the use of isotonic recalibration, and in Section 4 we exemplify our results on a frequently used insurance data set. In this section we also present graphic tools for interpreting the regression function. In Section 5, we conclude. ## 2 Isotonic regression ### Definition and basic properties For simplicity, we assume that the candidate function \(\pi:\mathcal{X}\to\mathbb{R}\) does not lead to any ties in the values \(\pi(\mathbf{x}_{1}),\ldots,\pi(\mathbf{x}_{n})\), and that the indices \(i=1,\ldots,n\) are chosen such that they are aligned with the ranking, that is, \(\pi(\mathbf{x}_{1})<\ldots<\pi(\mathbf{x}_{n})\). Remark 2.1 explains how to handle ties. The isotonic regression of \(\mathbf{z}=(y_{i},\pi(\mathbf{x}_{i}))_{i=1}^{n}\) with positive case weights \((w_{i})_{i=1}^{n}\) is the solution \(\widehat{\mathbf{\mu}}\in\mathbb{R}^{n}\) to the restricted minimization problem \[\widehat{\mathbf{\mu}}\ =\ \operatorname*{arg\,min}_{\mathbf{\mu}=(\mu_{1},\ldots,\mu_{ n})^{\top}}\ \sum_{i=1}^{n}w_{i}\left(y_{i}-\mu_{i}\right)^{2},\qquad\text{ subject to }\mu_{1}\leq\ldots\leq\mu_{n}. \tag{2.1}\] We can rewrite the side constraints as \(A\mathbf{\mu}\geq\mathbf{0}\) (component-wise), where \(A=(a_{i,j})_{i,j}\in\mathbb{R}^{n\times(n-1)}\) is the matrix with the elements \(a_{i,j}=\mathds{1}_{i=j-1}-\mathds{1}_{i=j}\). We define \(\mathbf{y}=(y_{1},\ldots,y_{n})^{\top}\in\mathbb{R}^{n}\) and the (diagonal) case weight matrix \(W=\operatorname{diag}(w_{1},\ldots,w_{n})\). 
The above optimization problem then reads as \[\widehat{\mathbf{\mu}}\ =\ \widehat{\mathbf{\mu}}(\mathbf{z})\ =\ \operatorname*{arg\,min}_{\mathbf{ \mu}:\,A\mathbf{\mu}\geq\mathbf{0}}\ (\mathbf{y}-\mathbf{\mu})^{\top}W(\mathbf{y}-\mathbf{\mu}). \tag{2.2}\] This shows that the isotonic regression is solved by a convex minimization with linear side constraints. It remains to verify that the auto-calibration property claimed in (1.1) holds. **Remark 2.1**: If there are ties in the values \(\pi(\mathbf{x}_{1}),\ldots,\pi(\mathbf{x}_{n})\), for example, \(\pi(\mathbf{x}_{i})=\pi(\mathbf{x}_{j})\) for some \(i\neq j\), we replace \(y_{i}\) and \(y_{j}\) with their weighted average \((w_{i}y_{i}+w_{j}y_{j})/(w_{i}+w_{j})\) and assign them weights \((w_{i}+w_{j})/2\). The procedure is analogous for more than two tied values. This corresponds to the second option of dealing with ties in Leeuw et al. [20, Section 2.1]. **Remark 2.2**: Barlow et al. [3, Theorem 1.10] show that the square loss function in (2.1) can be replaced by any Bregman loss function, \(L_{\phi}(y,\mu)=\phi(y)-\phi(\mu)+\phi^{\prime}(\mu)(y-\mu)\), without changing the optimal solution \(\widehat{\mathbf{\mu}}\). Here, \(\phi\) is a strictly convex function with subgradient \(\phi^{\prime}\). Bregman loss functions are the only consistent loss functions for the mean; see Savage [27] and Gneiting [11, Theorem 7]. If \(y\) and \(\mu\) only take positive values, a Bregman loss function of relevance for this paper is the gamma deviance loss, which is equivalent to the QLIKE loss that arises by choosing \(\phi(x)=-\log(x)\); see Patton [25]. The solution to the minimization problem (2.2) can be given explicitly as a min-max formula, that is, \[\widehat{\mu}_{i}\ =\ \min_{\ell=i,\ldots,n}\max_{k=1,\ldots,\ell}\,\frac{1}{ \sum_{j=k}^{\ell}w_{j}}\,\sum_{j=k}^{\ell}w_{j}y_{j}.\] While the min-max formula is theoretically appealing and useful, the related minimum lower sets (MLS) algorithm of Brunk et al. [6] is not efficient to compute the solution. The pool adjacent violators (PAV) algorithm, which is due to Ayer et al. [1], Miles [24] and Kruskal [18], allows for fast computation of the isotonic regression and provides us with the desired insights about the solution. In Appendix A.1, we describe the PAV algorithm in detail. The solution is obtained by suitably partitioning the index set \(\mathcal{I}=\{1,\ldots,n\}\) into (discrete) intervals \[\mathcal{I}_{k}=\mathcal{I}_{k}(\mathbf{z})=\{i_{k-1}+1,\ldots,i_{k}\}\qquad\text{ for }\ k=1,\ldots,K(\mathbf{z}), \tag{2.3}\] with \(\mathbf{z}\)-dependent slicing points \(0=i_{0}<i_{1}<\ldots<i_{K}=n\), and with \(K(\mathbf{z})\in\{1,\ldots,n\}\) denoting the number of discrete intervals \(\mathcal{I}_{k}\). The number \(K(\mathbf{z})\) of intervals and the slicing points \(i_{k}=i_{k}(\mathbf{z})\), \(k=1,\ldots,K(\mathbf{z})\), for the partition of \(\mathcal{I}\) depend on the observations \(\mathbf{z}\). On each discrete interval \(\mathcal{I}_{k}\) we then obtain the isotonic regression parameter estimate for instance \(i\in\mathcal{I}_{k}\) \[\widehat{\mu}_{i}=\widehat{\mu}_{i_{k}}=\frac{1}{\sum_{j\in\mathcal{I}_{k}}w_ {j}}\,\sum_{j\in\mathcal{I}_{k}}w_{j}y_{j}, \tag{2.4}\] see also (A.5). Thus, on each block \(\mathcal{I}_{k}\) we have a constant estimate \(\widehat{\mu}_{i_{k}}\), and the isotonic property tells us that these estimates are strictly increasing over the block indices \(k=1,\ldots,K(\mathbf{z})\), because these blocks have been chosen to be maximal. 
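For illustration, the block-merging view of the PAV algorithm can be sketched in a few lines of NumPy; this is a standard weighted pool-adjacent-violators implementation written for readability, not the algorithm of Appendix A.1 verbatim.

```python
import numpy as np

def isotonic_pav(y, w=None):
    """Weighted isotonic regression of y_1,...,y_n (already sorted by the ranks
    pi(x_1) < ... < pi(x_n)) via pool adjacent violators.

    Returns the fitted values and the block partition I_1,...,I_K of {0,...,n-1},
    whose number of blocks K is the complexity number K(z)."""
    y = np.asarray(y, dtype=float)
    w = np.ones_like(y) if w is None else np.asarray(w, dtype=float)
    blocks = []                       # each block stores [weighted mean, total weight, size]
    for yi, wi in zip(y, w):
        blocks.append([yi, wi, 1])
        # merge backwards while the monotonicity constraint is violated
        while len(blocks) > 1 and blocks[-2][0] >= blocks[-1][0]:
            m2, w2, n2 = blocks.pop()
            m1, w1, n1 = blocks.pop()
            blocks.append([(w1 * m1 + w2 * m2) / (w1 + w2), w1 + w2, n1 + n2])
    fitted = np.concatenate([np.full(n_k, m_k) for m_k, _, n_k in blocks])
    partition, start = [], 0
    for _, _, n_k in blocks:
        partition.append(list(range(start, start + n_k)))
        start += n_k
    return fitted, partition
```

Counting the returned blocks on data ordered by any candidate ranking gives exactly the number of distinct fitted values, which is the complexity number discussed next.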
We call \(K(\mathbf{z})\) the _complexity number_ of the resulting isotonic regression. Figure 1 gives an example for \(n=20\) and rankings \(\pi(\mathbf{x}_{i})=i\) for \(i=1,\ldots,n\). The resulting (non-parametric) isotonic regression function \(\widehat{\mathbf{\mu}}=\widehat{\mathbf{\mu}}(\mathbf{z})\), which is only uniquely determined at the observations \((\pi(\mathbf{x}_{i}))_{i=1}^{n}\), can be interpolated by a step function. In Figure 1 this results in a step function having \(K(\mathbf{z})-1=9\) steps, that is, we have \(K(\mathbf{z})=10\) blocks, and the estimated regression function \(\widehat{\mu}\) takes only \(K(\mathbf{z})=10\) different values. This motivates calling \(K(\mathbf{z})\) the complexity number of the resulting step function, see Figure 1.

Figure 1: Example of an isotonic regression with \(K(\mathbf{z})=10\) blocks.

This is different from the regression tree approach considered in Lindholm et al. [21]. In fact, this latter reference does not require monotonicity but aims at minimizing the "plain" square loss using, e.g., cross-validation for determining the optimal number of partitions. In our context, the complexity number \(K(\mathbf{z})\) is fully determined through requiring monotonicity and, in general, the results will differ. In insurance applications, the blocks \(\mathcal{I}_{k}\subset\mathcal{I}\) provide us with the (empirical) price cohorts \(\widehat{\mu}_{i}=\widehat{\mu}_{i_{k}}\), for \(i\in\mathcal{I}_{k}\), and (2.4) leads to the (in-sample) auto-calibration property for \(Y\) \[\mathbb{E}\left[\left.Y^{\prime}\right|\widehat{\mu}(\mathbf{X}^{\prime})=\widehat{\mu}_{i_{k}}\right]\ =\ \frac{1}{\sum_{i\in\mathcal{I}_{k}}w_{i}}\,\sum_{i\in\mathcal{I}_{k}}w_{i}y_{i}\ =\ \widehat{\mu}_{i_{k}}, \tag{2.5}\] where \((Y^{\prime},\mathbf{X}^{\prime})\) is distributed according to the weighted empirical distribution of \((y_{i},\mathbf{x}_{i})_{i=1}^{n}\) with weights \((w_{i})_{i=1}^{n}\). Moreover, summing over the entire portfolio we have the (global) balance property \[\sum_{i=1}^{n}w_{i}\widehat{\mu}_{i}=\sum_{k=1}^{K(\mathbf{z})}\sum_{i\in\mathcal{I}_{k}}w_{i}\widehat{\mu}_{i}=\sum_{k=1}^{K(\mathbf{z})}\widehat{\mu}_{i_{k}}\sum_{i\in\mathcal{I}_{k}}w_{i}=\sum_{k=1}^{K(\mathbf{z})}\sum_{i\in\mathcal{I}_{k}}w_{i}y_{i}=\sum_{i=1}^{n}w_{i}y_{i}, \tag{2.6}\] that is, on average the overall (price) level is correctly specified if we price the insurance policies with covariates \(\mathbf{x}_{i}\) by \(w_{i}\widehat{\mu}_{i}\), where the weights \(w_{i}>0\) now receive the interpretation of exposures. ### Monotonicity of the expected complexity number In this section, we prove that the expected complexity number \(\mathbb{E}[K(\mathbf{z})]\) is an increasing function of the signal-to-noise ratio. For this, we assume a location-scale model for the responses \(Y_{i}\), that is, we assume that \[Y_{i}=\mu_{i}+\sigma\epsilon_{i},\quad i=1,\ldots,n, \tag{2.7}\] with noise terms \(\epsilon_{i}\), location parameters \(\mu_{i}\in\mathbb{R}\) with \(\mu_{1}\leq\ldots\leq\mu_{n}\), and scale parameter \(\sigma>0\). Here, \(\mu_{i}\) takes the role of \(\pi(\mathbf{x}_{i})\) in the previous section. The parameters \(\mu_{1},\ldots,\mu_{n}\) are unknown but it is known that they are labeled in increasing order. The signal-to-noise ratio is then described by the scale parameter \(\sigma\), i.e., we have a low signal-to-noise ratio for high \(\sigma\) and vice-versa.
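A small Monte Carlo experiment, assuming i.i.d. standard Gaussian noise and an illustrative grid of location parameters, makes the effect of \(\sigma\) on the complexity number visible; it uses scikit-learn's `IsotonicRegression` and counts the distinct fitted values to obtain \(K(\boldsymbol{Y})\).

```python
import numpy as np
from sklearn.isotonic import IsotonicRegression

rng = np.random.default_rng(1)
n, n_sim = 100, 200
mu = np.linspace(0.0, 10.0, n)             # strictly increasing location parameters (illustrative)
x = np.arange(n)                           # the known ranking pi(x_1) < ... < pi(x_n)

for sigma in [0.5, 2.0, 5.0, 20.0]:
    blocks = []
    for _ in range(n_sim):
        y = mu + sigma * rng.standard_normal(n)           # location-scale model (2.7)
        fit = IsotonicRegression().fit_transform(x, y)
        blocks.append(len(np.unique(np.round(fit, 10))))  # complexity number K(Y)
    print(f"sigma = {sigma:5.1f}   mean K(Y) = {np.mean(blocks):.1f}")
```

The average block count shrinks as \(\sigma\) grows, in line with the monotonicity statement below.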
The explicit location-scale structure (2.7) allows us to analyze \[\boldsymbol{y}\ =\ \boldsymbol{Y}_{\sigma}(\omega)\ =\ \boldsymbol{\mu}+\sigma \boldsymbol{\epsilon}(\omega)\ =\ (\mu_{1},\ldots,\mu_{n})^{\top}+\sigma(\epsilon_{1},\ldots, \epsilon_{n})^{\top}(\omega), \tag{2.8}\] point-wise in the sample points \(\omega\in\Omega\) of the probability space \((\Omega,\mathcal{F},\mathbb{P})\) as a function of \(\sigma>0\); this is similar to the re-parametrization trick of Kingma-Welling [16] that is frequently used to explore variational auto-encoders. In this section, we write \(K(\boldsymbol{y})=K(\boldsymbol{z})\), because the ranking of the outcomes \(\boldsymbol{y}\) is clear from the context (labeling), and we do not go via a ranking function \(\pi(\cdot)\). **Theorem 2.3**: _Assume that the responses \(Y_{i}\), \(i=1,\ldots,n\), follow the location-scale model (2.7) with (unknown) ordered location parameters \(\mu_{1}<\ldots<\mu_{n}\), and scale parameter \(\sigma>0\). Then, the expected complexity number \(\mathbb{E}[K(\boldsymbol{Y})]\) of the isotonic regression of \(\boldsymbol{Y}\) is a decreasing function in \(\sigma>0\). If the distribution of the noise vector \(\boldsymbol{\epsilon}=(\epsilon_{1},\ldots,\epsilon_{n})^{\top}\) has full support on \(\mathbb{R}^{n}\), then \(\mathbb{E}[K(\boldsymbol{Y})]\) is strictly decreasing in \(\sigma\)._ Theorem 2.3 proves that, under a specific but highly relevant model, the complexity number \(K(\boldsymbol{Y})\) of the isotonic regression is decreasing on average with a decreasing signal-to-noise ratio. Implicitly, this means that more noisy data, which has a lower information ratio, leads to a less granular regression function. Consequently, if the partition of the isotonic regression is used to obtain a partition of the covariate space \(\mathcal{X}\) via the candiate function \(\pi\), this partition will be less granular, the more noise of \(\boldsymbol{Y}\) cannot be explained by \(\pi(\boldsymbol{X})\), see also Section 3.3 for a further discussion. To the best of our knowledge, our result is a new contribution to the literature on isotonic regression. While we focus on the finite sample case, a related result is the analysis of the complexity number of the isotonic regression function as function of the sample size \(n\), see Dimitriadis et al. [9, Lemma 3.2]. We are assuming strictly ordered location parameters in the formulation of Theorem 2.3. This assumption simplifies the proof in the case where we show that the expected complexity number \(K(\boldsymbol{Y})\) is strictly decreasing in \(\sigma\). With some additional notation, the theorem could be generalized to allow for ties between some (but not all) \(\mu_{i}\). Figure 2 gives an example of a location-scale model (2.7) with i.i.d. standard Gaussian noise and scale parameters \(\sigma=2\) (lhs) and \(\sigma=20\) (rhs), and both figures consider the same sample point \(\omega\in\Omega\) in the noise term \(\boldsymbol{\epsilon}(\omega)\), see (2.8). On the right-hand side of Figure 2, we have complexity number \(K(\boldsymbol{y})=13\), and on the left-hand side \(K(\boldsymbol{y})=46\); the chosen sample size is \(n=100\). ## 3 Isotonic recalibration for prediction and interpretation ### Prediction and estimation In order to determine an auto-calibrated model for the true regression function \(\boldsymbol{x}\mapsto\mathbb{E}[Y|\boldsymbol{X}=\boldsymbol{x}]\) from i.i.d. data \((y_{i},\boldsymbol{x}_{i})_{i=1}^{n}\), we are suggesting a two-step estimation procedure. 
First, we choose a regression model and use the data \((y_{i},\boldsymbol{x}_{i})_{i=1}^{n}\) to obtain an estimate \(\widehat{\pi}\) of a candidate function \(\pi\) that should satisfy \[\pi(\boldsymbol{x})\leq\pi(\boldsymbol{x}^{\prime})\quad\Longleftrightarrow \quad\mathbb{E}[Y|\boldsymbol{X}=\boldsymbol{x}]\leq\mathbb{E}[Y|\boldsymbol {X}=\boldsymbol{x}^{\prime}], \tag{3.1}\] for all \(\mathbf{x},\mathbf{x}^{\prime}\in\mathcal{X}\). For example, in the case study in Section 4, a deep neural network model is chosen for \(\pi\). For sensible results, it is important that the estimation method for \(\widehat{\pi}\) does not overfit to the data. In the second step, we apply isotonic regression to the pseudo-sample \((y_{i},\widehat{\pi}(\mathbf{x}_{i}))_{i=1}^{n}\) to obtain an in-sample auto-calibrated regression function \(\widehat{\mu}\) defined on \(\{\widehat{\pi}(\mathbf{x}_{i}):i=1,\ldots,n\}\). We call this second step _isotonic recalibration_. In order to obtain a prediction for a new covariate value \(\mathbf{x}\in\mathcal{X}\), we compute \(\widehat{\pi}(\mathbf{x})\), find \(i\) such that \(\widehat{\pi}(\mathbf{x}_{i})<\widehat{\pi}(\mathbf{x})\leq\widehat{\pi}(\mathbf{x}_{i+1})\), and interpolate by setting \(\widehat{\mu}(\mathbf{x})=(\widehat{\mu}(\mathbf{x}_{i})+\widehat{\mu}(\mathbf{x}_{i+1}))/2\). This interpolation may be advantageous for prediction. For interpretation and analysis, however, we prefer a step function interpolation as this leads to a partition of the covariate space, see Section 3.3, below, and Figure 2. This two-step estimation approach can be interpreted as a generalization of the monotone single index models considered by Balabdaoui et al. [2]. They assume that the true regression function is of the form \(\mathbb{E}[Y|\mathbf{X}=\mathbf{x}]=\psi(\mathbf{\alpha}^{\top}\mathbf{x})\), with an increasing function \(\psi\). In contrast to our proposal, the regression model \(\pi\) is fixed to be a linear model \(\mathbf{\alpha}^{\top}\mathbf{x}\) in their approach. They consider global least squares estimation jointly for \((\psi,\mathbf{\alpha})\), but find it computationally intensive. As an alternative they suggest a two-step estimation procedure similar to our approach but with a split of the data such that \(\mathbf{\alpha}\) and the isotonic regression are estimated on independent samples. They find that if the rate of convergence of the estimator for \(\mathbf{\alpha}\) is sufficiently fast, then the resulting estimator of the true regression function is consistent with a convergence rate of order \(n^{1/3}\). In a distributional regression framework, Henzi et al. [13] considered the described two-step estimation procedure with an isotonic distributional regression [14], instead of a classical least squares isotonic regression as described in Section 2.1. They show that in both cases, with and without sample splitting, the procedure leads to consistent estimation of the conditional distribution of \(Y\) given \(\mathbf{X}\), as long as the index \(\pi\) can be estimated at a parametric rate. The two options, with and without sample splitting, do not result in relevant differences in predictive performance in the applications considered by Henzi et al. [13]. 
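To make the two-step procedure concrete, the following R sketch runs it on simulated data. It is only schematic: it uses equal weights, a gamma GLM in place of the deep network of Section 4 as the first-step model \(\widehat{\pi}\), base R's isoreg() for the isotonic recalibration, and our own clamping rule for new covariate values whose rank falls outside the observed range (the names fit1, pihat and predict_recal are ours).

```
# Schematic two-step estimation: first-step ranking model, then isotonic recalibration.
set.seed(2)
n  <- 500
x1 <- runif(n); x2 <- runif(n)
y  <- rgamma(n, shape = 2, rate = 2 / exp(1 + x1 + 0.5 * x2))   # positive responses
dat <- data.frame(y = y, x1 = x1, x2 = x2)

# Step 1: first regression model providing the ranking pi_hat (a gamma GLM here).
fit1  <- glm(y ~ x1 + x2, family = Gamma(link = "log"), data = dat)
pihat <- predict(fit1, type = "response")

# Step 2: isotonic recalibration of y against the ranks of pi_hat (equal weights).
ord   <- order(pihat)
iso   <- isoreg(pihat[ord], y[ord])
muhat <- iso$yf                          # recalibrated estimates, in rank order
all.equal(sum(muhat), sum(y))            # global balance property (2.6) with unit weights

# Prediction for a new covariate value: midpoint interpolation between the two
# neighbouring recalibrated values, as described in the text (clamping is ours).
predict_recal <- function(newdata) {
  p <- predict(fit1, newdata = newdata, type = "response")
  i <- findInterval(p, pihat[ord])
  i <- pmin(pmax(i, 1), n - 1)
  (muhat[i] + muhat[i + 1]) / 2
}
predict_recal(data.frame(x1 = 0.5, x2 = 0.5))
```

If sample splitting is preferred, as in the two-step variant of Balabdaoui et al. [2], fit1 and the isotonic regression can simply be estimated on disjoint subsets of the data.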
Figure 2: Example of an isotonic regression of location-scale type with varying signal-to-noise ratio for the identical sample point \(\omega\in\Omega\): (lhs) \(\sigma=2\) with \(K(\mathbf{y})=46\) and (rhs) \(\sigma=20\) with \(K(\mathbf{y})=13\).

Assumption (3.1) can be checked by diagnostic plots using binning similarly to the plots in Henzi et al. [13, Figure 2] in the distributional regression case. Predictive performance should be assessed on a test set of data disjoint from \((y_{i},\mathbf{x}_{i})_{i=1}^{n}\), that is, on data that has not been used in the estimation procedure at all. Isotonic recalibration ensures auto-calibration in-sample, and under an i.i.d. assumption, auto-calibration will also hold approximately out-of-sample. Out-of-sample auto-calibration can be diagnosed with CORP (consistent, optimally binned, reproducible and PAV) mean reliability diagrams as suggested by Gneiting-Resin [12], and comparison of predictive performance can be done with the usual squared error loss function or deviance loss functions.

### Over-fitting at the boundary

There is a small issue with the isotonic recalibration, namely, it tends to over-fit at the lower and upper boundaries of the ranks \(\widehat{\pi}(\mathbf{x}_{1})<\ldots<\widehat{\pi}(\mathbf{x}_{n})\). For instance, if \(y_{n}\) is the largest observation in the portfolio (which is not unlikely since the ranking \(\widehat{\pi}\) is chosen in a response data-driven way), then we estimate \(\widehat{\mu}_{i_{K}}=y_{n}\), where \(K=K((y_{i},\widehat{\pi}(\mathbf{x}_{i}))_{i=1}^{n})\). Often, this over-fits to the (smallest and largest) observations, as such extreme values/estimates cannot be verified on out-of-sample data. For this reason, we visually analyze the largest and smallest values in the estimates \(\widehat{\mathbf{\mu}}\), and we may manually merge, say, the smallest block \(\mathcal{I}_{1}\) with the second smallest one \(\mathcal{I}_{2}\) (with the resulting estimate (2.4) on the merged block). More rigorously, this pooling could be cross-validated on out-of-sample data, but we refrain from doing so. We come back to this in Figure 5, below, where we merge the two blocks with the biggest estimates.

### Interpretation

In (2.3) we have introduced the complexity number \(K((y_{i},\widehat{\pi}(\mathbf{x}_{i}))_{i=1}^{n})\) that counts the number of different values in \(\widehat{\mathbf{\mu}}\), obtained by the isotonic regression (2.2) in the isotonic recalibration step. This complexity number \(K((y_{i},\widehat{\pi}(\mathbf{x}_{i}))_{i=1}^{n})\) allows one to assess the information content of the model, or in other words, how much signal is explainable from the data. Theorem 2.3 shows that the lower the signal-to-noise ratio, the lower the complexity number of the isotonic regression that we can expect. Clearly, in Theorem 2.3 we assume that the ranking of the observations is correct, which will only be approximately satisfied since \(\pi\) has to be estimated. In general, having large samples and flexible regression models for modeling \(\pi\), it is reasonable to assume that the statement remains qualitatively valid. However, in complex (algorithmic) regression models, we need to ensure that we prevent in-sample overfitting; this is typically controlled by either using (independent) validation data or by performing a cross-validation analysis. Typical claims data in non-life insurance have a low signal-to-noise ratio.
Regarding claims frequencies, this low signal-to-noise ratio is caused by the fact that claims are not very frequent events, e.g., in car insurance annual claims frequencies range from 5% to 10%, that is, only one out of 10 (or 20) drivers suffers a claim within a calendar year. A low signal-to-noise ratio also applies to claim amounts, which are usually strongly driven by randomness and the explanatory part from policyholder information is comparably limited. Therefore, we typically expect a low complexity number \(K((y_{i},\widehat{\pi}(\mathbf{x}_{i}))_{i=1}^{n})\) both for claims frequency and claim amounts modeling. In case of a small to moderate complexity number \(K=K((y_{i},\widehat{\pi}(\mathbf{x}_{i}))_{i=1}^{n})\), the regression function \(\widehat{\mathbf{\mu}}\) becomes interpretable through the isotonic recalibration step. For this, we extend the auto-calibrated regression function \(\widehat{\mu}\) from the set \(\{\widehat{\pi}(\mathbf{x}_{1}),\ldots,\widehat{\pi}(\mathbf{x}_{n})\}\) to the entire covariate space by defining a step function \[\widehat{\mu}(\mathbf{x})=\widehat{\mu}_{i_{k}},\quad\text{if}\quad\ \widehat{\pi}(\mathbf{x}_{i_{k}})\leq\widehat{\pi}(\mathbf{x})<\widehat{\pi}(\mathbf{x}_{i_{k +1}}),\] for all \(\mathbf{x}\in\mathcal{X}\), where \(0=i_{0}<i_{1}<\cdots<i_{K}=n\) are the slicing points of the isotonic regression as defined in (2.3). Figure 1 illustrates this step function interpolation which is different from an interpolation scheme that one would naturally use for prediction. We define a partition \(\mathcal{X}_{1},\ldots,\mathcal{X}_{K}\) of the original covariate space \(\mathcal{X}\) by \[\mathcal{X}_{k}=\{\mathbf{x}\in\mathcal{X}:\widehat{\mu}(\mathbf{x})=\widehat{\mu}_{i_ {k}}\},\quad k=1,\ldots,K. \tag{3.2}\] Figure 4 illustrates how this partition of \(\mathcal{X}\) provides insights on the covariate-response relationships in the model. This procedure has some analogy to regression trees and boosting trees that rely on partitions of the covariate space \(\mathcal{X}\). In the case study in Section 4, we illustrate two further possibilities to use the partition defined at (3.2) for understanding covariate-response relationships. First, in Figure 7, the influence of individual covariates on the price cohorts is analyzed, and second, Figure 9 gives a summary view of the whole covariate space for a chosen price cohort. ## 4 Swedish motorcycle data We consider claim amounts modeling on the Swedish motorcycle data which was originally presented in the text book of Ohlsson-Johansson [26] and which is also studied in Wuthrich-Merz [31].1 This data set comprises comprehensive insurance for motorcycles in Sweden. The insurance product covers loss or damage of motorcycles other than collision, e.g., caused by theft, fire or vandalism. The data contains claims aggregated per feature (covariate) combination for the calendar years 1994-1998. There are 683 claims on 62,036 different covariates, thus, claims are very sparse. We use exactly the same data pre-processing as described in [31, Listing 13.3], and an excerpt of the pre-processed data is shown in Listing 1; for a description of the different covariates we refer to [26, Section 2.4] and [31, Section 13.2]. The goal is to build a regression model for these 683 positive claim amounts, and use isotonic recalibration for auto-calibration and interpretation as described in Section 3.3. Footnote 1: The Swedish motorcycle data set is available through the R package CASdatasets [10]. 
``` 1'data.frame':62036obs.of9variables: 2%OwnerAge:num18181818181818181818... 3$Gender:Factorw/2levels"Female","Male":111111111111... 4$Area:Factorw/2levels"Zone1","Zone2"...:11112223333... 5$RiskClass:int1233113111... 6$VehAge:num81199111224466... 7$BonusClass:int22341121122... 8$Exposure:num10.7780.4990.5010.929... 9$ClaimNb:int000000000000... 10$ClaimAbout:int00000000000... ``` ### Isotonic recalibration vs. binary regression trees We start by considering the two covariate components RiskClass and VehAge only. Since the resulting covariate space \(\mathcal{X}=\{(\texttt{RiskClass},\texttt{VehAge})\}\subset\mathbb{R}^{2}\) is two-dimensional, we can graphically illustrate the differences between the isotonic recalibration approach and a binary regression tree (as a competing model) for interpretation. In Section 4.2, we consider all available covariates. We fit a deep feed-forward neural network (FFNN) regression model to these \(683\) claims. We choose a network architecture of depth \(3\) with \((20,15,10)\) neurons in the three hidden layers, the hyperbolic tangent activation function in the hidden layers, and the log-link for the output layer. The input has dimension \(2\), this results in a FFNN architecture with a network parameter of dimension \(546\); for a more detailed discussion of FFNNs we refer to [31, Chapter 7], in particular, to Listings 7.1-7.3 of that reference. We fit this model using the gamma deviance loss, see [31, Section 5.3.7] and Remark 2.2, use the nadam version of stochastic gradient descent, and exercise early stopping on a validation set being \(20\%\) of the entire data. Line (1a) of Table 1, called gamma FFNN, shows the performance of the fitted FFNN regression model. This is compared to the null model (empirical mean) on line (0) that does not consider any covariates.2 We observe a decrease in gamma deviance loss and in root mean squared error (RMSE) which justifies the use of a regression model; note that these are in-sample figures, but we use early stopping to prevent the network from in-sample overfitting. The difficulty here is that, only having \(683\) claims, we cannot provide a reasonable out-of-sample analysis. The last column of Table 1 called 'average' compares the average claims estimate of the FFNN to the empirical mean, and we observe a slight positive bias in the FFNN prediction, i.e., \(24,932>24,641\). Footnote 2: In a gamma null model, i.e., assuming i.i.d. gamma distributed responses, we obtain that the MLE of the mean is equal to the empirical mean of the observations; this generally holds true within the exponential dispersion family. In the next step, we use the FFNN estimates as ranks \(\widehat{\pi}(\mathbf{x}_{i})\) for ordering the claims \(y_{i}\) and the covariates \(\mathbf{x}_{i}\), respectively. Then we apply the non-parametric isotonic recalibration step (2.2) to these ranks and claims. The Swedish motorcycle claims data is aggregated w.r.t. the available covariate combinations, and the \(683\) positive claims come from \(656\) different covariate combinations \(\mathbf{x}_{i}\). 
This requires that we work with the weighted version of (2.2), where \(w_{i}\in\mathbb{N}\) corresponds to the number of claims that have been observed for covariate \(\mathbf{x}_{i}\), and \(y_{i}\) corresponds to the average observed claim amount on \(\mathbf{x}_{i}\).3 We use the R package monotone[7] which provides \begin{table} \begin{tabular}{|l l||c c|c|} \hline & & gamma deviance & RMSE & average \\ \hline (0) & null model & 2.085 & 35,311 & 24,641 \\ \hline (1a) & gamma FFNN & 1.704 & 32,562 & 24,932 \\ (1b) & gamma FFNN recalibrated & 1.640 & 32,005 & 24,641 \\ \hline (2) & binary regression tree & 1.761 & 32,706 & 24,641 \\ \hline \end{tabular} \end{table} Table 1: Loss figures in the Swedish motorcycle example only considering RiskClass and VehAge as covariates. a fast implementation of the PAV algorithm. The numerical results are presented on line (1b) of Table 1. There is a slight decrease in average loss through the isotonic recalibration. This is expected since the isotonic regression is optimizing the in-sample loss for any Bregman loss function, see Remark 2.2. The last column of Table 1 verifies that now the global balance property (2.6) holds. Footnote 1: The \(\widehat{\pi}(\mathbf{x}_{i})\) is computationally feasible). Figure 3 provides the resulting step function from the isotonic recalibration (in red color) of the ranking \((\widehat{\pi}(\mathbf{x}_{i}))_{i=1}^{n}\) given by the gamma FFNN; this is complemented with the observed amounts \(y_{i}\) (in blue color). The resulting complexity number is \(K=K((y_{i},\widehat{\pi}(\mathbf{x}_{i}))_{i=1}^{n})=18\), i.e., in this example the conditional expected claim amounts can be represented by 18 different estimates \(\widehat{\mu}_{i_{k}}\in\mathbb{R}\), \(k=1,\ldots,K=18\); the FFNN regression function uses \(6\cdot 21=126\) different values (ranks) which corresponds to the cardinality of the available covariate values \((\texttt{RiskClass},\texttt{VehAge})\in\mathcal{X}\). The isotonic recalibration on the ranks \(\widehat{\pi}(\mathbf{x})=\widehat{\pi}(\texttt{RiskClass},\texttt{VehAge})\) of the FFNN leads to a partition \(\mathcal{X}_{1},\ldots,\mathcal{X}_{18}\) of the covariate space as defined at (3.2). We compare this partition to the one that results from a binary split regression tree approach. We use 10-fold cross-validation to determine the optimal tree size. In this example the optimal tree has only 3 splits, and they all concern the variable VehAge. The resulting losses of this optimal tree are shown on line (2) of Table 1, and we conclude that the regression tree approach is not fully competitive, here. More interestingly, Figure 4 shows the resulting partitions of the covariate space \(\mathcal{X}=\{(\texttt{RiskClass},\texttt{VehAge})\}\) from the two approaches. The plot on the right-hand side shows the three splits of the regression tree (all w.r.t. VehAge). From the isotonic recalibration approach on the left-hand side, we learn that a good regression model should have diagonal structures, emphasizing that the two covariates interact in a nontrivial way which cannot be captured by the binary split regression tree in this case. Figure 3: Isotonic recalibration in the Swedish motorcycle example only using RiskClass and VehAge as covariates resulting in the complexity number \(K((y_{i},\widehat{\pi}(\mathbf{x}_{i}))_{i=1}^{n})=18\). ### Consideration of all covariates We now consider all available covariate components, see lines 2-7 of Listing 1. We first fit a FFNN to this data. 
This is done exactly as in Section 4.1, now using all available covariate components as input; in addition, we fit a gamma GLM to all covariates as a further benchmark, see Table 2.

Figure 4: (lhs) Isotonic recalibration and (rhs) binary regression tree, both only using RiskClass and VehAge as covariates; the color scale is the same in both plots.

\begin{table} \begin{tabular}{|l l||c c|c|} \hline & & gamma deviance & RMSE & average \\ \hline (0) & null model & 2.085 & 35,311 & 24,641 \\ \hline (1a) & gamma GLM & 1.717 & 32,562 & 25,105 \\ (1b) & gamma GLM recalibrated with \(K=24\) & 1.641 & 31,578 & 24,641 \\ \hline (2a) & gamma FFNN & 1.496 & 29,673 & 24,526 \\ (2b) & gamma FFNN recalibrated with \(K=22\) & 1.452 & 28,806 & 24,641 \\ (2c) & gamma FFNN tree adjustment with 4 bins (seed 1) & 1.508 & 29,371 & 24,641 \\ (2d) & gamma FFNN tree adjustment with 8 bins (seed 2) & 1.466 & 27,942 & 24,641 \\ \hline \end{tabular} \end{table} Table 2: Losses in the Swedish motorcycle example based on all available covariates.

The gamma GLM fails to have the global balance property because we work with the log-link and not with the canonical link of the Gamma GLM, here. In the next step, we use the FFNN predictions as ranks \(\widehat{\pi}(\mathbf{x}_{i})\) for ordering the responses and covariates, and we label the claims \(y_{i}\) such that \(\widehat{\pi}(\mathbf{x}_{1})<\ldots<\widehat{\pi}(\mathbf{x}_{n})\). There are no ties in this data, and we obtain \(n=656\) pairwise different values. The results of the isotonic recalibration are presented in Figure 5 (middle). The complexity number is \(K=K((y_{i},\widehat{\pi}(\mathbf{x}_{i}))_{i=1}^{n})=23\), thus, the entire regression problem is encoded in 23 different values \(\widehat{\mu}_{i_{k}}\), \(k=1,\ldots,K\). In view of this plot, it seems that the largest value \(\widehat{\mu}_{i_{K}}\) over-fits to the corresponding observation, as this estimate is determined by a single observation \(y_{n}\), being bigger than the weighted block mean \(\widehat{\mu}_{i_{K-1}}\) on the previous block \(\mathcal{I}_{K-1}\); compare Section 3.2. For this reason, we manually pool the two last blocks \(\mathcal{I}_{K-1}\) and \(\mathcal{I}_{K}\). This provides us with a new estimate (2.4) on this merged block, and reduces the complexity number by 1 to \(K=22\). The resulting isotonic recalibration is shown in Figure 5 (rhs), and the empirical losses are provided on line (2b) of Table 2. Importantly, this isotonic recalibrated regression is in-sample auto-calibrated (2.5) and, hence, it fulfills the global balance property, which can be verified in the last column of Table 2. We perform the same isotonic recalibration on the ranks obtained from the gamma GLM in Table 2. We observe that the isotonic recalibration step leads to a major decrease in average loss in the gamma GLM, and it results in the complexity number \(K=24\), see also Figure 5 (lhs). We compare isotonic recalibration to a recent proposal of Lindholm et al. [21] that also achieves auto-calibration in-sample. Isotonic regression provides a partition of the index set \(\mathcal{I}=\{1,\ldots,n\}\) into disjoint blocks \(\mathcal{I}_{1},\ldots,\mathcal{I}_{K}\) on which the estimated regression function is constant. This can also be achieved by considering a binary regression tree algorithm applied to the (rank) covariates \(\{\widehat{\pi}(\mathbf{x}_{i});\,1\leq i\leq n\}\) and corresponding responses \(y_{i}\); see Section 2.3.2 of Lindholm et al. [21]. We call this latter approach the tree binning approach. There are two main differences between the tree binning approach and the isotonic recalibration approach.
First, generally, the tree binning approach does not provide a regression function that has the same ranking as the first regression step providing \(\widehat{\pi}(\mathbf{x}_{i})\). Second, in the isotonic regression approach, the complexity number \(K((y_{i},\widehat{\pi}(\mathbf{x}_{i}))_{i=1}^{n})\) is naturally given, i.e., the isotonic regression (2.2) automatically Figure 5: Isotonically recalibrated regression models in the Swedish motorcycle example using all covariates for the gamma GLM with complexity number \(K((y_{i},\widehat{\pi}(\mathbf{x}_{i}))_{i=1}^{n})=24\) (lhs), for the gamma FFNN with complexity number \(K((y_{i},\widehat{\pi}(\mathbf{x}_{i}))_{i=1}^{n})=23\) (middle) and over-fitting corrected (rhs). extracts the degree of information contained in the responses \(\mathbf{y}\), and generally, this degree of information is increasing for an increasing signal-to-noise ratio by Theorem 2.3. Conversely, in the tree binning approach, we need to determine the optimal number of bins (leaves), e.g., by \(k\)-fold cross-validation. The obtained number of bins depends on the hyperparameters of the minimal leaf size and of the number of folds in cross-validation, as well as on the random partition of the instances for cross-validation. We found that the number of bins is sensitive to the tuning choices, and hence, contrary to isotonic recalibration, the resulting partition is subject to potentially subjective choices and randomness. For the results on the tree binning approach in Table 2 we have chosen \(k=10\) folds and a minimal leaf size of 10, and only the random partitioning of the pseudo-sample is different for the results in lines (2c)-(2d). A first random seed gives 4 bins and a second one 8 bins, and we observe a considerable difference in the two models with respect to gamma deviance loss and the RMSE. Figure 6 shows the isotonic recalibration and the tree binning approach with 8 bins, corresponding to lines (2b) and (2d) of Table 2. From this plot, we conclude that the tree binning approach does not necessarily preserve the rankings induces by \(\widehat{\pi}(\mathbf{x}_{i})\) as the resulting step function (in blue color) is not monotonically increasing. We recommend isotonic recalibration to achieve auto-calibration since it preserves monotonicity of the regression model in the first estimation step, and there are no potentially influential tuning parameters. In Figure 7, we illustrate the resulting marginal plots if we project the estimated values \(\widehat{\mathbf{\mu}}\) of the isotonic recalibration to the corresponding covariate values, i.e., this is the marginal view of the resulting covariate space partition (3.2). For a low complexity number \(K((y_{i},\widehat{\pi}(\mathbf{x}_{i}))_{i=1}^{n})\) this can be interpreted nicely. We see relevant differences in the distributions of the colors across the different covariate levels of OwnerAge, Zone, RiskClass and VehAge. This indicates that these variables are important for explaining claim sizes, with the reservation that this marginal view ignores potential interactions. For the variable Gender we cannot make any conclusion as the gender balance inequality is too large. The interpretation of BonusClass is less obvious. In fact, from the gamma GLM we know that BonusClass is not significant, see [31, Table 5.13]. This is because the BonusClass is related to collision claims, whereas our data studies comprehensive insurance that excludes collision claims. 
Figure 8 shows the marginal view of the isotonically Figure 6: Tree binning vs. isotonic recalibration; the step functions correspond to lines (2d) and (2b) of Table 2 with 8 bins for line (2d) and complexity number \(K=22\) for line (2b). recalibrated gamma FFNN (lhs) and the gamma GLM (rhs) for the covariate BonusClass. As mentioned, BonusClass is not significant in the gamma GLM, and it seems from the figure that, indeed, the color distribution across the different levels is rather similar for both models. Clearly, the VehAge is the most important variable showing the picture that claims on new motorcycles are more expensive. There are substantial differences in claim size distributions between the zones, Zone 1 being the three largest cities of Sweden having typically more big claims. RiskClass corresponds to the size of the motorcycle which interacts with the OwnerAge, the VehAge and the Zone, and it is therefore more difficult to interpret as we have relevant interactions between these variables. Figure 8: Marginal view of the isotonically recalibrated gamma FFNN model (lhs) and the isotonically recalibrated gamma GLM (rhs) for the covariate components BonusClass. Figure 7: Marginal view of the isotonically recalibrated gamma FFNN model of Table 2 of the 6 considered covariate components OwnerAge, Gender, Zone, RiskClass, VehAge, BonusClass. Figure 9 gives an illustration of the partition \((\mathcal{X}_{k})_{k=1,\ldots,K}\) of the 6-dimensional covariate space \(\mathcal{X}\) w.r.t. the isotonic recalibration \((\widehat{\mu}_{i_{k}})_{k=1,\ldots,K}\) for two selected values of \(k\). The lines connect all the covariate components in \(\mathbf{x}\) that are observed within the data \((\mathbf{x}_{i})_{1\leq i\leq n}\) for a given value \(\widehat{\mu}_{i_{k}}\), and the size of the black dots illustrates how often a certain covariate level is observed. E.g., the figure on the right-hand side belongs to the second largest claim prediction \(\widehat{\mu}_{i_{K-1}}=59,851\). For this expected response level, the OwnerAge is comparably small (around 25 years), everyone is Male mostly living in Zone 1 (three biggest cities of Sweden), having a motorcycle of a higher RiskClass with a small VehAge. Similar conclusions can be drawn for the other parts \(\mathcal{X}_{k}\) of the covariate space \(\mathcal{X}\), thus, having a low complexity number \(K((y_{i},\widehat{\pi}(\mathbf{x}_{i}))_{i=1}^{n})\) enables to explain the regression model. ## 5 Conclusions We have tackled two problems. First, we have enforced that the regression model fulfills the auto-calibration property by applying an isotonic recalibration to the ranks of a fitted (first) regression model. This isotonic recalibration does not involve any hyperparameters, but it solely assumes that the ranks from the first regression model are (approximately) correct. Isotonic regression has the property that the complexity of the resulting (non-parametric) regression function is small in low signal-to-noise ratio problems. Benefiting from this property, we have shown that this leads to explainable regression functions because a low complexity is equivalent to a coarse partition of the covariate space. In insurance pricing problems this is particularly useful, as we typically face a low signal-to-noise ratio in insurance claims data. 
We can then fit a complex (algorithmic) model to that data in a first step, and in a subsequent step we propose to auto-calibrate the first regression function using isotonic recalibration, which also leads to a substantial simplification of the regression function. Figure 9: Partition \((\mathcal{X}_{k})_{k=1,\ldots,K}\) of the covariate space \(\mathcal{X}\) w.r.t. the isotonic recalibration for two selected values of \(k=12,21\).
2306.06453
Isometric models of the Funk disc and the Busemann function
In this article, we find three isometric models of the Funk disc: Finsler upper half of the hyperboloid of two sheets model, the Finsler band model and the Finsler upper hemi sphere model; and we also find two new models of the Finsler-Poincar\'e disc. We explicitly describe the geodesics in each model. Moreover, we compute the Busemann function and consequently describe the horocycles in the Funk and the Hilbert disc. Finally, we prove the asymptotic harmonicity of the Funk disc. We also show that, the concept of asymptotic harmonicity of the Finsler manifolds {\it tacitly} depends on the measure, in {\it contrast} to the Riemannian case.
Ashok Kumar, Hemangi Madhusudan Shah, Bankteshwar Tiwari
2023-06-10T14:20:31Z
http://arxiv.org/abs/2306.06453v1
# Isometric models of the Funk disc and the Busemann function ###### Abstract In this article, we find three isometric models of the Funk disc: Finsler upper half of the hyperboloid of two sheets model, the Finsler band model and the Finsler upper hemi sphere model; and we also find two new models of the Finsler-Poincare disc. We explicitly describe the geodesics in each model. Moreover, we compute the Busemann function and consequently describe the horocycles in the Funk and the Hilbert disc. Finally, we prove the asymptotic harmonicity of the Funk disc. We also show that, the concept of asymptotic harmonicity of the Finsler manifolds _tacitly_ depends on the measure, in _contrast_ to the Riemannian case. ## 1 Introduction The Funk and the Hilbert metrics were introduced as supporting examples to solve the _Hilbert fourth problem_: _Find all the metrics for which line segments are geodesics_. The Funk metric on the unit disc is a well-known Finsler metric of constant flag curvature \(-\frac{1}{4}\); whereas the Hilbert metric on the unit disc is the arithmetic symmetrization of its Funk metric and is of constant curvature \(-1\). In the sequel, we denote by [FF] the Funk unit disc. **[FF]**: The Funk metric on the unit disc can also be interpreted as Randers metric on the unit disc. It is the deformation of the well known Klein metric on the disc by a closed \(1\)-form (Subsection 3.1). Recently, the isometries between the Funk disc [FF], the Finsler-Poincare disc [FP] and the Finsler-upper half plane [FH] has been established in ([7], SS3). In this article, the new isometric models of the Funk disc and the Finsler-Poincare disc have been introduced. The Lorentzian metric in \(\mathbb{R}^{3}\) is a well-known non positive definite Riemannian metric. However, its pullback on the upper half of the hyperboloid of two sheets is a positive definite Riemannian metric, what we call, the hyperbolic metric on the unit disc. In this article, we construct a non positive definite Randers metric \(F_{L}\), (see (12)), on the upper half space \(\mathbb{H}^{3}\), whose pullback on the upper half of the hyperboloid of two sheets is the well known Funk metric on the unit disc. Further, we construct a non positive definite Randers metric \(F_{+}\), (see (21)), on the upper half space \(\mathbb{H}^{3}\), whose pullback on the upper hemi-sphere is the well known Funk metric on the unit disc. Thus, the Funk metric in the unit disc can be realized as the pull back of a Randers metric \(F_{L}\), (see (12)), on the upper half of the hyperboloid of two sheets [FUH-1], as well as the pull back of a Randers metric \(F_{+}\) on the upper hemi-sphere [FUS-2]. We also show the isometry between the Finsler band model [FB] and the Finsler upper half plane [FU]. As the Funk disc and the Finsler-Poincare disc are isometric to each other, we further show that the pullback of the Randers metric \(F_{L}\) on the upper half of the hyperboloid of two sheets [FUH-2], as well as the pullback of the Randers metric \(F_{+}\) on the upper hemi-sphere can be realized as the models of the Finsler-Poincare disc, termed as [FUS-2]. See the Figure 1 for the overview of all the isometric different models. We also find the geodesics in these models. In particular, the geodesics of the Band model are shown in Figure 2. We compute the Busemann function in both the discs. Consequently, we find all the horocycles in these models. In Section \(2\), we discuss the preliminaries required for the paper. 
In Section \(3\), we introduce and explore all the \(5\) isometric models of the Funk disc in detail as follows. **[_FUH_**-\(1\)]**: It is the pullback of the deformed Lorentzian metric on the upper half of the hyperboloid of two sheets ( Subsection \(3.2\)). **[_FUH_**-\(2\)]**: It is the another pullback of the deformed Lorentzian metric on the upper half of the hyperboloid of two sheets ( Subsection \(3.4\)). **[_FB_]**: It is the deformation of the Riemannian band model by closed \(1\)-form. The resulting metric is isometric to the Funk metric ( Subsection \(3.5\)). **[_FUS_**-\(1\)]**: It is the pullback of the deformation of the hyperbolic metric on \(\mathbb{R}^{3}_{+}\) on the upper hemisphere ( Subsection \(3.6\)). **[_FUS_**-\(2\)]**: It is the pullback of the deformation of the hyperbolic metric on \(\mathbb{R}^{3}_{+}\) on the upper hemisphere ( Subsection \(3.7\)). We recall the Finsler-Poincare disc [FP] and the Finsler-Poincare upper half plane [FU] from [7], Theorem \(3\). **[_FP_]**: The Finsler-Poincare metric is a Randers metric on the unit disc which is the deformation of the Poincare metric by a closed \(1\)-form (Subsection \(3.3\)). [_FU_]: The Finsler-Poincare upper half plane is a Randers deformation in the upper half plane by the hyperbolic metric in the upper half plane by a closed \(1\)-form (Subsection 3.5). In Section \(4\), we study the geodesics of all these isometric models. We explicitly obtain the parameterization of the Funk geodesics, which are Klien lines as the point sets. We also show that, the geodesics of \([FP]\) are semicircles which intersect orthogonally to the boundary of the unit disc. The geodesics of \([FU]\) are either vertical lines or semicircles centred at \(x\)-axis. The geodesics of the band model are vertical lines, curves asymptotic to \(x\)-axis and translate of \(x\)-axis and some slant curves. In Section \(5\), we show how to find geodesics of the Hilbert metric using the Funk metric. Consequently we obtain the explicit parametrization of the Klein geodesic. In Section \(6\), we find the forward Busemann function for the forward ray, of the Funk disc. And in the Hilbert disc we find the Busemann function for a line. In Section \(7\), we show the asymptotic harmonicity of the Funk disc with respect to the Busemann-Hausdorff volume. ## 2 Preliminaries The theory of Finsler manifolds can be considered as a generalization of that of Riemannian manifolds, where the Riemannian metric is replaced by a so called Finsler metric, which is a smoothly varying family of Minkowski norms in each tangent space of the manifold. Let \(M\) be an \(n\)-dimensional smooth manifold, \(T_{x}M\) denotes the tangent space of \(M\) at \(x\). The tangent bundle \(TM\) of \(M\) is the disjoint union of tangent spaces: \(TM:=\sqcup_{x\in M}T_{x}M\). We denote the elements of \(TM\) by \((x,v)\), where \(v\in T_{x}M\) and \(TM_{0}:=TM\setminus\{0\}\). Figure 1: Maps in this figure are isometries obtained in Section 3. **Definition 2.1** (Finsler structure, [3], SS1.2): _A Finsler structure on the manifold \(M\) is a function \(F:TM\rightarrow[0,\infty)\) satisfying the following conditions: (i) \(F\) is smooth on \(TM_{0}\), (ii) \(F\) is a positively 1-homogeneous on the fibers of the tangent bundle \(TM\), (iii) The Hessian of \(\dfrac{F^{2}}{2}\) with elements \(g_{ij}=\dfrac{1}{2}\dfrac{\partial^{2}F^{2}}{\partial v^{i}\partial v^{j}}\) is positive definite on \(TM_{0}\). 
The pair \((M,F)\) is called a Finsler space and \(g_{ij}\) is called the fundamental tensor of the Finsler structure \(F\)._ Although all Riemannian metrics are examples of Finsler metrics, however Randers metric is the simplest example of a non-Riemannian Finsler metric. **Definition 2.2** (Randers Metric, [3], SS1.2): _Let \(\alpha=\sqrt{a_{ij}(x)v^{i}v^{j}}\) be a Riemannian metric on the manifold \(M\) and \(\beta\) be a one-form on the manifold with \(||\beta||_{\alpha}<1\), where \(||\beta||_{\alpha}=\sqrt{a^{ij}(x)b_{i}(x)b_{j}(x)}\), then \(F(x,v)=\alpha(x,v)+\beta(x,v)\) is called a Randers metric._ It is well known that there is no canonical volume form on the Finsler manifolds, like in Riemannian case. Some of the well known volume forms on Finsler manifolds are the _Busemann-Hausdorff_ volume form, the _Holmes-Thompson_ volume form, the _maximum volume_ form and the _minimum volume_ form. A Finsler space with a volume form \(d\mu\) is called a Finsler \(m\)-space and is denoted by \((M,F,d\mu)\). The canonical volume form in a Riemannian manifold \((M^{n},\alpha),\alpha=\sqrt{a_{ij}(x)dx^{i}dx^{j}}\), is given by \[dV=\sqrt{\det(a_{ij}(x))}\ dx.\] **Definition 2.3** (Volume forms in Finsler manifolds, [14], SS2.2): _The Busemann Hausdorff volume form is defined as: \(dV_{BH}=\sigma_{BH}(x)dx\), where_ \[\sigma_{BH}(x)=\dfrac{vol(B^{n}(1))}{vol\left\{(v^{i})\in T_{x}M:F(x,v)<1\right\}}. \tag{1}\] _The Holmes-Thompson volume form is defined as \(dV_{HT}=\sigma_{HT}(x)dx\), where_ \[\sigma_{HT}(x)=\dfrac{1}{vol(B^{n}(1))}\int_{F(x,v)<1}\det(g_{ij}(x,v))dv. \tag{2}\] _Here, \(B^{n}(1)\) is the Euclidean unit ball in \(\mathbb{R}^{n}\) and vol is the Euclidean volume._ _The maximum and the minimum volume form of a Finsler metric \(F\) with the fundamental metric tensor \(g_{ij}\) is defined as,_ \[dV_{\max}=\sigma_{\max}(x)\ dx,\ \ dV_{\min}=\sigma_{\min}(x)\ dx, \tag{3}\] _where \(\sigma_{\max}(x)=\max\limits_{v\in I_{x}}\sqrt{\det(g_{ij}(x,v))}\), \(\sigma_{\min}(x)=\min\limits_{v\in I_{x}}\sqrt{\det(g_{ij}(x,v))},\) and \(I_{x}=\{v\in T_{x}M:F(x,v)=1\}\) is the indicatrix at the point \(x\) of the Finsler manifold._ The well-known volume forms of Randers metric can be computed as given by the following lemma. **Lemma 2.1** ([18], SS3 ): _The Busemann-Hausdorff volume form of a Randers metric \(F=\alpha+\beta\) is given by,_ \[dV_{BH}=\left(1-||\beta||_{\alpha}^{2}\right)^{\frac{n+1}{2}}dV_{\alpha}, \tag{4}\] _where \(dV_{\alpha}=\sqrt{\det(a_{ij})}dx.\) The Holmes-Thompson volume form of a Randers metric is given by,_ \[dV_{HT}=dV_{\alpha}. \tag{5}\] _The maximum volume form is given by,_ \[dV_{max}=\left(1+||\beta||_{\alpha}\right)^{n+1}dV_{\alpha}. \tag{6}\] _And the minimum volume form is given by,_ \[dV_{min}=\left(1-||\beta||_{\alpha}\right)^{n+1}dV_{\alpha}. \tag{7}\] In the Finslerian case the Hessian can not be defined uniquely, in contrast to Riemannian case. We first need to define the gradient of a \(C^{k}\)\((k\geq 1)\) function on \((M,F)\). **Definition 2.4** (Gradient, [14], SS14.1): _Let \((M,F,d\mu)\) be a Finsler \(m\)-space, and \(f\) be a \(C^{k}\)\((k\geq 1)\) function on \((M,F)\). Then the gradient \(\nabla f\) is \(C^{k-1}\) on \(\mathcal{U}_{f}:=\{x\in M:df_{x}\neq 0\}\) and \(C^{0}\) on \(M\setminus\mathcal{U}_{f}\). 
Then for \(x\in\mathcal{U}_{f}\),_ \[\nabla f(x):=A^{i}(x,df_{x})\frac{\partial}{\partial x^{i}},\] _where \(A^{i}(\eta)\) are given in a standard local coordinate system \((x^{i},\eta_{i})\) in \(T^{*}M\) by_ \[A^{i}(x,\eta)=\frac{1}{2}\frac{\partial[F^{*2}]}{\partial\eta_{i}}(x,\eta)=g^{ *ij}(x,\eta)\eta_{j},\qquad\eta\neq 0,\] _where \(F^{*}_{x}:T^{*}_{x}M\rightarrow\mathbb{R}\) is defined as \(F^{*}_{x}(\xi)=\sup\limits_{F_{x}(v)=1}\xi(v),v\in T_{x}M\). \(F^{*}\) is the Finsler metric dual to \(F\)._ **Definition 2.5** (Distance function, [14], SS3.2.5 ): _A locally Lipschitz function \(f\) on a Finsler space \((M,F)\) is called a distance function, if_ \[F^{*}(x,df_{x})=1=F(x,\nabla f(x)),\] _holds almost everywhere._ **Definition 2.6** (Laplacian, [14], SS14.1): _Let \((M,F,d\mu)\) be a Finsler \(m\)-space and \(f\) be a \(C^{k}\)\((k\geq 2)\) function on \((M,F)\). Then \(div(\nabla f)\) is a \(C^{k-2}\) function on \(\mathcal{U}_{f}\). Define Laplacian of \(f\) as:_ \[\Delta f(x):=\text{div}(\nabla f(x)),\ \ x\in\mathcal{U}_{f}.\] _The Laplacian of \(f\) is locally expressed as,_ \[\Delta_{\mu}f(x)=\frac{1}{\sigma_{\mu}(x)}\frac{\partial}{\partial x^{i}} \left(\sigma_{\mu}(x)g^{*ij}(x,df_{x})\frac{\partial f(x)}{\partial x^{j}} \right), \tag{8}\] _where \(\sigma_{\mu}(x)\) is the volume density of the volume form \(d\mu\)._ **Remark 2.1**: _The set \(\mathcal{U}_{f}\) is open in \(M\) and \(\mu(M\setminus\mathcal{U}_{f})=0\)._ **Lemma 2.2** ([15], Lemma 5.1): _Let \((M,F,d\mu)\) be an \(n\)-dimensional Finsler manifold with volume form \(d\mu\) and \(f\) be a differentiable function in \(M\). Then on \(\mathcal{U}_{f}\) we have,_ \[\Delta_{\mu}f=tr_{g_{\nabla f}}H(f)-S_{\mu}(\nabla f),\] _where \(S_{\mu}\), is the \(S\)-curvature of measure \(\mu\)._ In what follows, we will be dealing with the Funk and the Hilbert metric on the unit disc. We first define the Funk and the Hilbert metric on a strongly convex domain \(\Omega\) in \(\mathbb{R}^{n}\). Let \(\Omega\) be a non empty strongly convex domain in \(\mathbb{R}^{n}\) and let \(\partial\Omega=\bar{\Omega}\setminus\Omega\) denote the boundary of \(\Omega\). For any two points \(x_{1},x_{2};x_{1}\neq x_{2}\) in \(\Omega\), \(\overrightarrow{x_{1}x_{2}}\) denotes the ray starting at \(x_{1}\) and passing through \(x_{2}\) and \(\overleftarrow{x_{1}x_{2}}\) denotes the ray starting at \(x_{2}\) and passing through \(x_{1}\). In the sequel, \(|.|\) denotes the usual Euclidean norm in \(\mathbb{R}^{n}\). **Definition 2.7**: _(Funk metric, [10], Chapter 2, SS2) The Funk metric on a strongly convex domain \(\Omega\) is denoted by \(d_{F,\Omega}\) and is defined as: For any \(x_{1}\) and \(x_{2}\) in \(\Omega\) and \(a=\overrightarrow{x_{1}x_{2}}\cap\partial\Omega\),_ \[d_{F,\Omega}(x_{1},x_{2})=\log\left(\frac{|x_{1}-a|}{|x_{2}-a|}\right).\] Note that the Funk metric is actually a weak metric in the sense that it is not symmetric. It turns out that the Funk metric is actually an example of a Finsler metric. It can be shown that the Funk distance \(d_{F,\Omega}\) is realized by the smooth Finsler structures \(F_{F}\) on \(T\Omega\), where \(\Omega\) is a _strongly convex_ domain in \(\mathbb{R}^{n}\) with smooth boundary \(\partial\Omega\) ([8], SS1). Thus the Finsler structure \(F_{F}\) on \(\Omega\) is given by, \[F_{F}(x,v)=\frac{|v|}{|x-a|}, \tag{9}\] where \((x,v)\in T\Omega\). 
**Definition 2.8** (Hilbert metric, [10], Chapter 3, SS4): _The Hilbert metric on a proper convex domain \(\Omega\), denoted by \(d_{H,\Omega}\), is defined as: For any \(x_{1}\) and \(x_{2}\) in \(\Omega\), let \(a=\overrightarrow{x_{1}x_{2}}\cap\partial\Omega\) and \(b=\overleftarrow{x_{1}x_{2}}\cap\partial\Omega\). Then_ \[d_{H,\Omega}(x_{1},x_{2})=\frac{1}{2}\ln\left(\frac{|x_{2}-b|.|x_{1}-a|}{|x_{ 1}-b|.|x_{2}-a|}\right). \tag{10}\] It can be easily shown that, \(d_{H,\Omega}\) is the distance function on \(\Omega\) and satisfies the interesting property that, the line segments between any two points are minimizing. In the particular case, where \(\Omega\) is the unit ball, \(d_{H,\Omega}\) coincides with the Klein metric of the hyperbolic space. If \(\partial\Omega\) is smooth and \(\Omega\) is _strongly convex_, then \(d_{H}\) is realized by the smooth Finsler metric on \(\Omega\), called as the Hilbert metric and is given by: \[F_{H}(x,v)=\frac{|v|}{2}\left\{\frac{1}{|x-b|}+\frac{1}{|x-a|}\right\},\] where \((x,v)\in T\Omega\). Clearly, we have \(2F_{H}(x,v)=F_{F}(x,v)+F_{F}(x,-v)\) ([8], SS1). The Funk metric on the unit disc in \(\mathbb{R}^{n}\) is a special type of Randers metric of constant flag curvature, whose geodesics can be easily described, by the general result. **Theorem 2.1**: _([2], SS\(11.3\), [3], SS\(3.4.8\)) If \(F=\alpha+\beta\) is a Randers metric on a manifold \(M\) with \(\beta\) a closed \(1\)-form, then the Finslerian geodesics have the same trajectories as the geodesics of the underlying Riemannian metric \(\alpha\). Moreover, if \((M,\alpha)\) has constant curvature, then \((M,F)\) is locally projectively flat and consequently, in this case \((M,F)\) is projectively equivalent to \((M,\alpha)\)._ ## 3 Isometric models of the Funk disc and the Finsler-Poincare disc In this section, we introduce the isometric models of the Funk disc viz., [FUH-1], [FB] and [FUS-1] and we also introduce the isometric models of the Finsler-Poincare disc viz., [FUH-2] and [FUS-2], as stated in the introduction. Throughout \(\langle,\rangle\) will denote the Euclidean inner product and we will be using the following notations. * \(\mathbb{D}=\left\{(x^{1},x^{2})\in\mathbb{R}^{2}:(x^{1})^{2}+(x^{2})^{2}<1\right\},\) the unit disc in \(\mathbb{R}^{2}\). * \(\mathbb{U}=\left\{(x^{1},x^{2})\in\mathbb{R}^{2}:x^{2}>0\right\},\) the upper half plane in \(\mathbb{R}^{2}\) * \(\mathbb{H}_{+}=\left\{(\tilde{x}^{1},\tilde{x}^{2},\tilde{x}^{3})\in \mathbb{R}^{3}:\tilde{x}^{3}=\sqrt{1+(\tilde{x}^{1})^{2}+(\tilde{x}^{2})^{2}} \right\},\) the upper half of the hyperboloid of two sheets in \(\mathbb{R}^{3}\). * \(\mathbb{B}=\left\{(x^{1},x^{2})\in\mathbb{R}^{2}:\frac{-\pi}{2}<x^{2}<\frac{ \pi}{2}\right\},\) the band in \(\mathbb{R}^{2}\). * \(\mathbb{S}^{2}_{+}=\left\{(x^{1},x^{2},x^{3})\in\mathbb{R}^{3}:(x^{1})^{2}+( x^{2})^{2}+(x^{3})^{2}=1\text{ and }x^{3}>0\right\}\), the upper half of the hemisphere in \(\mathbb{R}^{3}\). ### The Funk Disc [FF] Let us consider a proper strongly convex bounded set \(\Omega\subset\mathbb{R}^{n}\). This will be our ground manifold. The tangent space \(T_{x}\Omega\) at each point \(x\in\Omega\) can be identified with \(\mathbb{R}^{n}\). The Finsler structure \(F_{F}\) on \(\Omega\) is such that the unit ball centered at a point \(x\in\Omega\) is the domain \(\Omega\) in the tangent space \(T_{x}\Omega\cong\mathbb{R}^{n}\) itself. Thus the Finsler structure \(F_{F}\) on \(\Omega\) is defined by (9). 
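As a short worked example of Definitions 2.7 and 2.8 (added here for illustration), take \(\Omega=\mathbb{D}\) the open unit disc, \(x_{1}=(0,0)\) and \(x_{2}=(\tfrac{1}{2},0)\). Then \(a=\overrightarrow{x_{1}x_{2}}\cap\partial\mathbb{D}=(1,0)\) and \(b=\overleftarrow{x_{1}x_{2}}\cap\partial\mathbb{D}=(-1,0)\), so that \[d_{F,\mathbb{D}}(x_{1},x_{2})=\log\frac{|x_{1}-a|}{|x_{2}-a|}=\log 2,\qquad d_{F,\mathbb{D}}(x_{2},x_{1})=\log\frac{|x_{2}-b|}{|x_{1}-b|}=\log\frac{3}{2},\] which shows that the Funk metric is indeed non-symmetric, while \[d_{H,\mathbb{D}}(x_{1},x_{2})=\frac{1}{2}\ln\left(\frac{|x_{2}-b|\,|x_{1}-a|}{|x_{1}-b|\,|x_{2}-a|}\right)=\frac{1}{2}\ln 3=\frac{1}{2}\left(d_{F,\mathbb{D}}(x_{1},x_{2})+d_{F,\mathbb{D}}(x_{2},x_{1})\right),\] reflecting the arithmetic symmetrization \(2F_{H}(x,v)=F_{F}(x,v)+F_{F}(x,-v)\).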
Let the convex set \(\Omega\) be the unit disc \(\mathbb{D}\), then \[a=\left(x+\frac{v}{F_{F}(x,v)}\right)\in\partial\mathbb{D}\iff|x+\frac{v}{F_ {F}(x,v)}|^{2}=1.\] Rewriting this condition as, \[F_{F}^{2}(x,v)(1-|x|^{2})-2F_{F}(x,v)\langle x,v\rangle-|v|^{2}=0.\] The non-negative root of the above quadratic equation is, \[F_{F}(x,v)=\alpha_{F}(x,v)+\beta_{F}(x,v), \tag{11}\] where \(\alpha_{F}(x,v)=\dfrac{\sqrt{\left(1-|x|^{2}\right)|v|^{2}+\langle x,v\rangle ^{2}}}{1-|x|^{2}}\) is the well known Klein metric on the unit disc and \(\beta_{F}(x,v)=\dfrac{\langle x,v\rangle}{1-|x|^{2}}\) is a \(1\)-form on the disc. Since \(||\beta_{F}||_{\alpha_{F}}=|x|<1\), \(F_{F}\) is a positive definite Randers metric also known as the Funk metric on the unit disc. Clearly, the Klein metric and the Funk metric are projectively equivalent (Theorem 2.1). **Proposition 3.1**: _The geodesics of the Funk metric are the line segments in the open unit disc._ Proof:The Funk metric is given by (11), where \(\beta_{F}=df_{F}\) with \(f_{F}(x)=\log\frac{1}{\sqrt{1-|x|^{2}}}\). Thus, \(\beta_{F}\) is exact as well as closed one form. Therefore, by Theorem 2.1 the geodesics of the Funk metric and the Klein metric are pointwise same, and they are the line segments in the open unit disc. \(\square\) The realization of the Funk metric in the unit disc on the upper sheet of the hyperboloid of two sheets [FUH-1] It is well known that, the pullback of the Lorenzian metric on the upper sheet of the hyperboloid of two sheets is the realization of the Klein metric on the unit disc. In this subsection, we _construct_ a non-positive definite Randers metric on the upper half space and show that, its pullback on the upper sheet of the hyperboloid of two sheets is the realization of the Funk metric on the unit disc. Let \((\mathbb{R}^{3}_{+},\alpha_{L})\) denote the upper half space \(\mathbb{R}^{3}_{+}\) with the Lorentzian metric \(\alpha_{L}\) defined below, that is, \[\mathbb{R}^{3}_{+}=\left\{(\tilde{x}^{1},\tilde{x}^{2},\tilde{x}^{3})\in \mathbb{R}^{3}:\tilde{x}^{3}>0\right\},\] \(\alpha_{L}(\tilde{x},\tilde{v})=\sqrt{(\tilde{v}^{1})^{2}+(\tilde{v}^{2})^{2} -(\tilde{v}^{3})^{2}}\) with \(\tilde{x}\in\mathbb{R}^{3}_{+}\) and \(\tilde{v}\in T_{\mathbb{Z}}\mathbb{R}^{3}_{+}\cong\mathbb{R}^{3}\). Now consider the deformation \(F_{L}\) of \(\alpha_{L}\) by \(\beta_{L}=\frac{1}{\tilde{x}^{3}}d\tilde{x}^{3}\) in \(\mathbb{R}^{3}_{+}\) as follows: \[F_{L}(\tilde{x},\tilde{v})=\alpha_{L}(\tilde{x},\tilde{v})+\beta_{L}(\tilde{x},\tilde{v}). \tag{12}\] It should be noted that \(F_{L}\) is _a non-positive definite_ Randers metric. Now we parametrize the upper half portion \(\mathbb{H}_{+}\) of the hyperboloid of two sheets in \(\mathbb{R}^{3}\) as: \[\eta:\mathbb{D}\subset\mathbb{R}^{2}\rightarrow\mathbb{H}_{+}\subset\mathbb{ R}^{3}_{+},\ \ \ \eta(x)=\left(\frac{x}{\sqrt{1-|x|^{2}}},\frac{1}{\sqrt{1-|x|^{2}}}\right). \tag{13}\] Note that \(\eta\) is a smooth diffeomorphism between \(\mathbb{D}\) and \(\mathbb{H}_{+}\). **Proposition 3.2**: _The pullback of the metric \(F_{L}\) defined as above, on the upper sheet of the hyperboloid of two sheets by the map \(\eta\) is the realization of the Funk metric on the upper sheet of the hyperboloid, that is, \(\eta^{*}F_{L}(x,v)=F_{F}(x,v)\) for all \((x,v)\in T\mathbb{D}\)._ Proof:First we find \(\eta^{*}F_{L}(x,v)\), for \(x\in\mathbb{D}\) and \(v\in T_{x}\mathbb{D}\cong\mathbb{R}^{2}\). 
We have by (13), \[\eta^{1}(x)=\frac{x^{1}}{\sqrt{1-|x|^{2}}},\ \eta^{2}(x)=\frac{x^{2}}{\sqrt{1-|x|^ {2}}}\ \mbox{and}\ \eta^{3}(x)=\frac{1}{\sqrt{1-|x|^{2}}}.\] Therefore, \[d\eta_{x}^{1}=\frac{1}{(1-|x|^{2})^{\frac{3}{2}}}\left[\left\{1-(x^{2})^{2}\right\} dx^{1}+x^{1}x^{2}dx^{2}\right],\] \[d\eta_{x}^{2}=\frac{1}{(1-|x|^{2})^{\frac{3}{2}}}\left[x^{1}x^{2}dx^{1}+\left\{1- (x^{1})^{2}\right\}dx^{2}\right],\] \[d\eta_{x}^{3}=\frac{1}{(1-|x|^{2})^{\frac{3}{2}}}\left[x^{1}dx^{1}+x^{2}dx^{2} \right].\] Hence, \[\eta^{*}F_{L}(x,v)=\left(\sqrt{(d\eta_{x}^{1})^{2}+(d\eta_{x}^{2} )^{2}-(d\eta_{x}^{3})^{2}}+\frac{1}{\eta^{3}(x)}d\eta_{x}^{3}\right)(v),\] \[=\frac{\sqrt{(1-|x|^{2})\,|v|^{2}+\langle x,v\rangle^{2}}}{1-|x| ^{2}}+\frac{\langle x,v\rangle}{1-|x|^{2}}=F_{F}(x,v).\] \(\square\) **Remark 3.1**: _It is interesting to note that the Randers metric \(F_{L}\) is not positive definite on the upper half space, however its pullback on the upper sheet of the hyperboloid of two sheets is a positive definite Randers metric, which is precisely the Funk metric on the unit disc._ ### The Finsler-Poincare Disc [FP] The Poincare metric on the unit disc is a model for the hyperbolic geometry in which a geodesic is represented as an arc of a circle, which intersect the disc's boundary orthogonally. More precisely, the Poincare metric \(\alpha_{P}\) on the unit disc \(\mathbb{D}\) is defined by \(\alpha_{P}(x,v)=\frac{2|v|}{1-|x|^{2}}\), where \(x=(x^{1},x^{2})\in\mathbb{D}\) and \(v=(v^{1},v^{2})\in T_{x}\mathbb{D}\). The Finsler-Poincare metric \(F_{P}\) on the unit disc \(\mathbb{D}\) is the deformation of the Poincare metric \(\alpha_{P}\) by a one form \(\beta_{P}\), given by \(\beta_{P}(x,v)=\frac{4\langle x,v\rangle}{1-|x|^{4}}\), \(x=(x^{1},x^{2})\in\mathbb{D}\) and \(v=(v^{1},v^{2})\in T_{x}\mathbb{D}\), and is defined as follows: \[F_{P}(x,v)=\alpha_{P}(x,v)+\beta_{P}(x,v). \tag{14}\] Since \(||\beta_{P}||_{\alpha_{P}}^{2}=\frac{4|x|^{2}}{(1+|x|^{2})^{2}}<1\), the Finsler-Poincare metric \(F_{P}\) is a positive definite Randers metric ([3], SS\(1.3\)\(E\)). **Remark 3.2**: _The Finsler-Poincare metric \(F_{P}\) is given by (14), where \(\beta_{P}=df_{P},\) with \(f_{P}(x)=\log\frac{1+|x|^{2}}{1-|x|^{2}}\), \(x=(x^{1},x^{2})\in\mathbb{D}\). Hence, \(\beta_{P}\) is an exact as well as closed one form and therefore, in view of Theorem 2.1, the Poincare metric \(\alpha_{P}\) and the Finsler-Poincare metric \(F_{P}\) are locally projectively equivalent. And the geodesics of \(\alpha_{P}\) and \(F_{P}\) are pointwise same._ The realization of the Finsler-Poincare disc on the upper half of the hyperboloid of two sheets [FUH-2] In this subsection, we show that the pullback on the upper sheet of the hyperboloid of two sheets of a non-positive definite Randers metric on the upper half space is the realization of the Finsler-Poincare metric on the unit disc. Let us consider a diffeomorphism \(\pi\) between \(\mathbb{D}\) and \(\mathbb{H}_{+}\), given by: \[\pi:\mathbb{D}\subset\mathbb{R}^{2}\rightarrow\mathbb{H}_{+}\subset\mathbb{R }_{+}^{3},\ \ \ \pi(x)=\left(\frac{2x}{1-|x|^{2}},\frac{1+|x|^{2}}{1-|x|^{2}}\right). \tag{15}\] **Proposition 3.3**: _The pullback of the metric \(F_{L}\) defined as above, on the upper sheet of the hyperboloid of two sheets, by the map \(\pi\) is the realization of the Finsler-Poincare metric on the unit disc, that is, \(\pi^{*}F_{L}(x,v)=F_{P}(x,v)\)._ Proof:To show that \(\pi^{*}F_{L}(x,v)=F_{P}(x,v)\). 
We have by (15), for \(x\in\mathbb{D}\), \[\pi^{1}(x)=\frac{2x^{1}}{1-|x|^{2}},\;\pi^{2}(x)=\frac{2x^{2}}{1-|x|^{2}}\; \text{and}\;\pi^{3}(x)=\frac{1+|x|^{2}}{1-|x|^{2}}.\] Therefore, for \(v\in T_{x}\mathbb{D}\), \[d\pi^{1}_{x}=\frac{2}{(1-|x|^{2})^{2}}\left[\left\{1-|x|^{2}+2(x^{1})^{2} \right\}dx^{1}+2x^{1}x^{2}dx^{2}\right],\] \[d\pi^{2}_{x}=\frac{2}{(1-|x|^{2})^{2}}\left[2x^{1}x^{2}dx^{1}+\left\{1-|x|^{2} +2(x^{2})^{2}\right\}dx^{2}\right],\] \[d\pi^{3}_{x}=\frac{2}{(1-|x|^{2})^{2}}\left[2x^{1}dx^{1}+2x^{2}dx^{2}\right].\] Hence, \[\pi^{*}F_{L}(x,v)=\left(\sqrt{(d\pi^{1}_{x})^{2}+(d\pi^{2}_{x})^{ 2}-(d\pi^{3}_{x})^{2}}+\frac{1}{\pi^{3}(x)}d\eta^{3}\right)(v)\] \[=\frac{2|v|}{1-|x|^{2}}+\frac{4\langle x,v\rangle}{1-|x|^{4}}=F_{ P}(x,v).\] \(\square\) **Remark 3.3**: _We have shown that the pullback of \(F_{L}\) on the upper sheet of the hyperboloid of two sheets \(\mathbb{H}_{+}\) through two different diffeomorphisms gives the Funk as well as the Finsler-Poincare metric on the open unit disc._ ### The Finsler-Poincare upper half plane [FU] and the Finsler Band model [FB] The upper half-plane \(\mathbb{U}=\{(x^{1},x^{2})\in\mathbb{R}^{2}:x^{2}>0\}\) with the metric \(\alpha_{U}(x,v)=\dfrac{|v|}{x^{2}}\), where \(x=(x^{1},x^{2})\in\mathbb{U}\), \(v=(v^{1},v^{2})\in T_{x}\mathbb{U}\), called the Poincare upper half metric, is the standard model of two-dimensional hyperbolic geometry. The geodesics in this model of hyperbolic plane geometry are vertical lines in \(\mathbb{U}\) and upper semi circles centred on \(x^{1}\)-axis. The Finsler-Poincare upper half plane metric \(F_{U}\) in the upper half plane is the deformation of the Poincare upper half plane metric \(\alpha_{U}\) by the \(1\)-form \(\beta_{U}(x,v)=\dfrac{\langle w(x),v\rangle}{x^{2}(4+|x|^{2})}\), where \(x=(x^{1},x^{2})\in\mathbb{U}\), \(v=(v^{1},v^{2})\in T_{x}\mathbb{U}\) and \(w(x):=(2x^{1}x^{2},(x^{2})^{2}-(x^{1})^{2}-4)\); and is defined as follows: \[F_{U}(x,v)=\alpha_{U}(x,v)+\beta_{U}(x,v). \tag{16}\] It is easy to see that, as \(||\beta_{U}||_{\alpha_{U}}^{2}=\dfrac{|w(x)|}{(4+|x|^{2})^{2}}<1\), the Finsler-Poincare metric \(F_{U}\) in the upper half plane \(\mathbb{U}\) is a positive definite Randers metric. **Remark 3.4**: _The Finsler-Poincare upper half metric \(F_{U}\) is given by (16), where \(\beta_{U}=df_{U},\) with \(f_{U}(x)=\log\dfrac{4+|x|^{2}}{x^{2}}\). Hence, \(\beta_{U}\) is an exact as well as closed one form. Therefore, by Theorem 2.1, the Poincare upper half metric \(\alpha_{U}\) and the Finsler-Poincare upper half metric \(F_{U}\) in the upper half plane \(\mathbb{U}\) are locally projectively equivalent, that is, the geodesics of \(\alpha_{P}\) and \(F_{P}\) are pointwise same._ In this subsection, we find a new Finsler model corresponding to the _Band model_ of the Riemannian hyperbolic space, and termed it as the _Finsler Band model_. Recall that the Band in \(\mathbb{R}^{2}\) is: \[\mathbb{B}=\left\{(x^{1},x^{2})\in\mathbb{R}^{2}:\dfrac{-\pi}{2}<x^{2}<\dfrac {\pi}{2}\right\}.\] And \((\mathbb{B},\alpha_{B})\) is isometric to the Riemannian hyperbolic space, where for \(x\in\mathbb{B},v\in T_{x}\mathbb{B}\), \[\alpha_{B}(x,v)=\dfrac{|v|}{\cos x^{2}}. \tag{17}\] Consider one form \(\beta\) on \(\mathbb{B}\) defined as: \[\beta_{B}(x,v)=\dfrac{\left(e^{2x^{1}}-4\right)v^{1}\cos x^{2}+\left(e^{2x^{1} }+4\right)v^{2}\sin x^{2}}{(e^{2x^{1}}+4)\cos x^{2}}. \tag{18}\] Define a function \(F_{B}\) on \(T\mathbb{B}\) as: \[F_{B}(x,v)=\alpha_{B}(x,v)+\beta_{B}(x,v). 
\tag{19}\] It is easy to see that \(||\beta_{B}||_{\alpha_{B}}^{2}=\dfrac{\left(e^{2x^{1}}+4\right)^{2}-16e^{2x^{1}}\left(\cos x^{2}\right)^{2}}{\left(e^{2x^{1}}+4\right)^{2}}<1\); therefore, \(F_{B}\) is a Randers metric on \(\mathbb{B}\). **Remark 3.5**: _The Finsler metric \(F_{B}\) is given by (19), where \(\beta_{B}=df_{B},\) with_ \[f_{B}(x)=x^{1}+\log\left(1+\dfrac{4}{e^{2x^{1}}}\right)+\log\sec x^{2}.\] _So \(\beta_{B}\) is an exact as well as closed \(1\)-form. Therefore, the Finsler metric \(F_{B}\) and \(\alpha_{B}\) in the band \(\mathbb{B}\) are locally projectively equivalent and the geodesics of \(\alpha_{B}\) and \(F_{B}\) are pointwise the same (Theorem 2.1)._ Now we show that the Finsler band model \((\mathbb{B},F_{B})\) is isometric to the Finsler-Poincare upper half plane and hence in turn isometric to the Funk disc. **Proposition 3.4**: _The Finsler band model \((\mathbb{B},F_{B})\) is isometric to the Finsler-Poincare upper half plane \((\mathbb{U},F_{U})\)._ Proof: Consider the map \(\varphi:\mathbb{B}\rightarrow\mathbb{U}\) defined by, \[\varphi(x)=e^{x^{1}}\left(-\sin x^{2},\cos x^{2}\right).\] Its inverse, \(\varphi^{-1}:\mathbb{U}\rightarrow\mathbb{B}\), is given by, \[\varphi^{-1}(x)=\left(\log\sqrt{(x^{1})^{2}+(x^{2})^{2}},\ -\tan^{-1}\dfrac{x^{1}}{x^{2}}\right).\] Clearly \(\varphi\) is a diffeomorphism between \(\mathbb{B}\) and \(\mathbb{U}\). We claim that \(\varphi\) is indeed a Finslerian isometry between \((\mathbb{B},F_{B})\) and \((\mathbb{U},F_{U})\), i.e., \(\varphi^{*}F_{U}(x,v)=F_{B}(x,v)\). By definition of the pull back \(\varphi^{*}\), we have, \[\varphi^{*}F_{U}(x,v)=F_{U}\left(\varphi(x),D\varphi_{x}(v)\right)=\alpha_{U}(\varphi(x),D\varphi_{x}(v))+\beta_{U}(\varphi(x),D\varphi_{x}(v)).\] Since, \[D\varphi_{x}(v)=-e^{x^{1}}(v^{1}\sin x^{2}+v^{2}\cos x^{2},-v^{1}\cos x^{2}+v^{2}\sin x^{2}),\] for every \(v=(v^{1},v^{2})\in T_{x}\mathbb{B}\cong\mathbb{R}^{2}\) and \[w(\varphi(x))=\left(-e^{2x^{1}}\sin 2x^{2},\ e^{2x^{1}}\cos 2x^{2}-4\right),\] as \(w(x)=(2x^{1}x^{2},(x^{2})^{2}-(x^{1})^{2}-4),\) we have, \[\left\langle w(\varphi(x)),D\varphi_{x}(v)\right\rangle=e^{x^{1}}\left(e^{2x^{1}}-4\right)v^{1}\cos x^{2}+e^{x^{1}}\left(e^{2x^{1}}+4\right)v^{2}\sin x^{2}.\] Therefore, in view of \(\alpha_{U}(x,v)=\dfrac{|v|}{x^{2}}\) and \(\beta_{U}(x,v)=\dfrac{\left\langle w(x),v\right\rangle}{x^{2}(4+|x|^{2})}\), we have \(\alpha_{U}(\varphi(x),D\varphi_{x}(v))=\dfrac{|v|}{\cos x^{2}},\) and \(\beta_{U}(\varphi(x),D\varphi_{x}(v))=\dfrac{\left(e^{2x^{1}}-4\right)v^{1}\cos x^{2}+\left(e^{2x^{1}}+4\right)v^{2}\sin x^{2}}{(e^{2x^{1}}+4)\cos x^{2}}.\) Hence, \(\varphi^{*}F_{U}(x,v)=F_{U}\left(\varphi(x),D\varphi_{x}(v)\right)=\alpha_{U}(\varphi(x),D\varphi_{x}(v))+\beta_{U}(\varphi(x),D\varphi_{x}(v))=F_{B}(x,v),\) \(\forall(x,v)\in T\mathbb{B}\). Thus, the map \(\varphi\) is an isometry between \((\mathbb{B},F_{B})\) and \((\mathbb{U},F_{U})\). 
\(\Box\) **Theorem 3.1**: _The Finsler band model \((\mathbb{B},F_{B})\) and the Funk disc model \((\mathbb{D},F_{F})\) are isometric to each other._ **Proof:** Let us consider the map \(\xi:\mathbb{D}\rightarrow\mathbb{B}\), given by \[\xi(x)=\left(\xi^{1}(x),\xi^{2}(x)\right)=\left(\log\left(2\sqrt{\dfrac{1-x^{1}}{1+x^{1}}}\right),-\tan^{-1}\left(\dfrac{x^{2}}{\sqrt{1-|x|^{2}}}\right)\right), \tag{20}\] with its inverse \[\xi^{-1}:\mathbb{B}\rightarrow\mathbb{D},\ \ \xi^{-1}(x)=\left(\dfrac{4-e^{2x^{1}}}{4+e^{2x^{1}}},\dfrac{-4e^{x^{1}}\sin x^{2}}{4+e^{2x^{1}}}\right).\] It suffices to show that \[F_{F}(x,v)=F_{B}\left(\xi(x),D\xi_{x}(v)\right),\ \forall(x,v)\in T\mathbb{D},\] where \(D\xi_{x}\) denotes the differential of \(\xi\) at the point \(x\). For any \(v=(v^{1},v^{2})\in T_{x}\mathbb{D}\cong\mathbb{R}^{2}\), the derivative \(D\xi_{x}\) can be written as, \[D\xi_{x}(v)=\left(-\dfrac{v^{1}}{1-(x^{1})^{2}},-\dfrac{x^{1}x^{2}v^{1}}{(1-(x^{1})^{2})\sqrt{1-|x|^{2}}}-\dfrac{v^{2}}{\sqrt{1-|x|^{2}}}\right).\] In view of (17) and (18), we have, \[\alpha_{B}(\xi(x),D\xi_{x}(v))=\dfrac{\sqrt{(1-|x|^{2})\,|v|^{2}+\langle x,v\rangle^{2}}}{1-|x|^{2}},\ \ \beta_{B}(\xi(x),D\xi_{x}(v))=\dfrac{\langle x,v\rangle}{1-|x|^{2}}.\] Thus for any \((x,v)\in T\mathbb{D}\), we have, \[F_{B}\left(\xi(x),D\xi_{x}(v)\right)=\alpha_{B}(\xi(x),D\xi_{x}(v))+\beta_{B}(\xi(x),D\xi_{x}(v))=F_{F}(x,v).\] \(\square\) ### The realization of the Funk metric in the unit disc on the upper hemisphere [FUS-1] Let \(\mathbb{R}^{3}_{+}=\{(\tilde{x}^{1},\tilde{x}^{2},\tilde{x}^{3})\in\mathbb{R}^{3}:\tilde{x}^{3}>0\}\) be the upper half space with the hyperbolic metric \(\alpha_{+}\), defined by \(\alpha_{+}(\tilde{x},\tilde{v})=\dfrac{\sqrt{(\tilde{v}^{1})^{2}+(\tilde{v}^{2})^{2}+(\tilde{v}^{3})^{2}}}{\tilde{x}^{3}}\) with \(\tilde{x}=(\tilde{x}^{1},\tilde{x}^{2},\tilde{x}^{3})\in\mathbb{R}^{3}_{+}\) and \(\tilde{v}=(\tilde{v}^{1},\tilde{v}^{2},\tilde{v}^{3})\in T_{\tilde{x}}\mathbb{R}^{3}_{+}\cong\mathbb{R}^{3}\). Now let us consider the deformation of the upper half space \((\mathbb{R}^{3}_{+},\alpha_{+})\) by the one form \(\beta_{+}(\tilde{x},\tilde{v})=-\dfrac{\tilde{v}^{3}}{\tilde{x}^{3}}\) as follows: \[F_{+}(\tilde{x},\tilde{v})=\alpha_{+}(\tilde{x},\tilde{v})+\beta_{+}(\tilde{x},\tilde{v})=\dfrac{|\tilde{v}|}{\tilde{x}^{3}}-\dfrac{\tilde{v}^{3}}{\tilde{x}^{3}}, \tag{21}\] where \(\tilde{x}=(\tilde{x}^{1},\tilde{x}^{2},\tilde{x}^{3})\in\mathbb{R}^{3}_{+}\) and \(\tilde{v}=(\tilde{v}^{1},\tilde{v}^{2},\tilde{v}^{3})\in T_{\tilde{x}}\mathbb{R}^{3}_{+}\cong\mathbb{R}^{3}\). Let \(\psi:\mathbb{D}\to\mathbb{S}^{2}_{+}\) be an immersion given by, \[\psi(x^{1},x^{2})=\left(x^{1},x^{2},\sqrt{1-|x|^{2}}\right), \tag{22}\] then it turns out that the pullback \(\psi^{*}\alpha_{+}\) of the upper half hyperbolic space \((\mathbb{R}^{3}_{+},\alpha_{+})\) on \(\mathbb{D}\) is actually the Klein metric given by \(\psi^{*}\alpha_{+}=\alpha_{F}\). 
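The claim \(\psi^{*}\alpha_{+}=\alpha_{F}\) is easy to check numerically. The following short Python sketch is our own illustration (not part of the original argument); it writes the Klein metric out explicitly as \(\alpha_{F}(x,v)=\sqrt{(1-|x|^{2})|v|^{2}+\langle x,v\rangle^{2}}/(1-|x|^{2})\), the Riemannian part of \(F_{F}\) (it reappears as the Hilbert metric in (32)), and compares both sides at a few random points of \(T\mathbb{D}\), with the pushforward \(D\psi_{x}(v)\) approximated by a finite difference.

```python
# Numerical spot-check (ours) of the claim psi^* alpha_+ = alpha_F on the unit disc.
import numpy as np
rng = np.random.default_rng(1)

def alpha_plus(X, V):                 # hyperbolic metric on the upper half space
    return np.linalg.norm(V) / X[2]

def psi(x):                           # immersion (22) of the disc onto the upper hemisphere
    return np.array([x[0], x[1], np.sqrt(1 - x @ x)])

def alpha_F(x, v):                    # Klein metric on the unit disc
    return np.sqrt((1 - x @ x) * (v @ v) + (x @ v) ** 2) / (1 - x @ x)

for _ in range(3):
    x, v, h = rng.uniform(-0.5, 0.5, 2), rng.standard_normal(2), 1e-6
    V = (psi(x + h * v) - psi(x - h * v)) / (2 * h)   # finite-difference pushforward D psi_x(v)
    print(alpha_plus(psi(x), V), alpha_F(x, v))       # the two columns should agree
```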
**Proposition 3.5**: _The pullback by the map \(\psi\) of the metric \(F_{+}\) defined above is the realization of the Funk metric on the unit disc on the upper hemisphere, that is, \(\psi^{*}F_{+}(x,v)=F_{F}(x,v)\) for all \((x,v)\in T\mathbb{D}\)._ Proof: In view of (21) and (22), we have for all \((x,v)\in T\mathbb{D}\), \[\psi^{*}F_{+}(x,v)=F_{+}(\psi(x),D\psi_{x}(v))=\dfrac{\sqrt{(1-|x|^{2})\,|v|^{2}+\langle x,v\rangle^{2}}}{1-|x|^{2}}+\dfrac{\langle x,v\rangle}{1-|x|^{2}}=F_{F}(x,v).\] \(\square\) ### The realization of the Finsler-Poincare metric in the unit disc on the upper hemisphere [FUS-2] If we consider the immersion \(\sigma:\mathbb{D}\subset\mathbb{R}^{2}\to\mathbb{S}^{2}_{+}\subset\mathbb{R}^{3}_{+}\) defined by, \[\sigma(x^{1},x^{2})=\left(\dfrac{2x^{1}}{1+|x|^{2}},\dfrac{2x^{2}}{1+|x|^{2}},\dfrac{1-|x|^{2}}{1+|x|^{2}}\right), \tag{23}\] then the pullback \(\sigma^{*}\alpha_{+}\) of the upper half hyperbolic space \((\mathbb{R}^{3}_{+},\alpha_{+})\) on \(\mathbb{D}\) is actually the well known Poincare metric \(\alpha_{P}=\sigma^{*}\alpha_{+}\), given by (14). **Proposition 3.6**: _The pullback by the map \(\sigma\) of the metric \(F_{+}\) defined above is the realization of the Finsler-Poincare metric on the unit disc on the upper hemisphere, that is, \(\sigma^{*}F_{+}(x,v)=F_{P}(x,v)\) for all \((x,v)\in T\mathbb{D}\)._ **Proof:** We have by (23), \[d\sigma_{x}^{1}=\frac{2}{\left(1+|x|^{2}\right)^{2}}\left[\left\{1+|x|^{2}-2(x^{1})^{2}\right\}dx^{1}-2x^{1}x^{2}dx^{2}\right],\] \[d\sigma_{x}^{2}=\frac{2}{\left(1+|x|^{2}\right)^{2}}\left[-2x^{1}x^{2}dx^{1}+\left\{1+|x|^{2}-2(x^{2})^{2}\right\}dx^{2}\right],\] \[d\sigma_{x}^{3}=\frac{2}{\left(1+|x|^{2}\right)^{2}}\left[-2x^{1}dx^{1}-2x^{2}dx^{2}\right].\] Therefore, \[\sigma^{*}F_{+}(x,v)=\frac{2|v|}{1-|x|^{2}}+\frac{4\langle x,v\rangle}{1-|x|^{4}}=F_{P}(x,v).\] Thus, we have shown that the pullback of the metric \(F_{+}\) on the upper hemisphere \(\mathbb{S}_{+}^{2}\) through two different immersions gives the well known Funk as well as the Finsler-Poincare metric on the open unit disc. ## 4 The Geodesics in the isometric models of the Funk disc In this section, we explicitly describe the geodesics of all the isometric models of the Funk disc obtained in Section \(3\). In view of Section \(3\), we already know the geodesics as point sets in each model; here we obtain their explicit parametrizations. We also classify all the geodesics in each model. We begin with the Funk metric. By Proposition 3.1, the line segments in a convex domain \(\Omega\) with the Funk metric are the Funk geodesics as point sets. The explicit parametrization of the unit speed geodesics in the convex domain \(\Omega\) is obtained by Papadopoulos and Troyanov in Chapter 3 of [10]. **Proposition 4.1** ([10], Chapter 3, SS\(3\)): _Let \(\Omega\subset\mathbb{R}^{n}\) be a convex domain with the weak Finsler structure \(F_{f}\). Then the forward unit speed linear geodesic in \(\Omega\) starting at \(p\in\Omega\) with velocity \(v\in T_{p}\Omega\) is given by,_ \[\gamma(t)=p+\frac{\left(1-e^{-t}\right)}{F_{f}(p,v)}v. \tag{24}\] **Proposition 4.2**: _For the Funk unit disc, the unit speed geodesic \(\gamma:[0,\infty)\rightarrow\mathbb{D}\) with \(\gamma(0)=p\) and the forward hitting point \(y\in\partial\mathbb{D}\) is given by,_ \[\gamma(t)=e^{-t}p+(1-e^{-t})y. 
\tag{25}\] Proof:Consider the particular case when \(\Omega\) is open unit disc \(\mathbb{D}\subset\mathbb{R}^{2}\). Recall that by Remark 3.3, \(F_{f}=F_{F}\). Let \(y=(y^{1},y^{2})\in\partial\mathbb{D}\) be such that \(\Big{(}p+\frac{v}{F_{F}(p,v)}\Big{)}=y\). Then clearly, by (24) the forward unit speed Funk geodesic \(\gamma:[0,\infty)\to\mathbb{D}\) with \(\gamma(0)=p\) and hitting at point \(y\in\partial\mathbb{D}\) is given by (25). \(\Box\) Clearly, if \(p=(p^{1},p^{2})\in\mathbb{D}\), \(v=(v^{1},v^{2})\in T_{p}\mathbb{D}\), then (25) yields that, \[\gamma(t)=(\gamma_{1}(t),\gamma_{2}(t))=(e^{-t}(p^{1}-y^{1})+y^{1},e^{-t}(p^{ 2}-y^{2})+y^{2}). \tag{26}\] Now using isometries obtained in the previous section, we classify the geodesics in various Funk model spaces. ### Geodesics in the Finsler-Poincare upper half plane [FU] As pointed out in Remark 3.4, the geodesics of the Finsler-Poincare upper half plane are pointwise same as the Poincare metric on the upper half plane. These geodesics are completely classified. **Theorem 4.1**: _The geodesics of the Finsler-Poincare upper half plane are the vertical rays in the Finsler-Poincare upper half plane as well as the semicircles centred on \(x\)-axis. More precisely,_ * _the line segments in the Funk disc passing through_ \((-1,0)\) _correspond to the vertical rays in the upper half plane,_ * _the vertical line segments in the Funk disc correspond to the concentric semi circles centred at origin,_ * _the other line segments in the Funk disc correspond to the semicircles centred on_ \(x\)_-axis._ Proof:Let \(g:\mathbb{D}\to\mathbb{U}\) be the isometry between \((\mathbb{D},F_{F})\) and \((\mathbb{U},F_{U})\) given by (see [7], SS4, Theorem 2), \[g(x)=\left(\frac{2x^{2}}{1+x^{1}},\frac{2\sqrt{1-|x|^{2}}}{1+x^{1}}\right).\] The geodesics in \((\mathbb{U},F_{U})\) are \(g\)-isometric images of the geodesics \(\gamma(t)=(\gamma_{1}(t),\gamma_{2}(t))\) given by (26). Let \(\omega(t)=g(\gamma(t))=(X^{1}(t),X^{2}(t))\). Then \[\omega(t)=\left(\frac{2\gamma_{2}(t)}{1+\gamma_{1}(t)},\frac{2\sqrt{1-| \gamma(t)|^{2}}}{1+\gamma_{1}(t)}\right).\] Therefore, \(X^{1}(t)=\dfrac{2\gamma_{2}(t)}{1+\gamma_{1}(t)}\) and \(X^{2}(t)=\dfrac{2\sqrt{1-|\gamma(t)|^{2}}}{1+\gamma_{1}(t)}\). It is easy to see that, \[(X^{1}(t))^{2}+(X^{2}(t))^{2}=4\dfrac{1-\gamma_{1}(t)}{1+\gamma_{1}(t)}. \tag{27}\] For any line segments (non-vertical) in the Funk disc, we have \(\gamma_{2}(t)=m\gamma_{1}(t)+c\), where \(m\) is the slope of the line segment and \(c\) is its y-intercept. 1. For a line segment in the Funk disc passing through \((-1,0)\), \(\gamma_{2}(t)\) and \(1+\gamma_{1}(t)\) are in constant ratio and hence \(m=c\). Therefore, \(X^{1}(t)=2c\) (constant). Hence, such line segments in the Funk disc correspond to the vertical rays in \(\mathbb{U}\) through the isometry \(g\). 2. For the vertical line segments in the Funk disc, \(\gamma_{1}(t)=k\) (constant) with \(|k|<1\), then by (27), \[(X^{1}(t))^{2}+(X^{2}(t))^{2}=c,\text{ where }c=4\dfrac{1-k}{1+k}.\] This shows that the vertical line segments in the Funk disc correspond to the concentric semicircles centred at origin in the upper half plane. 3. For the line segments in the Funk disc not passing through point \((-1,0)\), we have \(m\neq c\), then by (27), \[\left(X^{1}(t)+\dfrac{2}{m-c}\right)^{2}+(X^{2}(t))^{2}=4\dfrac{(m^{2}-c^{2}+1 )}{(m-c)^{2}}.\] Hence, these line segments in the Funk disc correspond to the semicircles centred on \(x\)-axis in the upper half plane. 
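As a quick sanity check of case (3) above, the following hedged Python snippet (our own illustration, with arbitrarily chosen sample values of \(m\) and \(c\), \(m\neq c\)) pushes a chord \(\gamma_{2}=m\gamma_{1}+c\) of the Funk disc through the isometry \(g\) and verifies numerically that its image lies on the circle centred at \(\left(-\frac{2}{m-c},0\right)\) with radius \(\frac{2\sqrt{m^{2}-c^{2}+1}}{|m-c|}\).

```python
# Illustration (ours) of case (3): the g-image of a non-vertical Funk chord lies on a
# circle centred on the x^1-axis.  m and c are made-up sample values with m != c.
import numpy as np

def g(x1, x2):                                    # isometry (D, F_F) -> (U, F_U) used above
    return 2 * x2 / (1 + x1), 2 * np.sqrt(1 - x1**2 - x2**2) / (1 + x1)

m, c = 0.7, -0.2
x1 = np.linspace(-0.6, 0.6, 7)                    # sample points on the chord (inside D)
x2 = m * x1 + c
X1, X2 = g(x1, x2)
centre, radius = -2 / (m - c), 2 * np.sqrt(m**2 - c**2 + 1) / abs(m - c)
print(np.sqrt((X1 - centre)**2 + X2**2) - radius)  # ~ 0 at every sample point
```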
### Geodesics in the Finsler-Poincare disc [FP] **Theorem 4.2**: _The geodesics of the Finsler-Poincare disc \((\mathbb{D},F_{P})\) are diametric line segments and the circular arcs that intersect orthogonally to the boundary circle (in the Euclidean sense). More explicitly,_ 1. _the line segments in the Funk disc passing through the centre of the disc correspond to the diametric line segments in the Finsler-Poincare disc passing through the centre of the disc,_ _and_ 2. _the other line segments in the Funk disc correspond to the circular arcs that intersect orthogonally to the boundary circle._ **Proof:** Let \(f:(\mathbb{D},F_{F})\rightarrow(\mathbb{D},F_{P}),\) be the isometry between \((\mathbb{D},F_{F})\) and \((\mathbb{D},F_{P})\), given by (see [7], SS4, Theorem 1), \[f(x)=\frac{x}{1+\sqrt{1-|x|^{2}}}.\] Thus, the geodesics in \((\mathbb{D},F_{P})\) are \(\omega(t)=f(\gamma(t))=(X^{1}(t),X^{2}(t))\), where \(\gamma(t)=(\gamma_{1}(t),\gamma_{2}(t))\) is given by (26). Hence, we obtain, \[\omega(t)=\frac{\gamma(t)}{1+\sqrt{1-|\gamma(t)|^{2}}}=\left(\frac{\gamma_{1} (t)}{1+\sqrt{1-|\gamma(t)|^{2}}},\frac{\gamma_{2}(t)}{1+\sqrt{1-|\gamma(t)|^{ 2}}}\right).\] Then, \[X^{1}(t)=\frac{\gamma_{1}(t)}{1+\sqrt{1-|\gamma(t)|^{2}}}\qquad\mbox{and} \qquad X^{2}(t)=\frac{\gamma_{2}(t)}{1+\sqrt{1-|\gamma(t)|^{2}}}.\] The line segments \(\gamma(t)\), represented as \(\gamma(t)=(\gamma_{1}(t),\gamma_{2}(t))\), in the Funk disc can be written as \(\gamma_{2}(t)=m\gamma_{1}(t)+c\). * If \(c=0\), then \(\gamma_{2}(t)=m\gamma_{1}(t)\). In this case, \(X^{2}(t)=mX^{1}(t).\) That is the line segments (geodesics) in the Funk disc passing through the centre of the disc correspond to the line segments in the Finsler-Poincare disc passing through the centre of the disc. * If \(\gamma_{2}(t)=m\gamma_{1}(t)+c\), for some real numbers \(m,c\) with \(c\neq 0\), then we obtain, \[\left(X^{1}(t)+\frac{m}{c}\right)^{2}+\left(X^{2}(t)-\frac{1}{c}\right)^{2}= \left(\frac{1+m^{2}-c^{2}}{c^{2}}\right).\] (28) * If \(\gamma_{1}(t)=0\), then \(X^{1}(t)=0\). Also, if \(\gamma_{1}(t)=c\neq 0,|c|<1\), then \[\left(X^{1}(t)-\frac{1}{c}\right)^{2}+\left(X^{2}(t)\right)^{2}=\left(\frac{1 -c^{2}}{c^{2}}\right).\] (29) Clearly, the arcs of the circles given by (28) and (29) intersect orthogonally to the boundary (in the Euclidean sense) of the unit disc. \(\square\) ### Geodesics in the Finsler-band model [FB] **Theorem 4.3**: _The geodesic \(\omega(t)=(X^{1}(t),X^{2}(t))\), of the Finsler-band model \((\mathbb{B},F_{B})\) satisfy,_ \[4e^{X^{1}(t)}\sin X^{2}(t)=\pm\Big{[}m(4-e^{2X^{1}(t)})+c(4+e^{2X^{1}(t)}) \Big{]}, \tag{30}\] _where \(m\) and \(c\) are arbitrary real numbers._ **Proof:** The Funk-disc model \((\mathbb{D},F_{F})\) and the Finsler-band model \((\mathbb{B},F_{B})\) are isometric to each other via the map \(\xi:\mathbb{D}\to\mathbb{B}\) is given by (20). Therefore, if \(\gamma(t)=(\gamma_{1}(t),\gamma_{2}(t))\) given by (26), we obtain, \[\xi(\gamma(t))=\left(\log\left(2\sqrt{\frac{1-\gamma_{1}(t)}{1+\gamma_{1}(t)}} \right),-\tan^{-1}\left(\frac{\gamma_{2}(t)}{\sqrt{1-|\gamma(t)|^{2}}}\right) \right).\] Let \(\omega(t)=\xi(\gamma(t))=(X^{1}(t),X^{2}(t))\), then \[X^{1}(t)=\log\left(2\sqrt{\frac{1-\gamma_{1}(t)}{1+\gamma_{1}(t)}}\right) \qquad\text{and}\qquad X^{2}(t)=-\tan^{-1}\left(\frac{\gamma_{2}(t)}{\sqrt{1- |\gamma(t)|^{2}}}\right). 
\tag{31}\] Clearly, if \(\gamma_{1}(t)\) is constant, then \(X^{1}(t)\) is constant, that is, the isometric images of the vertical line segments in the Funk disc are the vertical line segments in the band model. If \(\gamma\) is a non-vertical line segment in \(\mathbb{D}\), then its coordinates can be written as \(\gamma_{2}(t)=m\gamma_{1}(t)+c\). Using this expression in (31), the coordinates \((X^{1}(t),X^{2}(t))\) of the geodesics in the band model satisfy (30). \(\square\) Some of the curves given by the above equation are drawn in Figure \(2\) (the figure is drawn with the help of Python). Figure 2: Some geodesics in the band model. ## 5 The Geodesics of the Hilbert disc are the geodesics of the Beltrami Klein model Note that the Hilbert metric on the unit disc \(\mathbb{D}\) is the well known Riemannian Beltrami-Klein metric. In this section, we find the parameterization of the Klein geodesics completely. As stated in Section 2, the Hilbert metric [[10], chapter 3, SS4] is the arithmetic symmetrization of the Funk metric. Thus, for \(x\in\mathbb{D}\) and \(v\in T_{x}\mathbb{D}\), \[F_{H}(x,v)=\frac{1}{2}\{F_{F}(x,v)+F_{F}(x,-v)\}=\frac{\sqrt{(1-|x|^{2})\,|v|^{2}+\langle x,v\rangle^{2}}}{1-|x|^{2}}. \tag{32}\] The unit speed geodesic in a convex domain \(\Omega\) with the Hilbert metric is explicitly described by Papadopoulos and Troyanov [[10], chapter 3, SS4]. **Proposition 5.1** ([10], Chapter 3, SS4): _The unit speed linear geodesic of the Hilbert metric starting at \(p\in\Omega\) with velocity \(v\in T_{p}\Omega\) is the path,_ \[\beta(t)=p+\varphi(t)v,\;\mbox{where}\;\;\varphi(t)=\frac{e^{t}-e^{-t}}{F_{F}(p,v)e^{t}+F_{F}(p,-v)e^{-t}}. \tag{33}\] **Proposition 5.2**: _Let \(\beta\) be the unit speed geodesic (line segment) in the Hilbert disc \(\mathbb{D}\) with \(\beta(0)=p\), and suppose that \(\beta\) hits \(\partial\mathbb{D}\) at \(y\) in the forward direction. Then \(\beta\) is given by,_ \[\beta(t)=(1-s(t))p+s(t)y, \tag{34}\] _where \(s(t)=\frac{e^{t}-e^{-t}}{e^{t}+k(p,y)e^{-t}}\;\) and \( k(p,y)=\frac{|y-p|^{2}}{|y|^{2}-|p|^{2}}\)._ Proof: For \(p=(p^{1},p^{2})\in\mathbb{D}\) and \(v=(v^{1},v^{2})\in T_{p}\mathbb{D}\), the unit speed Hilbert geodesic in \(\mathbb{D}\) is (Proposition 5.1), \[\beta(t)=p+\frac{e^{t}-e^{-t}}{F_{F}(p,v)e^{t}+F_{F}(p,-v)e^{-t}}v. \tag{35}\] In order to simplify the second term on the R.H.S. of (35), we proceed as follows: Let \(y^{\prime}\in\partial\mathbb{D}\) be the hitting point of \(\beta(t)\) in the backward direction, which is different from \(y=(y^{1},y^{2})\in\partial\mathbb{D}\). It is easy to see that the coordinates of \(y^{\prime}=(y^{\prime 1},y^{\prime 2})\) are given by: \[(y^{\prime 1},y^{\prime 2})=\left(y^{1}+\frac{2(1-\langle p,y\rangle)}{1+|p|^{2}-2\langle p,y\rangle}(p^{1}-y^{1}),y^{2}+\frac{2(1-\langle p,y\rangle)}{1+|p|^{2}-2\langle p,y\rangle}(p^{2}-y^{2})\right).\] Now we find a relation between \(F_{F}(p,v)\) and \(F_{F}(p,-v)\). As \(p+\frac{v}{F_{F}(p,v)}=y\), we get \[p^{1}+\frac{v^{1}}{F_{F}(p,v)}=y^{1}\;\;\mbox{and}\;\;p^{2}+\frac{v^{2}}{F_{F}(p,v)}=y^{2}. \tag{36}\] Also, we have \(p-\frac{v}{F_{F}(p,-v)}=y^{\prime}\), which gives, \[\begin{split} p^{1}-\frac{v^{1}}{F_{F}(p,-v)}&=y^{1}+\frac{2(1-\langle p,y\rangle)}{1+|p|^{2}-2\langle p,y\rangle}(p^{1}-y^{1}),\\ \text{and}\;\;p^{2}-\frac{v^{2}}{F_{F}(p,-v)}&=y^{2}+\frac{2(1-\langle p,y\rangle)}{1+|p|^{2}-2\langle p,y\rangle}(p^{2}-y^{2}). 
\end{split} \tag{37}\] From (36) and (37), we have \[\frac{1-|p|^{2}}{1+|p|^{2}-2\langle p,y\rangle}=\frac{F_{F}(p,v)}{F_{F}(p,-v)}.\] Thus, \[F_{F}(p,-v)=k(p,y)F_{F}(p,v), \tag{38}\] where \(k(p,y)=\frac{1+|p|^{2}-2\langle p,y\rangle}{1-|p|^{2}}=\frac{|y-p|^{2}}{|y|^{2}-|p|^{2}}\). Using (38) in (35) yields, \[\beta(t)=\frac{(1+k(p,y))e^{-t}}{e^{t}+k(p,y)e^{-t}}p+\frac{e^{t}-e^{-t}}{e^{t}+k(p,y)e^{-t}}y. \tag{39}\] Thus, the unit speed geodesic \(\beta(t)\) with \(\beta(0)=p\) and hitting the boundary of the disc at \(y\) is, \[\beta(t)=(1-s(t))p+s(t)y,\] where \(s(t)=\frac{e^{t}-e^{-t}}{e^{t}+k(p,y)e^{-t}}\). \(\square\) **Remark 5.1**: _In the next section, we calculate the forward Busemann function for the Funk disc and the Busemann function for the Hilbert disc explicitly. For this purpose, the form of the Funk and the Hilbert geodesics obtained in terms of the initial and the hitting boundary point turns out to be more useful._ ## 6 The Busemann function and the horocycles in the Funk and the Hilbert disc \(\mathbb{D}\) The Busemann function for a geodesic \(\gamma\) in a Riemannian or in a Finsler manifold can be interpreted as the distance function from "\(\gamma(\infty)\)". The Busemann functions play a vital role in studying the geometry of the underlying manifold. See e.g. [1], [5], [12], [13], [16], [17] for insightful discussions about the Busemann functions in both complete Riemannian and Finslerian manifolds. In this section, we first calculate the forward Busemann function in the Funk disc, and then in the Hilbert disc. We first recall the definition of the forward Busemann function in the context of Finsler geometry. To define the Busemann function we need the notions of a _line_ and a _ray_ in a Finsler manifold. For details see Section \(3\) of [9]. In the sequel, \((M,F)\) denotes a forward complete, non-compact Finsler manifold without boundary. **Definition 6.1** (Forward Ray): _A unit speed forward geodesic \(\gamma:[0,\infty)\to M\) with \(\gamma(0)=p\), \(\dot{\gamma}(0)=v\) is called a forward ray, if \(d_{F}(\gamma(s),\gamma(t))=t-s\), for all \(s,t\in[0,\infty)\) with \(s<t\). Thus, \(\gamma\) is a globally minimizing forward geodesic._ One can analogously define a _backward ray_. **Definition 6.2** (Line): _A unit speed geodesic \(\gamma:(-\infty,\infty)\to M\) with \(\gamma(0)=p\), \(\dot{\gamma}(0)=v\), is a line, if \(d_{F}(\gamma(s),\gamma(t))=t-s,\) for all \(s,t\in(-\infty,\infty)\) with \(s<t\). Thus, \(\gamma\) is a globally minimizing geodesic._ Associated to a forward ray \(\gamma\), we consider the generalized distance function on \((M,F)\) given by, \(b_{\gamma,t}:M\to\mathbb{R}\), \[b_{\gamma,t}(x)=t-d_{F}(x,\gamma(t)), \tag{40}\] where \(d_{F}\) is the Finsler asymmetric distance. It follows from the triangle inequality that \(b_{\gamma,t}(x)\) is a monotonically increasing function of \(t\) and that \(-d_{F}(x,\gamma(0))\leq b_{\gamma,t}(x)\leq d_{F}(\gamma(0),x)\) for all \(t\). **Definition 6.3** (The forward Busemann function): _The forward Busemann function for the forward geodesic ray \(\gamma\) is defined as:_ \[b_{\gamma}(x):=\lim_{t\to\infty}\left\{t-d_{F}(x,\gamma(t))\right\}. \tag{41}\] _By the above discussion, the limit exists and therefore \(b_{\gamma}\) is well defined._ **Remark 6.1**: _Note that in contrast to the Riemannian case, we can't define the Busemann function in a forward complete Finsler manifold as:_ \[b_{\gamma}(x):=\lim_{t\to\infty}\left\{d_{F}(x,\gamma(t))-t\right\}. 
\tag{42}\] _See Chapter 9, \(\lx@sectionsign 3.4\) of [11], for the detailed definition of Busemann function in Riemannian manifold. In general, for simply connected, complete Riemannian manifold without conjugate points, the Busemann functions are distance functions, that is \(|\nabla b_{\gamma}|\equiv 1\) (see Proposition \(1\) of [4]). The definition (42) leads to \(b_{\gamma}(\gamma(t))=-t\) and in turn \(\nabla b_{\gamma}(\gamma(t))=-\dot{\gamma}(t)\). In case of Finsler manifolds, we expect the Busemann functions to be a distance function (Definition 2.5). And therefore, \(\overrightarrow{\nabla}b_{\gamma}(\gamma(t))=-\dot{\gamma}(t)\) may not be a unit vector, as desired unless the space is reversible. Hence, for Finsler manifolds it is appropriate to define the Busemann function by (41)._ ### The forward Busemann function on the Funk disc **Theorem 6.1**: _Let \(\gamma\) be the forward unit speed geodesic in \((\mathbb{D},F_{F})\) with initial point \(p\in\mathbb{D}\) and the forward hitting point at \(y\in\partial\mathbb{D}\). Then the forward Busemann function for \(\gamma\) on \((\mathbb{D},F_{F})\) is given by,_ \[b_{\gamma}(x)=\ln\frac{1-\langle p,y\rangle}{1-\langle x,y\rangle}. \tag{43}\] **Proof:** Let \(\gamma(t)=e^{-t}p+(1-e^{-t})y\) be a forward unit speed geodesic in \((\mathbb{D},F_{F})\) given by (25). Then for \(x\in\mathbb{D}\) and \(a=\overrightarrow{x\gamma(t)}\cap\partial\mathbb{D}\), \[d_{F}(x,\gamma(t))=\ln\frac{|x-a|}{|\gamma(t)-a|}. \tag{44}\] The line passing through \(x=(x^{1},x^{2})\) and \(\gamma(t)=(\gamma_{1}(t),\gamma_{2}(t))\) is given by, \[\frac{X^{1}-\gamma_{1}(t)}{x^{1}-\gamma_{1}(t)}=\frac{X^{2}-\gamma_{2}(t)}{x^ {2}-\gamma_{2}(t)}=\lambda(x,\gamma(t)),\] where \(\lambda(x,\gamma(t))\) is a continuous parameter. Therefore, \[X^{1}=\gamma_{1}(t)+\lambda(x,\gamma(t))(x^{1}-\gamma_{1}(t))\;\;\text{and}\; \;X^{2}=\gamma_{2}(t)+\lambda(x,\gamma(t))(x^{2}-\gamma_{2}(t)). \tag{45}\] If this line intersects the unit circle, then we get, \[\lambda^{2}(x,\gamma(t))\Big{[}|\gamma(t)-x|^{2}\Big{]}+2\lambda(x,\gamma(t)) \Big{[}\langle x,\gamma(t)\rangle-|\gamma(t)|^{2}\Big{]}-\Big{[}1-|\gamma(t)| ^{2}\Big{]}=0. \tag{46}\] The roots of the above equations are \(\lambda_{1}(x,\gamma(t))\) and \(\lambda_{2}(x,\gamma(t))\), given by \[\lambda_{1}(x,\gamma(t))=\frac{|\gamma(t)|^{2}-\langle x,\gamma(t)\rangle- \sqrt{[|\gamma(t)|^{2}-\langle x,\gamma(t)\rangle]^{2}+|\gamma(t)-x|^{2}(1-| \gamma(t)|^{2})}}{|\gamma(t)-x|^{2}}, \tag{47}\] \[\lambda_{2}(x,\gamma(t))=\frac{|\gamma(t)|^{2}-\langle x,\gamma(t)\rangle+ \sqrt{[|\gamma(t)|^{2}-\langle x,\gamma(t)\rangle]^{2}+|\gamma(t)-x|^{2}(1-| \gamma(t)|^{2})}}{|\gamma(t)-x|^{2}}. \tag{48}\] By (47), we observe that for all \(x,\gamma(t)\in\mathbb{D}\), \(\lambda_{1}(x,\gamma(t))<0\) and by (46) \(\lambda_{1}(x,\gamma(t))\cdot\lambda_{2}(x,\gamma(t))<0\); consequently, \(\lambda_{2}(x,\gamma(t))>0\). Also, \(\lambda_{1}(x,\gamma(t))\to 0\) as \(t\to\infty\) since \(\gamma(t)\to y\) as \(t\to\infty\). Let \(a=\overrightarrow{x\gamma(t)}\cap\partial\mathbb{D}\). Therefore, \[|\gamma(t)-a|^{2}=\lambda_{1}^{2}(x,\gamma(t))|x-\gamma(t)|^{2}\;\;\text{and}\; \;|x-a|^{2}=(1-\lambda_{1}(x,\gamma(t)))^{2}|x-\gamma(t)|^{2}. \tag{49}\] Substituting (49) in (44) we get, \[d_{F}(x,\gamma(t))=\ln\frac{1-\lambda_{1}(x,\gamma(t))}{|\lambda_{1}(x,\gamma(t)) |}. 
\tag{50}\] Now, using (50) in (41) yields, \[b_{\gamma}(x)=\lim_{t\to\infty}\left\{t-d_{F}(x,\gamma(t))\right\}=\lim_{t\to\infty}\ln\left\{e^{t}\cdot\frac{|\lambda_{1}(x,\gamma(t))|}{1-\lambda_{1}(x,\gamma(t))}\right\}. \tag{51}\] Finally, using (47) in (51) and then substituting \(\gamma(t)=e^{-t}p+(1-e^{-t})y\), after some simplifications we obtain, \[b_{\gamma}(x)=\ln\frac{1-\langle p,y\rangle}{1-\langle x,y\rangle}.\] \(\Box\) ### The Busemann function on the Hilbert disc **Theorem 6.2**: _Let \(\beta\) be the unit speed geodesic line in the Hilbert disc \((\mathbb{D},F_{H})\) with initial point \(p\in\mathbb{D}\) and the forward hitting point \(y\in\partial\mathbb{D}\). Then the Busemann function for the line \(\beta\) is,_ \[b_{\beta}(x)=\frac{1}{2}\ln\frac{(1-|x|^{2})(1-\langle p,y\rangle)^{2}}{(1-|p|^{2})(1-\langle x,y\rangle)^{2}}.\] Proof: Let \(\beta\) be the unit speed geodesic line in the Hilbert disc \((\mathbb{D},F_{H})\) given by (34). Then for \(x\in\mathbb{D}\), we have \[b_{\beta}(x)=\lim_{t\to\infty}\left\{t-d_{H}(x,\beta(t))\right\}. \tag{52}\] Moreover, \[d_{H}(x,\beta(t))=\frac{1}{2}\ln\frac{|x-a|\cdot|\beta(t)-b|}{|\beta(t)-a|\cdot|x-b|}, \tag{53}\] where \(a=\overrightarrow{x\beta(t)}\cap\partial\mathbb{D}\) and \(b=\overleftarrow{x\beta(t)}\cap\partial\mathbb{D}\). Then, as in the Funk case, by considering the equation of the line passing through \(x=(x^{1},x^{2})\) and \(\beta(t)=(\beta_{1}(t),\beta_{2}(t))\), we obtain that if this line intersects the unit circle, then the following equation holds. \[\lambda^{2}(x,\beta(t))\Big{[}|\beta(t)-x|^{2}\Big{]}+2\lambda(x,\beta(t))\Big{[}\langle x,\beta(t)\rangle-|\beta(t)|^{2}\Big{]}-\Big{[}1-|\beta(t)|^{2}\Big{]}=0. \tag{54}\] Let the roots of the above equation be \(\lambda_{1}(x,\beta(t))\) and \(\lambda_{2}(x,\beta(t))\); then \[\lambda_{1}(x,\beta(t))=\frac{|\beta(t)|^{2}-\langle x,\beta(t)\rangle-\sqrt{[|\beta(t)|^{2}-\langle x,\beta(t)\rangle]^{2}+|\beta(t)-x|^{2}(1-|\beta(t)|^{2})}}{|\beta(t)-x|^{2}}, \tag{55}\] \[\lambda_{2}(x,\beta(t))=\frac{|\beta(t)|^{2}-\langle x,\beta(t)\rangle+\sqrt{[|\beta(t)|^{2}-\langle x,\beta(t)\rangle]^{2}+|\beta(t)-x|^{2}(1-|\beta(t)|^{2})}}{|\beta(t)-x|^{2}}. \tag{56}\] Clearly by (55), \(\lambda_{1}(x,\beta(t))<0\) for all \(x,\beta(t)\in\mathbb{D}\) and by (54), \(\lambda_{1}(x,\beta(t))\cdot\lambda_{2}(x,\beta(t))<0\). Therefore, \(\lambda_{2}(x,\beta(t))>0\) for all \(x,\beta(t)\in\mathbb{D}\). Also, \(\lambda_{1}(x,\beta(t))\to 0\) as \(t\to\infty\), since \(\beta(t)\to y\) as \(t\to\infty\). Again following the calculation as in the Funk case we obtain, \[|x-a|^{2}=(1-\lambda_{1}(x,\beta(t)))^{2}|x-\beta(t)|^{2},\ |x-b|^{2}=(1-\lambda_{2}(x,\beta(t)))^{2}|x-\beta(t)|^{2}, \tag{57}\] \[|\beta(t)-a|^{2}=\lambda_{1}^{2}(x,\beta(t))|x-\beta(t)|^{2},\ |\beta(t)-b|^{2}=\lambda_{2}^{2}(x,\beta(t))|x-\beta(t)|^{2}. \tag{58}\] It is also clear from (57) that \(\lambda_{2}(x,\beta(t))\neq 1\) (as \(x\neq b\)). Substituting (57), (58) in (53) yields, \[d_{H}(x,\beta(t))=\frac{1}{2}\ln\frac{\lambda_{2}(x,\beta(t))\cdot(1-\lambda_{1}(x,\beta(t)))}{|\lambda_{1}(x,\beta(t))|\cdot|1-\lambda_{2}(x,\beta(t))|}. \tag{59}\] Therefore, from (52) and (59) we get, \[b_{\beta}(x)=\frac{1}{2}\lim_{t\to\infty}\ln\left\{e^{2t}\cdot\frac{|\lambda_{1}(x,\beta(t))|\cdot|1-\lambda_{2}(x,\beta(t))|}{\lambda_{2}(x,\beta(t))\cdot(1-\lambda_{1}(x,\beta(t)))}\right\}. 
\tag{60}\] Substituting \(\beta(t)=(1-s(t))p+s(t)y\) (see (34)) in (60), after some simplifications we obtain, \[b_{\beta}(x)=\frac{1}{2}\ln\frac{(1-|x|^{2})(1-\langle p,y\rangle)^{2}}{(1-|p|^{2})(1-\langle x,y\rangle)^{2}}.\] \(\Box\) ### Horocycles in the Funk and the Hilbert disc Let \((M,F)\) be a forward complete, simply connected Finsler manifold without conjugate points. Then for \(q\in M\), the forward and the backward spheres are, respectively, defined by ([2], SS\(6.2\)\(B\)), \[S(q,r)^{+}=\left\{x\in M|d_{F}(q,x)=r\right\}\ \ \mbox{and}\ \ S(q,r)^{-}=\left\{x\in M|d_{F}(x,q)=r\right\}.\] **Definition 6.4** (Forward Horosphere): _Let \(\gamma:[0,\infty)\to M\) be a forward ray with \(\gamma(0)=p\), \(\dot{\gamma}(0)=v\). Then,_ \[b_{\gamma}^{-1}(a)=\lim_{t\to\infty}S(\gamma(t),t-a)^{-}, \tag{61}\] _the limit of the backward spheres, is called the forward horosphere passing through \(p\in M\). In dimension two, horospheres are termed horocycles._ We describe the forward horocycles of the Funk disc explicitly. Note that backward horocycles do not exist in the Funk disc. **Proposition 6.1**: _Let \(\gamma(t)=e^{-t}p+(1-e^{-t})y\) be the Funk forward geodesic of \(\mathbb{D}\) starting at \(p\) with the forward hitting point \(y\in\partial\mathbb{D}\). Then the forward horocycles along \(\gamma\) are line segments in the Funk disc perpendicular to the line joining the origin and the forward hitting point \(y\)._ **Proof:** In the case of the Funk disc, the forward Busemann function for the unit speed forward geodesic \(\gamma\) is given by (43): \[b_{\gamma}(x)=\ln\frac{1-\langle p,y\rangle}{1-\langle x,y\rangle}.\] Therefore, it follows that for \(a\in\mathbb{R}\), \[b_{\gamma}^{-1}(a)=\left\{(x^{1},x^{2})\in\mathbb{D}:x^{1}y^{1}+x^{2}y^{2}=1-e^{-a}(1-\langle p,y\rangle)\right\}.\] Clearly, the line \(x^{1}y^{1}+x^{2}y^{2}=1-e^{-a}(1-\langle p,y\rangle)\) is perpendicular to the line \(x^{1}y^{2}-x^{2}y^{1}=0\) joining the origin and \(y\). \(\square\) **Proposition 6.2**: _The forward and the backward horocycles of the Hilbert disc are ellipses._ **Proof:** In the case of the Hilbert disc, the Busemann function for the unit speed line \(\beta\) given by (34) is, \[b_{\beta}(x)=\frac{1}{2}\ln\frac{(1-|x|^{2})(1-\langle p,y\rangle)^{2}}{(1-|p|^{2})(1-\langle x,y\rangle)^{2}}.\] Therefore, it follows that for \(a\in\mathbb{R}\), \[b_{\beta}^{-1}(a)=\left\{(x^{1},x^{2})\in\mathbb{D}:\frac{1}{2}\ln\frac{(1-|x|^{2})(1-\langle p,y\rangle)^{2}}{(1-|p|^{2})(1-\langle x,y\rangle)^{2}}=a\right\}. \tag{62}\] Simplifying the condition in (62), we obtain the following equation (63), which represents an ellipse. \[(x^{1})^{2}\Big{[}e^{2a}(1-|p|^{2})(y^{1})^{2}+(1-\langle p,y\rangle)^{2}\Big{]}+(x^{2})^{2}\Big{[}e^{2a}(1-|p|^{2})(y^{2})^{2}+(1-\langle p,y\rangle)^{2}\Big{]}\] \[+2x^{1}x^{2}y^{1}y^{2}e^{2a}(1-|p|^{2})-2x^{1}y^{1}e^{2a}(1-|p|^{2})-2x^{2}y^{2}e^{2a}(1-|p|^{2})\] \[+\Big{[}e^{2a}(1-|p|^{2})-(1-\langle p,y\rangle)^{2}\Big{]}=0. \tag{63}\] \(\square\) ## 7 The Forward Asymptotic Harmonicity of the Funk disc The concept of _asymptotically harmonic_ Riemannian manifolds was originally introduced by Ledrappier [[6], Theorem 1] in connection with the rigidity of measures related to the Dirichlet problem (harmonic measure) and the dynamics of the geodesic flow (Bowen-Margulis measure). The concept of asymptotic harmonicity was extended to Finsler manifolds in Definition 4.3 of [13]. 
Recall that the Hessian of a smooth function \(f\) is well defined only on the set \({\cal U}_{f}\), where \({\cal U}_{f}:=\{x\in M:df_{x}\neq 0\}\) (Definition 2.4). Using this, we _correct_ Definition 4.3 of [13] as follows. **Definition 7.1** (Forward Asymptotically Harmonic Finsler Manifold): _Let \((M,F,d\mu)\) be a forward complete, simply connected Finsler \(\mu\)-manifold without conjugate points. Then \(M\) is said to be forward asymptotically harmonic, if there exists a constant \(h\), independent of \(x\in M\) and \(\gamma\), such that \(\overrightarrow{\Delta}_{\mu}b_{\gamma}(x)\equiv h\) for \(x\in{\cal U}_{b_{\gamma}}\), in the sense of distributions._ In this section, we show that the Funk BH-disc \((\mathbb{D},F_{F},BH)\) is a forward asymptotically harmonic Finsler surface. Recall that the Hilbert metric on the unit disc is the well-known Riemannian Beltrami Klein metric and is known to be asymptotically harmonic [12]. We also show that the Funk-HT disc \((\mathbb{D},F_{F},HT)\), the Funk-max disc \((\mathbb{D},F_{F},\max)\), and the Funk-min disc \((\mathbb{D},F_{F},\min)\) are _not_ forward asymptotically harmonic Finsler surfaces. Towards this, we show by a direct calculation that \(\overrightarrow{\Delta}_{BH}b_{\gamma}(x)=-2\) for \(x\in M\), where \(\overrightarrow{\Delta}_{BH}\) denotes the forward Shen Laplacian on \(M\). On the other hand, \(\overrightarrow{\Delta}_{HT}b_{\gamma}(x)\), \(\overrightarrow{\Delta}_{\max}b_{\gamma}(x)\), and \(\overrightarrow{\Delta}_{\min}b_{\gamma}(x)\) are _not_ constant for \(x\in M\). ### The Dual of the Funk metric on the disc \(\mathbb{D}\) In this subsection, we find the dual (see Definition 2.4) of the Funk metric on the unit disc \(\mathbb{D}\). This result is of interest in its own right. This dual metric is required to calculate the Laplacian of the Busemann function. **Proposition 7.1**: _The dual \(F^{*}\) of the Funk metric \(F=\alpha+\beta\), on the unit disc \(\mathbb{D}\) defined by (11) is \(F^{*}=\alpha^{*}+\beta^{*}\), where \(\alpha^{*}(x,\xi)=|\xi|\) and \(\beta^{*}(x,\xi)=-\langle x,\xi\rangle\), for all \(x\in\mathbb{D},\xi\in T_{x}^{*}\mathbb{D}\). Consequently, \(F^{*}\) is a Randers metric._ **Proof:** The Funk metric on the unit disc \(\mathbb{D}\), given by (11), can be rewritten as \[F_{F}(x,v) = \sqrt{a_{ij}(x)v^{i}v^{j}}+b_{i}(x)v^{i},\] where \(a_{ij}(x)=\dfrac{\delta_{ij}(1-|x|^{2})+x_{i}x_{j}}{(1-|x|^{2})^{2}}\), \(b_{i}(x)=\dfrac{\delta_{ij}x^{j}}{1-|x|^{2}}\) and \(x_{i}=\delta_{ij}x^{j}\). The components \(a^{ij}(x)\) of the inverse of the matrix \((a_{ij}(x))\) are given by, \[a^{ij}(x)=(1-|x|^{2})(\delta^{ij}-x^{i}x^{j}). \tag{64}\] Clearly, \[||\beta||_{\alpha}^{2}=a^{ij}(x)b_{i}(x)b_{j}(x)=|x|^{2}. \tag{65}\] Using the techniques of Example 3.1.1 in [14], we find \(F^{*}\) explicitly as follows. \[F^{*}(x,\xi)=\alpha^{*}(x,\xi)+\beta^{*}(x,\xi)=\sqrt{a^{*ij}(x)\xi_{i}\xi_{j}}+b^{*i}(x)\xi_{i}, \tag{66}\] where \(\xi=(\xi_{i})\in T^{*}_{x}\mathbb{D}\), \(a^{*ij}(x)=\dfrac{(1-||\beta||_{\alpha}^{2})a^{ij}(x)+b^{i}(x)b^{j}(x)}{(1-||\beta||_{\alpha}^{2})^{2}}=\delta^{ij},\) and \(b^{*i}(x)=-\dfrac{b^{i}(x)}{1-||\beta||_{\alpha}^{2}}=-x^{i},\) where \(b^{i}(x)=a^{ij}(x)b_{j}(x)=x^{i}(1-|x|^{2})\). Therefore, \[F^{*}(x,\xi)=|\xi|-\langle x,\xi\rangle. \tag{67}\] Further, since \(||\beta^{*}||_{\alpha^{*}}=|x|<1\), \(F^{*}\) is a Randers metric. 
\(\square\) **Corollary 7.1**: _The Funk-Busemann functions are distance functions (see Definition 2.5) on the Funk disc and consequently, \(\mathcal{U}_{b_{\gamma}}=\mathbb{D}\)._ **Proof:** As \(b_{\gamma}(x)=\ln\dfrac{1-\langle p,y\rangle}{1-\langle x,y\rangle}\), \(db_{\gamma_{|x}}=\dfrac{y^{1}dx^{1}+y^{2}dx^{2}}{1-\langle x,y\rangle}\neq 0\). Hence by (67), \(F^{*}(x,db_{\gamma_{|x}})=1=F(x,\nabla b_{\gamma}(x))\). Consequently, the Funk-Busemann functions are _distance functions_. \(\square\) ### The forward Shen Laplacian of the forward Busemann function in the Funk disc In this subsection, we compute the forward Shen Laplacian of the forward Busemann function in the Funk disc directly by the formula (8). Let \(\gamma(t)=pe^{-t}+(1-e^{-t})y\) be the forward geodesic in the Funk disc \(\mathbb{D}\) given by (25) with \(\gamma(0)=p\) and \(\dot{\gamma}(0)=y-p\). **Theorem 7.1**: _We have for all \(x\in\mathbb{D}\), \(\overrightarrow{\Delta}_{BH}b_{\gamma}(x)=-2\)._ Proof:Let \((M,F,d\mu)\) be a Finsler manifold. Then by (8) we obtain, \[\overrightarrow{\Delta}b_{\gamma}(x)=\frac{1}{\sigma_{BH}(x)}\frac{\partial}{ \partial x^{i}}\left(\sigma_{BH}(x)g^{*ij}(x,db_{\gamma_{|x}})\frac{\partial b_ {\gamma}(x)}{\partial x^{j}}\right). \tag{68}\] By (1.6) of Example 1.2.1 in [14], we get \[g^{*ij}(x,\xi) = \frac{1}{2}[F^{*2}(x,\xi)]_{\xi^{i}\xi^{j}}\] \[= \frac{F^{*}(x,\xi)}{\alpha^{*}(x,\xi)}\left(a^{*ij}(x)-\frac{\xi ^{i}}{\alpha^{*}(x,\xi)}\frac{\xi^{j}}{\alpha^{*}(x,\xi)}\right)\] \[+\left(\frac{\xi^{i}}{\alpha^{*}(x,\xi)}-x^{i}\right)\left(\frac {\xi^{j}}{\alpha^{*}(x,\xi)}-x^{j}\right),\] where \(\xi^{i}=a^{*is}(x)\xi_{s}=\delta^{is}\xi_{s}=\xi_{i}\). Substituting \(\alpha^{*}(x,\xi)=|\xi|\) and \(F^{*}(x,\xi)=|\xi|-\langle x,\xi\rangle\) (Proposition 7.1) in the above equation yields, \[g^{*ij}(x,\xi)=\left(1-\frac{\langle x,\xi\rangle}{|\xi|}\right)\left(\delta^ {ij}-\frac{\xi_{i}}{|\xi|}\frac{\xi_{j}}{|\xi|}\right)+\left(\frac{\xi_{i}}{| \xi|}-x^{i}\right)\left(\frac{\xi_{j}}{|\xi|}-x^{j}\right). \tag{69}\] Let \(\xi=db_{\gamma_{|x}}\). Then \[\xi=(\xi_{i})=\left(\frac{\partial b_{\gamma}(x)}{\partial x^{i}}\right)= \left(\frac{y^{i}}{1-\langle x,y\rangle}\right)\ \ \mbox{and}\ \ \ |\xi|=\frac{1}{1-\langle x,y\rangle}. \tag{70}\] Also, \[\frac{\partial^{2}b_{\gamma}(x)}{\partial x^{i}\partial x^{j}}=\frac{y^{i}y^{ j}}{(1-\langle x,y\rangle)^{2}}. \tag{71}\] Therefore, (69) and (70) yields, \[g^{*ij}(x,db_{\gamma_{|x}})=(1-\langle x,y\rangle)\left(\delta^{ij}-y^{i}y^{ j}\right)+\left(y^{i}-x^{i}\right)\left(y^{j}-x^{j}\right), \tag{72}\] \[\frac{\partial g^{*ij}(x,db_{\gamma_{|x}})}{\partial x^{k}}=-\Big{[}\delta_{ lk}y^{l}\left(\delta^{ij}-y^{i}y^{j}\right)+\delta_{jk}\left(y^{i}-x^{i}\right)+ \delta_{ik}\left(y^{j}-x^{j}\right)\Big{]}. \tag{73}\] We also have \(\sigma_{BH}(x)=1\) (cf. Lemma 2.1). 
Using (70), (71), (72), (73) in (68), after some simplifications we get, \[\overrightarrow{\Delta}_{BH}b_{\gamma}(x) = \frac{\partial}{\partial x^{i}}\left(g^{*ij}(x,db_{\gamma_{|x}})\frac{\partial b_{\gamma}(x)}{\partial x^{j}}\right)\] \[= \frac{\partial}{\partial x^{i}}\left(g^{*ij}(x,db_{\gamma_{|x}})\right)\frac{\partial b_{\gamma}(x)}{\partial x^{j}}+g^{*ij}(x,db_{\gamma_{|x}})\left(\frac{\partial^{2}b_{\gamma}(x)}{\partial x^{i}\partial x^{j}}\right)\] \[= -\Big{[}y^{i}\left(\delta^{ij}-y^{i}y^{j}\right)+\delta_{ji}\left(y^{i}-x^{i}\right)+2\left(y^{j}-x^{j}\right)\Big{]}\left(\frac{y^{j}}{1-\langle x,y\rangle}\right)\] \[+\Big{[}\left(1-\langle x,y\rangle\right)\left(\delta^{ij}-y^{i}y^{j}\right)+\left(y^{i}-x^{i}\right)\left(y^{j}-x^{j}\right)\Big{]}\left(\frac{y^{i}y^{j}}{(1-\langle x,y\rangle)^{2}}\right)\] \[= -2.\] \(\square\) **Corollary 7.2**: _We have,_ 1. \(\overrightarrow{\Delta}_{HT}b_{\gamma}(x)=\overrightarrow{\Delta}_{BH}b_{\gamma}(x)+\frac{3(\langle x,y\rangle-|x|^{2})}{1-|x|^{2}}\)_,_ \(\forall x\in\mathbb{D}\)_._ 2. \(\overrightarrow{\Delta}_{max}b_{\gamma}(x)=\overrightarrow{\Delta}_{BH}b_{\gamma}(x)+\frac{3(\langle x,y\rangle-|x|^{2})}{|x|(1-|x|^{2})}\)_,_ \(\forall x\in\mathbb{D}\setminus\{0\}\)_._ 3. \(\overrightarrow{\Delta}_{min}b_{\gamma}(x)=\overrightarrow{\Delta}_{BH}b_{\gamma}(x)-\frac{3(\langle x,y\rangle-|x|^{2})}{|x|(1-|x|^{2})}\)_,_ \(\forall x\in\mathbb{D}\setminus\{0\}\)_._ **Proof:** (1) We have \(\sigma_{HT}(x)=\frac{1}{(1-|x|^{2})^{\frac{3}{2}}}\) (cf. Lemma 2.1). Therefore, (68) implies that, \[\overrightarrow{\Delta}_{HT}b_{\gamma}(x) = \overrightarrow{\Delta}_{BH}b_{\gamma}(x)+\left\{\left(1-\langle x,y\rangle\right)\left(\delta^{ij}-y^{i}y^{j}\right)+\left(y^{i}-x^{i}\right)\left(y^{j}-x^{j}\right)\right\}\] \[\times\frac{y^{j}}{1-\langle x,y\rangle}\times\frac{3x^{i}}{1-|x|^{2}},\] \[= \overrightarrow{\Delta}_{BH}b_{\gamma}(x)+\frac{3(\langle x,y\rangle-|x|^{2})}{1-|x|^{2}}.\] (2) We have \(\sigma_{max}(x)=\frac{(1+|x|)^{\frac{3}{2}}}{(1-|x|)^{\frac{3}{2}}}\) (cf. Lemma 2.1). Hence, (68) gives, \[\overrightarrow{\Delta}_{max}b_{\gamma}(x) = \overrightarrow{\Delta}_{BH}b_{\gamma}(x)+\left\{\left(1-\langle x,y\rangle\right)\left(\delta^{ij}-y^{i}y^{j}\right)+\left(y^{i}-x^{i}\right)\left(y^{j}-x^{j}\right)\right\}\] \[\times\frac{y^{j}}{1-\langle x,y\rangle}\times\frac{3x^{i}}{|x|(1-|x|^{2})},\] \[= \overrightarrow{\Delta}_{BH}b_{\gamma}(x)+\frac{3(\langle x,y\rangle-|x|^{2})}{|x|(1-|x|^{2})}.\] (3) We have \(\sigma_{min}(x)=\frac{(1-|x|)^{\frac{3}{2}}}{(1+|x|)^{\frac{3}{2}}}\) (cf. Lemma 2.1). Consequently, using (68) we obtain, \[\overrightarrow{\Delta}_{min}b_{\gamma}(x) = \overrightarrow{\Delta}_{BH}b_{\gamma}(x)+\left\{(1-\langle x,y\rangle)\left(\delta^{ij}-y^{i}y^{j}\right)+\left(y^{i}-x^{i}\right)\left(y^{j}-x^{j}\right)\right\}\] \[\times\frac{y^{j}}{1-\langle x,y\rangle}\times\frac{-3x^{i}}{|x|(1-|x|^{2})},\] \[= \overrightarrow{\Delta}_{BH}b_{\gamma}(x)-\frac{3(\langle x,y\rangle-|x|^{2})}{|x|(1-|x|^{2})}.\] \(\square\) **Remark 7.1**: 1. _Corollary 7.2 shows that all \(\overrightarrow{\Delta}_{\mu}\) (\(\mu=\) BH, HT, max, min) are different. Hence, we see that the concept of asymptotic harmonicity of Finsler manifolds strictly depends on the measure, in contrast to the Riemannian case._ 2. 
_As the Funk metrics are of constant flag curvature_ \(-\frac{1}{4}\)_, the mean curvature_ \(\Pi_{\nabla r}\) _of the backward geodesic sphere of radius_ \(r\) _is given by ((3.7) of [19], Lemma 3.2):_ \[\Pi_{\nabla r}=-\frac{(n-1)}{2}\coth\left(\frac{r}{2}\right)-\frac{(n+1)}{2}. \tag{75}\] _So for_ \(n=2\) _we obtain the mean curvature of the forward horocycles as_ \[\Pi_{\infty}=-2=\overrightarrow{\Delta}_{BH}b_{\gamma}(x).\] _This matches our direct calculation of the Laplacian._ From Lemma 2.2 and (75) we conclude: **Corollary 7.3**: _The Finslerian and the induced Riemannian mean curvatures of all the forward horocycles of the Funk disc are the constants \(-2\) and \(-\frac{1}{2}\), respectively._ From (75) we also conclude that: **Corollary 7.4**: _For the Busemann function on the Hilbert disc, \(\Delta b_{\gamma}(x)\equiv-\frac{1}{2}\). We recover the well known result that the Hilbert disc is an asymptotically harmonic Riemannian manifold._
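As an independent cross-check of Theorem 7.1 (ours, not part of the original argument), one can evaluate Shen's Laplacian (68) of the Funk-Busemann function numerically: with \(\sigma_{BH}\equiv 1\) and the dual fundamental tensor of (69), central finite differences should reproduce the value \(-2\) at every interior point. A minimal Python sketch, with \(p\), \(y\) and the sample points chosen arbitrarily:

```python
# Numerical cross-check (ours) of Theorem 7.1: Shen's Laplacian (68) of the
# Funk-Busemann function b(x) = ln[(1-<p,y>)/(1-<x,y>)] should equal -2.
import numpy as np

y = np.array([np.cos(0.7), np.sin(0.7)])   # forward hitting point on the boundary circle
p = np.array([0.2, -0.1])                  # starting point of the geodesic (inside D)

def b(x):                                  # closed form (43)
    return np.log((1 - p @ y) / (1 - x @ y))

def g_star(x, xi):                         # dual fundamental tensor (69) of the Funk metric
    u = xi / np.linalg.norm(xi)
    return (1 - x @ u) * (np.eye(2) - np.outer(u, u)) + np.outer(u - x, u - x)

def flux(x, h=1e-5):                       # V^i = g^{*ij}(x, db_x) d_j b(x)
    e = np.eye(2)
    db = np.array([(b(x + h * e[i]) - b(x - h * e[i])) / (2 * h) for i in range(2)])
    return g_star(x, db) @ db

def shen_laplacian(x, h=1e-4):             # (68) with sigma_BH = 1
    e = np.eye(2)
    return sum((flux(x + h * e[i])[i] - flux(x - h * e[i])[i]) / (2 * h) for i in range(2))

for x in [np.array([0.0, 0.0]), np.array([0.3, -0.4]), np.array([-0.5, 0.2])]:
    print(shen_laplacian(x))               # each value should be close to -2
```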
2308.10945
Krylov Complexity of Open Quantum Systems: From Hard Spheres to Black Holes
We examine the complexity of quasi-static chaotic open quantum systems. As a prototypical example, we analytically compute the Krylov complexity of a slowly leaking hard-sphere gas using Berry's conjecture. We then connect it to the holographic complexity of a $d+1$-dimensional evaporating black hole using the Complexity=Volume proposal. We model the black hole spacetime by stitching together a sequence of static Schwarzschild patches across incoming negative energy null shock waves. Under certain identification of parameters, we find the late time complexity growth rate during each quasi-static equilibrium to be the same in both systems.
Vyshnav Mohan
2023-08-21T18:00:05Z
http://arxiv.org/abs/2308.10945v2
# Krylov Complexity of Open Quantum Systems: From Hard Spheres to Black Holes ###### Abstract We examine the complexity of quasi-static chaotic open quantum systems. As a prototypical example, we analytically compute the Krylov complexity of a slowly leaking hard-sphere gas using Berry's conjecture. We then connect it to the holographic complexity of a \(d+1\)-dimensional evaporating black hole using the Complexity=Volume proposal. We model the black hole spacetime by stitching together a sequence of static Schwarzschild patches across incoming negative energy null shock waves. Under certain identification of parameters, we find the late time complexity growth rate during each quasi-static equilibrium to be the same in both systems. ## 1 Introduction In recent years, the complexity of quantum systems has shown much promise as a tool in diagnosing quantum chaos [1; 2; 3; 4; 5]. Complexity tracks the degrees of freedom of the system, especially at very long time scales where other quantum information theoretic measures like entanglement entropy have saturated [6]. This feature has been particularly fruitful in theories involving black holes, where the volume of spacelike extremal codimension-1 surfaces has been conjectured to measure the complexity of the corresponding state in the dual boundary theory [7; 8]. The proposal, dubbed the Complexity=Volume (CV) prescription, adds a non-trivial entry to the holographic dictionary. In a quantum mechanical system, we can quantify complexity using _Krylov Complexity_ (or K-complexity) [9; 1], which measures the growth of operators by treating its evolution as a particle hopping on a semi-infinite chain1. Under Hamiltonian evolution, a wavefunction centered around the first site will spread deeper into the chain. Krylov complexity is defined as the average position of the particle on the chain as a function of time, thereby capturing the "spread" of the operator in the operator space. Footnote 1: For applications of Krylov complexity to various systems, refer to [10; 11; 12; 13; 14; 15; 16]. In section 2, we will use this formulation of operator complexity to study the growth of operators in a slowly leaking hard sphere gas. We will assume that the gas is leaking from a small box into a bigger box, as in [17]. Moreover, we will work in the semiclassical, low density limit of the system. A box of hard sphere gas is classically chaotic and satisfies _Berry's conjecture_, which states that the high-lying energy eigenstates behave as if they have been picked from a Gaussian ensemble [18; 19; 20]. Berry's conjecture played a crucial role in arriving at Eigenstate Thermalization Hypothesis (ETH) [20] (see also [21]) and makes the system analytically tractable. However, since we are dealing with an open quantum system, the canonical Krylov complexity calculations would not work here. This is because the evolution is non-unitary, and the _Lanczos algorithm_ one usually employs to calculate the K-complexity leads to unsatisfactory results [22; 23; 24; 25]. We will sidestep this difficulty by working with a slowly leaking gas. This allows us to focus on a time period, which we will refer to as an _epoch_, during which the boxes equilibrate separately, and there is no overall exchange of particles. During an epoch, the Lanczos algorithm gives meaningful results, and we can compute the Krylov complexity using standard techniques. Moreover, we will also show that we can patch together adjacent epochs to form a continuous curve. 
Using ETH and very mild assumptions on the off-diagonal matrix elements of the operators, we will argue how the hard sphere calculation can be generalized to any chaotic open quantum system. This prompts us to look at our slowly leaking hard sphere gas model as an excellent prototype where analytic calculations can be carried out explicitly. In section 3, we will use the CV prescription to calculate the holographic complexity of an evaporating black hole. We will model an evaporating black hole by stitching together a sequence of static Schwarzschild spacetimes across incoming negative energy null shock waves. Each Schwarzschild patch is characterized by a constant mass that decreases as we go across the shock waves. These patches correspond to periods where the black hole is effectively not evaporating. Therefore, we will refer to them as epochs, akin to our slowly leaking gas calculation. We will calculate the volume of boundary-anchored extremal codimension-1 surfaces in this background. Under an identification of parameters, we show that the late time rate of growth of complexity during an epoch matches the slowly leaking gas calculation. ## 2 Krylov Complexity of a Slowly Leaking Gas Consider an operator \(O\) that acts on the states of a quantum mechanical system. If the Hamiltonian of the system is \(H\), then the time evolution of this operator is given by \[O(t)=e^{iHt}Oe^{-iHt}=e^{i\mathcal{L}t}O \tag{1}\] where \({\cal L}(O)=[H,O]\) is the Liouvillian superoperator. Krylov complexity measures the spread of \(O(t)\) in the _Krylov subspace_, the Hilbert space spanned by \({\cal L}^{n}O\). We will not provide a pedagogical review of Krylov complexity here as detailed reviews can be found elsewhere [1; 9; 26]. The microcanonical refinement of Krylov complexity, as introduced in [27], will be our focus for analysis. Additionally, we will use the moment method to compute Krylov complexity. Let us quickly review this construction. Our starting point is the thermal two-point function of the operator: \[G(t)=\sum_{i,j}e^{-\frac{\beta}{2}(E_{i}+E_{j})}e^{it(E_{i}-E_{j})}\,|\langle E_{i}\,|O|\,E_{j}\rangle|^{2} \tag{2}\] Here \(\beta\) is the inverse temperature, and \(E_{i,j}\) are the energy eigenvalues of the system. Approximating the sum by an integral over the density of eigenstates \(\rho(E)\), we get \[G(t)=\int_{0}^{\infty}dEe^{-\beta E}\int_{-2E}^{2E}d\omega\rho\left(E+\frac{\omega}{2}\right)\rho\left(E-\frac{\omega}{2}\right)\left|\left\langle E+\frac{\omega}{2}\,|O|\,E-\frac{\omega}{2}\right\rangle\right|^{2}e^{i\omega t} \tag{3}\] where we have defined the average energy and the energy difference as follows: \[E=\frac{E_{i}+E_{j}}{2}\quad\text{and}\quad\omega=E_{i}-E_{j}. \tag{4}\] The Liouvillian is sensitive only to the energy differences \(\omega\), and it does not mix different average energy \(E\) sectors [27]. Therefore, we can work with a fixed \(E\) and then average over all the other energy sectors at the end of the calculation. The fixed energy two point function is obtained by taking the inverse Laplace transform of \(G(t)\) with respect to \(\beta\): \[G_{E}(t)=\int_{-2E}^{2E}d\omega\rho\left(E+\frac{\omega}{2}\right)\rho\left(E-\frac{\omega}{2}\right)\left|\left\langle E+\frac{\omega}{2}\,|O|\,E-\frac{\omega}{2}\right\rangle\right|^{2}e^{i\omega t}. \tag{5}\] The key elements in our analysis are the moments of these two point functions. 
They are given by \[\mu_{n}^{E}=\left.\frac{\left(-i\frac{d}{dt}\right)^{n}G_{E}(t) \right|_{t=0}}{G_{E}(0)} \tag{6}\] Using the Hankel transformation matrix \(M_{ij}=\mu_{i+j}^{E}\), we can immediately calculate the _Lanczos coefficients_\(b_{n}^{E}\): \[\left(b_{1}^{E}\right)^{2n}\left(b_{2}^{E}\right)^{2n-2}\ldots \left(b_{n}^{E}\right)^{2}=\det\left[M_{ij}\right]_{0\leq i,j\leq n} \tag{7}\] The Lanczos coefficients are handy objects because they contain all the information about the dynamics of the operator \(O\). Moreover, they completely determine the Krylov complexity of the operator. To see this, let us note that the fixed energy Krylov complexity is given in terms of the \(K\)-wavefunctions \(\phi_{E,n}\) as follows [27] \[K_{E}(t)=\sum_{n=0}^{D_{O}-1}n\left|\phi_{E,n}(t)\right|^{2}, \tag{8}\] where \(D_{O}\) is the dimensionality of the Krylov subspace. The \(K\)-wavefunctions are, in turn, related to the Lanczos coefficients through the Schrodinger equation: \[\dot{\phi}_{E,n}(t)=b_{n+1}^{E}\phi_{E,n+1}(t)-b_{n}^{E}\phi_{E,n-1}(t). \tag{9}\] Using the initial condition \(\phi_{E,n}(0)=\delta_{n0}\), we can solve for the \(K\)-wavefunctions and compute \(K_{E}\). The _thermal Krylov complexity_ is given by taking a Laplace transform of its fixed energy counterpart: \[K_{th}(t)=\frac{\int_{0}^{\infty}dEe^{-\beta E}\mathcal{C}(E)K^{E}(t)}{\int_{0 }^{\infty}dEe^{-\beta E}\mathcal{C}(E)} \tag{10}\] where the normalization constant \(\mathcal{C}(E)\) is given by \[\mathcal{C}(E)=\int_{-2E}^{2E}d\omega\rho\left(E+\frac{\omega}{2}\right)\rho \left(E-\frac{\omega}{2}\right)\left|\left\langle E+\frac{\omega}{2}\left|O \right|E-\frac{\omega}{2}\right\rangle\right|^{2}. \tag{11}\] Our primary focus is on the thermal Krylov complexity. In the following subsections, we will use the moments of the thermal two point functions to calculate \(K_{th}(t)\) of a slowly leaking hard sphere gas. ### Warm-up: Single Box Before we look at the slowly leaking gas, it is instructive to look at a single box of hard sphere gas. Consider a cubic box of edge length \(L+2a\) enclosing \(N\) hard spheres. We will assume that the hard spheres are identical and have radius \(a\). The classical Hamiltonian of the system is given by \[H=\sum_{i=1}^{N}\frac{\mathbf{p}_{i}^{2}}{2m}+\sum_{i<j}V\left( \left|\mathbf{x}_{i}-\mathbf{x}_{j}\right|\right) \tag{12}\] where \[V(r)=\begin{cases}+\infty&\text{ for }r<2a\\ 0&\text{ for }r>2a\end{cases} \tag{13}\] This system is classically chaotic and shows eigenstate thermalization when treated quantum mechanically [20]. Let us denote the energy eigenfunctions of the system by \(\psi(X)\), where \(X\) is the \(3N\)-dimensinal position vector. The wavefunctions are defined on the domain \[D=\left\{\mathrm{x}_{1},\ldots,\mathrm{x}_{N}\middle|x_{i1,2,3} \in\left[-\frac{1}{2}L,\frac{1}{2}L\right];\left|\mathbf{x}_{i}-\mathbf{x}_{j }\right|\geq 2a\right\} \tag{14}\] We will impose the boundary condition that \(\psi(X)\) vanishes on \(\partial D\). The wavefunctions with energy \(E_{\alpha}\) can be chosen to be of the following form [20]: \[\psi_{\alpha}(\mathbf{X})=\mathcal{N}_{\alpha}\int d^{3N}PA_{\alpha}(\mathbf{ P})\delta\left(\mathbf{P}^{2}-2mE_{\alpha}\right)\exp(i\mathbf{P}\cdot\mathbf{X}/ \hbar) \tag{15}\] where \({\cal N}_{\alpha}\) is the normalization constant. We can choose the wavefunction to be everywhere real by imposing \(A_{\alpha}^{*}({\bf P})=A_{\alpha}(-{\bf P})\). 
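As an aside (our own illustration, not part of the original analysis), the moment-to-Lanczos pipeline of (6)-(9) is easy to run on a solvable toy example. The sketch below feeds the Gaussian moments \(\mu_{2n}=(2n-1)!!\) (all odd moments vanish) into the Hankel-determinant formula (7) and recovers the known answer \(b_{n}=\sqrt{n}\); the hard-sphere moments computed below would enter in exactly the same way.

```python
# Toy illustration (ours) of the Hankel-determinant formula (7):
# Gaussian moments mu_k = (k-1)!! for even k (0 for odd k) should give b_n = sqrt(n).
from fractions import Fraction
from math import sqrt

def double_factorial(k):                      # (k)!! with the convention (-1)!! = 1
    out = 1
    while k > 1:
        out, k = out * k, k - 2
    return out

moments = [double_factorial(k - 1) if k % 2 == 0 else 0 for k in range(24)]

def hankel_det(n):
    """Determinant of M_{ij} = mu_{i+j}, 0 <= i,j <= n, via exact Fraction elimination."""
    M = [[Fraction(moments[i + j]) for j in range(n + 1)] for i in range(n + 1)]
    det = Fraction(1)
    for c in range(n + 1):
        piv = next(r for r in range(c, n + 1) if M[r][c] != 0)
        if piv != c:
            M[c], M[piv] = M[piv], M[c]
            det = -det
        det *= M[c][c]
        for r in range(c + 1, n + 1):
            f = M[r][c] / M[c][c]
            M[r] = [M[r][j] - f * M[c][j] for j in range(n + 1)]
    return det

D = [hankel_det(n) for n in range(9)]         # D_n = b_1^{2n} b_2^{2n-2} ... b_n^2, cf. (7)
b = [sqrt(D[n] * (D[n - 2] if n >= 2 else 1) / D[n - 1] ** 2) for n in range(1, 9)]
print([round(x, 6) for x in b])               # -> 1.0, sqrt(2), sqrt(3), 2.0, ...
```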
Let us define the thermal wavelength \(\lambda\) of the system as follows: \[\lambda=\sqrt{\frac{2\pi\hbar^{2}}{mkT_{\alpha}}} \tag{16}\] where the temperature \(T_{\alpha}\) is related to the energy through the relation \(E_{\alpha}=\frac{3}{2}NkT_{\alpha}\). When the thermal wavelength \(\lambda\lesssim a\), then the high-lying energy eigenstates are expected to satisfy _Berry's conjecture_, which states that \(A_{\alpha}({\bf P})\) can be regarded as a Gaussian variable with the two-point function \[\left\langle A_{\alpha}({\bf P})A_{\beta}\left({\bf P^{\prime}}\right)\right\rangle_{\rm EE}=\delta_{\alpha\beta}\frac{\delta^{3N}\left({\bf P}+{\bf P^{\prime}}\right)}{\delta\left({\bf P}^{2}-{\bf P^{\prime 2}}\right)}. \tag{17}\] The subscript EE stands for _eigenstate ensemble_, the fictitious Gaussian ensemble the high-lying wavefunctions can be thought of as belonging to. The wavefunction in the momentum space is given by: \[\widetilde{\psi}_{\alpha}({\bf P})\equiv h^{-3N/2}\int_{D}d^{3N}X\psi_{\alpha}({\bf X})\exp(-i{\bf P}\cdot{\bf X}/\hbar) \tag{18}\] We will assume that we always work in the regime where Berry's conjecture holds (\(\lambda\lesssim a\)). Moreover, we will also assume that the density of the gas is low, that is, \(Na^{3}\ll L^{3}\). These assumptions give us enormous analytic control over the system. In particular, it is easy to see that the averaged two point functions are given by [20] \[\left\langle\widetilde{\psi}_{\alpha}^{*}({\bf P})\widetilde{\psi}_{\beta}\left({\bf P^{\prime}}\right)\right\rangle_{\rm EE}=\delta_{\alpha\beta}{\cal N}_{\alpha}^{2}h^{3N}\delta\left({\bf P}^{2}-2mE_{\alpha}\right)\delta_{D}^{3N}\left({\bf P}-{\bf P^{\prime}}\right) \tag{19}\] where \[\delta_{D}^{3N}({\bf K})\equiv h^{-3N}\int_{D}d^{3N}X\exp(i{\bf K}\cdot{\bf X}/\hbar). \tag{20}\] ### Krylov Complexity from Moments Consider the operator \(P_{1}\) that measures the magnitude of the momentum of one of the particles. The matrix elements of this operator in the energy eigenbasis are given by \[\left\langle E_{m}|P_{1}|E_{n}\right\rangle=\int d^{3}p_{1}d^{3}p_{2}\cdots d^{3}p_{N}\ |p_{1}|\ \widetilde{\psi}_{m}^{*}({\bf P})\widetilde{\psi}_{n}({\bf P}), \tag{21}\] where \(|p_{1}|\) is the magnitude of the momentum of one of the particles. Let us calculate the Krylov complexity of this operator by using the moments of the thermal two point function. Using (6), we get \[\mu_{2n}^{E}=\frac{1}{{\cal C}(E)}\int_{-2E}^{2E}d\omega\ \rho\left(E+\frac{\omega}{2}\right)\rho\left(E-\frac{\omega}{2}\right)\left|\left\langle E+\frac{\omega}{2}\left|P_{1}\right|E-\frac{\omega}{2}\right\rangle\right|^{2}\omega^{2n} \tag{22}\] Here \({\cal C}(E)\) is the normalization constant we defined in (11). The density of eigenstates \(\rho\) is given by [20] \[\rho(E)=\frac{1}{\Gamma(3N/2)E}\left(\frac{mL^{2}E}{2\pi\hbar^{2}}\right)^{\frac{3N}{2}}. \tag{23}\] Now let us calculate the average of the moments in the eigenstate ensemble: \[\left\langle\mu_{2n}^{E}\right\rangle_{\rm EE}=\frac{1}{{\cal C}(E)}\int_{-2E}^{2E}d\omega\ \rho_{0}(E,\omega)\left|\left\langle E+\frac{\omega}{2}\left|P_{1}\right|E-\frac{\omega}{2}\right\rangle\right|_{\rm EE}^{2}\omega^{2n} \tag{24}\] Here we have used the shorthand notation \(\rho_{0}(E,\omega)\) for the product of the densities of states. 
Using (21), we see that: \[\left|\left\langle E_{m}\left|P_{1}\right|E_{\ell}\right\rangle \right|_{\rm EE}^{2}=\int d^{3N}Pd^{3N}P^{\prime}\ |p_{1}||p_{1}^{\prime}|\left\langle\widetilde{\psi}_{m}^{*}({\bf P })\widetilde{\psi}_{\ell}({\bf P})\widetilde{\psi}_{\ell}^{*}({\bf P}^{\prime })\widetilde{\psi}_{m}({\bf P}^{\prime})\right\rangle_{\rm EE} \tag{25}\] The four-point function can be broken down into two point functions using Wick contractions: \[\left\langle\widetilde{\psi}_{m}^{*}({\bf P})\widetilde{\psi}_{ \ell}({\bf P})\widetilde{\psi}_{\ell}^{*}({\bf P}^{\prime})\widetilde{\psi}_{ m}({\bf P}^{\prime})\right\rangle_{\rm EE} =\left\langle\widetilde{\psi}_{m}^{*}({\bf P})\widetilde{\psi}_{ \ell}({\bf P})\right\rangle_{\rm EE}\left\langle\widetilde{\psi}_{\ell}^{*}( {\bf P}^{\prime})\widetilde{\psi}_{m}({\bf P}^{\prime})\right\rangle_{\rm EE} \tag{26}\] \[+\left\langle\widetilde{\psi}_{\ell}^{*}({\bf P})\widetilde{\psi} _{\ell}^{*}({\bf P}^{\prime})\right\rangle_{\rm EE}\left\langle\widetilde{ \psi}_{m}^{*}({\bf P})\widetilde{\psi}_{m}({\bf P}^{\prime})\right\rangle_{\rm EE}\] From (19), we can see that contracting two eigenfunctions will produce a delta function in its indices. The only terms in (26) which would contribute to the moment calculation are the ones without any \(\delta_{m\ell}\) factor. This is because (24) contains a factor \((E_{\ell}-E_{m})\), multtyling the four-point functions. Therefore, only the last term in (26) would contribute. Let us look at the integral of this term separately: \[\Phi_{m\ell}=\int d^{3N}Pd^{3N}P^{\prime}\ p_{1}p_{1}^{\prime} \left\langle\widetilde{\psi}_{\ell}({\bf P})\widetilde{\psi}_{\ell}^{*}({\bf P }^{\prime})\right\rangle_{\rm EE}\left\langle\widetilde{\psi}_{m}^{*}({\bf P}) \widetilde{\psi}_{m}({\bf P}^{\prime})\right\rangle_{\rm EE} \tag{27}\] Using (19), we see that \[\Phi_{m\ell}={\cal N}_{m}^{2}{\cal N}_{\ell}^{2}(Lh)^{3N}\int d^{3N}Pd^{3N}P^ {\prime}\ p_{1}p_{1}^{\prime}\delta\left({\bf P}^{2}-2mE_{\ell}\right)\delta \left({\bf P}^{\prime 2}-2mE_{m}\right)\delta_{D}^{3N}\left({\bf P}-{\bf P}^{ \prime}\right) \tag{28}\] where we have used [20] \[(\delta_{D}^{3N}\left({\bf P}-{\bf P}^{\prime}\right))^{2}=(L/h)^{3N}\delta_{ D}^{3N}\left({\bf P}-{\bf P}^{\prime}\right). \tag{29}\] Now let us look at the \(m=\ell\) component. This corresponds to the case where \(\omega=0\) and \(E_{m}=E_{\ell}=E\). In the low density limit, we can essentially replace \(\delta_{D}^{3N}\left({\bf P}-{\bf P}^{\prime}\right)\) with a dirac delta. This gives us \[\Phi_{mm}={\cal N}^{4}(Lh)^{3N}\int d^{3N}P\ p_{1}^{2}\delta\left({ \bf P}^{2}-2mE\right) \tag{30}\] Choosing the normalization constant as in [20] \[\mathcal{N}^{-2}\equiv\mathcal{N}_{i}^{-2}=L^{3N}\frac{(2\pi mE)^{\frac{3N}{2}}}{ \Gamma(3N/2)E}, \tag{31}\] we get \[\begin{split}\Phi_{mm}&=\mathcal{N}^{2}h^{3N}\int d ^{3}p_{1}\ p_{1}^{2}\left(2\pi mkT\right)^{-3/2}e^{-\mathbf{p}_{1}^{2}/2mkT}\\ &=4\pi\mathcal{N}^{2}h^{3N}\int dp_{1}\ p_{1}^{4}\left(2\pi mkT \right)^{-3/2}e^{-\mathbf{p}_{1}^{2}/2mkT}\\ &=3mkT\mathcal{N}^{2}(h)^{3N}\\ &=(h/L)^{3N}3mkT\frac{\Gamma(3N/2)(2mE)}{\left(2m\pi E\right)^{3N /2}}\\ &\equiv\Phi_{E}\end{split} \tag{32}\] Now let us return to the non-diagonal elements of \(\Phi_{m\ell}\). From our definition (20), it is easy to see that \(\delta_{D}^{3N}\left(\mathbf{P}-\mathbf{P}^{\prime}\right)\) is a sharply peaked function that is zero almost everywhere. 
Let us choose this function to be a Gaussian distribution as in [20]: \[\delta_{D}^{3N}\left(\mathbf{P}-\mathbf{P}^{\prime}\right)\simeq(L/h)^{3N}\exp \left[-\left(\mathbf{P}-\mathbf{P}^{\prime}\right)^{2}L^{2}/4\pi\hbar^{2}\right] \tag{33}\] This gives us [20] \[\Phi_{ij}\simeq\Phi_{ii}\exp\left[-m\left(E_{i}-E_{j}\right)^{2}L^{2}/8\pi \hbar^{2}E_{i}\right] \tag{34}\] In the notation \(E_{i}=E+\frac{\omega}{2}\) and \(E_{j}=E-\frac{\omega}{2}\), the expression simplifies to \[\Phi_{ij}\simeq\Phi_{E}\exp\left[-\frac{m\omega^{2}L^{2}}{8\pi\hbar^{2}E}\right] \tag{35}\] Plugging in the expressions, we find that the following integral gives the moments: \[\left\langle\mu_{2n}^{E}\right\rangle_{\text{EE}}=\frac{1}{\mathcal{C}(E)} \int_{-2E}^{2E}d\omega\ \rho_{0}\left(E,\omega\right)\omega^{2n}\Phi_{E}\exp\left[-\frac{m\omega^{2}L^{ 2}}{8\pi\hbar^{2}E}\right] \tag{36}\] #### 2.2.1 A Tale of Two Saddles Using a saddle point approximation, we will compute the moment integral (36). Let us assume that the saddle point \(\omega^{*}\) satisfies the relation \[\omega^{*}\ll 2E. \tag{37}\] We can work out the saddle point equation up to the leading order in \(\omega/2E\). This gives us \[\frac{2n}{\omega_{*}}-\frac{m\omega^{*}L^{2}}{4\pi\hbar^{2}E}=0 \tag{38}\] Solving the above equation, we find that the saddle point is located at \[\omega_{*}=\sqrt{\frac{8\lambda^{2}E^{2}n}{3L^{2}N}} \tag{39}\] where \(\lambda\) is the thermal wavelength at the energy \(E\). We can substitute the saddle point value in the integrand to obtain the moments: \[\left\langle\mu_{2n}^{E}\right\rangle_{\rm EE}\simeq\frac{1}{ \mathcal{C}(E)}\rho_{0}\left(E,\omega_{*}\right)\ \omega_{*}^{2n}\ \Phi_{E}\ \exp\left[-\frac{m\omega_{*}^{2}L^{2}}{8\pi\hbar^{2}E}\right] \tag{40}\] When \(n\) is sufficiently large, we can calculate the Lanczos coefficients using the relation [1; 9] \[\mu_{2n}\sim\left(b_{n}\right)^{2n}e^{o(n)} \tag{41}\] We find that \[\left\langle b_{n}^{E}\right\rangle_{\rm EE}\sim\sqrt{\frac{8 \lambda^{2}E^{2}n}{3L^{2}N}}. \tag{42}\] This behavior is termed "Lanczos ascent" in literature [27]. However, the coefficients transition to a different behavior when we look at large moments. We can see this by noting that when \(n\gg\frac{3NL^{2}}{2\lambda^{2}}\), \(\omega_{*}\gg 2E\). Since this violates our assumption (37), (39) ceases to be a saddle point of the moment integral. For large \(n\), (39) gets replaced by a new saddle point located at \(\omega_{*}\simeq 2E\). We can obtain this saddle point mechanically by noting that if2 Footnote 2: Note that the \(\frac{\lambda}{L}\) is a tiny number. \[\left|\frac{\omega}{2}-E\right|\ll\frac{2\lambda^{2}E}{L^{2}}, \tag{43}\] the saddle point equation is given by \[\frac{d}{d\omega}\left(\rho_{0}\left(E,\omega\right)\omega^{2n} \right)=0. \tag{44}\] Solving the equation, we find that the new saddle point is located at \[\omega_{*}=2E\sqrt{\frac{n}{\frac{3N-2}{2}+n}}. \tag{45}\] When \(n\gg\frac{3NL^{2}}{2\lambda^{2}}\), \[\omega_{*}\simeq 2E. \tag{46}\] It is easy to verify that (45) satisfies (43). Therefore, the saddle point of the integral changes when we look at higher moments. This has interesting consequences for the behavior of Lanczos coefficients. In particular, we can use (41) to see that: \[\left\langle b_{n}^{E}\right\rangle_{\text{EE}}\sim 2E. \tag{47}\] The saturation of Lanczos coefficients is referred to as "Lanczos plateau". 
This gives us \[\left\langle b_{n}^{E}\right\rangle_{\text{EE}}\sim\begin{cases}\sqrt{\frac{8 \lambda^{2}E^{2}n}{3L^{2}N}},&n<\frac{3NL^{2}}{2\lambda^{2}}\\ E,&n>\frac{3NL^{2}}{2\lambda^{2}}\end{cases} \tag{48}\] From [9; 28; 29], we can see that these Lanczos coefficients result in an initial _scrambling phase_ where the Krylov complexity grows quadratically. Following the scrambling phase, K-complexity switches to linear growth. Working out the growth rates and reinstating factors of \(\hbar\), we find that the Krylov complexity is given by \[\left\langle K^{E}(t)\right\rangle_{\text{EE}}\sim\begin{cases}\frac{8 \lambda^{2}E^{2}}{3L^{2}N\hbar^{2}}t^{2},&t<t_{S}\\ \frac{E}{\hbar}\ t,&t>t_{S}\end{cases} \tag{49}\] where \(t_{S}\) is the scrambling time. We can determine \(t_{S}\) by noting that the dynamics of the operator can be thought of as a particle moving on a one-dimensional semi-infinite chain. The sites on the chain are labeled by \(n\), and Krylov complexity (8) measures the average position \(\left\langle n\right\rangle\). The saddle points change when \(n\sim\frac{3NL^{2}}{2\lambda^{2}}\equiv n_{S}\). Figure 1: The figure on the left shows the growth of the Lanczos coefficients w.r.t \(n\). There is an initial scrambling phase (shaded in green) where \(b_{n}^{E}\) grows as \(\sqrt{n}\), and then it saturates to a constant value. The figure on the right shows the corresponding Krylov Complexity growth. During the scrambling phase, the K-complexity grows quadratically. Then it transitions into a linear growth proportional to the average energy of the box. Therefore, scrambling time is the amount of time Krylov complexity takes to reach \(n_{S}\). This gives us \[t_{S}\simeq\frac{\beta L^{2}\hbar}{2\lambda^{2}}\text{ where }\beta=\frac{1}{k_{B}T} \tag{50}\] Using (10), we can compute the thermal K-complexity in the eigenstate ensemble: \[\left\langle K_{th}\right\rangle_{\text{EE}}=\frac{\int_{0}^{\infty}dEe^{- \beta E}\mathcal{C}(E)\left\langle K^{E}(t)\right\rangle_{\text{EE}}}{\int_{0 }^{\infty}dEe^{-\beta E}\mathcal{C}(E)} \tag{51}\] It is easy to see that we get \[\left\langle K_{th}\right\rangle_{\text{EE}}\sim\begin{cases}\frac{8\lambda_{ 4}^{2}E_{4}^{2}}{3L^{2}N\hbar^{2}}t^{2},&t<t_{S}\\ \frac{E_{*}}{\hbar}\ t,&t>t_{S}\end{cases} \tag{52}\] where \(E_{*}\) and \(\lambda_{*}\) are the average energy and thermal wavelength of the box. Plotting these functions, we obtain figure 1. ### Slowly Leaking Gas Now let us look at the slowly leaking gas model used in [17]. Consider two cubic boxes sharing a common side as in figure 2. Let us assume that the left box has \(N\) hard spheres while the right box is empty. At time \(t=0\), we make a small hole in their shared wall so that the gas leaks slowly into the right box. We assume that the gas is leaking so slowly that there exists a time scale over which both the boxes have separately equilibrated. We will refer to this period as an _epoch_. During each epoch, the number of particles in each box, denoted by \(N_{L,R}\), remains roughly constant. Since we are working in the semiclassical limit, we can always use either \(N_{L}\) or \(N_{R}\) to characterize each epoch. Figure 2: Consider two cubic boxes in contact with each other. We fill the box on the left with \(N\) hard spheres. Let us assume that we are in the semi-classical limit where we can localize particles. At time \(t=0\), we poke a small hole on their common wall so that the gas leaks slowly into the right box. 
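Before analyzing the two-box system, note that the single-box behavior summarized in (48)-(49) can be checked directly by integrating the hopping equation (9) on a truncated Krylov chain. The sketch below uses an illustrative piecewise profile, \(b_{n}\propto\sqrt{n}\) up to \(n_{S}\) followed by a plateau; the numbers are placeholders rather than the physical values \(8\lambda^{2}E^{2}/3L^{2}N\) of the gas. The resulting \(K_{E}(t)\) crosses over from \(\sim t^{2}\) growth to linear growth, as advertised.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative sketch (placeholder parameters, not the physical values of the gas):
# Lanczos ascent b_n = sqrt(n) for n < n_S, then a plateau, in the spirit of Eq. (48).
n_S, n_tot = 40, 400
n = np.arange(1, n_tot + 1)
b = np.where(n < n_S, np.sqrt(n), np.sqrt(n_S))

def rhs(t, phi):
    # Eq. (9): d/dt phi_n = b_{n+1} phi_{n+1} - b_n phi_{n-1}  (hard-wall truncation at n_tot)
    dphi = np.zeros_like(phi)
    dphi[:-1] += b[:-1] * phi[1:]
    dphi[1:] -= b[:-1] * phi[:-1]
    return dphi

phi0 = np.zeros(n_tot)
phi0[0] = 1.0                     # initial condition phi_n(0) = delta_{n0}
ts = np.linspace(0.0, 20.0, 81)
sol = solve_ivp(rhs, (0.0, ts[-1]), phi0, t_eval=ts, rtol=1e-8, atol=1e-10)

# Eq. (8): K(t) = sum_n n |phi_n(t)|^2 ; expect ~t^2 while K << n_S, then linear growth.
K = np.einsum("n,nt->t", np.arange(n_tot), sol.y ** 2)
for t_val, k_val in zip(ts[::16], K[::16]):
    print(f"t = {t_val:5.1f}   K(t) = {k_val:8.2f}")
```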
Let us denote the instantaneous number of particles in the boxes by \(N_{L}\) and \(N_{R}\). #### 2.3.1 Krylov Complexity during an epoch Within an epoch, there is no net exchange of particles between the boxes. Consequently, when examining the full Hamiltonian, it becomes evident that during each epoch, the Hamiltonian takes on the following factorized structure: \[H\simeq H_{L}\otimes\mathds{1}_{R}+\mathds{1}_{L}\otimes H_{R} \tag{53}\] Consider an operator \(P_{1}\), which measures the momentum of one of the particles in the left box. The operator has the following form \[P_{1}=P_{1,L}\otimes\mathds{1}_{R}. \tag{54}\] Here \(P_{1,L}\) is an operator that acts only on the left box. Now let us look at the Krylov complexity of this operator. If we are computing the Krylov complexity w.r.t to the entire system, we will get the results of the previous section - a scrambling phase followed by linear growth. This is because the left and right boxes form a closed chaotic system whose Krylov complexity is expected to be universal. However, we are after the Krylov complexity of the left box alone, which is an open quantum system. The evolution of an operator \(O_{L}\) acting on the left box is given by the master equation: \[\dot{O_{L}}(t)=\frac{i}{\hbar}\left(\mathcal{L}_{L}+\mathcal{L}_ {\mathcal{D}}\right) \tag{55}\] Here, \(\mathcal{L}_{L}\) denotes the Liouvillian of the left box, and \(\mathcal{L}_{D}\) represents a dissipative term arising from the interaction between the two boxes. The dissipative term makes the evolution non-Hermitian, and there is no consensus on extending Krylov complexity calculations to open quantum systems [22; 23; 24]. However, the "effective" Hamiltonian has no interaction term during an epoch. Therefore, the evolution of an operator \(O_{L}\) would be controlled only by \(\mathcal{L}_{L}\), or equivalently, the Hamiltonian of the left box. This allows us to carry over our definition of moments (22) to the two-box system: \[\left(\mu_{2n}^{E}\right)_{L}=\frac{1}{\mathcal{C}(E_{L})}\int_{ -2E_{L}}^{2E_{L}}d\omega\ \rho_{0}(E_{L},\omega_{L})\left|\left\langle E_{L}+\frac{\omega_{L}}{2}\left| O_{L}\right|E_{L}-\frac{\omega_{L}}{2}\right\rangle\right|^{2}\omega_{L}^{2n} \tag{56}\] where \(E_{L}\pm\frac{\omega_{L}}{2}\) are the energy eigenvalues of the left box. Now let us look at the operator \(P_{1}\). The operator acting on the left box is given by tracing out the degrees of freedom of the right box. This gives us \(\mathrm{Tr}_{R}(P_{1})\). The moments of this operator are given by \[\left(\mu_{2n}^{E}\right)_{L}=\frac{1}{\mathcal{C}(E_{L})}\int_{ -2E_{L}}^{2E_{L}}d\omega\ \rho_{0}(E_{L},\omega_{L})\left|\left\langle E_{L}+\frac{\omega_{L}}{2}\left| \mathrm{Tr}_{R}(P_{1})\right|E_{L}-\frac{\omega_{L}}{2}\right\rangle\right|^{2 }\omega_{L}^{2n} \tag{57}\] Using (54), we get \[\left(\mu_{2n}^{E}\right)_{L}=\frac{1}{\mathcal{C}(E_{L})}\int_{ -2E_{L}}^{2E_{L}}d\omega\ \rho_{0}(E_{L},\omega_{L})\left|\left\langle E_{L}+\frac{\omega_{L}}{2}\left| P_{1,L}\right|E_{L}-\frac{\omega_{L}}{2}\right\rangle\right|^{2}\omega_{L}^{2n} \tag{58}\] which is precisely the same expression we had in (2.22). Therefore, the calculations in the previous section will go through - The K-complexity will grow linearly, following a quadratic scrambling phase. #### 2.3.2 Stitching Together Epochs From (2.52), we can see that the late time linear growth rate of Krylov complexity during an epoch is given by the average energy of the left box, which we will denote by \(E_{L}\). 
Using \(E=\frac{3}{2}Nk_{B}T\), we get \[\frac{dK_{th}(t)}{dt}\sim\frac{E_{L}}{\hbar}=\frac{3}{2\hbar}N_{L}k_{B}T \tag{2.59}\] Now, we will compute the time dependence of \(N_{L}\). Suppose the right box is sufficiently large for the gas to leak out completely. If the area of the hole is given by \(A\), the leakage rate is given by [30] \[\frac{dN}{dt}=-\frac{A}{2L^{3}}\sqrt{\frac{kT}{m}}N. \tag{2.60}\] We can immediately integrate the above equation if we assume that the temperature of the left box remains constant. This gives us \[\begin{split} N(t)&=Ne^{-\frac{A}{2L^{3}}\sqrt{ \frac{kT}{m}}t}\\ &\equiv Ne^{-\frac{t}{t_{L}}}\end{split} \tag{2.61}\] Figure 3: The figure shows the thermal Krylov complexity of a slowly leaking gas as a function of time. The complexity continues to rise, but eventually levels off. The inset figures show how K-complexity grows during two different epochs. As we move into future epochs, we observe a decrease in the late time growth rate. where \(t_{L}\) is the leakage time of the system. Treating \(N(t)\) as an instantaneous value, we can integrate (59) to obtain \[K_{th}(t)\sim\frac{N\sqrt{mkTL}^{3}}{A\hbar}\left(1-e^{-\frac{t}{t_ {L}}}\right) \tag{62}\] Plotting the above function, we get figure 3. Krylov complexity keeps increasing and eventually levels off when \(t=t_{L}\). ## 3 Holographic Complexity of an Evaporating Black Hole In this section, we will study the holographic complexity of a slowly evaporating black hole using the Complexity=Volume prescription [7; 8]. To make the calculations tractable, let us model the black hole by patching together a sequence of \(k\) static Schwarzschild spacetimes across negative energy null shock waves. The metric of the \(d+1\) dimensional black hole is then given by \[ds^{2}=-F(r,v)dv^{2}+2dvdr+r^{2}d\Omega_{d-1} \tag{63}\] where \[F(r,v)=1-\frac{f(v)}{r^{d-2}}. \tag{64}\] We will choose \(f\) to have the following profile \[f(v)=\begin{cases}\omega_{1}^{d-2},&v<v_{1}\\ \cdots&\\ \omega_{i}^{d-2},&v_{i-1}<v<v_{i}\\ \cdots&\\ 0,&v_{k-1}<v<v_{k}\end{cases} \tag{65}\] The mass of each patch is given by \[M_{i}=\frac{(d-1)\Omega_{d-1}}{16\pi G_{N}}\omega_{i}^{d-2}, \tag{66}\] where \[M_{1}>M_{2}>\cdots>0. \tag{67}\] For the patched-up spacetime to be a good approximation to an evaporating black hole, we will assume the width of each patch to be much smaller than the time scales at which the black hole mass changes considerably. Moreover, we will also assume that the width is larger than the scrambling time of the black hole. Each patch corresponds to a period where the black hole is effectively not evaporating. Therefore, we will adopt the terminology from our previous section and refer to each patch as an epoch. In particular, the patch between \(v=v_{i-1}\) and \(v=v_{i}\) shock waves will be labeled as the \(i\)-th epoch. During an epoch, the black hole has a constant mass, and we can rewrite \(F(r,v)\) as \(F_{i}(r)\). ### Penrose Diagram To understand the structure of the spacetime, it is instructive to draw its Penrose diagram. To simplify the discussion, we will restrict ourselves to \(3+1\)-dimensions in this subsection. 
During an epoch, the tortoise coordinate is given by \[r_{i}^{*}(r)=\int^{r}\frac{dr^{\prime}}{F_{i}(r^{\prime})}=r+2G_{N}M_{i}\log\left(\Big{|}\frac{r}{2G_{N}M_{i}}-1\Big{|}\right) \tag{3.6}\] The corresponding outgoing Eddington-Finkelstein coordinates \(u_{i}\) are defined as follows \[u_{i}\equiv v-2r_{i}^{*}(r) \tag{3.7}\] It is easy to see that \(u\) is discontinuous across the boundaries of the epochs. Therefore, if we go along a continuous curve, the coordinate \(u\) has a "jump" in its value as soon as we cross a shock wave. Consequently, employing identical \(u\) and \(v\) coordinates throughout all epochs would render continuous curves discontinuous in the corresponding Penrose diagram. This discontinuity is the cost we have to pay to keep the Penrose diagram undeformed [31; 32]. Figure 4: Penrose diagram of the black hole spacetime when there are three epochs. We will assume the mass of each epoch to satisfy the relation \(M_{1}>M_{2}>M_{3}\neq 0\). The horizon is indicated by the 45-degree line. The shock waves are marked by the red lines. A spacelike surface, indicated by the teal color, will be disconnected in the diagram. The end points of these disconnected surfaces follow the ordering (3.10). During an epoch, the outgoing null Kruskal coordinate \(U_{i}\) can be defined as follows [33]: \[U_{i}=\begin{cases}-e^{-\frac{u_{i}}{4G_{N}M}},&\text{Outside the horizon}\\ e^{-\frac{u_{i}}{4G_{N}M}},&\text{Inside the horizon}\end{cases} \tag{3.8}\] Since \(v\) is globally defined, we can define the ingoing null Kruskal coordinate \(V\) everywhere as \[V=e^{\frac{v}{4G_{N}M}}. \tag{3.9}\] Using the definition of Kruskal coordinates, we can see that the horizon of the epochs will always be at \(U_{i}=0\). Now let us figure out what happens to a continuous curve as it crosses a shock wave. We will denote the radial coordinate of the surface at the location of the shock wave by \(r_{S}\). Using (3.7) and (3.8), it is easy to verify that when \(r_{S}\) is either in the interior or sufficiently far away from the horizon, we have the relation \[U_{i}(r_{S})>U_{i+1}(r_{S}) \tag{3.10}\] Now let us draw the Penrose diagram of the spacetime. We will use the same \(U\) and \(V\) coordinates across all the epochs. This gives us figure 4. A connected surface will be discontinuous in this diagram. We can locate the end points of these disconnected pieces by using (3.10). ### Complexity=Volume We can study the growth of complexity using the Complexity=Volume conjecture [7; 8]. Let us assume that there are only three epochs, with masses satisfying the relation \(M_{1}>M_{2}>M_{3}\neq 0\). We will see that extending our results to an arbitrary number of epochs is straightforward. The Penrose diagram of this spacetime is given in figure 4. Consider spherically symmetric spacelike codimension-1 surfaces anchored onto a cutoff surface in the asymptotic region. We will assume that the cutoff surface is at \(r=r_{\infty}\) and the boundary anchoring time is denoted by \(t\). The volume of this surface is given by \[\mathcal{V}=\Omega_{d-1}\int d\lambda r^{d-1}\sqrt{-F(r,v)\dot{v}^{2}+2\dot{v}\dot{r}} \tag{3.11}\] Here \(\Omega_{d-1}\) is the dimensionless area of the \((d-1)\)-dimensional unit sphere. Let us rewrite the volume integral as the following summation \[\mathcal{V}=\sum_{i}\mathcal{V}_{i} \tag{3.12}\]
where \({\cal V}_{i}\) is the volume of the portion of the surface located within the \(i\)-th epoch. We can then express each of these terms as follows: \[{\cal V}_{i}=\Omega_{d-1}\int d\lambda r^{d-1}\sqrt{-F_{i}(r)\dot{v}^{2}+2\dot{v}\dot{r}}\equiv\Omega_{d-1}\int d\lambda{\cal L}_{i} \tag{3.13}\] We can see that \({\cal L}_{i}\) is independent of \(v\). Therefore, there is a conserved quantity associated with the integral, which we will denote by \(E_{i}\): \[E_{i}=\frac{\partial{\cal L}_{i}}{\partial\dot{v}}=\frac{r^{d-1}(\dot{r}-F_{i}\dot{v})}{\sqrt{-F_{i}\dot{v}^{2}+2\dot{v}\dot{r}}} \tag{3.14}\] The volume integral is reparametrization invariant, allowing us to choose \[r^{d-1}\sqrt{-F_{i}\dot{v}^{2}+2\dot{v}\dot{r}}=1 \tag{3.15}\] The maximal volumes are given by extremizing the action in (3.11). This gives us the following equations of motion: \[\begin{split} E_{i}&=r^{2(d-1)}(\dot{r}-F_{i}(r)\dot{v})\\ r^{2(d-1)}\dot{r}^{2}&=F_{i}(r)+r^{-2(d-1)}E_{i}^{2}\end{split} \tag{3.16}\] The part of the surface within an epoch can have a _turning point_ where \(\dot{r}\) vanishes. We will denote this point by \(r_{\rm i,min}\). We will see later that it is _not_ necessary for the surface to have a turning point in an epoch. However, these turning points will be crucial in calculating late time growth rates during an epoch. To characterize various features of the extremal volume surfaces, it is convenient to define an effective potential as follows: \[V_{i}(r)=F_{i}(r)r^{2(d-1)}+E_{i}^{2}. \tag{3.17}\] In particular, we can obtain the turning point \(r_{\rm i,min}\) from the zero of the effective potential \[V_{i}(r_{\rm i,min})=0\quad\implies\quad F_{i}(r_{\rm i,min})r_{\rm i,min}^{2(d-1)}+E_{i}^{2}=0 \tag{3.18}\] Using the equations of motion, we can rewrite the volume integral as follows: \[{\cal V}=\Omega_{d-1}\sum_{i}\int dr\frac{r^{2(d-1)}}{\sqrt{F_{i}(r)r^{2(d-1)}+E_{i}^{2}}} \tag{3.19}\] The Complexity=Volume proposal suggests that the complexity of the black hole at time \(t\) is given by the volume of these extremal surfaces [8]: \[{\cal C}_{i}=\frac{{\cal V}(t)}{G_{N}\omega_{i}} \tag{3.20}\] where \(\omega_{i}\) is the horizon radius during the \(i\)-th epoch. **First Epoch**: Now let us look at the case where the boundary anchoring point is in the first epoch (Refer figure 5). This case reduces to the calculation of extremal volumes in a single-sided Schwarzschild black hole. Since we are in the first epoch, there will only be one term in the sum (3.19). The surface will always have a turning point. At late times, \(r_{1,\text{min}}\) approaches a constant radial surface in the interior of the black hole, located at the critical point of effective potential [8; 34]. We will refer to this surface as the _accumulation surface_. The location of this surface is given by \[V_{1}^{\prime}(R_{1,\text{min}})=0\implies R_{1,\text{min}}=\omega_{1}\left(\frac{d}{2d-2}\right)^{\frac{1}{d-2}}. \tag{3.21}\] Since the width of each epoch is much larger than the scrambling time, the turning points will necessarily approach the accumulation surface at late times. Now let us calculate the growth rate of (3.11) as a function of the boundary anchoring time.
From (3.16), we can see that \[t+r_{\infty}^{*}-r^{*}\left(r_{1,\text{min}}\right)=\int_{v_{1, \text{min}}}^{v_{\infty}}dv=\int_{r_{1,\text{min}}}^{r_{\infty}}dr\left[ \frac{-E_{1}}{F_{1}(r)\sqrt{F_{1}(r)r^{2(d-1)}+E_{1}^{2}}}+\frac{1}{F_{1}(r)}\right] \tag{3.22}\] Figure 5: The left (right) figure shows the early (left) time behavior of the extremal volume surface when the boundary anchoring points are in the first epoch. We have not included the cutoff surface in the diagrams to avoid cluttering. The 45-degree line depicts the horizon. The grey dashed lines in the interior of the black hole correspond to the accumulation surfaces (Ref (3.21)) of the epochs. Using (3.22), we can rewrite the volume integral (3.19) as follows: \[\frac{\mathcal{V}}{\Omega_{d-1}}=\int_{r_{1,\min}}^{r_{\max}}dr \left[\frac{\sqrt{F_{1}(r)r^{2(d-1)}+E_{1}^{2}}}{F_{1}(r)}-\frac{E_{1}}{F_{1}(r )}\right]+E_{1}\left(t+r_{\infty}^{*}-r^{*}\left(r_{1,\min}\right)\right) \tag{3.23}\] Taking a derivative w.r.t \(t\) and using Leibniz integral rule, we find the simple relation \[\frac{d\mathcal{V}}{dt} =\Omega_{d-1}E_{1}. \tag{3.24}\] \[=\Omega_{d-1}\sqrt{-F_{1}(r_{1,\min})}r_{1,\min}^{d-1}\] where we have used (3.18) to rewrite the energy in terms of the metric components. At late times, \(r_{1,\min}\) approaches \(R_{1,\min}\), a constant. Using (3.21), we find that \[\frac{d\mathcal{V}}{dt}=c_{d}\Omega_{d-1}\omega_{1}^{d-1} \tag{3.25}\] where \[c_{d}=\sqrt{\frac{d-2}{d}}\left(\frac{d}{2d-2}\right)^{\frac{d-1 }{d-2}} \tag{3.26}\] Therefore, the late time complexity growth during the first epoch is given by \[\frac{d\mathcal{C}_{1}}{dt}=\frac{1}{G_{N}\omega_{1}}\frac{d \mathcal{V}}{dt}=\frac{16\pi c_{d}}{d-1}M_{1} \tag{3.27}\] **Second epoch**: Now let us look at the case where the boundary anchoring point is in the second epoch (see figure 6). Following the discussion in section 3.1, the extremal volume surfaces will be disconnected in the Penrose diagram. The endpoints satisfy the ordering relation (3.10). We will label the radial coordinate of the point where the extremal surface intersects the epoch's boundary by \(r_{S}\). The volume functional (3.19) is the sum of two terms: \[\mathcal{V}=\Omega_{d-1}\int_{r_{1,\min}}^{r_{S}}dr\frac{r^{2(d- 1)}}{\sqrt{F_{1}(r)r^{2(d-1)}+E_{1}^{2}}}+\Omega_{d-1}\int_{r_{S}}^{r_{\infty }}dr\frac{r^{2(d-1)}}{\sqrt{F_{2}(r)r^{2(d-1)}+E_{2}^{2}}} \tag{3.28}\] The surface will always have a turning point in the first epoch. Moreover, the turning point will be at the accumulation surface of the first epoch \(R_{1,\min}\). Three possibilities arise when we consider the turning point of the second epoch. The surface will not have a turning point during the very early stages. However, as boundary anchoring time increases, the surface will develop a turning point in the interior of the black hole. At late times, the turning point \(r_{2,\min}\) will approach the accumulation surface of the epoch, given by \[V_{2}^{\prime}(R_{2,\min})=0\implies R_{2,\min}=\omega_{2}\left( \frac{d}{2d-2}\right)^{\frac{1}{d-2}}. \tag{3.29}\] Now let us calculate the growth rate of these volumes. 
As in the previous section, we have \[v_{1}-r^{*}\left(r_{1,\text{min}}\right)=\int_{v_{1,\text{min}}}^{v_{1}}dv=\int_{r _{1,\text{min}}}^{r_{S}}dr\left[\frac{-E_{1}}{F_{1}(r)\sqrt{F_{1}(r)r^{2(d-1)}+ E_{1}^{2}}}+\frac{1}{F_{1}(r)}\right] \tag{3.30}\] and \[t+r_{\infty}^{*}-v_{1}=\int_{v_{1}}^{v_{\infty}}dv=\int_{r_{S}}^{r_{\infty}} dr\left[\frac{-E_{2}}{F_{2}(r)\sqrt{F_{2}(r)r^{2(d-1)}+E_{2}^{2}}}+\frac{1}{F_{2}(r)}\right] \tag{3.31}\] This gives us \[\begin{split}\frac{\mathcal{V}}{\Omega_{d-1}}&= \int_{r_{1,\text{min}}}^{r_{S}}dr\left[\frac{\sqrt{F_{1}(r)r^{2(d-1)}+E_{1}^{2 }}}{F_{1}(r)}-\frac{E_{1}}{F_{1}(r)}\right]+E_{1}\left(v_{1}-r^{*}\left(r_{1, \text{min}}\right)\right)\\ &+\int_{r_{S}}^{r_{\text{max}}}dr\left[\frac{\sqrt{F_{2}(r)r^{2( d-1)}+E_{2}^{2}}}{F_{2}(r)}-\frac{E_{2}}{F_{2}(r)}\right]+E_{2}\left(t+r_{ \infty}^{*}-v_{1}\right)\end{split} \tag{3.32}\] Taking a time derivative w.r.t \(t\), we get \[\frac{1}{\Omega_{d-1}}\frac{d\mathcal{V}}{dt} =E_{2} \tag{3.33}\] \[+\frac{dr_{S}}{dt}\left[\frac{\sqrt{F_{1}(r)r^{2(d-1)}+E_{1}^{2}} }{F_{1}(r)}-\frac{E_{1}}{F_{1}(r)}-\frac{\sqrt{F_{2}(r)r^{2(d-1)}+E_{2}^{2}}} {F_{2}(r)}-\frac{E_{2}}{F_{2}(r)}\right]_{r=r_{S}}^{\text{(\ref{eq:v_1})}}\] Figure 6: The left (right) figure shows the early (left) time behavior of the extremal volume surface when the boundary anchoring points are in the second epoch. The extremal surface is disconnected, and the endpoints of the surface are located using the relation (3.10). rom (3.16), it is easy to see that the term in the second line is proportional to \(\dot{v}_{1}(r_{S})-\dot{v}_{2}(r_{S})\). Since \(v\) is continuous across each epoch, this function vanishes, and we get the simple expression: \[\frac{d\mathcal{V}}{dt}=\Omega_{d-1}E_{2}=\Omega_{d-1}\sqrt{-F_{2}(r_{2,\min})}r _{2,\min}^{d-1} \tag{3.34}\] At late times, \(r_{2,\min}\) approaches the accumulation surface (3.29). Therefore, the late time complexity growth during the second epoch is given by \[\frac{d\mathcal{C}_{2}}{dt}=\frac{1}{G_{N}\omega_{2}}\frac{d\mathcal{V}}{dt} \simeq\frac{16\pi c_{d}}{d-1}M_{2} \tag{3.35}\] **i-th epoch**: It is easy to extend the results to the \(i\)-th epoch. Complexity will undergo an initial transitional phase, after which it will settle to a linear growth characterized by the growth rate: \[\frac{d\mathcal{C}_{i}}{dt}\simeq\frac{16\pi c_{d}}{d-1}M_{i} \tag{3.36}\] Let us look at the continuum limit, where the width of each epoch goes to zero. This allows us to replace \(M(i)\) by \(M(t)\), which is the instantaneous mass of the black hole. Let us assume that we are \(3+1\)-dimensions. If the black hole is a perfect black body that satisfies the Stefan-Boltzmann law, then we have [35] \[M(t)=M_{0}\left(1-\frac{t}{t_{E}}\right)^{1/3} \tag{3.37}\] where \(t_{E}=\frac{5120\pi G_{N}^{2}M_{0}^{3}}{\hbar c^{4}}\) is the lifetime of the black hole. Integrating the expression, Figure 7: Holographic complexity of an evaporating black hole obtained by taking the continuum limit of equation (3.36). Here \(t_{E}\) is the lifetime of the black hole. we get \[\boxed{C(t)=2\sqrt{3}\pi M_{0}t_{E}\left(1-\left(1-\frac{t}{t_{E}}\right)^{4/3} \right)} \tag{3.38}\] Plotting this function, we get figure 7. ## 4 Discussion Let us briefly review the calculations in section 2. In conjunction with Berry's conjecture, the slow leakage assumption played a crucial role in rendering the Krylov complexity calculation analytically tractable. At first glance, the former assumption might appear limiting. 
However, slow leakage is _required_ if one wants to use intensive quantities like temperature at every instant of the process. The appearance of an ensemble-averaged semiclassical description in the context of black holes can be attributed to the existence of such quasi-static equilibriums [17]. Working out details, we saw that the Krylov complexity had the following behavior: * During an epoch, complexity goes through a scrambling phase and then transitions to linear growth. * Complexity keeps increasing even as we cross the boundary of each epoch. However, the late time linear growth rate decreases with each successive epoch. * When the gas has completely leaked out of the box, complexity levels off. We claim that these results carry over to any chaotic quasi-static open quantum system if the operator under consideration satisfies ETH. Let us briefly outline this calculation by examining a system interacting with its environment. Consider an operator \(O\) which satisfies the ETH ansatz. The off-diagonal elements of this operator are given by \[\left\langle E_{i}\left|O\right|E_{j}\right\rangle\approx F\left(E,\omega \right)R_{ij} \tag{4.1}\] where \(E_{i,j}\) are the energy eigenstates of the system. \(R_{ij}\) is a zero mean, unit variance random matrix whereas \(E\) and \(\omega\) are given by (2.4). During an epoch, the Krylov complexity can be calculated using the Liouvillian of the system, following the assumptions in 2.3.1. If we assume \(F\) to decay as \(\omega\rightarrow\infty\), then we can use the arguments in section 3.1 of [9] to see that the Krylov complexity undergoes a scrambling phase, followed by linear growth. As we have observed in the case of the slowly leaking gas, the linear growth rate will be proportional to the degrees of freedom of the system. Therefore, complexity will increase as we go from one epoch to the other, and it will eventually level off if all the degrees of freedom leak out of the system. This reproduces the advertised behavior. The holographic complexity, computed using a gravity calculation, displays the same behavior we described earlier. We can push this parallel further by comparing the late time growth rates (59) and (60). From the laws of black hole thermodynamics, we can see that the mass of the black hole plays the role of energy [36]. Therefore, both calculations result in the same late time linear growth, provided we correctly identify the thermodynamic quantities on the black hole side. This provides further evidence to the claim that black holes can be described by a chaotic open quantum mechanical system with finite degrees of freedom when observed from the outside [37, 38]. Another manifestation of this proposal can be found in [17], where the entanglement entropy of the slowly leaking gas model matched the gravitational path integral calculation in [39]. We can put these statements on a firmer footing by thinking of it as a consequence of black hole complementarity [37], which posits that the interior of a black hole can be thought to be described by a finite number of quantum mechanical degrees of freedom living on the stretched horizon of the black hole. The hard spheres in the left box assume the role of these degrees of freedom, while the particles in the right box model the outgoing Hawking radiation, allowing us to make a one-to-one map between an evaporating black hole and our two-box system. 
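To make the comparison with figure 7 concrete, the continuum limit of the epoch-wise growth rate is easy to reproduce numerically: taking \(dC/dt\propto M(t)\) with the mass profile (3.37), a direct quadrature gives back the saturating shape \(1-(1-t/t_{E})^{4/3}\) that appears in (3.38). The following is only a small numerical sketch in arbitrary units (\(M_{0}=t_{E}=1\)), comparing normalized curves rather than absolute prefactors.

```python
import numpy as np

# Continuum limit of the epoch-wise growth rate: dC/dt proportional to M(t),
# with M(t) = M0 (1 - t/t_E)^(1/3) from Eq. (3.37). Arbitrary units: M0 = t_E = 1.
M0, tE = 1.0, 1.0
t = np.linspace(0.0, tE, 2001)
M = M0 * (1.0 - t / tE) ** (1.0 / 3.0)

# Cumulative trapezoidal integral of M(t), normalized by its final value.
C_num = np.concatenate(([0.0], np.cumsum(0.5 * (M[1:] + M[:-1]) * np.diff(t))))
C_num /= C_num[-1]

# Normalized saturating shape appearing in Eq. (3.38).
C_shape = 1.0 - (1.0 - t / tE) ** (4.0 / 3.0)

for frac in (0.25, 0.5, 0.75, 1.0):
    i = int(frac * (len(t) - 1))
    print(f"t/t_E = {frac:4.2f}   numerical: {C_num[i]:.4f}   closed-form shape: {C_shape[i]:.4f}")
```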
Returning to the growth rate (60), we find that \[\frac{dC}{dt}\sim M\propto ST \tag{61}\] where \(S\) and \(T\) are the entropy and temperature of the black hole during that epoch. Our results are in tandem with [40] and the 2d gravity calculation performed in [41]. One can also calculate complexity using the Complexity=Action (CA) prescription [42, 43, 44]. However, to obtain meaningful results, the inclusion of a counterterm for null boundaries, similar to the one used in [31, 32], might be necessary. ###### Acknowledgments. I thank Chethan Krishnan, Watse Sybesma, and Larus Thorlacius for their insights and helpful discussions. This work was supported by the Icelandic Research Fund grant 228952-052.
2307.11206
Towards Ontologically Grounded and Language-Agnostic Knowledge Graphs
Knowledge graphs (KGs) have become the standard technology for the representation of factual information in applications such as recommendation engines, search, and question-answering systems. However, the continual updating of KGs, as well as the integration of KGs from different domains and KGs in different languages, remains a major challenge. What we suggest here is that by a reification of abstract objects and by acknowledging the ontological distinction between concepts and types, we arrive at an ontologically grounded and language-agnostic representation that can alleviate the difficulties in KG integration.
Walid S. Saba
2023-07-20T19:48:55Z
http://arxiv.org/abs/2307.11206v1
# Towards Ontologically Grounded and Language-Agnostic Knowledge Graphs ###### Abstract Knowledge graphs (KGs) have become the standard technology for the representation of factual information in applications such as recommendation engines, search, and question-answering systems. However, the continual updating of KGs, as well as the integration of KGs from different domains and KGs in different languages, remains to be a major challenge. What we suggest here is that by a reification of abstract objects and by acknowledging the ontological distinction between concepts and types, we arrive at an ontologically grounded and language-agnostic representation that can alleviate the difficulties in KG integration. ## 1 Introduction Knowledge graphs are by now the standard representation of knowledge repositories that are used in various applications, such as search, recommendation engines, and question-answering systems. While there are powerful KG tools, the semantic and conceptual side of KG technology is still partially ad-hoc. In particular, the continuous update and KG integration remain to be a challenge. A Knowledge graph (KG) is a graph structure that can be viewed as a set of triples \(\langle e_{1}\), \(r\), \(e_{2}\rangle\) relating real-world entities \(e_{1}\) and \(e_{2}\) by a relation \(r\) to represent a real-world fact, as in the following examples: 1. \(\langle\)_RogerVaters, BornOn, 01/08/1955_\(\rangle\) 2. \(\langle\)_PinkFloyd, StartedIn, London_ 3. \(\langle\)_BarakObama, LivesIn, WhiteHouse_\(\rangle\) From the triples above that we might have in some knowledge graph KG we can immediately point to several issues that pose major challenges in constructing and maintaining KGs. We discuss these issues next. ## 2 Alignment and Continuous Change Here are the main issues in the triples (1) through (3) above: First, in another knowledge graph KG\({}_{2}\) that we might want to integrate with KG\({}_{1}\) there might be another _Roger Waters_ where the two entities might or might not be the same and thus an entity alignment must occur with the triple in (1). Another issue here is that the triple in (2) uses "_StartedIn_" to represent the fact that the Pink Floyd band started in London. Another KG might, instead, use the relation "_FormedIn_" and a match and an alignment between the two relations is needed. Finally, the integration of KG1 with another KG might reveal that the triple in (3) is no longer valid and must thus be fused with new and updated information. At a minimum, then, the process of fusing together two or more KGs will first of all involve a tedious process of entity alignment (EA) (Zhang et. al., 2022), but more generally it will involve a process of continuous updating of information (Wang, et. al., 2022). Note that updating information and entity alignment both involve identifying if entities are the same (or not), where in one case we will perform a'merge' and in the second an update. Clearly then entity alignment is the most basic operation in any KG integration, and as such it has received the most attention. To match an entity \(e_{1}\) in KG1 with an entity \(e\) in KG2 embeddings in low dimensional space for both entities are constructed using neighboring information: related entities, immediate relations, and attributes. Entities \(e_{1}\) and \(e\) are considered to have a match if their vector similarity is above a certain threshold. As such, different alignment techniques mainly differ in how the embeddings are constructed. 
In particular, they differ in what information is bundled in the embedding, and how far in the graph are other entities, relations and attributes are still considered to be in the "neighborhood". Zhu et. al. (2021), for example, report that spreading entity information across all relations, gathering information, and bringing it back to an entity's embedding, improves embedding similarity and entity alignment. In (Lin, Y. and Liu, Z. et. al., 2016) it is further suggested that including all attributes and their values will also improve an entity's embedding. Other approaches (e.g., Zhu et. al., 2023) will also include, besides attribute values, all string information corresponding to entity, relation, and attribute names. In all these approaches the ultimate goal is to improve the construction of entity embeddings, in the hope of improving the accuracy of entity alignment (i.e., entity matching). See (Zhang, R. et. al., 2022) for a good survey of various alignment techniques. ## 3 Reifying Abstract Objects Regardless of the novelty and the progress made by various entity alignment algorithms, the accuracy of merging different knowledge graphs, especially ones that are continuously updated, will remain to be less than desired. In this section we will argue that the problem is to be handled not with constructing ever more reliable embeddings leading to more accurate alignments, but with how knowledge graphs are constructed in the first place. Specifically, we suggest that the answer lies in proposals that have been made in the study of semantics and formal ontology. In particular, we will appeal to conceptualism and the conceptual realism of Cocchiarella (2001), where we reify (or 'object-ify') abstract concepts in a manner that is consistent with our basic "cognitive capacities that underlie our use of language". This is essentially an extension of Davidson semantics (Davidson, 1967; Larson, 1998) where events are treated as entities, and is also in line with Moltmann's (2013) arguments that the ontology of natural language admits references to "tropes", which are particular instances of properties. Let us make all of this clear with an example. Consider the knowledge graphs in figure 1 where we are representing the facts expressed by "_The musician Roger Waters was born in Great Bookham on 01/08/1955_". The knowledge graph in figure 0(b) has the same facts expressed in figure 0(a) but in an ontologically grounded and linguistically agnostic representation. First, note that instead of the ad-hoc naming of relations in 0(e.g, **bornIn** and **bornOn**), in 0(b) we have primitive and language-agnostic relations where events are entities (e.g., "Birth") that have two essential properties, a time and a location and where these properties have specific values of specific _types_1. Note also that we are assuming here that these canonical names are done in the process of KG construction, and thus a 'Birth' event, regardless how it was named, will in the end translate to the same event. In our representation, therefore, everything is an entity and the relations come from a fixed set of primitive and linguistically agnostic set of relations (the set of primitive relations are shown in figure 2). How we come up with these relations is beyond the scope of this short paper but see Smith (2005) for a discussion. 
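As a toy illustration of the reified representation just described (this is not the authors' implementation, and the relation and type labels below are illustrative stand-ins, since the primitive relation set of figure 2 is not reproduced in this text), the fact from figure 1 could be encoded roughly as follows, contrasting the ad-hoc style of figure 1(a) with the reified, typed style of figure 1(b):

```python
from dataclasses import dataclass

# Toy sketch only. The relation names ("participantIn", "hasTime", "hasLocation")
# and the type labels are illustrative stand-ins for the fixed, language-agnostic
# primitive relations of figure 2, which are not listed here.

# (a) Ad-hoc triples in the style of figure 1(a).
adhoc = [
    ("RogerWaters", "bornIn", "GreatBookham"),
    ("RogerWaters", "bornOn", "01/08/1955"),
    ("RogerWaters", "isA", "Musician"),
]

# (b) Reified, strongly-typed encoding in the style of figure 1(b): the birth event
# is itself a typed entity, and only primitive relations connect typed nodes.
@dataclass(frozen=True)
class Node:
    name: str
    type: str

roger = Node("RogerWaters", "Musician")
birth = Node("Birth#1", "BirthEvent")
place = Node("GreatBookham", "Location")
date = Node("01/08/1955", "Date")

reified = [
    (roger, "participantIn", birth),
    (birth, "hasLocation", place),
    (birth, "hasTime", date),
]

for s, r, o in reified:
    print(f"[{s.name} : {s.type}] --{r}--> [{o.name} : {o.type}]")
```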
Besides the primitive and linguistically agnostic representation, entities and attribute values in the knowledge graph of figure 0(b) are strongly-typed, where the types are assumed to exist in a strongly-typed hierarchy along the lines suggested in Saba (2020). Note that by making all entities typed we resolve the issue of separating knowledge graphs into two parts, one that has continuously updated information (_\(\langle\)RogerWaters, LivesIn, London\(\rangle\)_) and one that has more static conceptual information such as _\(\langle\)RogerWaters, IsA, Musician\(\rangle\)_ (see Hao et. al., 2019 for a discussion on this issue). Figure 1: (a) A KG representing the facts expressed in “_The musician Roger Waters was born in Great Bokham on 01/08/1955_”; and (b) a language-agnostic KG representing the same facts. Figure 2: The set of primitive and linguistically agnostic relations that are used in the knowledge graph. These are the only relations used and all other abstractions are entities (e.g., events, properties, states, etc. all of which are reified/object-ified), Moreover, entity alignment will now be more accurate since the embedding of [_RogerWaters_: **Musician**] will only match the same musician in another knowledge graph, even if the entity was labeled differently, e.g. [_GeorgeRogerWaters_: **Musician**]. Besides adding semantic constraints that will improve knowledge integration, types are language agnostic and thus, like primitive relations, are easy to translate across languages. In figure 3 we show the isomorphic Arabic and French equivalents of the KG in figure 0(b) above. ## 4 Evaluation Aside from the simple alignment of knowledge graphs written in different languages or different domains, we show here how the ontologically grounded and linguistically agnostic representation helps in the problem of entity alignment. First, we construct embeddings for triples where a change is made in one of the entities or in the relation: \(e_{1}=\) EMBED(\(\langle\)_RogerWaters, LivesIn, London_\(\rangle\)) \(e_{2}=\) EMBED(\(\langle\)_RogerWaters, PlaceOfResidence, London_\(\rangle\)) \(e_{3}=\) EMBED(\(\langle\)_RogerWaters, LivesIn, Chelsea_\(\rangle\)) \(e_{4}=\) EMBED(\(\langle\)_RogerWaters, PlaceOfResidence, Chelsea_\(\rangle\)) EMBED(\(\langle\)\(e_{1}\), \(r\), \(e_{2}\)) returns an embedding that is the sum of the vectors of \(e_{1}\), \(r\), and \(e_{2}\). In table 1 below we show the cosine similarity \(\mathbf{cosim}(e_{i}\), \(e_{i})\) for i, j = 1,2,3,4 and for i \(\neq\) j. The triples with a different entity (a different real-world fact) matched better than those with slightly different but semantically similar relation (i.e., same real-world fact). Similar results were obtained by changing various semantically similar relations (e.g., **bornIn** vs. **placeOfBirth**, etc.) The above shows that entity alignments across knowledge graphs would fail simply because of the ad-hoc labeling of relations in the knowledge graph. On the other hand, changing the location in the knowledge graph in 0(b) amounts to changing one embedding out of several that remain Figure 3: Since entity names, types, attribute values, and primitive relations are language agnostic, there’s a straightforward automatic translation of the KG in figure 0(b) into isomorphic Arabic and French KGs. constant. In the example of figure 0(b), a change in the location would result in a similarity of 0.688 only, and the alignment would clearly fail, as it should. 
That is, an entity that is a participant in a birth event that happened in London should not match with an entity that is a participant in a birth event that happened in Chelsea, regardless of the entity name. Note that this true even in knowledge graphs in different languages (see figure 3), assuming, of course, that the embeddings of [London : **City**] and [] have a good cosine similarity, as one would expect. ## 5 Discussion One important aspect to the representation we are suggesting is that it is language agnostic. This we claim is based on the fact that our representation has entities and primitive relations between them and that both of these are language agnostic. Thus the claim of universality is based two assumptions: (i) we are assuming that entities, including abstract entities such as those corresponding to properties, events, states, etc. are language-agnostic; (ii) we are assuming that our primitive relations (see figure 2) are also language agnostic. If both of these assumptions are correct, then our representation is language-agnostic, and the only remaining question would be "how universal are the primitive relations in figure 2?" A final answer to this question requires further experimentation. Another important issue we could not discuss here for lack of space are the types that are associated with every entity and attribute value. These types are assumed to exist in a hierarchy of types that must also be language agnostic (that is, we are assuming that "the types of things we talk about/express facts about" are the same across languages). Admittedly, however, this claim might not be uncontroversial and further work needs to be done in this regard, although we believe the work of Saba (2020) is a step in the right direction. Another issue that should also be addressed is related to the mapping from natural language to our representation. As noted to us by one of anonymous reviewers, a fact such as "John sold the car to Bill" should, in theory, translate into the same sets of relations in the KG as the fact "Bill bought the car from John". While in both cases we will have a language agnostic representation with reified abstract objects for the 'buying' and'selling' events where Bill and John are participants, these two facts will only be equivalent if there were some meaning postulate that relates the'selling' and 'buying' events. \begin{table} \begin{tabular}{|l|c|} \hline COSINE\_SIMILARITY(_emb1, emb2_) & 0.8853 \\ COSINE\_SIMILARITY(_emb1, emb3_) & **0.9298** \\ COSINE\_SIMILARITY(_emb1, emb4_) & 0.7989 \\ COSINE\_SIMILARITY(_emb2, emb3_) & 0.8219 \\ COSINE\_SIMILARITY(_emb2, emb4_) & **0.9204** \\ COSINE\_SIMILARITY(_emb3, emb4_) & 0.8849 \\ \hline \end{tabular} \end{table} Table 1: Triples with different facts (locations) matched better than triples with the same facts (locations) but a relation that is worded slightly. ## 5 Concluding Remarks In this short paper we suggested an ontologically grounded and linguistically agnostic representation for knowledge graphs. This representation, we believe will solve the major challenges facing knowledge graphs today, namely the difficulty in continuous updating of factual information (which requires static conceptual information to be separated from the more dynamic information), and the difficulty of knowledge graph integration which requires very accurate entity and relation alignment. We argued that our representation offers a solution to these (essentially semantic) problems. 
A final remark we would like to make is related to an excellent point made by one of the anonymous reviewers, namely that the representation and the method we propose will work only if the construction of every KG follows our methodology. This is true, and so in essence the representation we are suggesting can be thought of as a new standard for a semantically rigorous knowledge graph methodology. Although this is part of future work, this will entail building a natural language interpreter that ensures the translation of every KG into the canonical and language-agnostic representation suggested in this paper. ## Acknowledgements The feedback of colleagues at the Institute for Experiential AI as well as the suggestions of three anonymous reviewers are greatly appreciated.
2301.12481
Pascal Determinantal Arrays and a Generalization of Rahimpour's Determinantal Identity
In this paper, we shall first introduce the Pascal determinantal array of order $k$ as a generalization of the standard Pascal array. We also present an algorithm to produce determinantal arrays of any order. Then we investigate geometric properties of these new determinantal arrays. As a by-product, we give a proof of the correctness of our proposed algorithm. Then we give a geometric interpretation of the determinant of any $k$ by $k$ subarray of the Pascal array. Finally, we will give a generalization of Rahimpour's determinantal identity, using the above geometric interpretation.
H. Teimoori, H. Khodakarami
2023-01-29T16:25:32Z
http://arxiv.org/abs/2301.12481v1
# Pascal Determinantal Arrays and a Generalization of Rahimpour's Determinantal Identity ###### Abstract. In this paper, we shall first introduce the Pascal determinantal array of order \(k\) as a generalization of the standard Pascal array. We also present an algorithm to produce determinantal arrays of any order. Then we investigate geometric properties of these new determinantal arrays. As a by product, we give a proof of the correctness of our proposed algorithm. Then we give a geometric interpretation of the determinant of any \(k\) by \(k\) subarray of the Pascal array. Finally, we will give a generalization of Rahimpour's determinantal identity, using the above geometric interpretation. Key words and phrases:Determinantal Array, Pascal Array, Dodgson's Condensation, Star of David determinant of a \(k\) by \(k\) sub-array of the Pascal array, starting from it's \((i,j)\)-entry, as follows \[P_{i,j}^{(k)}:=\left|\begin{array}{ccc}P_{i,j}&\ldots&P_{i,j+k-1}\\ \vdots&\ddots&\vdots\\ P_{i+k-1,j}&\ldots&P_{i+k-1,j+k-1}\end{array}\right|.\] Clearly, the determinantal array of order 1 is exactly the Pascal array (see Figure 1). In Figure 2, we see a Pascal determinantal array of order 2. This is a well-known array which is the squared-form of the so-called Narayana triangle (see A001263 in [5]). ### The Rahimpour Determinantal Identity Consider the Pascal infinite array, \(PD_{1}=\binom{i+j}{i}\), \(i,j\geq 0\), in Figure 1. Now consider the sub-arrays, shown by squares in Figure 1, of which the left edges lie in the first column (starting from zero) and the other edges are free to lie in any rows or columns of this array of numbers. She conjectured that the determinants of this square sub-arrays are always equal to the top right entry of that square (see Figure 2). More precisely, we have the following result. **Theorem 1** (See [1]).: _Let \(P=\left[P_{i,j}=\binom{i+j}{i}\right]_{i,j\geq 0}\) be the Pascal infinite array. Then, we have the following determinantal identity_ \[P_{i,j}^{(1)}=P_{i,1}^{(j)},\ (i,j\geq 0). \tag{1}\] The following question naturally arises that, if we let the left edges of square sub-arrays are lie in the \(k\)-th column (\(k\geq 1\)) what will be the extension of the above conjecture? While working on Rahimpour's determinantal identity, the second author of this paper came up with the following conjecture. Figure 1. The Pascal array. _Conjecture 2_.: The determinant of the \(n\times n\) square sub-array in Pascal array where the left edge lie in the \(k\)-th column is equal to the determinant of the top right \(k\) by \(k\) square sub-array (see Figure 3). More precisely, we have \[P_{i,j}^{(k)}=P_{i,k}^{(j)},\ (i,j\geq 0,\ k\geq 1). \tag{2}\] The main result of our paper is to give a proof of Conjecture 2. A proof of Theorem 1 in [1] is based on the linear algebra properties of Pascal functional matrices [2] and the well-known Cramer's rule for the inverse of a nonsingular matrix. Figure 3. The Pascal array and the second author’s conjecture Figure 2. Pascal determinantal array of order 2. Here we will use a completely different approach. We will present an _geometric proof_ which relies heavily on the geometric properties of the Pascal array [3], besides using the Dodgson's condensation formula for determinants [4]. ## 3. A recursive algorithm to produce Pascal determinantal arrays In this section, we give a simple recursive algorithm to produce Pascal determinantal arrays from the standard Pascal array. The algorithm works, as follows 1. 
Set \(PD_{1}:=\) The Pascal Array. 2. Remove the zeroth row and the column of \(PD_{k-1}\) and rename the remaining infinite array \(RD_{k}=(R_{i,j}^{(k)})_{i,j\geq 0}\). 3. Set \(QD_{i,j}^{(k)}:=\frac{R_{i,j}^{(k)}}{R_{0,i+j}^{(k)}}\), \(i,j=0,1,2,\ldots\). 4. Put \(P_{i,j}^{(k)}:=Q_{i,j}^{(k)}.P_{i,j}^{(1)}\), \(i,j=0,1,2,\ldots\). Now, \(PD_{k}=(P_{i,j}^{(k)})_{i,j\geq 0}\), is the desired Pascal determinantal array of order \(k\). Alternatively, we can give the following recursive definition for the entries of the Pascal determinantal array \(PD_{k}=\left[P_{i,j}^{(k)}\right]_{i,j\geq 0}\). 1. \(P_{i,j}^{(1)}=\binom{i+j}{i}\), 2. \(P_{i,j}^{(k+1)}:=\frac{P_{i+1,j+1}^{(k)}P_{i,j}^{(1)}}{P_{1,i+j+1}^{(k)}}\), \((k\geq 1)\). **Example 3**.: _The following steps show how to produce \(PD_{3}\)._ Step1. \begin{tabular}{|c c c c c c|} \hline 3 & 6 & 10 & 15 & 21 \\ 6 & 20 & 50 & 105 & 196 \\ 10 & 50 & 175 & 490 & 1176 \\ 15 & 105 & 490 & 1764 & 5292 \\ 21 & 196 & 1176 & 5292 & 19404 \\ \multicolumn{5}{c}{Step2.} \\ \hline 1 & 1 & 1 & 1 & 1 \\ 1 & 2 & 10/3 & 5 & 7 \\ 1 & 10/3 & 25/3 & 35/2 & 98/3 \\ 1 & 5 & 35/2 & 49 & 588/5 \\ 1 & 7 & 98/3 & 588/5 & 1764/5 \\ \multicolumn{5}{c}{Step3.} \\ \hline 1 & 1 & 1 & 1 & 1 \\ 1 & 4 & 10 & 20 & 35 \\ 1 & 10 & 50 & 175 & 490 \\ 1 & 20 & 175 & 980 & 4116 \\ 1 & 35 & 490 & 4116 & 24696 \\ \hline \end{tabular} Of course, we need to show the correctness of the above algorithm. To do this, we first investigate some algebraic and geometric properties of these determinantal arrays. ## 4. Dodgson's Condensation of Determinants Condensation of determinants is a method of computing the determinant of a square matrix due to Charles Dodgson (1866) [4]. For a matrix \(A\) of order \(k\), Dodgson's condensation formula states that: \[A_{k-2}(2,2)A_{k}(1,1)=A_{k-1}(1,1)A_{k-1}(2,2)-A_{k-1}(1,2)A_{k-1}(2,1),\ (k \geq 3)\] where \(A_{r}(i,j)\) denote the \(r\) by \(r\) minor consisting of \(r\) contiguous rows and columns of \(A\), beginning with row \(i\) and column \(j\). Note that \(A_{k}(1,1)=det(A)\), \(A_{k-2}(2,2)\) is the central minor; \(A_{k-1}(1,1)\), \(A_{k-1}(2,2)\), \(A_{k-1}(1,2)\), \(A_{k-1}(2,1)\) are the northwest, southeast, northeast and southwest minors, respectively. For example, for the matrix \[A=\begin{bmatrix}a_{1}&a_{2}&a_{3}&a_{4}\\ b_{1}&b_{2}&b_{3}&b_{4}\\ c_{1}&c_{2}&c_{3}&c_{4}\\ d_{1}&d_{2}&d_{3}&d_{4}\end{bmatrix},\] we get \[A_{2}(2,2) = \begin{bmatrix}b_{2}&b_{3}\\ c_{2}&c_{3}\end{bmatrix},\hskip 28.452756ptA_{3}(1,1)=\begin{bmatrix}a_{1}&a_{2 }&a_{3}\\ b_{1}&b_{2}&b_{3}\\ c_{1}&c_{2}&c_{3}\end{bmatrix}\] \[A_{3}(2,2) = \begin{bmatrix}b_{2}&b_{3}&b_{4}\\ c_{2}&c_{3}&c_{4}\\ d_{2}&d_{3}&d_{4}\end{bmatrix},\ A_{3}(1,2)=\begin{bmatrix}a_{2}&a_{3}&a_{4} \\ b_{2}&b_{3}&b_{4}\\ c_{2}&c_{3}&c_{4}\end{bmatrix},\ A_{3}(2,1)=\begin{bmatrix}b_{1}&b_{2}&b_{3} \\ c_{1}&c_{2}&c_{3}\\ d_{1}&d_{2}&d_{3}\end{bmatrix}.\] In other words, every \(k\) by \(k\) minor can be computed in terms of \(k-1\) by \(k-1\) and \(k-2\) by \(k-2\) minors. As an immediate consequence this formula, we have the following recurrence relation between the entries of Pascal determinantal arrays of order \(k\) (\(k\geq 1\)): \[P_{i,j}^{(k)}:=\frac{P_{i+1,j+1}^{(k-1)}.P_{i,j}^{(k-1)}-P_{i+1,j}^{(k-1)}.P_{ i,j+1}^{(k-1)}}{P_{i+1,j+1}^{(k-2)}},\ (i,j=0,1,2,\ldots), \tag{3}\] with the convention that the initial values are \(PD_{0}:=J\) and \(PD_{1}:=P\) in which \(J\) and \(P\) are all-ones matrix and the Pascal array, respectively. ## 5. 
The Weighted Version of Star of David Rule Consider the Pascal array (see Figure 4). We can draw an arbitrary rectangle whose vertices are entries of the Pascal array. We identify a vertex (the circled vertex) as the anchor of this rectangle. Now we define a weight for this rectangle denoted by \(W\), as follows \[W:=\frac{P_{i+m,j+l}.P_{i,j}}{P_{i+m,j}.P_{i,j+l}},\ m,l\geq 1,\ i,j\geq 0.\] Hilton and Pedersen proved [4] that when we move the anchor of the rectangle through the diagonal of the Pascal array (indicated by the arrow \(d\) in Figure 4), the weight remains constant. We will call this property as the weighted version of star of David rule. We can define a symmetric cross of size \(k\) in the Pascal array (See Figure 5) as an \(2k\)-tuples \((c_{1},\ldots,c_{k},r_{1},\ldots,r_{k})\), where \(c_{i},r_{i},\ (1\leq i\leq k)\) are the entries of the Pascal array. We then define the weight of such a cross, \(W_{c}\), as \[W_{c}:=\frac{c_{1}.c_{2}.....c_{k}}{r_{1}.r_{2}.....r_{k}}.\] _Remark 4_.: An immediate consequence of the weighted version of star of David rule in the Pascal array is that if we move the center of a cross through the diagonal \(d\), then the weight \(W_{c}\) remains constant. We will call this property the _sliding cross rule_ in the Pascal array. We also note that if the sliding cross rule holds for \(m=l=1\), then by an straight forward induction on \(m\) and \(l\), it can be easily seen that it holds in general for every positive integers \(m\) and \(l\). Figure 4. A rectangle with the Pascal array entries Our next step is to show that the above sliding rule is also true for any Pascal determinantal array \(PD_{k}\) (\(k\geq 1\)). To do this, we apply mathematical induction on \(k\). The basis step, \(k=1\), is already true. For the inductive step, we assume that every Pascal determinantal array of order less than \(k+1\) has the sliding property and we show that it is also true for \(PD_{k+1}\). Consider the Pascal determinantal array \(PD_{k}\), as in Figure 6, where the circled-points show the entries of this array and the squared entries show the corresponding entries of the Pascal determinantal array of order \(k-1\). Consider the previous discussions, the entries of \(PD_{k+1}\) can be obtained, as follows (see Figure 7). Figure 5. A cross of size \(k\) Figure 6. Pascal determinantal array \(PD_{k}\) \[\alpha = \frac{ae-bd}{A},\ \beta=\frac{bf-ec}{B},\ \alpha^{\prime}=\frac{a^{ \prime}e^{\prime}-b^{\prime}d^{\prime}}{A^{\prime}},\ \beta^{\prime}=\frac{b^{\prime}f^{\prime}-e^{\prime}c^{\prime}}{B^{\prime}} \tag{4}\] \[\gamma = \frac{dh-ge}{C},\ \theta=\frac{ek-fh}{D},\ \gamma^{\prime}=\frac{d^{ \prime}h^{\prime}-g^{\prime}e^{\prime}}{C^{\prime}},\ \theta^{\prime}=\frac{e^{\prime}k^{\prime}-f^{\prime}h^{\prime}}{D^{\prime}}.\] Therefore, it is sufficient to show that \(\frac{\alpha\theta}{\beta\gamma}=\frac{\alpha^{\prime}\theta^{\prime}}{ \beta^{\prime}\gamma^{\prime}}\). 
This is equivalent to \[\frac{\frac{(ae-bd)(ek-fh)}{AD}}{\frac{(bf-ec)(dh-ge)}{BC}}=\frac{\frac{(a^{\prime}e^{\prime}-b^{\prime}d^{\prime})(e^{\prime}k^{\prime}-f^{\prime}h^{\prime})}{A^{\prime}D^{\prime}}}{\frac{(b^{\prime}f^{\prime}-e^{\prime}c^{\prime})(d^{\prime}h^{\prime}-g^{\prime}e^{\prime})}{B^{\prime}C^{\prime}}} \tag{5}\] By the induction hypothesis, we have \[\frac{ae}{bd}=\frac{a^{\prime}e^{\prime}}{b^{\prime}d^{\prime}},\ \frac{bf}{ec}=\frac{b^{\prime}f^{\prime}}{e^{\prime}c^{\prime}},\ \frac{dh}{ge}=\frac{d^{\prime}h^{\prime}}{g^{\prime}e^{\prime}},\ \frac{ek}{fh}=\frac{e^{\prime}k^{\prime}}{f^{\prime}h^{\prime}},\ \frac{AD}{BC}=\frac{A^{\prime}D^{\prime}}{B^{\prime}C^{\prime}},\] or equivalently \[\frac{ae-bd}{bd}=\frac{a^{\prime}e^{\prime}-b^{\prime}d^{\prime}}{b^{\prime}d^{\prime}},\ \frac{ek-fh}{fh}=\frac{e^{\prime}k^{\prime}-f^{\prime}h^{\prime}}{f^{\prime}h^{\prime}},\] \[\frac{bf-ec}{ec}=\frac{b^{\prime}f^{\prime}-e^{\prime}c^{\prime}}{e^{\prime}c^{\prime}},\ \frac{dh-ge}{ge}=\frac{d^{\prime}h^{\prime}-g^{\prime}e^{\prime}}{g^{\prime}e^{\prime}},\] \[\frac{AD}{BC}=\frac{A^{\prime}D^{\prime}}{B^{\prime}C^{\prime}},\] which can be simply written as \[\frac{\frac{(ae-bd)(ek-fh)}{(bdfh)AD}}{\frac{(bf-ec)(dh-ge)}{(ecge)BC}}=\frac{\frac{(a^{\prime}e^{\prime}-b^{\prime}d^{\prime})(e^{\prime}k^{\prime}-f^{\prime}h^{\prime})}{(b^{\prime}d^{\prime}f^{\prime}h^{\prime})A^{\prime}D^{\prime}}}{\frac{(b^{\prime}f^{\prime}-e^{\prime}c^{\prime})(d^{\prime}h^{\prime}-g^{\prime}e^{\prime})}{(e^{\prime}c^{\prime}g^{\prime}e^{\prime})B^{\prime}C^{\prime}}}.\] Therefore, by (2), we only need to show that \(\frac{ecge}{bdfh}=\frac{e^{\prime}c^{\prime}g^{\prime}e^{\prime}}{b^{\prime}d^{\prime}f^{\prime}h^{\prime}}\), or equivalently \[\frac{baecge}{babdfh}=\frac{b^{\prime}a^{\prime}e^{\prime}c^{\prime}g^{\prime}e^{\prime}}{b^{\prime}a^{\prime}b^{\prime}d^{\prime}f^{\prime}h^{\prime}}.\] But we already know, by the induction hypothesis, that \[\frac{ae}{bd}=\frac{a^{\prime}e^{\prime}}{b^{\prime}d^{\prime}},\ \frac{ce}{bf}=\frac{c^{\prime}e^{\prime}}{b^{\prime}f^{\prime}},\ \frac{bg}{ah}=\frac{b^{\prime}g^{\prime}}{a^{\prime}h^{\prime}}.\] Thus, considering the above remark, we complete the proof by induction.

Figure 7. Pascal determinantal array \(PD_{k+1}\)

## 6. The Correctness of the Algorithm Now we are in a position to prove that the algorithm mentioned in Section 2 produces every Pascal determinantal array of order \(k\). To do this, we apply mathematical induction on \(k\). The basis step, \(PD_{1}\), is obviously true. As the inductive step, assume that the algorithm is true for \(PD_{k}\). Then, we have \[P_{i,j}^{(k)}=\frac{P_{i+1,j+1}^{(k-1)}P_{i,j}^{(1)}}{P_{1,i+j+1}^{(k-1)}}. \tag{6}\] Now we show that the algorithm is also true for the Pascal determinantal array \(PD_{k+1}\).
By the sliding cross property for \(PD_{k}\), we have (see Figure 8): \[\frac{P_{i+1,j}^{(k)}P_{i,j+1}^{(k)}}{P_{i,j}^{(k)}P_{i+1,j+1}^{(k)}}=\frac{P_ {1,i+j}^{(k)}P_{0,i+j+1}^{(k)}}{P_{0,i+j}^{(k)}P_{1,i+j+1}^{(k)}}, \tag{7}\] Considering the fact that \(P_{0,i+j+1}^{(k)}=P_{0,i+j}^{(k)}=1\), it is not hard to see that the equality in (7) is equivalent to \[\frac{P_{1,i+j}^{(k)}}{P_{i+1,j}^{(k)}P_{i,j+1}^{(k)}}=\frac{P_{1,i+j+1}^{(k-1 )}}{P_{i,j}^{(k+1)}P_{i+1,j+1}^{(k-1)}}, \tag{8}\] Now, by reversing the both fraction in (8) and multiply them by \(P_{i,j}^{(1)}\), we obtain \[\frac{P_{i,j}^{(1)}P_{i+1,j}^{(k)}P_{i,j+1}^{(k)}}{P_{1,i+j}^{(k)}}=\frac{P_{i,j}^{(1)}P_{i,j}^{(k+1)}P_{i+1,j+1}^{(k-1)}}{P_{1,i+j+1}^{(k-1)}}. \tag{9}\] Using relation (6), we immediately get \[\frac{P_{i,j}^{(1)}P_{i+1,j}^{(k)}P_{i,j+1}^{(k)}}{P_{1,i+j}^{(k)}}=P_{i,j}^{( k+1)}P_{i,j}^{(k)}. \tag{10}\] Next, by dividing both sides of (10) to \(P_{i,j}^{(k)}\) and also multiplying the left-hand side by \(\frac{P_{i+1,j+1}^{(k)}}{P_{i+1,j+1}^{(k)}}\), we have \[\frac{P_{i+1,j+1}^{(k)}P_{i,j}^{(1)}P_{i+1,j}^{(k)}P_{i,j+1}^{(k)}}{P_{i+1,j+1} ^{(k)}P_{i,j}^{(k)}P_{1,i+j}^{(k)}}=P_{i,j}^{(k+1)}. \tag{11}\] By using the equation (6) one more time, we obtain \[\frac{P_{i+1,j+1}^{(k)}P_{i,j}^{(1)}P_{1,i+j}^{(k)}P_{0,i+j+1}^{(k)}}{P_{0,i+j }^{(k+1)}P_{1,i+j+1}^{(k)}P_{1,i+j}^{(k)}}=P_{i,j}^{(k+1)}. \tag{12}\] Finally, after simplifications based on the fact \(P_{0,i+j+1}^{(k)}=P_{0,i+j}^{(k+1)}=1\) and canceling out the similar terms, we thus obtain \[\frac{P_{i+1,j+1}^{(k)}P_{i,j}^{(1)}}{P_{1,i+j+1}^{(k)}}=P_{i,j}^{(k+1)},\] as required. Figure 8. The sliding property for \(PD_{k}\) ## 7. A Geometric Interpretation of \(P_{i,j}^{(k)}\) In this section we present a geometric interpretation of \((i,j)\)-entry of the Pascal determinantal array of order \(k\). To do this, we first need to state a new identity for Pascal array entries which is a direct consequence of the sliding cross rule in Pascal array. Indeed for every positive \(k\) and nonnegative \(j\), we have \[P_{0,j+2k+1}.P_{1,j+2k-1}.\ldots.P_{k-1,j+3}.P_{k,j+1}=P_{0,j}.P_{1,j+1}.\ldots.P_{k-1,j+k-1}.P_{k,j+k}.\] The corresponding entries are shown in Figure 9. For simplicity, we show the entries on left-hand side (LHS) of the above identity with \(a_{i}\), \((1\leq i\leq k+1)\) and the entries on the right-hand side (RHS) with \(A_{i}\), \((1\leq i\leq k+1)\). It is worth to note that \(a_{1}=A_{1}=1\). The LHS entries are on the line with slope \(\frac{-1}{1}\), the line \(L_{a}\), and the RHS entries are on the line with slope \(+\frac{1}{2}\), the line \(L_{A}\). Now we give a geometric proof of the above identity based on induction on \(k\) and sliding cross rule in Pascal array. The basis step is obviously true. By induction hypothesis the identity holds for entries \(b_{1},b_{2},\ldots,b_{k}\) and \(A_{1},A_{2},\ldots,A_{k}\). So, we have \[b_{1}b_{2}\ldots b_{k}=A_{1}A_{2}\ldots A_{k}. \tag{13}\] On the other hand, considering two parallel crosses \((b_{1},b_{2},\ldots,b_{k},r_{1},r_{2},\ldots,r_{k})\) and \((a_{2},\ldots,a_{k+1},r_{2},\ldots,r_{k+1})\) of size \(k\), by sliding cross rule, we get \[\frac{b_{1}b_{2}\ldots b_{k}}{r_{1}r_{2}\ldots r_{k}}=\frac{a_{2}a_{3}\ldots a _{k+1}}{r_{2}r_{3}\ldots r_{k+1}}, \tag{14}\] or equivalently \[b_{1}b_{2}\ldots b_{k}A_{k+1}=a_{1}a_{2}\ldots a_{k+1}. \tag{15}\] Now, (13) and (15) imply \[a_{1}a_{2}\ldots a_{k+1}=A_{1}A_{2}\ldots A_{k+1},\] Figure 9. A geometric interpretation of the new identity as required. 
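The identity above can also be checked numerically. The short Python sketch below is our own illustration (it is not part of the original paper); it takes \(P_{i,j}=\binom{i+j}{i}\) and verifies that the two products agree on a small range of \(k\) and \(j\).

```python
from math import comb

def P(i, j):
    # entries of the Pascal array: P_{i,j} = C(i+j, i)
    return comb(i + j, i)

# check  P_{0,j+2k+1} . P_{1,j+2k-1} ... P_{k,j+1}
#      = P_{0,j}      . P_{1,j+1}    ... P_{k,j+k}
for k in range(1, 9):
    for j in range(9):
        lhs = rhs = 1
        for m in range(k + 1):
            lhs *= P(m, j + 2 * (k - m) + 1)
            rhs *= P(m, j + m)
        assert lhs == rhs, (k, j)
print("identity verified for k <= 8, j <= 8")
```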
In the next step, we use the algorithm of section 2, to find a closed-form formula for \(P_{i,j}^{(k)}\) and it's geometric interpretation. We already know that \[P_{i,j}^{(k+1)}=\frac{P_{i+1,j+1}^{(k)}P_{i,j}^{(1)}}{P_{1,i+j+1}^{(k)}}.\] By repeated application of this formula and substitution in the right hand side, after \(k-1\) iterations, we get \[P_{i,j}^{(k+1)}=\frac{P_{i+k,j+k}^{(1)}\cdot P_{i+(k-1),j+(k-1)}^{(1)}.....P_{ i+1,j+1}^{(1)}.P_{i,j}^{(1)}}{P_{1,i+j+2k-1}^{(2)}.P_{1,i+j+2k-3}^{(2)}.....P_{1,i+j+3} ^{(k-1)}P_{1,i+j+1}^{(k)}}.\] On the other hand, by Rahimpour's theorem [1], we have \[P_{1,j}^{(k)}=P_{k,j}^{(1)}\ (j\geq 0,k\geq 1).\] Thus, we obtain \[P_{i,j}^{(k+1)}=\frac{P_{i+k,j+k}^{(1)}\cdot P_{i+(k-1),j+(k-1)}^{(1)}.....P_{ i+1,j+1}^{(1)}.P_{i,j}^{(1)}}{P_{1,i+j+2k-1}^{(1)}.P_{2,i+j+2k-3}^{(1)}.....P_{k-1,i+j+3} ^{(1)}P_{k,i+j+1}^{(1)}},\] or equivalently, \[P_{i,j}^{(k+1)}=\frac{P_{i+k,j+k}.P_{i+(k-1),j+(k-1)}.....P_{i+1,j+1}.P_{i,j}}{ P_{1,i+j+2k-1}.P_{2,i+j+2k-3}.....P_{k-1,i+j+3}.P_{k,i+j+1}}.\] Now, using the new identity with \(j\) replaced by \(i+j\), we can alternatively obtain the following formula \[P_{i,j}^{(k+1)}=\frac{P_{i+k,j+k}.P_{i+(k-1),j+(k-1)}.....P_{i+1,j+1}.P_{i,j}} {P_{1,i+j+1}.P_{2,i+j+2}.....P_{k-1,i+j+(k-1)}.P_{k,i+j+k}}.\] Considering the two parallel symmetric crosses \((P_{i,j},\ldots,P_{i+k,j+k},P_{i,j+k},\ldots,P_{i+k,j})\) and \((P_{0,i+j},\ldots,P_{k,i+j+k},P_{k,i+j},\ldots,P_{0,i+j+k})\) of size \(k+1\), using the sliding property, we obtain \[P_{i,j}^{(k+1)}=\frac{P_{i+k,j}.P_{i+(k-1),j+1}.....P_{i+1,j+(k-1)}.P_{i,j+k} }{P_{0,i+j+k}.P_{1,i+j+(k-1)}.....P_{k-1,i+j+1}.P_{k,i+j}}.\] Finally, by replacing \(k\) with \(k-1\), we get \[P_{i,j}^{(k)}=\frac{P_{i+(k-1),j}.P_{i+(k-2),j+1}.....P_{i+1,j+(k-2)}.P_{i,j+( k-1)}}{P_{i+j+(k-1),0}.P_{i+j+(k-2),1}.....P_{i+j+1,k-2}.P_{i+j,k-1}}.\] Considering the above closed-form formula for \(P_{i,j}^{(k)}\), we give the following geometric interpretation of \(k\) by \(k\) determinants in Pascal array. We will show a double stick (see Figure 10) by \((b_{1},b_{2},\ldots,b_{k}|r_{1},r_{2},\ldots,r_{k})\) where \(b_{i}\),\(r_{i}\) (\(1\leq i\leq k\)) are entries of Pascal array which lie on the discrete line \(x+y=i+j+(k-1)\), the line \(L\), where \[b_{1}:=P_{i+(k-1),j},\,b_{k}:=P_{i,j+(k-1)},\] and \[r_{1}:=P_{i+j+(k-1),0},\,r_{k}:=P_{i+j,k-1}.\] We define the weight of the double stick, \(W^{k}_{i,j}\), as follows \[W^{(k)}_{i,j}=\tfrac{b_{1}b_{2}\ldots b_{k}}{r_{1}r_{2}\ldots r_{k}}.\] Now, clearly \(P^{(k)}_{i,j}\) can be interpreted as the weight of double stick, i.e. \(W^{(k)}_{i,j}\). Indeed the following simple fact that the overlapping area of the double stick will increase by increasing the value of \(k\), is the essence of the proof of our main conjecture. Now, we are at the position to give a geometric proof of our conjecture, besides it's algebraic proof, based on the above geometric interpretation of \(k\) by \(k\) determinants in Pascal array. Proof of the conjecture.Based on the language of Pascal determinantal arrays, the conjecture is equivalent to the following statement (see Figure 11). For any positive integers \(i\),\(k\) and \(n\), \(n\geq k\), we have \[P^{(n)}_{i,k}=P^{(k)}_{i,n}\] Figure 10. A geometric interpretation of \(P^{(k)}_{i,j}\) Now let's look at the geometric meaning of \(P_{i,k}^{(n)}\). It is equal to \(W_{i,k}^{(n)}\), the weight of the double stick in Figure 12 (solid line). Clearly, it lies on the line \(L:x+y=i+k+(n-1)\). 
So we get \[P_{i,k}^{(n)}:=W_{i,k}^{(n)}=\frac{P_{i+(n-1),k}\cdots P_{i,k+(n-1)}}{P_{i+k+(n-1),0}\cdots P_{i+k,n-1}},\] or equivalently, \[P_{i,k}^{(n)}:=W_{i,k}^{(n)}=\frac{\big(P_{i+(n-1),k}\cdots P_{i+k,n-1}\big)\big(P_{i+(k-1),n}\cdots P_{i,n+(k-1)}\big)}{\big(P_{i+n+(k-1),0}\cdots P_{i+n,k-1}\big)\big(P_{i+(n-1),k}\cdots P_{i+k,n-1}\big)}.\]

Figure 11. Sub-arrays related to the main conjecture

Figure 12. A geometric proof of the conjecture

Now, by canceling the common factor out of the numerator and denominator of the above fraction, which is geometrically equivalent to omitting the overlapping area of the original double stick, we obtain \[P_{i,k}^{(n)}:=\frac{P_{i+(k-1),n}\cdots P_{i,n+(k-1)}}{P_{i+n+(k-1),0}\cdots P_{i+n,k-1}},\] which is clearly the weight of the new double stick shown by the dashed line. As is easily seen from Figure 12 and the above algebraic formula, this is equal to \(W_{i,n}^{(k)}\) or \(P_{i,n}^{(k)}\), and this completes the proof.
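As a complement to the proofs above, the following Python sketch (our own illustration, not code from the paper; the helper names `pascal`, `P_direct` and `P_recursive` are ours) builds \(P^{(k)}_{i,j}\) both directly as a \(k\times k\) determinant and through the recursive formula \(P^{(k+1)}_{i,j}=P^{(k)}_{i+1,j+1}P^{(1)}_{i,j}/P^{(k)}_{1,i+j+1}\) of Section 3, and then checks Conjecture 2, \(P^{(k)}_{i,j}=P^{(j)}_{i,k}\), on a small window of the Pascal array.

```python
from math import comb
from fractions import Fraction

def pascal(i, j):
    return comb(i + j, i)

def det(rows):
    # exact integer determinant by Laplace expansion (fine for small k)
    if len(rows) == 1:
        return rows[0][0]
    return sum((-1) ** c * rows[0][c] * det([r[:c] + r[c + 1:] for r in rows[1:]])
               for c in range(len(rows)))

def P_direct(i, j, k):
    # P^(k)_{i,j}: determinant of the k x k sub-array starting at (i, j)
    return det([[pascal(i + a, j + b) for b in range(k)] for a in range(k)])

def P_recursive(i, j, k):
    # the recursive construction of Section 3
    if k == 1:
        return Fraction(pascal(i, j))
    return (P_recursive(i + 1, j + 1, k - 1) * pascal(i, j)
            / P_recursive(1, i + j + 1, k - 1))

for k in range(1, 5):
    for i in range(5):
        for j in range(5):
            assert P_recursive(i, j, k) == P_direct(i, j, k)   # algorithm is correct
for i in range(5):
    for j in range(1, 5):
        for k in range(1, 5):
            assert P_direct(i, j, k) == P_direct(i, k, j)      # Conjecture 2
print("recursive construction and Conjecture 2 verified on a small window")
```

Exact rational arithmetic (`Fraction`) is used so that the divisions in the recursion introduce no rounding error.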
2304.03352
ImaGen: A General Framework for Generating Memory- and Power-Efficient Image Processing Accelerators
Image processing algorithms are prime targets for hardware acceleration as they are commonly used in resource- and power-limited applications. Today's image processing accelerator designs make rigid assumptions about the algorithm structures and/or on-chip memory resources. As a result, they either have narrow applicability or result in inefficient designs. This paper presents a compiler framework that automatically generates memory- and power-efficient image processing accelerators. We allow programmers to describe generic image processing algorithms (in a domain specific language) and specify on-chip memory structures available. Our framework then formulates a constrained optimization problem that minimizes on-chip memory usage while maintaining theoretical maximum throughput. The key challenge we address is to analytically express the throughput bottleneck, on-chip memory contention, to enable a lightweight compilation. FPGA prototyping and ASIC synthesis show that, compared to existing approaches, accelerators generated by our framework reduce the on-chip memory usage and/or power consumption by double digits.
Nisarg Ujjainkar, Jingwen Leng, Yuhao Zhu
2023-04-06T19:59:45Z
http://arxiv.org/abs/2304.03352v1
ImaGen: A General Framework for Generating Memory- and Power-Efficient Image Processing Accelerators ###### Abstract. Image processing algorithms are prime targets for hardware acceleration as they are commonly used in resource- and power-limited applications. Today's image processing accelerator designs make rigid assumptions about the algorithm structures and/or on-chip memory resources. As a result, they either have narrow applicability or result in inefficient designs. This paper presents a compiler framework that automatically generates memory- and power-efficient image processing accelerators. We allow programmers to describe generic image processing algorithms (in a domain specific language) and specify on-chip memory structures available. Our framework then formulates a constrained optimization problem that minimizes on-chip memory usage while maintaining theoretical maximum throughput. The key challenge we address is to analytically express the throughput bottleneck, on-chip memory contention, to enable a lightweight compilation. FPGA prototyping and ASIC synthesis show that, compared to existing approaches, accelerators generated by our framework reduce the on-chip memory usage and/or power consumption by double digits. ImaGen code is available at: [https://github.com/horizon-research/imagen](https://github.com/horizon-research/imagen). 2021 Accelerator, Line Buffer, Image Processing, Constrained Optimization, Synthesis, Compiler + Footnote †: c) 2022 Copyright held by the owner/author(s). Publication rights licensed to ACM. ACM ISBN 979-8-4007-0095-8/2306-515.00[https://doi.org/10.1145/3759731.3589076](https://doi.org/10.1145/3759731.3589076) ## 1. Introduction Image processing has become ever more important with a plethora of emerging visual computing domains such as Augmented/Virtual Reality, computational photography, and smart cameras.
These application domains all present stringent resource and power constraints, leading to many research efforts in building specialized accelerators for image processing (Beng et al., 2016; Wang et al., 2017; Wang et al., 2017; Wang et al., 2017; Wang et al., 2017; Wang et al., 2017). Manually building accelerators, however, is not only time-consuming and error-prone, but also relies heavily on empirical heuristics that do not always deliver optimal designs. A recent trend is automatically generating accelerators from high-level algorithm descriptions (Beng et al., 2016; Wang et al., 2017; Wang et al., 2017). Prior approaches to generating image processing accelerators either have narrow applicability or yield inefficient designs -- for two main reasons (Sec. 3). First, they optimize for simple, single-consumer algorithms where each producer stage has only one consumer. When facing multiple-consumer algorithms such as unsharp filtering (Wang et al., 2017) and denoising (Beng et al., 2016), they either have to artificially transform the multiple-consumer algorithm to a single-consumer arrangement, which increases the on-chip memory usage, or increase the total on-chip memory accesses, which increases the power consumption. Second, there is a large, algorithm-dependent trade-off space between on-chip memory requirement and power consumption that prior work fails to explore. This is because prior work assumes one single memory structure and, critically, uses the same memory structure for _all_ algorithms and for _all_ stages in an algorithm. For instance, FixyNN (Wang et al., 2017) could generate designs using only single-port SRAMs, and SODA (Beng et al., 2016) could generate designs using only FIFOs (dual-port SRAMs). The actual design space is much larger: given an algorithm with \(N\) stages and \(M\) memory structures, there are \(M^{N}\) design points, each providing a unique power-vs-area trade-off. This paper proposes a compiler framework that generates memory- and power-efficient accelerators (in the form of synthesizable RTL) for image processing (Sec. 4). Instead of artificially restricting the algorithm and/or on-chip memory structures, we allow specifying generic algorithms and memory configurations (in terms of size and number of ports). Given the algorithm and hardware specifications, our compiler formulates a constrained optimization problem that, while maintaining theoretically maximum throughput, minimizes the on-chip memory usage and reduces total power consumption. A key challenge we address is to generate accelerators that consistently deliver theoretically maximum throughput (frame rate) _for every frame_; after all, saving on-chip area and power consumption is of little use when an image processing accelerator has a low frame rate. The central difficulty is to analytically express the throughput bottleneck, i.e., on-chip memory contention, which
2306.15349
SSC-RS: Elevate LiDAR Semantic Scene Completion with Representation Separation and BEV Fusion
Semantic scene completion (SSC) jointly predicts the semantics and geometry of the entire 3D scene, which plays an essential role in 3D scene understanding for autonomous driving systems. SSC has achieved rapid progress with the help of semantic context in segmentation. However, how to effectively exploit the relationships between the semantic context in semantic segmentation and geometric structure in scene completion remains under exploration. In this paper, we propose to solve outdoor SSC from the perspective of representation separation and BEV fusion. Specifically, we present the network, named SSC-RS, which uses separate branches with deep supervision to explicitly disentangle the learning procedure of the semantic and geometric representations. And a BEV fusion network equipped with the proposed Adaptive Representation Fusion (ARF) module is presented to aggregate the multi-scale features effectively and efficiently. Due to the low computational burden and powerful representation ability, our model has good generality while running in real-time. Extensive experiments on SemanticKITTI demonstrate our SSC-RS achieves state-of-the-art performance.
Jianbiao Mei, Yu Yang, Mengmeng Wang, Tianxin Huang, Xuemeng Yang, Yong Liu
2023-06-27T10:02:45Z
http://arxiv.org/abs/2306.15349v1
# SSC-RS: Elevate LiDAR Semantic Scene Completion with Representation Separation and BEV Fusion ###### Abstract Semantic scene completion (SSC) jointly predicts the semantics and geometry of the entire 3D scene, which plays an essential role in 3D scene understanding for autonomous driving systems. SSC has achieved rapid progress with the help of semantic context in segmentation. However, how to effectively exploit the relationships between the semantic context in semantic segmentation and geometric structure in scene completion remains under exploration. In this paper, we propose to solve outdoor SSC from the perspective of representation separation and BEV fusion. Specifically, we present the network, named SSC-RS, which uses separate branches with deep supervision to explicitly disentangle the learning procedure of the semantic and geometric representations. And a BEV fusion network equipped with the proposed Adaptive Representation Fusion (ARF) module is presented to aggregate the multi-scale features effectively and efficiently. Due to the low computational burden and powerful representation ability, our model has good generality while running in real-time. Extensive experiments on SemanticKITTI demonstrate our SSC-RS achieves state-of-the-art performance. Code is available at [https://github.com/Jieqianyu/SSC-RS.git](https://github.com/Jieqianyu/SSC-RS.git). ## I Introduction In recent years, 3D scene understanding, one of the most important functions of perception systems in autonomous driving, has attracted extensive studies and achieved rapid progress. When working with large-scale outdoor scene understanding, Semantic Scene Completion (SSC) aims to predict the semantic occupancy of each voxel of the entire 3D scene from the sparse LiDAR scans, including the completion of certain regions. Due to the ability to recover geometric structure, SSC can facilitate further applications like 3D object detection, which usually suffer from the sparsity and incompleteness (caused by occlusions or far distance from sensors) of the LiDAR point cloud. However, it's challenging to precisely estimate the semantics and geometry of the whole 3D real-world scene from partial observations due to the complex outdoor scenarios such as various shapes/sizes and occlusions. Following the pioneering work SSCNet [1], some existing outdoor SSC methods [2, 3] exploit a single U-Net network, e.g., a heavy dense 3D convolution network to predict semantics and geometry jointly. However, they usually involve unnecessary calculations and extra memory and computation overhead, especially when the input voxel resolution is large since there are lots of empty voxels in the 3D scene. On the other way, some methods [4, 5, 6, 7] utilize the semantic information in the segmentation to assist outdoor SSC by combining the semantic completion network with the segmentation network. We found that most outdoor SSC methods consider the semantic context (semantic representation) and geometry structure (geometric representation) in a hybrid (Fig. 1 (a)) or semi-hybrid manner (Fig. 1 (b)). And how to effectively learn semantic/geometric representations and exploit their relationship remains unexplored. In this paper, we explore the solutions to the outdoor SSC from the perspective of representation separation and BEV (Bird's-Eye View) fusion (Fig. 1 (c)). 
We propose to explicitly disentangle the learning procedure of semantic context and geometric structure and fuse them in the BEV, which has been demonstrated to be successful in 3D object detection and segmentation [8, 9, 10, 11, 12, 13]. Our main insights are: (1) Semantic context and geometric structure complement each other and are vital for SSC tasks. Recovering the geometry details according to the semantics is easy, and the completed shapes help identify semantic categories. (2) Explicitly disentangling the representations can facilitate and accelerate the learning procedure of semantic context and geometric structure. (3) Compared to dense feature fusion in 3D space, BEV fusion is more convenient and efficient. Specifically, we design separate branches, i.e., semantic and completion branches, for semantic/geometric representations according to their intrinsic properties. We also develop a BEV fusion network to aggregate the two types of representations from the two branches. We use a sparse 3D CNN [14] to encode the multi-level semantic context and a tiny dense 3D CNN to obtain the multi-scale geometric structures.

Fig. 1: (a) Consider the semantic context and geometric structure in a hybrid manner. (b) Consider the semantic context and geometric structure in a semi-hybrid manner. (c) Disentangling the learning procedure of semantic context and geometric structure explicitly.

In addition, we apply deep supervision on both branches to facilitate representation learning. Furthermore, to obtain selective cues from semantic/geometric representations and fuse the semantic context and geometric details sufficiently, we propose an Adaptive Representation Fusion (ARF) module in the BEV fusion network. Due to the lower computational burden and more powerful representation ability, our model runs in real-time and has good generality. Experiments on the SemanticKITTI dataset (hidden test) show that our approach achieves state-of-the-art performance (ranking \(2^{nd}\) by mIoU and \(1^{st}\) in terms of the completion metric (IoU) on the public SSC benchmark1). Our contributions are summarized as follows:

Footnote 1: [https://codalab.lsln.upsaclay.fr/competitions/7170#results](https://codalab.lsln.upsaclay.fr/competitions/7170#results)

\(\bullet\) We develop the SSC-RS network to solve the outdoor SSC problem from the perspective of representation (semantic/geometric) separation and BEV fusion.

\(\bullet\) We design two separate branches for multi-level semantic context and multi-scale geometric structures according to their properties. And deep supervision is applied to facilitate the learning procedure.

\(\bullet\) We use a 2D CNN as the BEV fusion network to aggregate semantic/geometric representations. And an Adaptive Representation Fusion (ARF) module in the BEV fusion network is proposed to fuse the semantic context and geometric details sufficiently.

\(\bullet\) Due to the lightweight design and more powerful representation ability, SSC-RS has low latency and achieves state-of-the-art performance on the SemanticKITTI benchmark.

## II Related Work ### _Semantic scene completion_ Indoor SSC methods have developed rapidly with the emergence of indoor benchmarks such as SUNCG [1] and NYU [15]. Existing methods use different types of geometrical inputs, combined with corresponding network architectures, to complete indoor SSC. For example, [1, 16, 17] process the depth maps with 3D CNNs end-to-end. [18, 19, 20, 21, 22] take the RGB-D images with 2D-3D CNNs to explore the modality complementarity.
[6, 23, 24, 25] encode the truncated signed distance function (TSDF) representations with the volume network architectures. [26, 27, 28, 29, 30] process the point clouds with the point-based network to achieve semantic scene completion continuously. Since SemanticKITTI [31] introduces a large-scale outdoor benchmark for SSC tasks, several outdoor SSC methods have emerged. Following the pioneering work SSCNet [1, 2] exploits a single U-Net framework to process segmentation and completion simultaneously, resulting in extra computation overhead of empty voxels. Some methods [3, 32] use sparse convolutions or introduce 2D CNN to solve the above problem. For example, LMSCNet [3] appends a 3D decoder after the lightweight 2D backbone, and Zhang et al. [32] designs a sparse CNN with dense deconvolution layers. Some solutions focusing on multi-view fusion [6], and local implicit functions [30] are also explored. Besides, some methods [4, 5] exploit semantic segmentation to assist SSC. JS3C-Net [4] inserts a semantic segmentation network before SSC, and SSA-SC [5] injects the features from the segmentation branch into the completion branch hierarchically. In this work, we propose SSC-RS for large-scale outdoor SSC from the perspective of representation separation and BEV fusion. And different from [33], which designs a cascaded network to implement the complementary between scene completion and semantic segmentation for indoor SSC, our SSC-RS exploits the multi-scale context. And our parallel feature fusion in BEV is more convenient and efficient. ### _BEV perception in segmentation_ BEV perception indicates vision algorithms in the sense of the BEV view representation for autonomous driving [34], which has been explored for a variety of tasks such as LiDAR detection [11, 35], LiDAR segmentation [9, 10, 13], and sensor fusion [12, 36]. SalsaNet [8] projects point clouds into BEV feature maps, and PolarNet [9] proposes a polar BEV representation for semantic segmentation. Panoptic-PHNet [10] exploits BEV features to enhance the segmentation and perform instance grouping in BEV. Panoptic-Polarnet [13] uses a polar BEV representation to implement semantic segmentation and class-agnostic instance clustering. In SSC tasks, S3CNet [6] designs a 2D S3CNet to predict the 2D SSC Image in the BEV of the input point cloud. And SSA-SC [5] take the 2D CNN as the semantic completion network to simultaneously predict the semantics and geometry. Similar to SSA-SC, we also use the 2D CNN as the BEV fusion network to efficiently provide the semantic occupancy of the entire 3D scene. And different from SSA-SC, we explicitly disentangle the learning process of semantic context and geometric structure and take the BEV network as a fusion network. Also, we design an adaptive representation module for aggregating the judicious cues in BEV sufficiently. ## III Method ### _Overview_ In this paper, we explore the solutions to LiDAR semantic scene completion from the perspective of representation separation and BEV fusion. Specifically, we design two separate branches to encode semantic and geometric representations, respectively (Sec. III-B). Both branches are compact and lightweight. The semantic branch is a stack of 3D sparse convolutions for learning multi-scale semantic context. The completion branch uses several dense 3D convolutions to acquire multi-scale geometry structures from different stages. 
Based on the representation separation, the BEV fusion network equipped with the proposed ARF module is presented to aggregate informative multi-level features from the semantic/completion branches for the final semantic scene completion results (Sec. III-C). Fig. 2 illustrates the overall architecture of the proposed SSC-RS. ### _Semantic-completion Representation Separation_ As discussed above, semantic context and geometric structures are vital cues for semantic scene completion tasks. Thus, according to the inherent properties of these two types of cues, we design separate architectures for learning the semantic and geometric representation, respectively. **Semantic Representation:** To encode multi-scale semantic context and improve semantic accuracy, we introduce a compact semantic branch consisting of a voxelization layer and three sparse encoder blocks sharing a similar architecture, as shown in Fig. 3 (a). The voxelization layer takes the point cloud \(P\in\mathbb{R}^{N\times 3}\) in the range of \([R_{x},R_{y},R_{z}]\) as input and outputs sparse voxel features \(F_{V}\in\mathbb{R}^{M\times C}\) with a dense spatial resolution of \(L\times W\times H\). It discretizes a point \(p_{i}=(x_{i},y_{i},z_{i})\) to its voxel index \(V_{i}\) through: \[V_{i}=(\lfloor x_{i}/s\rfloor,\lfloor y_{i}/s\rfloor,\lfloor z_{i}/s\rfloor) \tag{1}\] where \(s\) is the voxelization resolution and \(\lfloor\cdot\rfloor\) is the floor function. Since an occupied voxel could contain multiple points, the voxel features \(f_{V_{m}}\) indexed by \(V_{m}\in\mathbb{Z}^{L\times W\times H}\) are aggregated by: \[f_{V_{m}}=\mathrm{R}_{f}\Big(\mathop{\mathrm{A}_{f}}_{V_{p}=V_{m}}\big(\mathrm{MLP}(f_{p})\big)\Big) \tag{2}\] where \(\mathrm{A}_{f}\) is the aggregation function (e.g., the max function), taken over all points \(p\) whose voxel index \(V_{p}\) equals \(V_{m}\), and \(\mathrm{R}_{f}\) denotes MLPs for dimension reduction. We concatenate the point coordinates, the distance offset from the center of the voxel where the point is located, and the reflection intensity as the point features \(f_{p}\). After the voxelization layer, the voxel features are fed into three cascaded sparse encoder blocks to obtain sparse semantic features (\(F_{s,1},F_{s,2},F_{s,3}\)). Each sparse encoder block consists of a residual block [37] with sparse convolutions and an SGFE module developed in [38]. The SGFE module exploits multi-scale sparse projections and attentive scale selection to enhance the voxel-wise features with more geometric guidance and downscales the features' dense resolution by a factor of 2. Also, similar to [38], we adopt multi-scale sparse supervision to facilitate the learning of semantic context, as shown in Fig. 2. Specifically, during the training stage, we attach lightweight MLPs as the auxiliary heads after each encoder block to get the semantic predictions of valid voxels. The voxelized semantic labels at different scales are generated from the SSC labels according to the occupancy grids. Note that point-wise semantic labels are unnecessary since we only apply voxel-wise supervision and compute the loss on the valid voxels to avoid unnecessary computation and memory usage. We use the lovasz loss [39] and the cross-entropy loss to optimize the semantic branch. The semantic loss \(L_{s}\) is the summation of the loss of each stage, which can be expressed as: \[L_{s}=\sum_{i=1}^{3}\left(L_{lovasz,i}+L_{ce,i}\right) \tag{3}\] Note that the auxiliary heads are removed at the inference stage for efficiency, and our semantic branch only contains 1.45 M parameters.
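To make the voxelization of Eqs. (1)-(2) concrete, here is a minimal NumPy sketch of the index computation and per-voxel max aggregation. It is an illustration under our own simplifications: the per-point MLP and the reduction MLPs \(\mathrm{R}_{f}\) are omitted, an element-wise max plays the role of \(\mathrm{A}_{f}\), and the variable names are ours.

```python
import numpy as np

def voxelize(points, point_feats, s=0.2):
    """Eq. (1): floor(p / s) gives each point's voxel index.
    Eq. (2): features of points that share a voxel index are max-aggregated."""
    idx = np.floor(points / s).astype(np.int64)
    voxels = {}
    for v, f in zip(map(tuple, idx), point_feats):
        voxels[v] = f if v not in voxels else np.maximum(voxels[v], f)
    return voxels  # sparse mapping: voxel index -> aggregated feature

# toy usage: 1000 random points in a 51.2m x 51.2m x 6.4m volume, 8-dim features
pts = np.random.rand(1000, 3) * np.array([51.2, 51.2, 6.4])
feats = np.random.rand(1000, 8).astype(np.float32)
sparse = voxelize(pts, feats)
print(f"{len(sparse)} occupied voxels in a 256 x 256 x 32 grid")
```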
**Geometric Representation:** The completion branch takes the occupancy voxels \(O_{V}\in\mathbb{R}^{1\times L\times W\times H}\) generated from the LiDAR point cloud, indicating whether voxels are occupied by laser measurements. It outputs multi-scale dense completion features (\(F_{c,1},F_{c,2},F_{c,3}\)) for more geometry details. Since the completion branch aims only to complete the semantic-agnostic scene, i.e., binary completion, we design a shallow architecture with dense 3D convolutions to obtain the geometry details of the scene. As shown in Fig. 3 (b), the completion branch consists of an input layer and three residual blocks.

Fig. 2: The overview of the proposed SSC-RS. Two branches (semantic/completion branches) are used to learn semantic and geometric representations separately. Both branches are supervised by multi-level auxiliary losses, which will be removed during inference. The multi-scale semantic representations from the semantic branch (blue, sparse 3D CNN) and geometric representations from the completion branch (red, dense 3D CNN) will be fused by the adaptive representation fusion (ARF) module in the BEV fusion network (purple). 'F' denotes the ARF module, and 'C' indicates concatenation along the channel dimension. V2B represents the projection from voxel to BEV.

The input layer is a dense 3D convolution with kernel size \(7\times 7\times 7\) for a large receptive field, and the residual block is a stack of dense 3D convolutions with kernel size \(3\times 3\times 3\). Also, max-pooling layers are applied before each residual block to downscale the size of the feature map by a factor of 2. Similar to the semantic branch, deep supervision is used to enhance the multi-scale geometric representation. To this end, we attach MLPs as auxiliary heads after each block to obtain the binary prediction indicating the occupancy of the completed scene. And the training loss \(L_{c}\) for this branch is computed by: \[L_{c}=\sum_{i=1}^{3}\left(L_{lovasz,i}+L_{bce,i}\right) \tag{4}\] where \(i\) denotes the \(i\)-th stage of the completion branch and \(L_{bce}\) indicates the binary cross-entropy loss. During inference, the auxiliary heads are removed. And due to the lightweight design (0.31 M parameters), the computational overhead of the completion branch is small (7.93G MACs with input shape \(256\times 256\times 32\)). ### _BEV Fusion Network_ Since using dense 3D convolutions to fuse dense 3D feature maps incurs significant memory overhead and greatly slows down the running speed, inspired by the success of BEV perception in 3D object detection and semantic segmentation, we develop a BEV fusion network to aggregate the multi-scale sparse semantic representations (\(F_{V},F_{s,1},F_{s,2},F_{s,3}\)) and dense geometric representations (\(O_{V},F_{c,1},F_{c,2},F_{c,3}\)) in the BEV. We first elaborate on the BEV projection of features from the semantic/completion branches. For sparse semantic features \(F_{s,*}\), we first generate the BEV indices from the voxel indices. Then, similar to the voxelization layer in the semantic branch, we use the aggregation function (max function) to aggregate the features with the same BEV index to get the sparse BEV features.
Finally, according to the BEV indices and the sparse BEV features, we generate the dense BEV features (\(F_{s,0}^{b}\in\mathbb{R}^{C_{0}\times H\times W},F_{s,1}^{b}\in\mathbb{R}^{C_{1}\times(H/2)\times(W/2)},F_{s,2}^{b}\in\mathbb{R}^{C_{2}\times(H/4)\times(W/4)},F_{s,3}^{b}\in\mathbb{R}^{C_{3}\times(H/8)\times(W/8)}\)). Compared to [5], which stacks the dense 3D semantic features along the z-axis for BEV features, the projection method we use is more efficient and requires less memory. For dense features \(F_{c,*}\) from the completion branch, we simply stack the dense 3D features along the z-axis and reduce the feature dimensions with 2D convolutions to obtain dense BEV features (\(F_{c,0}^{b},F_{c,1}^{b}\), \(F_{c,2}^{b}\), \(F_{c,3}^{b}\)), which keep the same dimensions as (\(F_{s,0}^{b},F_{s,1}^{b}\), \(F_{s,2}^{b}\), \(F_{s,3}^{b}\)). Similar to [5], our BEV fusion network is a U-Net architecture with 2D convolutions. The encoder consists of an input layer and four residual blocks. Each residual block reduces the resolution of the input features by a factor of 2 to keep the same resolution as the semantic/completion features. The concatenation of features \(F_{s,0}^{b}\) and \(F_{c,0}^{b}\) is first fed into the input layer and then into the first residual block. Before the next residual block, an Adaptive Representation Fusion (ARF, detailed below) module takes the previous stage's output and the semantic/geometric representations at the same scale as inputs and outputs the fused features containing informative semantic context and geometric structure. The decoder upscales the compressed features from the encoder three times, by a factor of two at a time, through skip connections. And the last convolution of the decoder outputs the SSC prediction \(Y\in\mathbb{R}^{((C_{n}+1)\times L)\times H\times W}\), where \(C_{n}\) is the number of semantic classes. The prediction \(Y\) is further reshaped to the size of (\((C_{n}+1)\times L\times H\times W\)), representing the semantic occupancy prediction of each voxel of the completed scene. Unlike the semantic/completion branches, we only apply supervision to the final prediction. Both the lovasz loss and the cross-entropy loss are used to compute the BEV loss \(L_{bev}\): \[L_{bev}=\left(L_{lovasz}+L_{ce}\right) \tag{5}\] **Adaptive Representation Fusion Module:** Directly concatenating representations from different sources (semantic/completion/BEV branches), as SSA-SC does, implies an equal preference for these representations. However, we usually need selective cues from different sources.

Fig. 4: Our designed adaptive representation fusion module. GAP denotes global average pooling.

Fig. 3: (a) The architecture of the semantic branch. 'SE' denotes the sparse encoder block. (b) The overview of the completion branch. The max-pooling layers are inserted before each residual block.

To better fuse the semantic and geometric representations, we design an adaptive representation fusion module for the BEV fusion network. Fig. 4 illustrates the detailed procedure of our ARF module. Let \(F_{prev},F_{sem},F_{com}\) represent features from the previous stage, features from the semantic branch, and features from the completion branch, respectively. We first compute channel attention for the features \(F_{prev}/F_{sem}/F_{com}\) to weight the feature channels adaptively. Then the weighted features are summed and passed into a \(1\times 1\) convolution to obtain the fused features \(F_{f}\).
The procedure is formulated as: \[\begin{split} F_{f}&=\phi\{\sigma[\text{MLP}(\text{AvgPool}(F_{prev}))]*F_{prev}\\ &\quad+\sigma[\text{MLP}(\text{AvgPool}(F_{sem}))]*F_{sem}\\ &\quad+\sigma[\text{MLP}(\text{AvgPool}(F_{com}))]*F_{com}\}\end{split} \tag{6}\] where \(\sigma\) denotes the _sigmoid_ function and \(\phi\) is the \(1\times 1\) convolution. ### _Multi-task learning_ We train the whole network end-to-end. The multi-task loss \(L_{total}\) is expressed as: \[L_{total}=3\cdot L_{bev}+L_{s}+L_{c} \tag{7}\] where \(L_{bev}\) is the BEV loss defined in Sec. III-C, and \(L_{s}\), \(L_{c}\) are the semantic loss and completion loss defined in Sec. III-B. ## IV Experiments In this section, we introduce the implementation details of the proposed SSC-RS and conduct extensive experiments on the large-scale outdoor dataset SemanticKITTI [31] to show that SSC-RS achieves state-of-the-art performance. Also, we provide visualizations and qualitative analysis to demonstrate the effectiveness of our model. Moreover, ablation studies on the semantic/geometric representation, the ARF module, and multi-scale supervision are given to validate the proposed components. ### _Datasets and Metrics_ **Datasets** SemanticKITTI [31] is based on the KITTI odometry dataset [40], which collects 22 LiDAR sequences with 20 classes in autonomous driving scenes using a Velodyne HDL-64 laser scanner. According to the official setting for semantic scene completion, sequences from 00 to 10, except 08 (3834 scans), are for training, sequence 08 (815 scans) is for validation, and the rest (3901 scans) is for testing. The voxelized ground-truth labels with resolution \(256\times 256\times 32\) of the train and validation sets are provided for the users. **Metrics** Following [1], we compute the Intersection-over-Union (IoU) for scene completion (ignoring semantics) and the mIoU of \(C_{n}=19\) classes (no "unlabeled" class) for semantic scene completion as the evaluation protocol. The mIoU is calculated by: \[mIoU=\frac{1}{C_{n}}\sum_{c=1}^{C_{n}}\frac{TP_{c}}{TP_{c}+FP_{c}+FN_{c}} \tag{8}\] where \(TP_{c}\), \(FP_{c}\), and \(FN_{c}\) denote the true positives, false positives, and false negatives for class \(c\). ### _Implementation Details_ According to the official protocols, the range \([R_{x},R_{y},R_{z}]\) of the input point cloud is set to \([0\sim 51.2m,-25.6\sim 25.6m,-2\sim 4.4m]\), the voxelization resolution \(s\) is \(0.2m\), and the spatial resolution is \((L=256,W=256,H=32)\). The input point cloud is augmented by random x-y flipping during the training procedure. And we use the Adam optimizer [41] with an initial learning rate of 0.001 (\(\beta_{1}=0.9,\beta_{2}=0.999\)) to train SSC-RS end-to-end. The model is trained for 40 epochs on a single NVIDIA 3090 with batch size 2. ### _Comparison with the state-of-the-art._ **Quantitative Results.** We compare with the state-of-the-art on the SemanticKITTI test set. We submit the results to the official test server to evaluate the performance of our proposed SSC-RS. Table I shows that our SSC-RS achieves the best performance on the completion metric IoU (59.7%) and ranks \(2^{nd}\) in terms of the semantic scene completion metric mIoU (24.2%). Our SSC-RS also has low latency and runs in real-time (16.7 fps). UDNet [2], which adopts a dense 3D CNN, has comparable performance on IoU to SSC-RS, while SSC-RS surpasses UDNet by 4.7% on mIoU and has lower latency.
And compared to the semantic segmentation-assisted method SSA-SC [5], SSC-RS obtains a 0.9% improvement on IoU and 0.7% on mIoU, which demonstrates the effectiveness of our proposed semantic/geometric representation separation. JS3C-Net [4] attaches semantic scene completion after segmentation. And our SSC-RS also outperforms JS3C-Net by 0.4% on mIoU and, notably, by 3.1% on IoU. We notice that our SSC-RS has lower performance on mIoU than S3CNet [6]. We explain that the local geometric loss in S3CNet helps a lot, especially on small objects such as persons, bicycles, and motorcycles, while we do not make a special design for that. Notably, SSC-RS performs better on IoU (by 14.1%) and runs \(\sim 14\times\) faster than S3CNet. **Qualitative Results.** We provide the visualizations on the SemanticKITTI validation set as illustrated in Fig. 5. We also visualize the results of SSA-SC [5] and JS3C-Net [4] for comparison. From Fig. 5, we can see that our SSC-RS predicts more accurate SSC results, especially on "plane" classes and large objects such as cars, consistent with the results in Table I. We also notice that our SSC-RS fails to complete some hard samples, such as small objects, which are also difficult for most methods. We believe that some special designs for local geometry, such as the geometry loss in S3CNet [6], help solve the problem, which will also be our future work. **Effect of the semantic representation.** The semantic branch improves performance (line 1 vs. line 4 of Table II). Due to the sparse design, the inference speed is still fast (16.7 fps as shown in Table I) when equipped with the semantic branch. And compared with the baseline model with the BEV network only, the semantic representation and ARF module improve the performance by 1.9% on mIoU and 1.2% on IoU (line 2 vs. line 3). It shows that the semantic representation plays a vital role in SSC tasks. **Effect of the geometric representation.** We further show the effect of the geometric representation. Line 3 of Table II provides the results without the completion branch. The completion branch brings a 0.3% improvement on IoU and 0.9% gains on mIoU (line 1 vs. line 3) with only 0.31 M parameters. And comparing line 4 with line 2, when fusing with the geometric representation using the ARF module, the performance is boosted by 1% on IoU, demonstrating its effectiveness in improving the accuracy of scene completion. **Ablation study on the ARF module.** We remove the ARF module and directly concatenate the features from different sources as the fused features to show the effectiveness of our ARF module. As shown in Table II, the ARF module (line 1 vs. line 5) boosts the performance by 0.4% on mIoU and 0.2% on IoU, which demonstrates that the ARF module can select judicious cues from different sources and fuse the representations effectively. **Multi-scale supervision (deep supervision).** Finally, we demonstrate that the multi-scale supervision (MSS) on both branches is vital to our SSC-RS. The last line of Table II shows the detailed results. Without multi-scale supervision, the performance drops a lot (2.1% on mIoU, 0.5% on IoU). It shows that MSS can effectively facilitate the learning procedure of representation separation, which is important for SSC tasks. ## V Conclusion In this paper, we develop SSC-RS to solve outdoor large-scale semantic scene completion from the perspective of representation separation and BEV fusion.
Two separate branches with deep supervision are devised to disentangle the learning procedure of the semantic/geometric representations explicitly, and a BEV fusion network is designed to fuse the multi-level features effectively and efficiently. Furthermore, an adaptive representation fusion module in the BEV fusion network is proposed to facilitate the fusion procedure. Extensive experiments demonstrate that our SSC-RS achieves state-of-the-art performance and runs in real time. We hope our work can provide a new perspective for the SSC community. In the future, we will focus on local geometry learning to improve the performance on small objects and extend the work to more scenarios, such as indoor and monocular scenes. ## Acknowledgment This work was supported by a grant from the National Natural Science Foundation of China (No. U21A20484).
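For readers who want a concrete picture of the ARF fusion step of Sec. III-C (Eq. (6)), the following PyTorch-style sketch shows one possible implementation. It is written under our own assumptions (channel count, reduction ratio, and 1x1 convolutions standing in for the MLP of Eq. (6)); it is not the authors' released code.

```python
import torch
import torch.nn as nn

class ARF(nn.Module):
    """Sketch of Eq. (6): per-source channel attention (GAP -> MLP -> sigmoid),
    weighted sum of the three BEV feature maps, then a 1x1 convolution (phi)."""
    def __init__(self, channels, reduction=4):
        super().__init__()
        def gate():
            return nn.Sequential(
                nn.AdaptiveAvgPool2d(1),                        # GAP over H x W
                nn.Conv2d(channels, channels // reduction, 1),
                nn.ReLU(inplace=True),
                nn.Conv2d(channels // reduction, channels, 1),
                nn.Sigmoid(),
            )
        self.gate_prev, self.gate_sem, self.gate_com = gate(), gate(), gate()
        self.fuse = nn.Conv2d(channels, channels, kernel_size=1)

    def forward(self, f_prev, f_sem, f_com):
        fused = (self.gate_prev(f_prev) * f_prev
                 + self.gate_sem(f_sem) * f_sem
                 + self.gate_com(f_com) * f_com)
        return self.fuse(fused)

# usage: three BEV maps of matching shape (B, C, H, W)
maps = [torch.randn(1, 64, 128, 128) for _ in range(3)]
print(ARF(64)(*maps).shape)   # torch.Size([1, 64, 128, 128])
```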
2305.14507
Deduction under Perturbed Evidence: Probing Student Simulation Capabilities of Large Language Models
We explore whether Large Language Models (LLMs) are capable of logical reasoning with distorted facts, which we call Deduction under Perturbed Evidence (DUPE). DUPE presents a unique challenge to LLMs since they typically rely on their parameters, which encode mostly accurate information, to reason and make inferences. However, in DUPE, LLMs must reason over manipulated or falsified evidence present in their prompts, which can result in false conclusions that are valid only under the manipulated evidence. Our goal with DUPE is to determine whether LLMs can arrive at these false conclusions and identify whether the dominant factor influencing the deduction process is the encoded data in the parameters or the manipulated evidence in the prompts. To evaluate the DUPE capabilities of LLMs, we create a DUPEd version of the StrategyQA dataset, where facts are manipulated to reverse the answer to the question. Our findings show that even the most advanced GPT models struggle to reason on manipulated facts - showcasing poor DUPE skills - with accuracy dropping by 45% compared to the original dataset. We also investigate prompt settings inspired from student simulation models, which mitigate the accuracy drop to some extent. Our findings have practical implications for understanding the performance of LLMs in real-world applications such as student simulation models that involve reasoning over inaccurate information.
Shashank Sonkar, Richard G. Baraniuk
2023-05-23T20:26:03Z
http://arxiv.org/abs/2305.14507v1
# Deduction under Perturbed Evidence: ###### Abstract We explore whether Large Language Models (LLMs ) are capable of logical reasoning with distorted facts, which we call Deduction under Perturbed Evidence (DUPE). DUPE presents a unique challenge to LLMs since they typically rely on their parameters, which encode mostly accurate information, to reason and make inferences. However, in DUPE, LLMs must reason over manipulated or falsified evidence present in their prompts, which can result in false conclusions that are valid only under the manipulated evidence. Our goal with DUPE is to determine whether LLMs can arrive at these false conclusions and identify whether the dominant factor influencing the deduction process is the encoded data in the parameters or the manipulated evidence in the prompts. To evaluate the DUPE capabilities of LLMs, we create a DUPEd version of the StrategyQA dataset, where facts are manipulated to reverse the answer to the question. Our findings show that even the most advanced GPT models struggle to reason on manipulated facts - showcasing poor DUPE skills - with accuracy dropping by 45\(\%\) compared to the original dataset. We also investigate prompt settings inspired from student simulation models, which mitigate the accuracy drop to some extent. Our findings have practical implications for understanding the performance of LLMs in real-world applications such as student simulation models that involve reasoning over inaccurate information. ## 1 Introduction Over the last several years, Transformer models have played a significant role in shaping the field of Natural Language Processing (NLP) Vaswani et al. (2017); Devlin et al. (2018); Liu et al. (2019); Brown et al. (2020); Ouyang et al. (2022); OpenAI (2023). Their exceptional ability to reason across a broad range of NLP tasks Shi et al. (2022); Zhou et al. (2022); Bubeck et al. (2023) has been a key factor contributing to their success. The success of LLMs on challenging datasets like HellaSwag Zellers et al. (2019), AI2 Reasoning Challenge (ARC) Clark et al. (2018), WinoGrande Sakaguchi et al. (2021), and GSM-8K Cobbe et al. (2021) is a testament to their advanced reasoning skills and their potential to address challenging NLP tasks. In this paper, we investigate the reasoning abilities of LLMs models under a novel paradigm we dub Deduction under Perturbed Evidence (DUPE for short). By testing LLMs' capacity to reason with flawed or perturbed evidence, we aim to determine whether LLMs can generate logically sound yet erroneous conclusions when presented with misleading information. Strong DUPE skills are critical in NLP applications like student simulations Piech et al. (2015); Liu et al. (2022), where models simulate student responses to understand how they may respond in certain scenarios. As student responses often contain inaccuracies and misconceptions, it is important for a model to analyze and utilize these inaccuracies and misconceptions as evidence to arrive at the same conclusion as the student. For instance, a student may have the misconception that the heavier an object is, the faster it falls, leading them to conclude that a bowling ball will fall faster than a ball bearing. If we provide LLMs with evidence that a heavier object falls faster, would LLMs also arrive at the conclusion that a bowling ball will fall faster than a ball bearing? We introduce DUPE as our approach to investigate this question. 
**Contributions:** This paper develops a novel reasoning paradigm - Deduction under Perturbed Evidence (DUPE) - to examine whether LLMs arrive at different conclusions when presented with distorted initial facts. To test the DUPE capabilities of LLMs, we create a DUPEd version of the StrategyQA dataset (Figures 1, 2). StrategyQA Geva et al. (2021) is an open-domain QA dataset that is characterized by its explicit provision of the necessary facts required to answer each _yes-no_ question. In the DUPEd version of the dataset, we manipulate the facts provided in a way that results in a different answer to the original question. Our findings reveal that state-of-the-art LLMs, including GPT3.5 and GPT4, struggle significantly on the newly introduced DUPEd-StrategyQA dataset. The accuracy of these models dropped drastically by approximately \(45\%\), falling from an impressive \(91.9\%\) on the original dataset to only \(46.7\%\) on the DUPEd-StrategyQA dataset. In addition, we conduct an ablation study on the DUPEd-StrategyQA dataset by categorizing it into two distinct parts based on the type of manipulation used - one involving language perturbations and the other involving mathematical manipulations. Furthermore, our results demonstrate that the accuracy drop can be mitigated by using prompt settings inspired by student simulation models. This approach reduced the accuracy drop to \(29\%\), with the models achieving an accuracy of \(62.7\%\) on the DUPEd-StrategyQA dataset. Our findings carry crucial implications for practical LLM applications, particularly in the realm of student simulation models that demand reasoning over erroneous information. ## 2 Methodology, Dataset, and Prompting In this section, we overview the DUPE reasoning framework, provide details on the DUPEd version of AllenAI's StrategyQA dataset, and then explore customized prompt settings designed to assess the DUPE skills of LLMs. ### DUPE Given a _true-false_ question \(q\), the correct response \(r_{q}\in\{true,false\}\), and facts \(F_{q}\) that determine the truth or falsehood of \(q\) (i.e., \(r_{q}\)), we change \(F_{q}\) to \(F_{q}^{\prime}\) s.t. the correct response to \(q\) flips to \(\neg r_{q}\) under the altered facts \(F_{q}^{\prime}\), \[\begin{split}\text{DUPE}\big{(}(q,F_{q},r_{q})\big{)}=(q,F_{q}^{ \prime},r_{q}^{\prime})\\ \text{s.t.}\ r_{q}^{\prime}=\neg r_{q},\ \mathrm{edit}_{\mathrm{ dist}}(F_{q},F_{q}^{\prime})<\tau,\end{split} \tag{1}\] where \(\mathrm{edit}_{\mathrm{dist}}\) ensures that the edit distance between the fact strings \(F_{q}\) and \(F_{q}^{\prime}\) is less than a threshold \(\tau\). The threshold \(\tau\) is generally set to two to three words to ensure minimal changes to the underlying facts (examples in Figure 2). The new DUPEd tuple \((q,F_{q}^{\prime},r_{q}^{\prime})\) can be used to probe the DUPE capabilities of LLMs as shown in Figure 1. ### DUPEd-StrategyQA We use AllenAI's StrategyQA dataset Geva et al. (2021) to assess the DUPE skills of LLMs. The StrategyQA dataset provides explicit facts for answering open-domain questions. We create a DUPEd version of the StrategyQA dataset composed of a total of 325 examples, of which 173 introduce natural language perturbations, while the remainder introduce mathematical errors (refer to examples in Figure 2).
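To make the perturbation constraint in Equation (1) concrete, the following minimal sketch shows how a candidate fact rewrite could be validated and packaged into a DUPEd tuple. This is our own illustration rather than the authors' released code; the function names and the word-level edit-distance computation are assumptions.

```python
# Hypothetical sketch of the DUPE constraint in Eq. (1): a perturbed fact string
# F_q' is accepted only if it stays within a small word-level edit distance of
# the original facts F_q, in which case the gold answer is flipped.
def word_edit_distance(a: str, b: str) -> int:
    """Levenshtein distance computed over words rather than characters."""
    a, b = a.split(), b.split()
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i in range(len(a) + 1):
        dp[i][0] = i
    for j in range(len(b) + 1):
        dp[0][j] = j
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1, dp[i][j - 1] + 1, dp[i - 1][j - 1] + cost)
    return dp[-1][-1]


def make_duped_example(question: str, facts: str, answer: bool,
                       perturbed_facts: str, tau: int = 3):
    """Return the DUPEd tuple (q, F_q', not r_q) if the perturbation is minimal."""
    if word_edit_distance(facts, perturbed_facts) >= tau:
        raise ValueError("perturbation exceeds the edit-distance threshold tau")
    return question, perturbed_facts, (not answer)
```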
While designing the DUPEd version, we were careful to modify the facts in the most minimal way possible. As a result, we made a conscious effort to only alter one or two words in the original facts whenever possible, in order to preserve the overall meaning and context of the original question. Additionally, we refrained from using explicit negation, Figure 1: Setup of the Deduction under Perturbed Evidence (DUPE) reasoning framework. On the left is a question-fact pair in the StrategyQA dataset. To test the DUPE skills of a model, we change the facts provided with each question such that the response to the question flips. On the right is a prompting setup to probe the DUPE skills of LLMs. We use a custom prompt tailored to the student simulation setting that takes in the input question, perturbed (DUPEd) facts, and requests a _yes/no_ response from LLMs. Perturbed facts represent a realistic student simulation setting since they mirror the inaccurate nature/misconceptions of students' responses. such as the word _not_, to modify the facts, since our intent is not to evaluate the reasoning proficiency of LLMs in handling negation. ### Student Simulation and Prompt Design DUPE is highly relevant to _student simulation models_ Piech et al. (2015); Sonkar et al. (2020); Liu et al. (2022), which are widely used in education and cognitive psychology research. These models help in predicting and understanding student responses to various tasks, and thus their ability to reason over false information is critical to their success. Given this strong connection between simulation models and DUPE, these models can inspire innovative approaches to prompt design, which can be used to probe the DUPE skills of LLMs Zhou et al. (2022); Bommarito II and Katz (2022). An example of such a prompt is illustrated in Figure 1 and Section 3. **DUPE and Counterfactual Reasoning:** Counterfactual reasoning and student simulation models require different types of reasoning. In counterfactual reasoning, the focus is on exploring hypothetical scenarios that may or may not correspond to actual reality. The fact that the information being considered is hypothetical or counterfactual is usually known beforehand. In contrast, a student simulation model needs to reason about both true and false information, and may not know beforehand whether the information being considered is true or false. For example, in Figure 2, the model lacks prior knowledge about which facts are true and which ones are perturbed. The model must identify incorrect answers from the student to make inferences about future questions, which requires robust and nuanced reasoning capabilities beyond those needed for counterfactual reasoning. ## 3 Experiments We evaluate the DUPE capabilities of the two largest GPT models - GPT3.5 (version gpt-3.5-turbo-0301) and the latest GPT4 model (version gpt-4-0314) - via experiments under two different prompt settings, P1) "You are a question answering model. Your task is reason on provided evidence Figure 2: Six examples from our DUPEd-StrategyQA dataset. We flip the answer to a _yes-no_ question by altering the facts provided with each question. The first three questions at the top are examples of natural language perturbations, while the bottom three questions involve manipulating numerical digits. The DUPEd version was designed with minimal modifications to the facts, usually involving only one to two word changes in the original facts. Additionally, we refrained from using explicit negation words like _not_.
to answer a YES or NO question", and P2) "You are a student simulation model. Your task is reason on student's responses to accurately measure the student's current knowledge state and predict the student's response to a YES or NO question based on the student's current knowledge state" from Section 2.3. An example is illustrated in Figure 1. ### Main Results In the prompt setting P1, both GPT3.5 and GPT4 performed poorly on the DUPEd version of the dataset, with decreases in accuracy of \(46.0\%\) and \(45.2\%\), respectively. As expected, the latest GPT4 model demonstrates superior performance to GPT3.5 on both the original and the DUPEd StrategyQA dataset. #### 3.1.1 Student Simulation Prompt Prompt P2, inspired by the student simulation setting, informs/primes the models that the provided evidence may be incorrect, since the evidence reflects the erroneous nature of students' responses. We found that prompt setting P2 performs significantly better than P1 by a margin of \(16.0\%\) for the GPT4 model. However, there was still a significant \(29.2\%\) drop in accuracy compared to GPT4's performance on the original dataset. #### 3.1.2 Language vs. Math Perturbations While curating the DUPEd-StrategyQA dataset, we divided the perturbations introduced into two distinct categories - one that involved language perturbations, while the other manipulated mathematical information (see Figure 2). Our findings suggest that both GPT models are more resilient to math perturbations compared to language perturbations. For example, for GPT3.5 the accuracy drops were \(58.7\%\) and \(32.4\%\) for language and math perturbations, respectively, while for GPT4 the accuracy drops were \(50.3\%\) and \(39.4\%\). ### Root Cause of Poor DUPE Skills To explain the GPT models' poor performance on the DUPEd dataset, we need to identify the main factor influencing their reasoning process, i.e., whether it is the encoded information in parameters or the manipulated evidence in prompts. Recent studies have shed light on this issue, suggesting that factual information encoded in the parameters of LLMs plays a dominant role in governing the generated output. For instance, the feed-forward layers in transformer models function as key-value memories, which implies that they encode factual information, as noted by Geva et al. (2020). Moreover, Meng et al. (2022) demonstrated that localized computations, such as Rank-One Model Editing (ROME), can modify these factual associations, leading to alternative conclusions. These findings suggest that the encoded information in parameters has a significant impact on LLMs' reasoning process; further investigation is left for future work. ## 4 Conclusions In this paper, we have introduced a new reasoning paradigm we call Deduction under Perturbed Evidence (DUPE for short). Through DUPE, we have assessed the ability of LLMs to arrive at logically sound yet erroneous conclusions when faced with distorted initial facts. Our study, which used a carefully curated dataset to evaluate DUPE abilities, has revealed that even the most advanced GPT models struggle with logical reasoning in the presence of falsified information. Moving forward, we plan to investigate the performance of different LLMs with our dataset in varied prompt settings.
\begin{table} \begin{tabular}{|c|c|c|c|c|c|} \hline **Dataset** & **Model** & **Prompt** & **Accuracy (Overall)** & **Accuracy (NLP)** & **Accuracy (Math)** \\ \hline StrategyQA & GPT3.5 & P1 & 84.6 & 94.1 & 74.4 \\ \hline DUPEd-StrategyQA & GPT3.5 & P1 & 38.6 (46.0\(\downarrow\)) & 35.4 (58.7\(\downarrow\)) & 42.0 (32.4\(\downarrow\)) \\ \hline StrategyQA & GPT4 & P1 & 91.9 & 94.1 & 89.4 \\ \hline DUPEd-StrategyQA & GPT4 & P1 & 46.7 (45.2\(\downarrow\)) & 43.8 (50.3\(\downarrow\)) & 50.0 (39.4\(\downarrow\)) \\ \hline DUPEd-StrategyQA & GPT4 & P2 & 62.7 (29.2\(\downarrow\)) & 63.1 (31.0\(\downarrow\)) & 62.2 (27.2\(\downarrow\)) \\ \hline \end{tabular} \end{table} Table 1: We evaluate the DUPE capabilities of the two largest GPT models under two different prompt settings using the DUPEd-StrategyQA dataset. Prompt P1 asks GPT models to answer a question based on provided evidence. Under Prompt P1 setting, both GPT3.5 and GPT4 perform poorly on DUPEd version of the dataset with around \(45\%\) accuracy drop. We also find that both models are more robust to mathematical perturbation compared to natural language perturbations. Prompt P2 is inspired from student simulation settings. P2 primes the models that evidence provided may be incorrect. We find that prompt P2 achieves better accuracy than Prompt P1 by \(16.0\) points for GPT4, but we still see a substantial \(29.2\%\) drop in accuracy compared to GPT4’s accuracy on original dataset. ## 5 Limitations Due to limitations in both financial and computational resources, we had to limit our testing to only the most advanced LLMs - the GPT models. Consequently, we directed our attention towards developing a dataset for evaluating proposed reasoning scenarios. As a result of these limitations, we chose to focus specifically on the evaluation of the two largest models offered by OpenAI. While we recognize that other LLMs may produce different outcomes, we believe that our dataset could serve as a valuable resource for further research into the capabilities and limitations of LLMs.
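For reference, the following sketch shows one way the two prompt settings P1 and P2 could be issued to the evaluated models. This is our own illustration rather than the authors' querying code; it assumes the pre-1.0 `openai` Python client, and the prompt strings and model version names are copied from Section 3.

```python
import openai  # assumes the pre-1.0 OpenAI Python client; set openai.api_key before use

P1 = ("You are a question answering model. Your task is reason on provided "
      "evidence to answer a YES or NO question")
P2 = ("You are a student simulation model. Your task is reason on student's responses "
      "to accurately measure the student's current knowledge state and predict the "
      "student's response to a YES or NO question based on the student's current "
      "knowledge state")


def ask(question: str, perturbed_facts: str, system_prompt: str = P2,
        model: str = "gpt-4-0314") -> str:
    """Query a GPT model with DUPEd facts and return its yes/no answer text."""
    response = openai.ChatCompletion.create(
        model=model,
        temperature=0,
        messages=[
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": f"Facts: {perturbed_facts}\nQuestion: {question}"},
        ],
    )
    return response["choices"][0]["message"]["content"]
```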
2302.01813
Leveraging weak complementary labels to improve semantic segmentation of hepatocellular carcinoma and cholangiocarcinoma in H&E-stained slides
In this paper, we present a deep learning segmentation approach to classify and quantify the two most prevalent primary liver cancers - hepatocellular carcinoma and intrahepatic cholangiocarcinoma - from hematoxylin and eosin (H&E) stained whole slide images. While semantic segmentation of medical images typically requires costly pixel-level annotations by domain experts, there often exists additional information which is routinely obtained in clinical diagnostics but rarely utilized for model training. We propose to leverage such weak information from patient diagnoses by deriving complementary labels that indicate to which class a sample cannot belong to. To integrate these labels, we formulate a complementary loss for segmentation. Motivated by the medical application, we demonstrate for general segmentation tasks that including additional patches with solely weak complementary labels during model training can significantly improve the predictive performance and robustness of a model. On the task of diagnostic differentiation between hepatocellular carcinoma and intrahepatic cholangiocarcinoma, we achieve a balanced accuracy of 0.91 (CI 95%: 0.86 - 0.95) at case level for 165 hold-out patients. Furthermore, we also show that leveraging complementary labels improves the robustness of segmentation and increases performance at case level.
Miriam Hägele, Johannes Eschrich, Lukas Ruff, Maximilian Alber, Simon Schallenberg, Adrien Guillot, Christoph Roderburg, Frank Tacke, Frederick Klauschen
2023-02-03T15:35:54Z
http://arxiv.org/abs/2302.01813v1
Leveraging weak complementary labels to improve semantic segmentation of hepatocellular carcinoma and cholangiocarcinoma in H&E-stained slides ###### Abstract In this paper, we present a deep learning segmentation approach to classify and quantify the two most prevalent primary liver cancers - hepatocellular carcinoma and intrahepatic cholangiocarcinoma - from hematoxylin and eosin (H&E) stained whole slide images. While semantic segmentation of medical images typically requires costly pixel-level annotations by domain experts, there often exists additional information which is routinely obtained in clinical diagnostics but rarely utilized for model training. We propose to leverage such weak information from patient diagnoses by deriving complementary labels that indicate to which class a sample _cannot_ belong. To integrate these labels, we formulate a complementary loss for segmentation. Motivated by the medical application, we demonstrate for general segmentation tasks that including additional patches with solely weak complementary labels during model training can significantly improve the predictive performance and robustness of a model. On the task of diagnostic differentiation between hepatocellular carcinoma and intrahepatic cholangiocarcinoma, we achieve a balanced accuracy of 0.91 (CI 95%: \(0.86-0.95\)) at case level for 165 hold-out patients. Furthermore, we also show that leveraging complementary labels improves the robustness of segmentation and increases performance at case level. ## 1 Introduction Primary liver cancer is one of the most frequently diagnosed cancers worldwide. Among primary liver cancers, hepatocellular carcinoma (HCC) and intrahepatic cholangiocarcinoma (CCA) are the most frequent types, accounting for roughly 70% and 15% of cases, respectively. Together they are among the most frequent cancers worldwide [21]. The diagnostic distinction between these entities has unique implications for prognosis and medical treatment. For example, certain treatment options that are regularly used in the treatment of CCA are to date ineffective and even potentially harmful for HCC, and vice versa. The most critical part of the classification of liver cancer is its histopathological evaluation, which forms the basis for medical treatment decisions. At the same time, the histopathological classification of HCC and CCA can be challenging in some cases, even for experienced gastrointestinal pathologists [1; 10]. While reliable case predictions on histological hematoxylin and eosin (H&E) stained slides can already add value for the routine pathological workflow, for example by quickly reaching a decision on the necessity of ancillary tests, like immunohistochemistry, and thus saving time and resources, there is much more information hidden in the tissue composition. We hypothesize that part of this information can be accessed by using a semantic segmentation approach. In particular, the size of an area covered by a specific morphological tissue type might be correlated with relevant clinical parameters. Conventional parameters such as tumor diameter are already well-established prognostic factors for survival [28]. Additionally, treatment response, for instance to immunotherapy, might depend on factors such as the area covered by certain immune cells and their distance to corresponding tumor cells, which can be assessed with a segmentation approach.
Furthermore, segmentation approaches have the advantage of inherently providing the possibility for practitioners to verify class predictions via segmentation maps. Semantic segmentation in medical imaging [2] relies on pixel-wise annotations by domain experts. As labeling efforts can be extremely costly and time-consuming, annotations are often sparse and only capture a relatively small fraction of the available, usually heterogeneous data. Especially in digital pathology, where samples are in the range of gigapixels, often large parts of the data get neglected and only a very limited number of pixels are used for model training. However, there often exists additional information about the patient's whole slide images (WSIs) that is mostly ignored for training segmentation models. For example, for cases in the training set clinical diagnoses are already available from routine diagnostics and do not require further manual labeling efforts. Therefore we can incorporate this additional information at case level during model training. For the task of tumor segmentation, we cannot directly use the patient's diagnosis as a weak label [20] due to the presence of normal tissue components even on tumorous slides. Due to the mutual exclusivity of the diagnoses, we can however assign the opposite diagnosis as a complementary label, i.e. stating which class a case and therefore its corresponding patches do not belong to. This way we can derive complementary labels at pixel level for all cases in our training set, independent of manual labeling efforts. Our contribution in this work is two-fold: First, we propose a segmentation approach to classify and quantify hepatocellular carcinoma and intrahepatic cholangiocarcinoma in H&E-stained whole slide images. In contrast to classification, this approach provides more informative insights into the tissue composition such as the localization and quantification of the tumor. Additionally, the corresponding segmentation maps allow visual verification of class predictions by pathologists. Second, we extend the segmentation approach by formulating a loss function that enables us to leverage weak complementary labels derived from patients' diagnoses. While our motivation is derived from the medical use case, our contribution regarding the utilization of complementary labels for segmentation tasks is general. We demonstrate that if only a limited number of annotated samples is available, segmentation performance can be improved by a large margin via leveraging weak complementary labels on additional, not manually annotated patches. Such complementary labels at patch level are often available without further manual expenditure or at least require less skill and time. We extend this analysis for scenarios where one class is not available as complementary label. Finally, we demonstrate these benefits of leveraging complementary labels on our medical use case for semantic segmentation of hepatocellular carcinoma and cholangiocarcinoma in H&E-stained slides. ## 2 Related work Regarding segmentation, the U-Net [25] has become the de facto standard neural network architecture in medical imaging, including its successful application to tissue segmentation in histopathology (e.g. [7; 27; 8]). Considering the extensive annotation efforts typically required for semantic segmentation in medical imaging, there have been several proposals in the literature to reduce this manual burden for domain experts.
For example, [7] use co-registered immunohistochemical (IHC) stained WSIs to extract segmentation masks. Others suggest to generate additional synthetic training patches, for example via using generative adversarial networks (GANs) [6; 11; 19]. Another approach is to make use of complementary labels, indicating which class a sample does _not_ belong to, which are often easier to obtain than ordinary labels [13; 14; 29]. The use of complementary labels has recently been studied in the context of classification tasks. In particular, [13] investigate learning solely from complementary labels. They have extended their initial approach, which had some restrictions concerning the loss function (in particular its requirement to satisfy a symmetry condition), to arbitrary models and losses [14]. In their approach they assume that each complementary label has the same probability of being selected given the ground truth label, that is each complementary class is sampled with a probability of \(1/(k-1)\) with \(k\) being the number of classes. In practice, however, complementary labels might not always be distributed uniformly across classes, for example due to selection biases of the annotators. For this reason, [29] propose a loss function that allows the probability of the complementary labels to be skewed. To the best of our knowledge, there is only one work integrating complementary labels into semantic segmentation [24]. Whereas the focus of [24] is to explore recurrent generative adversarial models in the context of medical image segmentation, the authors additionally probe the integration of the inverse of the ground truth map as complementary labels in order to mitigate the commonly encountered class imbalance between foreground and background pixels in medical image analyses. In contrast, we aim to derive complementary labels also for pixels for which we do not have manual annotations which enables us to explore additional cases during model training. Regarding machine learning-based whole slide image analyses of liver disorders, there only exist few prior works. Whereas most focus either on the classification between benign and malignant tissue in HCC (e.g. [9; 4; 12]) or on the classification of histological grades in HCC [3], we only found one study aiming to differentiate between HCC and CCA. In particular, [15] develop a deep learning-based assistant to help pathologists differentiate between these two most common liver cancer types on H&E-stained whole slide images. Their approach is based on the classification of patches extracted from tumor regions which were previously annotated by a pathologist. The slide-level accuracy is subsequently reported as the mean of the patch-level probabilities. Despite its great potential for medical imaging the full capability of biased complementary labels for semantic segmentation has not yet been demonstrated. With this work, we aim to fill this gap. ## 3 Methods In this section, we formulate a loss function for incorporating complementary labels into semantic segmentation. The proposed loss function extends the idea of biased complementary labels for classification [29] to segmentation tasks. Biased complementary labels thereby refer to labels which state an incorrect class for a pixel and where the complementary labels are not distributed uniformly across the ground truth classes. The general idea of the loss is to maximize the probability of the possible ground truth classes (i.e. 
all classes minus Figure 1: Mitigating the bottleneck of manual annotations by incorporating the diagnoses of patients as complementary label into the training workflow of tumor segmentation models on H&E-stained whole slide images. In addition to the sparse expert annotations, complementary labels from additional whole slide images are derived from the patients diagnosis. These are incorporated into model training via a composite loss, consisting of the supervised cross-entropy part \(\mathcal{L}_{s}\) and the complementary part \(\mathcal{L}_{compl}\). For validation the segmentation models are evaluated at both the level of expert annotations as well as on measures derived from segmentation maps on unannotated data. In particular, we compute the classification performance (CCA vs. HCC) at case level based on predictions per patient which are derived from the dominance of predicted pixels. Furthermore we use the area of the complementary class as additional indication of segmentation quality. the complementary class) weighted by the estimated probabilities of how likely the classes are given a particular complementary label. These probabilities need to either be estimated from a small annotated dataset or can in our case be derived from the distribution of diagnoses (for more details cf. Section 4.3). The weighted sum of the probabilities is again a probability distribution, this time across all possible ground truth classes, and can thus be optimized with standard loss functions such as the cross-entropy loss. To formulate the complementary loss function formally, let \(\{x_{n},\bar{y}_{n}\}_{n=1}^{N}\) be the set of image patches \(x_{n}\in\mathbb{R}^{P\times P\times 3}\) with corresponding complementary label masks \(\bar{y}_{n}\in\mathbb{R}^{P\times P}\). Hereby, \(k\) denotes the number of classes and \(P\) the patch size. Assuming \(y_{n_{p}}\!=\!c\) is the (unknown) ground truth label of pixel \(x_{n_{p}}\), then the complementary label lies in \(\bar{y}_{n_{p}}\in\{1,...,k\}\setminus\{c\}\). The estimated probabilities of assigning a complementary class \(j\) given the true label \(i\), i.e. \(Q_{ij}=P(\bar{Y}\!=\!j|Y\!=\!i)\), are summarized in a transition matrix \(Q\in\mathbb{R}^{k\times k}\). Hence, the rows of the transition matrix describe the transition probabilities of all complementary labels for the respective ground truth label. Therefore probabilities over individual rows should sum up to one. Mind that all entries on the diagonal of \(Q\) will be zero, as complementary labels indicate incorrect classes. The benefit of capturing the probabilities of the complementary classes in such a transition matrix \(Q\) is that the conditional probability of the true label \(P(Y\!=\!i|X)\) can be approximated by multiplying \(P(\bar{Y}\!=\!i|X)\) with the transposed transition matrix \(Q^{T}\). Thus, we can apply standard loss functions for optimization. Suppose \(\hat{y}_{n_{p}}\) denotes the predicted softmax probabilities of pixel \(x_{n_{p}}\) for some model, and \(\bar{y}_{n_{p}}\) the corresponding one-hot encoded complementary label, we formulate the complementary loss as \[\mathcal{L}_{\text{compl}}(X,\bar{Y})=-\frac{1}{NP}\sum_{n=1}^{N}\sum_{p=1}^{P }\bar{y}_{n_{p}}\log(Q^{T}\hat{y}_{n_{p}}) \tag{1}\] Note that the logarithm is applied element-wise here. 
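A minimal PyTorch sketch of the complementary loss in Eq. (1) might look as follows; the tensor shapes and names are our own assumptions rather than the authors' implementation.

```python
import torch

def complementary_loss(softmax_probs: torch.Tensor,
                       compl_onehot: torch.Tensor,
                       Q: torch.Tensor) -> torch.Tensor:
    """Complementary segmentation loss of Eq. (1) (illustrative sketch).

    softmax_probs: (N, k, H, W) predicted per-pixel class probabilities (y_hat)
    compl_onehot:  (N, k, H, W) one-hot encoded complementary labels (y_bar)
    Q:             (k, k) transition matrix with Q[i, j] = P(complementary = j | true = i)
    """
    # (Q^T y_hat)_i = sum_j Q[j, i] * y_hat_j, computed for every pixel.
    qt_yhat = torch.einsum("ij,njhw->nihw", Q.t(), softmax_probs)
    # Cross-entropy of the observed complementary label against Q^T y_hat,
    # averaged over all pixels, as in Eq. (1).
    per_pixel = -(compl_onehot * torch.log(qt_yhat + 1e-8)).sum(dim=1)
    return per_pixel.mean()
```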
Given that the transition matrix \(Q\) describes all transition probabilities between complementary labels and ground truth labels, the matrix multiplication \(Q^{T}\hat{y}_{n_{p}}\) consequently represents the conditional probability of the possible true labels. We can further extend this loss to a focal version [17] by inserting the multiplicative factor \((1-Q^{T}\hat{y}_{n_{p}})^{\gamma}\) with \(\gamma>0\). This penalizes the hard-to-classify pixels more strongly. We then define the overall loss as the weighted sum of the supervised loss and the complementary loss, \[\mathcal{L}=\mathcal{L}_{s}+\alpha\,\mathcal{L}_{\text{compl}} \tag{2}\] where \(\mathcal{L}_{s}\) denotes the categorical cross-entropy loss on the annotated pixels. Note that in contrast to the cross-entropy loss, the complementary loss is not masked and can be applied to both sparsely annotated as well as completely unannotated patches. ### Proof of concept: MNIST ablation study Here, we use the well-studied MNIST dataset to investigate the behavior of the proposed complementary loss under controlled conditions. In particular, we explore different conditional probability distributions of the complementary labels. To this end we use a subset (N=1,000) of the MNIST dataset to segment and classify the digits "3", "4" and all others. Only 10% of the data contain supervised labels. Complementary labels for all samples were distributed according to two different conditional probability distributions where in (i) the complementary labels are biased towards respective classes and in (ii) there is no complementary label information for one of the classes. The two scenarios can formally be expressed by the two following transition matrices: \[\text{(i)}\quad Q_{1}=\begin{pmatrix}0&.7&.3\\.3&0&.7\\.7&.3&0\end{pmatrix};\quad\text{(ii)}\quad Q_{2}=\begin{pmatrix}0&1.&0\\ 1.&0&0\\.5&.5&0\end{pmatrix}\] For segmenting the digits, we train a small U-Net model and report performance over five random seeds per condition. The results shown in Fig. 2 demonstrate that including complementary labels from additional samples in the form of the suggested complementary loss (1) significantly increases performance over the supervised baseline trained on the small labeled dataset (10% of the dataset). The expected upper bound is given by a supervised model trained on the complete dataset, thus assuming we would have access to the ground truth labels for all data points. Regarding the different distributions of complementary labels, restricting the full number of complementary classes as with \(Q_{2}\) has a slight negative effect on performance as expected, though only slightly. Overall, we can see that utilizing complementary labels together with only 10% supervised labels already comes close to the upper performance bound of using completely supervised labels, thus demonstrating the benefit of our proposed complementary loss for segmentation. ## 4 Experiments In this section we will outline the segmentation approach to differentiate and quantify HCC and CCA as the primary tumor types in liver specimens, as well as applying the proposed complementary loss in this real-world scenario. In the first part of the section we will describe the machine learning-based experimental setup before introducing the dataset, which was digitized and curated for the purpose of this work. The rest of the section delineates the specific parameters and details of the model training for reproducibility. 
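Before turning to the experimental setup, the focal variant and the combined objective of Eq. (2) can be sketched in the same style. Again, this is our own illustration with assumed names, not the authors' released code; the supervised part is shown as a masked cross-entropy and omits the class weighting used in practice.

```python
import torch
import torch.nn.functional as F

def focal_complementary_loss(softmax_probs, compl_onehot, Q, gamma=2.0):
    """Focal variant of Eq. (1): the factor (1 - Q^T y_hat)^gamma penalizes
    hard-to-classify pixels more strongly."""
    qt_yhat = torch.einsum("ij,njhw->nihw", Q.t(), softmax_probs)
    focal = (1.0 - qt_yhat).clamp(min=0.0) ** gamma
    per_pixel = -(compl_onehot * focal * torch.log(qt_yhat + 1e-8)).sum(dim=1)
    return per_pixel.mean()

def total_loss(logits, sup_targets, sup_mask, compl_onehot, Q, alpha=0.3, gamma=2.0):
    """Overall objective of Eq. (2): masked supervised cross-entropy plus the
    alpha-weighted complementary part."""
    ce = F.cross_entropy(logits, sup_targets, reduction="none")      # (N, H, W)
    sup = (ce * sup_mask).sum() / sup_mask.sum().clamp(min=1.0)      # annotated pixels only
    compl = focal_complementary_loss(logits.softmax(dim=1), compl_onehot, Q, gamma)
    return sup + alpha * compl
```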
### Experimental setup We aim to segment a given whole slide image into the following three tissue types: CCA, HCC, and non-carcinoma tissue (hereafter referred to as _Other_) which contained both annotations from healthy tissue (e.g. normal liver epithelium, lymphocytes) as well as image artifacts. Furthermore, we study the benefits of the proposed complementary loss which allows to include more patients into model training without further manual annotation expenditure. We evaluate our approach with respect to three different criteria: (i) Pixel-wise segmentation performance on the annotated test set, (ii) binary classification at case level, and (iii) quantitative evaluation of the segmentation maps on the hold-out test cohort. While evaluation against the manual annotations on a test set is the standard approach to estimate the model's generalization to unseen data, additional evaluation on not manually annotated WSIs allows for an evaluation on a much larger cohort and therefore covering more of the naturally occurring heterogeneity of hepatic liver tissue. For these cases only weak labels at case level (i.e. the patients' diagnoses) were available. For (iii) we use the segmentation maps to report the pixel share of the complementary class as an additional pixel-wise measure which evaluates the entirety of the whole slide image. An overview over the experimental setup is shown in Fig. 1. ### Data We conduct our experimental evaluation using digital whole slide images of H&E-stained slides from formalin-fixed, paraffin-embedded (FFPE) primary hepatic tumor resections of either HCC or CCA. For this, anonymized archival tissue samples were retrieved from the tissue bank of Charite Universitatsmedizin Berlin. All data were collected in accordance with the Declaration of Helsinki and the International Ethical Guidelines for Biomedical Research Involving Human Subjects. We included tissue samples from adult patients (aged 18 and older) between 2016 and 2018 for HCC and between 2010 and 2019 for CCA, resulting in a total of 262 patients (124 CCA, 138 HCC). The histopathological classification was derived by first evaluating the morphological features in H&E stainings. In case of diagnostic uncertainty - e.g. HCC vs. CCA, HCC vs. healthy liver parenchyma, CCA vs. healthy bile duct - additional analyses like gomori reticulin staining or immunohistochemistry staining (e.g. CK7, CK19, HepPar1, Glypican 3) were used. Two pathologists annotated the digitized histological slides (N=124) from 97 patients (47 CCA, 50 HCC) according to Figure 2: Proof of concept on MNIST. Only a small set of the dataset is labeled whereas the rest of the samples only contain (biased) complementary labels. Leveraging additional samples with solely complementary labels improves segmentation performance over the baseline of supervised training on the small labeled dataset. Transition matrices \(Q_{1}\) and \(Q_{2}\) correspond to two different underlying conditional distributions of the complementary labels. The upper bound of supervised training on the full dataset, assuming we are given supervised labels for all samples, is depicted on the right. the respective carcinoma and other tissue components such as healthy liver parenchyma, healthy bile duct epithelium, connective tissue, also covering commonly occurring artifacts. The majority of annotations were collected based on corrections of segmentation maps of previous preliminary models. Therefore annotations focus on difficult regions of the slides. 
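Returning to the evaluation criteria (ii) and (iii) defined above, the following sketch illustrates how the case-level prediction and the complementary-class pixel share can be derived from a predicted segmentation map. It is our own illustration; in particular, the label encoding (0 = Other, 1 = CCA, 2 = HCC) is an assumption.

```python
import numpy as np

def case_level_prediction(seg_maps):
    """Criterion (ii): predict CCA vs. HCC for a case from the dominant tumor class
    aggregated over all of the case's segmentation maps (class 1 = CCA, 2 = HCC)."""
    cca_pixels = sum(int((m == 1).sum()) for m in seg_maps)
    hcc_pixels = sum(int((m == 2).sum()) for m in seg_maps)
    return "CCA" if cca_pixels >= hcc_pixels else "HCC"

def complementary_class_share(seg_map, diagnosis):
    """Criterion (iii): fraction of slide pixels predicted as the mutually exclusive
    tumor class, e.g. the share of HCC pixels on a slide from a CCA patient."""
    wrong_class = 2 if diagnosis == "CCA" else 1
    return float((seg_map == wrong_class).sum()) / seg_map.size
```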
From the polygon annotations we extracted patches of size \(340\!\times\!340\,\mathrm{px}\) at a resolution of \(0.5\mu m\), which resulted in a total of 44,088 patches. Exemplary patches for both tumor entities are shown in Fig. 3 (left). For the additional complementary data, we derive complementary labels from the patients' diagnoses. For example, if a patient is diagnosed with CCA, no patch of the patient's WSI should contain HCC and vice versa. We use the complementary label for both the sparsely annotated patches as well as patches from an additional 49 unannotated patients (stratified according to diagnosis and tumor grade). These additional patches on unannotated slides were extracted on a regular grid with a stride of ten patch lengths. In total this resulted in an additional 6,143 patches. Example patches are depicted in Fig. 3 (right). The 165 patients (77 CCA, 88 HCC) for which we did not gather annotations were kept as a hold-out test cohort for the evaluation at case level. The distribution of tumor grades of this test cohort is provided in Tab. 2. ### Model training In order to segment the different tissue types, we train a U-Net [25] with a ResNet18 backbone2. The cross-entropy loss is optimized using Adam with weight decay regularization [18] on mini-batches of 128 patches. The learning rate is experimentally chosen to be \(1e-05\) and the weight decay set to \(1e-05\). For the supervised part, the cross-entropy loss is class-weighted and masked (similar to [5]) which is necessary due to the sparsity of annotated pixels on the patches. To prevent overfitting, early stopping is performed on the averaged per-class \(F_{1}\)-score with a patience of 50 epochs. Footnote 2: Implementation taken from github.com/qubvel/segmentation_models Besides common geometric augmentations such as translation and rotation, we address the large stain color heterogeneity in the dataset (cf. Fig. 3) by augmentations in the \(L\alpha\beta\) color space [26]. The advantage of such perceptual color spaces is that Euclidean distances in this space are perceived as equally distant by humans. Inspired by the color normalization of [23], we use the mean and standard deviation in the \(L\alpha\beta\) color space to translate and scale the color values, respectively. During training, we normalize the patch with the corresponding case's mean and standard deviation per axis, before transforming it with values randomly drawn from the fitted Gaussian distributions over the data. In order to include patches from cases which were not manually annotated, the complementary labels are derived from the mutually exclusive diagnoses. While for classes _HCC_ and _CCA_ this could be computed analytically, we estimated the probabilities of complementary labels for the ground truth class _Other_ from the distribution of patches of both tumor types. The underlying assumption is that the share of patches displaying healthy tissue is the same for CCA and HCC cases. Inspired by the results on the MNIST dataset (cf. Fig. 2), we additionally gathered a few complementary labels for class _Other_. This was achieved by assigning patches, which according to the annotation were fully covered with tumor cells, the complementary label _Non-Other_. This affected about 2% of the annotated patches. From this we Figure 3: Examples of patches of tumorous regions of hepatocellular carcinoma (HCC) and intrahepatic cholangiocarcinoma (CCA) (left) and extracted on a regular grid from unannotated cases (right).
The latter are leveraged for model training by deriving complementary labels from the patients' diagnoses. The examples illustrate the high heterogeneity in morphology and staining across the dataset and thus the associated difficulty of diagnostic differentiation. The patches are extracted from the corresponding whole slide images with a resolution of \(0.5\mu m\)/pixel. can derive the following transition matrix \[Q=\begin{pmatrix}0&.998&.002\\.980&0&.020\\.430&.570&0\end{pmatrix}\] The additional hyperparameters such as the weight of the complementary loss and the focal loss parameter were determined experimentally and were set to \(\alpha=0.3\) and \(\gamma=2\) throughout all experiments. ## 5 Results We evaluate our models at pixel level on the annotated test set as well as at case level on a larger, not manually annotated cohort. For evaluation at pixel level, results are reported as the average over an outer 5-fold cross-validation, therefore taking into consideration the large heterogeneity of the dataset. This way each patient is contained in the test set once. Both the evaluation at case level as well as a quantification of segmentation maps in terms of complementary class shares are performed on the unannotated, hold-out test cohort. To this end, we compute segmentation maps and derive their case-level prediction from the most dominant class in the segmentation map. If cases contain multiple slides, their predicted class shares are aggregated for the case prediction. ### Evaluation on the annotated test set Segmentation performance is evaluated on the annotated, hold-out test set in order to assess the generalization capability to unseen patients. From a more practical point of view, the segmentation performance can also be interpreted as an estimate of the segmentation map quality at the granular level of cell groups. Naturally, large annotations and patients Figure 4: Segmentation maps illustrating the benefits of including the complementary loss. The left column shows the segmentation map of a baseline model while the right column shows the segmentation map of the corresponding model trained with the complementary loss. Note that the prediction of class _Other_ is rendered transparent in these heatmaps to keep the focus on the respective tumorous regions. with numerous annotations will dominate pixel-level scores. To mitigate this impact, we compute a case-averaged \(F_{1}\)-score for CCA and HCC which is achieved by determining the respective \(F_{1}\)-scores per patient before averaging. This way the metric better represents the generalization to new patients instead of new annotated pixels. The performance over the outer five-fold cross-validation is \(80.20\pm 7.07\) for CCA and \(68.97\pm 7.07\) for HCC. Out of these five models, the model which achieves median performance is used for further evaluation on the large, unannotated test set. This model achieves an \(F_{1}\)-score of \(78.5\) for CCA and \(69.6\) for HCC. With this standard segmentation evaluation, we specifically target the performance at pixel level, which depends on the quality and representativeness of the respective annotations. Therefore the scores are heavily influenced by the process of how annotations were gathered. This corrective style of labeling therefore leads to annotations which are particularly focused on difficult regions of the slides (cf. Section 4.2). ### Evaluation on the unannotated test set In contrast, we additionally evaluate the models at case level.
This is approached by deriving the case-level prediction from the predominantly predicted cancer type in the segmentation map. For instance, if the model predicts mostly HCC (in comparison to CCA), we derive the case-level label HCC. Because this case-level evaluation is independent of manual annotations, we can assess the generalization of our model on the remaining patients for which we do not have manual annotations. For these patients only their diagnosis is available from clinical reports. As it is computationally expensive to compute gigapixel segmentation maps, we only evaluate a single model on these 165 cases. In particular, we chose the model with median performance. For case-level discrimination between HCC and CCA our model achieves a balanced accuracy of 0.905 (CI 95%: 0.861-0.947). The reported confidence interval was obtained by bootstrapping with 1,000 resamples. Regarding the confusion between the diagnoses at case level, we observe that the model tends to misclassify HCC cases as CCA. The respective confusion matrix is depicted in Fig. 5. Falsely predicted cases were reviewed by pathologists in order to identify common patterns. Several of these cases were poorly differentiated tumors, meaning they lost the morphological characteristics of the original healthy cells, had a high percentage of artefacts, or consisted of morphologically atypical tumors with mixed features in the H&E stainings. Moreover, in some HCC cases the tumor area was quite small (e.g. due to necrosis) and bile duct proliferations with partly dysplastic cells had occurred around the tumor, which were falsely predicted as CCA (cf. Fig. 6). Overall, our model outperforms previously reported results by [15], namely 0.885 on 26 validation WSIs and 0.842 on 80 independent test WSIs (cf. Tab. 1). It should be noted that their task was slightly easier as classification was performed on manually selected tumor regions instead of the entire whole slide image. This difference in setup is due to the main focus of [15], where they investigated the impact of using model predictions to assist pathologists with the subtype classification. Furthermore, we use the segmentation maps to quantify the performance in terms of confusion between the cancer types at pixel level but independent of the annotations. Particularly, we measure the falsely predicted area of the complementary class separately for both carcinomas. The reported areas are relative to the slide size. This means \begin{table} \begin{tabular}{l|l|l} \hline & **Diagnostic accuracy (95\% CI)** & **Test set size (WSI)** \\ \hline Kiani et al. [15] & 0.885 (0.710-0.960) & 26\({}^{1}\) \\ & 0.842 (0.808-0.876) & 80\({}^{2}\) \\ Ours & **0.905** (0.861-0.947) & 165 \\ \hline \end{tabular} \({}^{1}\) Validation set, \({}^{2}\) Independent test set \end{table} Table 1: Diagnostic accuracy of discriminating HCC and CCA. Figure 5: Confusion matrix of case-level predictions derived from the corresponding segmentation maps for the hold-out test set. The prediction at case level is determined by the dominance of either CCA or HCC pixels. that we compute the ratio of, for example, predicted CCA pixels over slide pixels for a patient diagnosed with HCC. Both carcinoma types display a similar share of complementary class area of 6.00% and 6.15% for CCA and HCC, respectively. ### Complementary label improvement Figure 7 depicts the difference in segmentation performance when additionally leveraging the patients' diagnoses via the proposed complementary loss function.
To compare approaches not only regarding predictive performance but also regarding robustness, performance is reported over five differently seeded runs. Due to the computational complexity of this evaluation, we evaluate the models on the (smaller) annotated test set. Besides the \(F_{1}\)-score for CCA and HCC, we also report the overall macro score, i.e. the average over all three segmentation classes (including _Other_). Although we did not compute case-based scores but instead directly averaged the scores per class, the observed baseline trend is similar to the reported one in Sec. 5.1. By providing additional information through complementary labels to the classifier, we observe that the models' prediction variance is reduced substantially for all classes. Furthermore, we observe an increase in segmentation performance, especially prominent for HCC tissue. Here, the average test set \(F_{1}\)-score over the five randomly seeded models increases by 4%. The qualitative improvement of segmentation maps when leveraging complementary labels can be seen in Fig. 4. The left column shows the segmentation map of a baseline model while the right column shows the segmentation map of the corresponding model (i.e. using the same random seed) trained with additional complementary labels. We observe that for the HCC case, the prediction of CCA is reduced and vice versa. We additionally evaluate our models at case level to assess whether complementary labels also improve the balanced accuracy regarding HCC and CCA discrimination. For this reason, we compute the corresponding baselines with the same seeds. Comparing these baseline models with the models which use complementary labels, we observe an increase in case-level balanced accuracy from \(0.86\pm 0.03\) to \(0.91\pm 0.03\) on the annotated test set. Overall, we observe that while the model has robust performance for both tumor types, it is more accurate in detecting CCA, both on a segmentation and case classification level. However, the inferior segmentation performance of HCC cases can be improved by integrating weak complementary labels (derived from the diagnoses) in terms of the proposed loss. ## 6 Discussion The benefits of leveraging additional patches with solely weak complementary labels during segmentation model training were explored both for a general segmentation task on MNIST and on our real-world dataset. Across all experiments, the additional information reduces variance and improves performance over supervised models trained on the smaller Figure 6: Examples of falsely predicted cases. A. Artefacts (black staining) lead to false predictions. Areas without black artefacts are mainly predicted correctly as either HCC or healthy tissue. B. Dedifferentiated (G3) HCC, which lost the morphological characteristics of the original healthy cells, partly falsely predicted as CCA. C. Low number of vital HCC tumor cells due to necrosis, resulting in a small HCC tumor area. Additionally, bile duct proliferations with partly dysplastic cells had occurred around the tumor, which were falsely predicted as CCA. subset of annotated data. For the tumor segmentation task, the performance improvement is especially observable in the mean performance of HCC tissue segmentation. Furthermore, this improvement is also reflected in the diagnostic classification between HCC and CCA at case level. Therefore complementary labels and the proposed loss function provide a way to include more patients without further manual annotations during training.
This is especially relevant as medical segmentation datasets are often rather small while exhibiting large heterogeneity among patients. We extended the idea of complementary labels to segmentation tasks which had been proven to work well in classification [29, 13, 14]. Due to different properties of classification and segmentation tasks regarding low-density regions along class boundaries, some assumptions might be violated when transferring losses across these tasks. For example, the generalization of the very promising consistency regularization technique for classification [16] was hampered by the violation of the cluster assumption in input space for segmentation tasks [22]. Besides proving the benefits of the complementary loss in segmentation tasks, we additionally explore the situation where not all class labels are used as complementary labels and thus some classes do not have any complementary label information. While the performance increase persists, it is reduced compared to using the full range of classes as complementary labels. For the tumor segmentation task, we almost exclusively have binary complementary labels for a three class segmentation task. This means that we hardly (only for 2% of the data) have access to patches with complementary label _Non-Other_. We further hypothesize that the regular grid used to extract the patches of the not annotated whole slide images might not capture the most informative structures and include some redundancy and could be improved by more sophisticated sampling strategies. To better understand the performance and limitations of our tumor model, we analyzed specific subgroups with respect to tumor cell grading. Histological grading is a measure of the cell appearance in tumors, for liver tumors ranging from well differentiated (G1) over moderately differentiated (G2) to poorly differentiated (G3). In poorly differentiated tumors, cells lose their morphological characteristics, thus making it very difficult for pathologists to distinguish these liver tumors in H&E-staining. We observe that this also translates into the performance of our model (cf. Tab. 2). Whereas our model achieves a balanced accuracy of 0.93 for G2 HCC cases, the performance drops to 0.72 for poorly differentiated G3 HCC cases. In CCA this difference is not so pronounced, which is in line with clinical observations, since the morphology of CCA G3 is more similar to lower grade CCAs, than this is the case for HCC. A similar challenge to the above, but to a lesser extent, is that e.g. well differentiated tumor cells (G1) are difficult to distinguish from healthy cells, especially when only higher magnifications are used. Against this background, \begin{table} \begin{tabular}{l|l|l|l|l|l|l|l|l} \hline \hline Tumor grade per entity & \multicolumn{6}{c|}{**CCA**} & \multicolumn{6}{c}{**HCC**} \\ & G1 & G2 & G3 & n/a & G1 & G2 & G3 & n/a \\ \hline Balanced accuracy & 1.0 & 0.93 & 0.92 & 1.0 & 0.89 & 0.93 & 0.72 & 0.0 \\ Number of cases & 2 & 61 & 12 & 2 & 9 & 60 & 18 & 1 \\ \hline \hline \end{tabular} \end{table} Table 2: Balanced accuracy computed per tumor grade subgroups. _n/a_ hereby indicates that the grading could not be determined e.g. due to a lack of vital cells. Results depicted in gray are reported for completeness but we restrain from drawing conclusions as samples sizes are too small. Figure 7: Test set \(F_{1}\)-scores of models trained with and without complementary labels. 
The plots summarize the evaluations of five models with different random seeds per condition. What stands out is the reduced variance when including complementary labels. Furthermore, while the mean performance increase is small for CCA and overall, we can observe an increase in mean HCC performance. pathologists usually use different zoom levels in their clinical routine. While a single scale at \(0.5\mu m\) per pixel seems to be sufficient for good case-level predictions, the segmentation of some tissue areas is challenging and can only be resolved correctly by combining different zoom levels. While this was expected by pathologists, the heatmaps of G1 tumors (despite having very low numbers) show good segmentation results, which might hint at patterns at higher magnification that can be used for segmentation. Nonetheless, an approach which combines various zoom levels would likely improve the model performance further. While the current model shows robust performance in discriminating HCC and CCA, it is left for future work to include rare primary forms such as angiosarcomas and secondary forms of liver cancer, i.e. metastases from other cancers, in order to make it applicable in clinical practice. However, our segmentation approach has scientific and potential clinical value as it allows correlation of segmentation data with clinical data. This could enable personalized diagnostic and therapeutic pathways, e.g. by predicting response to specific treatment options depending on the tissue composition. Follow-up projects in this regard are already underway. ## 7 Conclusion We successfully applied a deep learning segmentation approach for diagnostic differentiation between hepatocellular carcinoma (HCC) and intrahepatic cholangiocarcinoma (CCA). Our model achieved a balanced accuracy of 0.91 at case level. In order to alleviate the burden of manual, time-consuming segmentation annotations by domain experts, we proposed to leverage available information from patients' diagnoses during model training. We incorporate this weak information via complementary labels, indicating that if a patient was diagnosed with HCC there should not be a prediction for CCA for this patient and vice versa. For this we formulate a complementary loss function for semantic segmentation. We provide evidence that leveraging additional patches with solely weak, complementary labels improves predictive performance for the general segmentation task as shown under controlled conditions. Furthermore, we showed that complementary labels are even beneficial if single classes are excluded from the complementary labels. In our real-world setting, we demonstrated that including patches from unannotated patients with their complementary labels during model training improves the robustness of tissue segmentation and increases performance at case level. ## Acknowledgements This work was supported in part by the German Ministry for Education and Research as BIFOLD - Berlin Institute for the Foundations of Learning and Data (ref. 01IS18025A and ref. 01IS18037A) and as BMBF DEEP-HCC consortium and the German Research Foundation (DFG SFB/TRR 296 and CRC1382, Project-ID 403224013). J. E. is a participant in the BIH Charite Junior Digital Clinician Scientist Program funded by the Charite - Universitatsmedizin Berlin, and the Berlin Institute of Health at Charite.
2310.11811
ShapeGraFormer: GraFormer-Based Network for Hand-Object Reconstruction from a Single Depth Map
3D reconstruction of hand-object manipulations is important for emulating human actions. Most methods dealing with challenging object manipulation scenarios, focus on hands reconstruction in isolation, ignoring physical and kinematic constraints due to object contact. Some approaches produce more realistic results by jointly reconstructing 3D hand-object interactions. However, they focus on coarse pose estimation or rely upon known hand and object shapes. We propose the first approach for realistic 3D hand-object shape and pose reconstruction from a single depth map. Unlike previous work, our voxel-based reconstruction network regresses the vertex coordinates of a hand and an object and reconstructs more realistic interaction. Our pipeline additionally predicts voxelized hand-object shapes, having a one-to-one mapping to the input voxelized depth. Thereafter, we exploit the graph nature of the hand and object shapes, by utilizing the recent GraFormer network with positional embedding to reconstruct shapes from template meshes. In addition, we show the impact of adding another GraFormer component that refines the reconstructed shapes based on the hand-object interactions and its ability to reconstruct more accurate object shapes. We perform an extensive evaluation on the HO-3D and DexYCB datasets and show that our method outperforms existing approaches in hand reconstruction and produces plausible reconstructions for the objects
Ahmed Tawfik Aboukhadra, Jameel Malik, Nadia Robertini, Ahmed Elhayek, Didier Stricker
2023-10-18T09:05:57Z
http://arxiv.org/abs/2310.11811v2
# ShapeGraFormer: GraFormer-Based Network for Hand-Object Reconstruction from a Single Depth Map ###### Abstract 3D reconstruction of hand-object manipulations is important for emulating human actions. Most methods dealing with challenging object manipulation scenarios focus on hand reconstruction in isolation, ignoring physical and kinematic constraints due to object contact. Some approaches produce more realistic results by jointly reconstructing 3D hand-object interactions. However, they focus on coarse pose estimation or rely upon known hand and object shapes. We propose the first approach for realistic 3D hand-object shape and pose reconstruction from a single depth map. Unlike previous work, our voxel-based reconstruction network regresses the vertex coordinates of a hand and an object and reconstructs more realistic interaction. Our pipeline additionally predicts voxelized hand-object shapes, having a one-to-one mapping to the input voxelized depth. Thereafter, we exploit the graph nature of the hand and object shapes by utilizing the recent GraFormer network with positional embedding to reconstruct shapes from template meshes. In addition, we show the impact of adding another GraFormer component that refines the reconstructed shapes based on the hand-object interactions and its ability to reconstruct more accurate object shapes. We perform an extensive evaluation on the HO-3D and DexYCB datasets and show that our method outperforms existing approaches in hand reconstruction and produces plausible reconstructions for the objects. keywords: Computer Vision, Deep Learning, Graph Convolutional Network, Transformers, Hand-Object 3D Reconstruction, Pose Estimation + Footnote †: journal: Computer Vision
## 1 Introduction Understanding and reconstructing hand and object interactions in 3D is important for analyzing and imitating human behavior. Modeling hand-object interactions realistically has applications in a number of fields, including robotics, virtual reality, and augmented reality among others. The last decade has witnessed rapid advances in 3D hand pose [1; 2; 3; 4; 5; 6; 7; 8; 9] and object [10; 11; 12; 13; 14] estimation in isolation.
In contrast, reconstructing a hand and an object simultaneously from a monocular image has received less attention. Besides the common issues of complex pose variation, clutter, and self-occlusion, methods for reconstructing hands and objects in close contact have to additionally cope with _mutual occlusions_. Existing methods dealing with challenging object manipulation scenarios tend to focus on hand reconstruction alone [15; 16]. Recent approaches that jointly reconstruct the hand and the object often neglect the intrinsic kinematic and physical correlation that exists between the two [17; 18; 19; 20]. Approaches exploiting that mutual relation typically focus on coarse pose estimation or assume known hand and object shapes [21; 22; 23; 24; 25; 26; 27]. In this paper, we propose one of the first approaches to jointly reconstruct physically valid hand and object shapes from a single depth map. In contrast to most methods, ours can generalize to different hand models and unknown object shapes by directly regressing mesh vertices rather than model parameters. We avoid the perspective distortion and scale ambiguities typical of RGB image-based methods by working exclusively in the 3D domain. The input of our deep network is a 3D voxelized grid of a given depth map, centered around the hand-object interaction. The output consists of: (i) 3D heatmaps that describe the locations of hand-object pose keypoints, (ii) hand-object shape predictions in voxelized form, and (iii) the corresponding 3D hand-object mesh vertex coordinates. To effectively tackle the problem of simultaneous hand-object pose and shape reconstruction, we propose a novel architecture based on Graph Convolutional and Multi-headed Attention layers. Specifically, we introduce the following novel modules: 1. \(PoseNet\)\(\&\)\(VoxelNet\): Two 3D-to-3D voxel-based networks for hand-object pose and shape estimation, respectively; 2. \(ShapeGraFormer\): State-of-the-art GraFormer (Transformers with Graph Convolutional layers) for Hand-Object shape reconstruction; 3. Positional Embedding layer based on the template meshes for the hand and the object; 4. Topologically consistent object mesh registration for optimal object modeling and shape prediction; We validate our design choices and evaluate our approach both quantitatively and qualitatively. Our approach outperforms previous work on popular datasets [28; 29], as also reported on the challenge website 1, with a shape reconstruction improvement of at least 0.43 cm over the state-of-the-art. Footnote 1: [https://codalab.lisn.upsaclay.fr/competitions/4393#results](https://codalab.lisn.upsaclay.fr/competitions/4393#results) ## 2 Related Work In this section, we discuss the existing methods for joint hand-object reconstruction from challenging monocular object manipulation scenarios. For a survey of works focusing on the reconstruction of hands and objects in isolation, please refer respectively to [9] and [11]. Most methods that jointly reconstruct 3D hands and objects from monocular data take single RGB [24; 23; 26; 27; 30; 31; 20] or RGB+D input [21; 22; 17]. Very few recent approaches consider single depth input [32; 33; 34], due to its intrinsic challenges. Nevertheless, the availability of depth information is a key factor in allowing proper, in-scale, and scene-dependent 3D shape reconstruction, required for, _e.g._, virtual and augmented reality applications, especially for single-frame, one-shot approaches.
While RGB-based approaches can rely on large labeled datasets for training their models [35; 18; 23], typically based on the MANO hand model [36], methods that rely on the depth channel have access to only a limited amount of data, because labeling of real scenes in the 3D domain is impractical. For this reason, most depth as well as some RGB+D approaches build their own synthetic datasets in an attempt to improve their results [32; 23; 33; 34; 17]. Recently, some datasets have been introduced to bridge the gap between RGB and depth data availability, _i.e._ HO3D and DexYCB [28; 29]. Still, they only provide limited sample variation, especially in terms of the number of objects considered. As a result, modeling and reconstructing the object of interaction remains an underconstrained, challenging problem. Many methods restrict themselves to objects with known shape [21; 22] or reconstruct them on the fly, under certain object shape and visibility assumptions [34; 37; 38]. The remaining approaches only output coarse object pose, _e.g._ represented with bounding boxes [24, 30, 19, 31]. Some more sophisticated methods define a deformable object model, typically based on a 3D sphere, capable of coarsely adapting to virtually any convex shape [23, 37, 20]. The resulting object reconstruction typically lacks surface details, due to over-smoothing. Nevertheless, we believe it to be a strong base with sufficient surface information to reliably reconstruct interactions with the hand surface. Hand reconstruction as a standalone problem has been widely studied in the past [39]. However, modeling and reconstructing 3D hand-object interactions is still very challenging, especially in a monocular setting, due to the large mutual occlusions. To simplify the task, many approaches focus on estimating a reduced set of model parameters [23, 26]. In contrast, we directly regress hand-object vertices, which makes our method capable of generalizing to different models or geometries. While many approaches independently reconstruct hand and object before putting them in context [23, 26], our approach is designed to simultaneously regress hand-object geometries, thus allowing it to implicitly learn the underlying physical and kinematic correlation that exists between the two. Another advantage of direct vertex regression is the possibility of reconstructing additional, more realistic deformations that cannot be synthesized via model-parameter tuning. In this case, though, particular care has to be given to the algorithm design, especially to the shape estimation components, to avoid distortions in the 3D reconstructions. To avoid perspective distortion from the start, we convert the input depth map to a 3D pointcloud and base the remaining algorithmic steps on the corresponding voxelized domain, following the approach from Malik _et al._[9] (HandVoxNet). The core of our pipeline is based on a Graph Convolutional neural network (GCN), which has been shown to effectively tackle shape reconstruction problems on graph-structured data, such as mesh topology [40]. In contrast to the RGB-based approach by Aboukhadra _et al._[20] (THOR-Net), our depth-based method reconstructs hand-object geometries in one shot, thus significantly reducing computational costs and training time.
We qualitatively and quantitatively demonstrate the effectiveness of our \(ShapeGraFormer\) component, comprising a combination of GCN and Multi-headed Attention layers, as in [41], in the simultaneous reconstruction of hand and object interaction as well as in shape refinement. For the latter, we demonstrate its effectiveness in improving physical hand-object interactions, without the explicit need for expensive physical simulation [21] or penetration and contact losses, as required in previous work [26; 23]. ## 3 Method We design a voxel-based 3D CNN along with a \(ShapeGraFormer\) network to reconstruct plausible 3D hand-object shapes in a single forward pass from an input depth image. Our pipeline is depicted in Figure 1. Figure 1: Overview of our pipeline. Our method takes as input a single depth image, converted to a 3D voxelized representation, and outputs 3D realistic hand-object interactions. The input depth is first forwarded through a sequence of three components, namely \(PoseNet\), \(VoxelNet\), and \(ShapeGraFormer\), predicting respectively hand-object pose heatmaps, voxelized shape representations, and topologically-consistent shapes. In a preprocessing step, we convert the input depth map to its voxelized form \(V_{D}\) by projecting the raw depth image pixels into a cubic binary 3D grid around the hand-object interaction space, similarly to [9]. Given \(V_{D}\), the first network component in the pipeline, \(PoseNet\), predicts the 3D hand and object pose in the form of 3D heatmaps, resp. \(\hat{P}^{H}\) and \(\hat{P}^{O}\), see Section 3.2. The resulting heatmaps concatenated with \(V_{D}\) are forwarded to the second network component, \(VoxelNet\), which produces a voxelized shape representation of the hand and the object, resp. \(\hat{V}^{H}\) and \(\hat{V}^{O}\), see Section 3.3. The voxelized depth \(V_{D}\), along with the intermediate voxelized representation and the features of the \(VoxelNet\), serves as input to the next network component, \(ShapeGraFormer\), which regresses topologically-consistent hand and object vertices, see Section 3.4. We describe our hand and object models in the next section. ### Hand and Object Models Hand Model: To represent hands, we use the MANO parametric hand model [36], which maps joint angles and shape parameters to a triangulated mesh representation \(S^{H}:=\{v^{H},f^{H}\}\), consisting of \(|v^{H}|=778\) 3D vertices, and a set of 3D hand-skeleton joints \(\mathcal{J}\), where \(|\mathcal{J}|=21\), representing the hand pose. Object Model: Since objects differ greatly in terms of shape and size, direct regression of object vertices from a set of topologically inconsistent meshes results in strong noise in the output. In order to bring all the known object geometries into the same topologically consistent representation \(S^{O}:=\{v^{O},f^{O}\}\), we deform a source mesh via a set of vertex displacements \(D^{j}:=\{d_{i}^{j},\forall v_{i}^{O}\}\), a scale \(c^{j}\), and a translation \(t^{j}\) to approximate all target objects \(S^{j}\) as: \[S^{j}\sim c^{j}\cdot(S^{O}+D^{j})+t^{j},\forall j \tag{1}\] Our source mesh is a sphere obtained by performing 4 subdivisions of a unit (_i.e._ radius equal to 1) regular icosahedron centered at the origin, with each subdivision generating four new faces per face, resulting in a total of \(|v^{O}|=2562\) vertices and \(|f^{O}|=5120\) faces. The object pose is defined as the 3D bounding box, consisting of eight 3D corner coordinates.
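As a concrete illustration of the object model above, the sketch below builds the level-4 icosphere template \(S^{O}\) and applies the per-object scale, displacement, and translation of Eq. (1); it assumes PyTorch3D's `ico_sphere` utility, and the particular parameterisation of the learnable quantities (e.g. an exponentiated scale) is an illustrative choice rather than the paper's exact implementation.

```python
import torch
from pytorch3d.utils import ico_sphere

# Level-4 subdivision of a unit icosahedron: 2562 vertices and 5120 faces,
# matching the template sphere S^O described in Section 3.1.
template = ico_sphere(level=4)
verts_template = template.verts_packed()   # (2562, 3)
faces_template = template.faces_packed()   # (5120, 3)

# Learnable per-object deformation parameters for Eq. (1):
# S^j ~ c^j * (S^O + D^j) + t^j
displacements = torch.zeros_like(verts_template, requires_grad=True)  # D^j
log_scale = torch.zeros(1, requires_grad=True)                        # c^j = exp(log_scale) > 0
translation = torch.zeros(1, 3, requires_grad=True)                   # t^j

def deformed_vertices():
    """Vertices of the deformed sphere approximating the target object S^j."""
    return torch.exp(log_scale) * (verts_template + displacements) + translation
```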
Object Approximation Procedure: In order to approximate differently shaped objects, we first scale \((c^{j})\) and translate \((t^{j})\) the target mesh \(S^{j}\) to fit inside the sphere. Then, to learn the set of displacements \(D^{j}\), we minimize the Chamfer distance between the predicted and target mesh, computed on a total of \(5,000\) surface samples. We additionally enforce surface smoothness by adding the following shape regularizers to the objective: (i) surface Laplacian smoothness, (ii) normal consistency across neighboring faces, and (iii) edge length consistency across the entire deformed mesh. We minimize the weighted summation of all mentioned terms using stochastic gradient descent (SGD). Figure 2: The complete collection of our sphere-based object approximations used as ground-truth. ### PoseNet: 3D Pose Estimation The first component of our network pipeline, \(PoseNet\), simultaneously estimates the 3D hand joint locations and the 3D bounding box corners of the object from an input voxelized depth map. We modify the V2V-PoseNet architecture of [42] by introducing a new \(1\times 1\times 1\) volumetric convolutional back-layer, targeted to predict the 3D object pose in parallel to the 3D hand pose. The input depth map is converted to a 3D binary voxelized representation \(V_{D}\) of size \(88\times 88\times 88\). Each voxel value is set to 1 when occupied and 0 otherwise, _i.e._ \(V_{D}\in\{0,1\}\). The cube size is fixed to the empirically found value of 200 mm. The output of the network consists of a set of 3D heatmaps: (i) one for each hand joint \(j\), \(\hat{P}_{j}^{H}(v)\), and (ii) one for each object bounding box corner \(b\), \(\hat{P}_{b}^{O}(v)\). All heatmaps are discretized on a grid of size \(44\times 44\times 44\). Ground-truth heatmaps \(P_{j}^{H}(v)\) and \(P_{b}^{O}(v)\) are generated by applying a 3D Gaussian centered on the ground-truth locations with a fixed standard deviation. The training loss between the predicted and target heatmaps is calculated by the mean-squared error (MSE): \[\mathcal{L}_{pose}=\sum_{v}\sum_{j=1}^{|J|}||\hat{P}_{j}^{H}-P_{j}^{H}||+\sum_{b=1}^{|B|}||\hat{P}_{b}^{O}-P_{b}^{O}|| \tag{2}\] where \(|J|=21\) is the number of hand joints and \(|B|=8\) is the number of object corners. In the above formula, we omitted all dependencies on the voxels for brevity. ### VoxelNet: 3D Voxelized Shape Given \(V_{D}\), \(\hat{P}^{H}\) and \(\hat{P}^{O}\), the second component of our pipeline, \(VoxelNet\), predicts hand-object voxelized shapes \(\hat{V}^{H}(v)\) and \(\hat{V}^{O}(v)\) for all existing voxels \(v\) in the grid. The 3D CNN-based architecture of \(VoxelNet\) is inspired by the work of Malik _et al._[9]. We introduce an additional 3D convolutional layer to predict the voxelized object shape. Our voxelized hand-object shapes are defined in the range \([0,1]\), as done in the previous work. The predicted voxelized shapes represent a complete surface representation for both the hand and the object, including both visible and occluded surface information, as learned from the dataset, thus providing richer information for the next algorithmic steps.
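For concreteness, a minimal sketch of how ground-truth 3D heatmaps such as \(P_{j}^{H}(v)\) can be generated is given below; the 44-voxel grid follows the text, while the standard deviation value and the assumption that keypoints are already expressed in voxel coordinates are illustrative.

```python
import torch

def gaussian_heatmaps(keypoints_vox, grid_size=44, sigma=1.7):
    """One 3D Gaussian heatmap per keypoint on a (grid_size)^3 grid.

    keypoints_vox: (K, 3) keypoint coordinates in voxel units of the output grid.
    Returns: (K, grid_size, grid_size, grid_size) tensor.
    """
    coords = torch.arange(grid_size, dtype=torch.float32)
    zz, yy, xx = torch.meshgrid(coords, coords, coords, indexing="ij")
    grid = torch.stack([xx, yy, zz], dim=-1)                   # (D, D, D, 3)
    diff = grid[None] - keypoints_vox[:, None, None, None, :]  # (K, D, D, D, 3)
    sq_dist = (diff ** 2).sum(dim=-1)
    return torch.exp(-sq_dist / (2.0 * sigma ** 2))

# e.g. targets for the 21 hand joints and 8 box corners used by PoseNet:
# P_H = gaussian_heatmaps(hand_joints_vox)   # (21, 44, 44, 44)
# P_O = gaussian_heatmaps(box_corners_vox)   # (8, 44, 44, 44)
```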
To train \(VoxelNet\), we use a per-voxel sigmoid activation combined with a binary cross-entropy loss for the voxelized hand shape (and similarly for the voxelized object shape): \[\mathcal{L}^{H}_{voxel}(v)=-(V^{H}\log(\hat{V}^{H})+(1-V^{H})\log(1-\hat{V}^{H})) \tag{3}\] where \(V^{H}\) and \(\hat{V}^{H}\) are the ground-truth and estimated voxelized hand shapes, respectively (and analogously for the object). ### ShapeGraFormer To obtain a topologically-coherent hand-object mesh representation from the voxelized hand and object predictions obtained from \(VoxelNet\), we train a new \(ShapeGraFormer\)[41]. The \(ShapeGraFormer\) includes Graph Convolutional layers and Multi-headed Attention layers in order to convert depth features into a pose and a shape for both the hand and the object. Although it was originally designed to lift 2D poses to 3D, the GraFormer combines the advantages of Graph Convolutional Networks and Transformers, making it capable of effectively solving any problem that can be represented as a graph. Both the hand and the spherical objects have consistent topology and hence can be represented as a graph. Therefore, in our method, we utilize three separate GraFormers, namely a hand GraFormer, an object GraFormer, and a refinement GraFormer, together with two feature extractors (a hand feature extractor and an object feature extractor), as described in the following subsections. Specifically, the network outputs hand-object vertex coordinates \(v^{H}\) and \(v^{O}\), respectively from the MANO hand model \(S^{H}\) and the sphere-based approximated object shape \(S^{O}\). The hand and object vertex loss for training is defined using the MSE: \[\mathcal{L}_{shape}=\frac{\sum_{i}(v^{H}_{i}-\hat{v}^{H}_{i})^{2}}{|v^{H}|}+\frac{\sum_{j}(v^{O}_{j}-\hat{v}^{O}_{j})^{2}}{|v^{O}|} \tag{4}\] where \(\hat{v}^{H}_{i}\) and \(v^{H}_{i}\) are respectively the predicted and ground-truth \(i\)-th hand vertex coordinates, and \(\hat{v}^{O}_{j}\) and \(v^{O}_{j}\) are the predicted and ground-truth object vertex coordinates. #### 3.4.1 Feature Extractor and Graph Initialization For the graph initialization, we use the outputs of the \(VoxelNet\) and the raw voxelized depth, as shown in Figure 1. Namely, we extract feature maps \(\mathcal{F}_{V}\) from the \(VoxelNet\) and pass them through an MLP that converts them into a feature vector of size 256. Additionally, we utilize a simple 3D CNN to extract features from the voxelized shapes and reduce them to 128 features. Furthermore, a 3D Max Pool layer reduces the size of the raw voxelized depth from \(44\times 44\times 44\) to \(11\times 11\times 11\), creating a 1331-dimensional feature vector. These features are then combined to create a hand feature vector \(\mathcal{F}^{H}_{1715}\) of size 1715 to initialize the graph vertices, as shown in Figure 1. The same operation is repeated for the object's vertices, resulting in \(\mathcal{F}^{O}_{1715}\). #### 3.4.2 Positional Embedding In order to generate distinct features for each vertex in the graph, we propose a positional embedding layer that converts the vertices of the template meshes for both the hand and the object into positional vectors \(\mathcal{E}^{p}_{i}\) of the same size as the feature vector, where \(i\) is the index of the \(i\)-th vertex in the combined hand-object graph.
The positional vectors \(\mathcal{E}^{p}_{i}\) are then accumulated with the shared feature vectors \(\mathcal{F}^{H}_{1715}\) and \(\mathcal{F}^{O}_{1715}\), depending on whether the \(i\)-th vertex belongs to the hand or the object, in order to create a unique representation for each vertex. For the hand mesh, we adopt the default MANO hand as the template, while for the object mesh, we use the sphere that is utilized for deforming objects, see Section 3.1. #### 3.4.3 GraFormer Details Each GraFormer consists of five consecutive GraAttention components followed by Chebyshev Graph Convolutional layers, as described in [41] and shown in Figure 3. Figure 3: Illustration of the GraFormer architecture. A GraAttention component consists of a Multi-headed Attention layer where the hidden dimension is 128 and the number of heads is 4, following the ablation study in [41], which studied the impact of the hidden dimension size and the number of layers in the GraFormer on pose estimation. Compared to normal Transformers, the last fully-connected layer of the Multi-headed Attention in the GraFormer is a graph convolutional layer, not a feed-forward layer. The GraFormer also contains an input layer that maps the feature vector to the hidden dimension size and an output layer that maps the hidden dimension into the corresponding 3D coordinate value for each vertex in the graph. To initialize the adjacency matrix of the graph layers, we use the mesh faces of the MANO model along with the faces of the spherical mesh. #### 3.4.4 Refinement GraFormer To enhance the realistic appearance and correct minor shape-related artifacts in the predicted object shapes, we add an additional GraFormer for refinement. We utilize the initial shape produced by the hand and object GraFormers as input to another positional embedding layer, while we use \(\mathcal{F}_{1715}^{H}\) and \(\mathcal{F}_{1715}^{O}\) to initialize the new graph. In the case of using a refinement GraFormer during training, we get two separate shapes from the \(ShapeGraFormer\) and apply the same loss function mentioned in Section 3.4 to both of them. ### Training and Implementation Details We train all the components of our network on fully annotated public hand-object pose and shape datasets, namely HO-3D [28] and DexYCB [29]. For training, we use the Adam [43] optimizer with a learning rate set to 0.001. For improved convergence, we train \(PoseNet\) separately, then fix the obtained weights to train the remaining network components. All learning and inference are implemented in PyTorch and are conducted on an NVIDIA A100 GPU. ## 4 Experiments In this section, we quantitatively and qualitatively evaluate the effectiveness of our 3D hand-object reconstruction pipeline on two popular datasets, namely the HO-3D dataset (versions v2 and v3) [28] and DexYCB [29]. We additionally compare our pipeline with state-of-the-art approaches in Section 4.2. For quantitative evaluation and comparisons, we use the following two metrics: (i) the average 3D joint location error and (ii) the mean vertex location error over all test frames. ### Datasets HO-3D: The HO-3D dataset [28] is a publicly available dataset with 3D pose annotations for hands interacting with objects, captured from third-person views. The dataset has multiple versions and we report results on v2 and v3.
For HO-3D (v3), the training set \(\mathcal{D}:=\{\mathcal{D}^{train},\mathcal{D}^{valid}\}\) contains annotations for \(|\mathcal{D}^{train}|=71,662\) and \(|\mathcal{D}^{valid}|=10,927\) images, with a total of 55 sequences and 10 different objects from the YCB-Video dataset [44], 9 for the training set and 1 (unseen) for the evaluation set. The evaluation set \(\mathcal{D}^{eval}\) comprises 13 sequences with a total of \(|\mathcal{D}^{eval}|=20,137\) frames addressing challenging scenarios: namely, (i) 3 sequences with 2 seen objects and seen hands, (ii) 5 sequences with 1 seen object but unseen hands, and (iii) 5 sequences with seen hands but 1 unseen object. Hands in the evaluation set are annotated only with the wrist coordinates; the full hand is not annotated. Object pose and shape are annotated over all available sets. DexYCB: To extend our evaluation, we also train our network on the DexYCB dataset [29]. DexYCB contains hand pose and shape and 6D object pose annotations for \(\sim\)582k frames recorded on 10 different subjects, using 20 different objects and 8 views. We use the S1 evaluation setup as specified by the authors, where \(\mathcal{D}^{valid}\) contains 1 unseen subject, \(\mathcal{D}^{eval}\) contains 2 unseen subjects, and \(\mathcal{D}^{train}\) contains 7 subjects and all objects. The exact split sizes are: \(|\mathcal{D}^{train}|=407,088\), \(|\mathcal{D}^{valid}|=58,592\), and \(|\mathcal{D}^{eval}|=116,288\). In the next section, we show results on \(\mathcal{D}^{eval}\) for both datasets. ### Evaluation In this section, we evaluate hand-object shape and pose reconstruction and compare it to state-of-the-art approaches in challenging scenarios. Methods for Comparison: We compare our work on HO-3D quantitatively with six different methods. The works of Hasson _et al._[23] and THOR-Net [20] are the most related to ours in terms of goals. However, as they focus on RGB inputs, comparisons are made up to scale. Malik _et al._[9], on the other hand, is based on depth input and has a voxel-based network pipeline comparable to our design. However, they ignore the presence of an object and reconstruct hands in isolation. We include the representative RGB-based hand reconstruction approach of Hampali _et al._[28], and ArtiBoost [31], for completeness. For a fair comparison, all methods have been (re-)trained on the HO-3D dataset. The first two methods provide publicly available results (hand-only), which we report in Table 1. We re-implemented HandVoxNet following the authors' instructions and trained all the network components on HO-3D. In addition, we also report the root-relative pose estimation error in Table 2 and compare it to two benchmark methods mentioned by the DexYCB authors. We also show qualitative samples for hand reconstruction from DexYCB in Figure 6. The refinement stage has no impact on hand reconstruction, as shown in Table 1. However, joint training for the hand and the object outperforms other methods. Figure 5 shows our results on a few frames of the challenging evaluation sequence compared to THOR-Net. With respect to THOR-Net, our approach reconstructs more accurate hand shapes, as it better exploits hand-object kinematic correlation and depth information. These are implicitly learned while predicting both interacting shapes simultaneously. In Table 2, we show an improvement of 0.53 cm in hand pose estimation compared to benchmark results on the DexYCB dataset.
Figure 6 also shows the qualitative reconstruction results on DexYCB without the additional refinement network. \begin{table} \begin{tabular}{|l|c|c|} \hline Method & Joint err. (cm) & Mesh err. (cm) \\ \hline A2J [45] & 2.55 & - \\ Spurr _et al._[46] & 2.27 & - \\ **Ours** & **1.74** & **2.65** \\ \hline \end{tabular} \end{table} Table 2: Root-relative hand pose estimation results on \(\mathcal{D}^{eval}\) of the DexYCB [29] dataset (S1) in comparison to benchmark results. Object Reconstruction: We found that topological consistency is the key factor allowing \(ShapeGraFormer\) to predict smooth vertex point clouds across all sequences, without the need for additional smoothness constraints, see Figure 4. Figure 4: Reconstruction results at different steps. Each row shows a different sample from \(\mathcal{D}^{valid}\) or \(\mathcal{D}^{eval}\) from HO-3D (v3). After sphere-based registration, all objects share the same topology and number of vertices. We believe our choice of sphere resolution to be a good trade-off between approximation quality and the number of vertices. Sphere-based approximation implicitly repairs irregularities in the target object shapes, making it best suited for hand-object prediction, at the cost of over-smoothed sharp edges and tiny surface details, see Figure 2. We evaluate object reconstruction on \(\mathcal{D}^{valid}\) and \(\mathcal{D}^{eval}\) of the HO-3D dataset, see Figure 7 and Table 3, as well as on the DexYCB dataset, see Figure 6. \begin{table} \begin{tabular}{|l|c|c|c|c|c|} \hline & Bottle & Box & Can & Marker & Wood \\ \hline MPVPE (cm) & 5.9 & 6.4 & 7.0 & 9.7 & 11.5 \\ \hline \end{tabular} \end{table} Table 3: Object Reconstruction MPVPE on a selected set of objects from HO-3D and DexYCB. The additional hand-object refinement step improved object reconstruction on all objects. This suggests that the refinement network utilizes hand information in order to improve object reconstruction. We notice that even in the presence of inaccurate pose predictions as input, our approach recovers smooth object shapes, see Figure 4. Compared to THOR-Net, our approach tends to oversmooth object edges, see Figure 5, possibly due to the simplified reconstruction approach. Figure 6: Reconstruction results at different steps. Each row shows a different sample from \(\mathcal{D}^{eval}\) from DexYCB. Figure 7: Object reconstruction error (in mm) on different objects from \(\mathcal{D}^{valid}\) and \(\mathcal{D}^{eval}\) of HO-3D. The results show that adding a refinement GraFormer improves object reconstruction. We ablate the different inputs to the GraFormer and study the impact on hand reconstruction. The ablation study in Table 4 shows that combining the voxelized depth \(V_{D}\), \(\mathcal{F}_{V}\), and \(\hat{V}^{H}\) as input to the \(ShapeGraFormer\) yields the best results. Furthermore, the mixture of graph layers with transformers in the design of the GraFormer is critical to achieving the best performance. \begin{table} \begin{tabular}{|l|c|c|} \hline Experiment & Joint err. (cm) & Mesh err. (cm) \\ \hline ShapeGraFormer (w/o \(V_{D}\)) & 2.01 & 2.06 \\ ShapeGraFormer (w/o \(\mathcal{F}_{V}\)) & 2.05 & 2.01 \\ ShapeGraFormer (w/o \(\hat{V}^{H}\)) & 2.00 & 1.95 \\ **ShapeGraFormer** (w/ \(V_{D}\oplus\mathcal{F}_{V}\oplus\hat{V}^{H}\)) & **1.99** & **1.94** \\ \hline ShapeGraFormer (w/o GCN) & 19.32 & 19.36 \\ ShapeGraFormer (w/o Transformer) & 2.06 & 2.01 \\ **ShapeGraFormer** (w/ GCN + Transformer) & **1.99** & **1.94** \\ \hline \end{tabular} \end{table} Table 4: Top: An ablation study on the modality of \(\mathcal{F}_{1715}^{H}\). Bottom: An ablation study on the GraFormer design choice. Contact Loss: Penetration Avoidance and Contact Enforcement: We test the impact of a differentiable contact loss, consisting of an attraction \(\mathcal{L}_{attraction}\) and a repulsion \(\mathcal{L}_{repulsion}\) term, similar to Hasson _et al._[23]. \(\mathcal{L}_{attraction}\) is aimed at enforcing hand-object contact by penalizing the distance between the object and the fingertips, while \(\mathcal{L}_{repulsion}\) penalizes mesh interpenetration. As shown in Table 1, the additional loss term does not introduce a tangible increase in performance, suggesting that our independent Refinement GraFormer component implicitly and successfully learns valid hand-object interactions. ## 5 Conclusion In this paper, we propose one of the first methods for realistic hand-object pose and shape reconstruction from a single depth map. We introduce a novel 3D voxel-based GraFormer network pipeline, which reconstructs detailed 3D shapes via direct regression of mesh vertices. We conduct an ablation study to show the effectiveness of our design choices and the impact of utilizing the power of GCNs along with Transformers for hand-object shape estimation and refinement. We perform quantitative and qualitative analyses on the HO-3D dataset [28] and the DexYCB dataset [29] and show outstanding comparative results with the state-of-the-art. In future work, we plan to address limitations such as inaccurate annotations. In addition, we will study RGB+D methods to utilize the extra features found in RGB frames. Acknowledgments: This work was partially funded by the Federal Ministry of Education and Research of the Federal Republic of Germany (BMBF), under grant agreements: DECODE [Grant Nr 01IW21001], GreifbAR [Grant Nr 16SV8732], and by the EU project FLUENTLY [Grant Nr 101058680].
2305.04714
Enhancing synthetic training data for quantitative photoacoustic tomography with generative deep learning
Multiwavelength photoacoustic images encode information about a tissue's optical absorption distribution. This can be used to estimate its blood oxygen saturation distribution (sO2), an important physiological indicator of tissue health and pathology. However the wavelength dependence of the light fluence distribution complicates the recovery of accurate estimates, in particular, preventing the use of a straightforward spectroscopic inversion. Deep learning approaches have been shown effective at producing accurate estimates of sO2 from simulated data. Though, the translation of generic supervised learning approaches to real tissues is prevented by the lack of real `paired' training data (multiwavelength PA images of in vivo tissues with their corresponding sO2 distributions). Here, we discuss i) why networks trained on images simulated using conventional means are unlikely to generalise their performance on real tissues, and ii) the prospects of using two generative adversarial network based strategies to improve the generalisability of sO2-estimating networks trained on synthetic data: a) CycleGAN-driven unsupervised domain adaptation of conventionally simulated images, and b) the generation of paired training data using AmbientGANs.
Ciaran Bench, Ben T. Cox
2023-05-08T13:57:30Z
http://arxiv.org/abs/2305.04714v1
Enhancing synthetic training data for quantitative photoacoustic tomography with generative deep learning ###### Abstract Multiwavelength photoacoustic images encode information about a tissue's optical absorption distribution. This can be used to estimate its blood oxygen saturation distribution (sO\({}_{2}\)), an important physiological indicator of tissue health and pathology. However, the wavelength dependence of the light fluence distribution complicates the recovery of accurate estimates, in particular preventing the use of a straightforward spectroscopic inversion. Deep learning approaches have been shown to be effective at producing accurate estimates of sO\({}_{2}\) from simulated data. However, the translation of generic supervised learning approaches to real tissues is prevented by the lack of real 'paired' training data (multiwavelength PA images of _in vivo_ tissues with their corresponding sO\({}_{2}\) distributions). Here, we discuss i) why networks trained on images simulated using conventional means are unlikely to generalise their performance on real tissues, and ii) the prospects of using two generative adversarial network based strategies to improve the generalisability of sO\({}_{2}\)-estimating networks trained on synthetic data: a) CycleGAN-driven unsupervised domain adaptation of conventionally simulated images, and b) the generation of paired training data using AmbientGANs. ## I Introduction Information about a tissue's blood oxygen saturation (sO\({}_{2}\)) distribution can be used to assess patient health and monitor tumour therapies. Therefore, there is a demand for a modality that can provide high resolution images of this haematological parameter [1]. Diffuse optical tomography and Blood Oxygen Level Dependent Magnetic Resonance Imaging (BOLD MRI) can provide information about, or related to, sO\({}_{2}\). However, the former only provides low resolution images at superficial depths, while the latter is only sensitive to changes in blood volume and venous deoxyhaemoglobin concentration [2, 3]. In contrast, photoacoustic (PA) tomography can provide superior resolution and sensitivity to both oxyhaemoglobin and deoxyhaemoglobin. Images are acquired by initially sending pulses of near-infrared (NIR) laser light into the tissue, where photons undergo several scattering events before being absorbed by chromophores. The subsequent relaxation of the excited chromophores raises the temperature of their surrounding environments, inducing local increases in pressure that propagate to the sample surface as acoustic waves. Here, transducers (e.g. based on piezo-electrics or Fabry-Perot interferometers) record the waves as pressure time series. Recorded pressure time series can be used to reconstruct images of the photoacoustic initial pressure distribution (\(p_{0}(x,\lambda)\), where \(x\) is the location within the tissue, and \(\lambda\) is the illumination wavelength) using one of several algorithms [4, 5]. Unlike purely optical modalities, information about the optical absorption is encoded in acoustic waves that propagate to the sample surface. Compared to photons, these undergo comparatively little scattering, resulting in an improvement in penetration depth.
The amplitude of a perfectly reconstructed PA image is given by: \[p_{0}(x,\lambda)=\mu_{a}(x,\lambda)\Gamma(x)\Phi(x,\lambda), \tag{1}\] where \(\mu_{a}\) is the optical absorption coefficient, \(x\) is the location within the sample, \(\lambda\) is the optical illumination wavelength, \(\Gamma\) is the PA efficiency, and \(\Phi\) is the light fluence. It is clear from Eq. 1 that PA images encode information about the sample's optical absorption coefficient. Knowledge of this parameter at multiple wavelengths (or \(\mu_{a}\) scaled by a wavelength-independent constant) can be used to quantify chromophore concentrations with a straightforward spectroscopic inversion (assuming all chromophore species and their molar absorption spectra are known) [6]. However, acquiring accurate estimates of sO\({}_{2}\) from multiwavelength PA images is non-trivial for several reasons [1]. Chief among these is the confounding effect of the wavelength-dependent fluence distribution. A PA image can be viewed as an image of the \(\mu_{a}\) distribution scaled by an (assumed) wavelength-independent \(\Gamma\) and a wavelength-dependent \(\Phi\). It is this fluence term that prevents the effective use of a simple spectroscopic inversion of multiwavelength PA image amplitudes to estimate chromophore concentrations. Spectral colouring is a term used to describe how the fluence alters a tissue region's PA spectrum so that it is not directly proportional to the region's optical absorption spectrum. If available, an accurate estimate of the multiwavelength fluence distribution can be used to correct the fluence term out of each image and restore the ability to recover concentration estimates with a spectroscopic inversion. In principle this could be acquired by running a simulation of light propagation in a synthetic model of the tissue. However, because tissue is highly scattering at NIR wavelengths, the spatial distribution of absorbers and scatterers throughout the tissue must be known to acquire an accurate estimate of the fluence. Therefore, the key challenge with quantifying chromophore concentrations from PA images is finding some way to compensate for the multiwavelength fluence distribution using only incomplete prior knowledge of the sample's optical properties. Despite the high errors that may occur from spectral colouring, sO\({}_{2}\) estimates acquired with an approximate linear inversion strategy _without_ an accompanying fluence correction (referred to here as sO\({}_{2}^{*}\)) have been used to assess changes in a tissue's oxygenation or pathology state. In some of these studies the error due to spectral colouring is acknowledged, and the effective utilisation of sO\({}_{2}^{*}\) values is demonstrated by exhibiting how changes in this parameter with the onset of some physiological challenge are consistent with the expected change in tissue state [7, 8]. However, this approach should be used with caution, as there is no guarantee that changes in sO\({}_{2}^{*}\) will reflect changes in the actual sO\({}_{2}\) or act as an equivalent biomarker in all cases. The degree of spectral colouring can change considerably with changes in tissue state caused by differences in blood flow or other processes that affect the concentrations of different chromophores. Several strategies have been proposed to overcome the challenges with estimating the wavelength dependence of the fluence.
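For reference, the 'straightforward spectroscopic inversion' referred to above amounts to a linear least-squares problem once the fluence term has been compensated for; the sketch below assumes only oxy- and deoxyhaemoglobin contribute, and the variable names and pixel-wise formulation are illustrative.

```python
import numpy as np

def spectroscopic_so2(mu_a, eps_hbo2, eps_hb):
    """Estimate sO2 for one pixel from fluence-corrected absorption values.

    mu_a:     (W,) absorption coefficient (or mu_a scaled by a wavelength-
              independent constant) at W illumination wavelengths.
    eps_hbo2: (W,) molar absorption spectrum of oxyhaemoglobin.
    eps_hb:   (W,) molar absorption spectrum of deoxyhaemoglobin.
    """
    E = np.stack([eps_hbo2, eps_hb], axis=1)        # (W, 2) spectra matrix
    c, *_ = np.linalg.lstsq(E, mu_a, rcond=None)    # [C_HbO2, C_Hb]
    c = np.clip(c, 0.0, None)                       # concentrations are non-negative
    return c[0] / (c[0] + c[1] + 1e-12)             # sO2 = C_HbO2 / (C_HbO2 + C_Hb)

# Applying the same inversion to raw PA amplitudes, i.e. mu_a(lambda) * Phi(lambda),
# instead of fluence-corrected values yields the biased estimate denoted sO2* above.
```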
Here, we discuss those that have been applied to physical phantoms or real tissues (and therefore have been tested in realistic imaging scenarios), and are noninvasive (an ultimately preferable option given the additional expertise required to use contrast agents or invasive light sources _in vivo_, and to minimise patient trauma). In some cases, fluence estimates are acquired by constructing synthetic tissue models using assumed optical properties of the real tissue (e.g. based on literature values) or those measured at low resolution with an adjacent imaging modality (e.g. Diffuse Optical Tomography (DOT) or oblique-incidence diffuse reflectance measurements [9, 10, 11]). However, given the natural variation in _in vivo_ tissue properties, it is unlikely that the sample's tissue components will have the same optical properties as those reported in the literature for a specific sample. Furthermore, published values of the optical properties of various tissue types are often measured from excised samples, which may differ from their _in vivo_ counterparts because of changes in density and blood content. There is also no guarantee that every unique tissue type present in the sample, or its precise location, will be known _a priori_, and therefore there is likely to be a significant degree of mismatch between a synthetic tissue model and its corresponding real tissue. As for the adjacent modalities, fluence estimates derived from DOT measurements have a depth-dependent resolution practically limited to around 2-3 mm and thus inadequately capture the finer-scale heterogeneity of real tissue [10, 12]. Additionally, the modality requires complex hardware, making it less appealing to utilise. Sample optical properties have also been estimated from PA images by i) making the implicit assumption that light propagation can be accurately modelled in 1D, ii) assuming the tissue can be modelled as a series of optically homogeneous layers (each represented as a plane), and then iii) fitting a 1D light model (e.g. the Beer-Lambert law) to the measured data. However, 1D light propagation models are only valid under a narrow set of conditions that are unlikely to be met in a real tissue [1]. Furthermore, given their typical heterogeneity, most tissues are not accurately modelled as a series of planar layers. A method to measure the fluence non-invasively using ultrasonically tagged light has been proposed [13, 14, 15, 16]. However, this also assumes a significant degree of homogeneity in tissue properties that is unlikely to be met in most _in vivo_ imaging scenarios. Statistical separation/decomposition approaches such as those based on Independent Component Analysis (ICA) have been applied to images of synthetic phantoms [17]. However, ICA relies on the assumption that chromophores are spatially distributed in a statistically independent manner, which is certainly not the case for Hb and HbO\({}_{2}\)_in vivo_. Iterative error minimisation techniques (specifically model-based inversion) are another class of approaches that have been tested in phantoms and _in vivo_ tissue [18, 19, 20, 21, 22, 23, 24]. Here, multiwavelength images of the sample (\(x\)) are acquired, and an accurate model of the imaging system is formulated (referred to here by the operator \(I\)). A synthetic tissue model (referred to as \(c\)) is used as an input to \(I\), which outputs synthetic PA images \(\hat{x}\), so \(I(c)\to\hat{x}\). The parameters of \(c\) are then updated so as to decrease the error between \(\hat{x}\) and \(x\).
This process is repeated iteratively until the error between \(\hat{x}\) and \(x\) satisfies some criterion. The version of \(c\) that satisfies this condition is assumed to accurately represent the properties of the real tissue/sample. This relies on \(I\) being a highly accurate model of the system used to acquire \(x\). However, formulating an accurate model is challenging in practice due to the difficulty of precisely characterising all aspects of the signal acquisition pathway. This is one of several challenges with using this approach. Additionally, when the optical scattering distribution is unknown (and therefore recovered as part of the inversion), the inversion is ill-posed, adding further complications. Also, all of the different chromophore species present in significant quantities in the sample (as well as their molar absorption/scattering spectra) must be known _a priori_. This information may not always be available for _in vivo_ samples. Therefore, this strategy has not been widely implemented to estimate chromophore concentrations _in vivo_. In contrast, data-driven techniques based on deep neural networks do not require a well-characterised imaging system. Instead, a generic supervised learning approach requires a training set composed of multiwavelength PA images of each tissue and their corresponding ground truth distributions of chromophore concentrations. CNN-based networks learn to associate an arrangement of particular image features (representing tissue components or other aspects of the image such as reconstruction artefacts) with an sO\({}_{2}\) distribution. However, there is currently no straightforward way to acquire the ground truth sO\({}_{2}\) distribution _in vivo_. Consequently, all reported uses of deep networks involve training datasets composed of simulated images or images of physical phantoms [25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38]. However, networks trained on either type of data will not perform well on images of _in vivo_ tissue. This is because the frequency at which modelled tissue components are found in each tissue, the features that represent them, how they are arranged in space, and the relation these image features have to the sO\({}_{2}\) may differ. Specifically, simulated/phantom datasets belong to a data domain that differs from that describing images of _in vivo_ tissue (see Section II). Unsupervised learning techniques that do not require paired data have been tested on simulated datasets, but have yet to be applied to physical phantoms or real tissues [28]. Visual transformer-based architectures may be effective at extracting and utilising global contextual features relevant to learning sO\({}_{2}\) quantification, and have been used in unsupervised learning frameworks to perform registration of biomedical image data [39]. However, it is not clear how the scheme could be adapted to perform unsupervised image-to-image regression tasks. Therefore, acquiring a realistic dataset remains a critical step towards translating the use of deep neural networks to images of real tissue (and, more broadly, to validate any other proposed approach _in vivo_). It may be possible to measure the sO\({}_{2}\) in several discrete regions of a tissue using inserted electrodes or probes, and then interpolate between them to estimate a continuous distribution to use as a ground truth [40, 41]. However, this is time consuming, costly, and would ultimately provide only a low resolution ground truth.
Synthetic images are comparatively more cost effective to generate, and so it would be preferable to instead find a way to leverage this kind of data. ## II The domain gap between real and simulated images of tissue A network's ability to generalise its performance on a test set of data separate from its training set depends on the similarity of the two datasets' data domains. A data domain \(D=\{\chi,\Upsilon,P(x,y)\}\) consists of an input feature space \(\chi\) (a vector space containing all image features), an output feature space \(\Upsilon\), and a joint probability distribution \(P(x,y)\) over the input and output feature space pair \(\chi\times\Upsilon\), where \(x\) is an instance of the network inputs \(x_{1},x_{2},...x_{i}\in\textbf{x}\) and \(y\) is an instance of the corresponding ground truths \(y_{1},y_{2},...y_{i}\in\textbf{y}\)[42, 43, 44, 45, 46, 47, 48, 49]. In a supervised learning framework, training data will consist of image pairs \(\{x_{i},y_{i}\}\). The joint probability distribution can be decomposed into marginal (commonly referred to as the 'data distribution') and conditional distributions: \(P(x,y)=P(x)P(y|x)\) or \(P(x,y)=P(y)P(x|y)\). A network's performance will generalise well to any unseen test data when the domain describing the training set (\(D_{train}\)) is approximately equal to the domain describing the test data (\(D_{test}\)): \(D_{train}\approx D_{test}\). Therefore, a network trained on simulated images may generalise its performance to images of real tissues if their domains are sufficiently similar. However, images generated using conventional simulation pipelines do not belong to the same domain as that describing images of real tissues, for several reasons. Synthetic tissue models used in simulations are often constructed by approximating tissue components with simple shapes arranged in a way that appears to mimic components found in images of real tissues (acquired with various modalities). Optical and acoustic properties may be assigned based on values reported in the literature. The image data is then simulated by applying a forward model of image generation that includes a model of light transport, acoustic propagation, and image reconstruction, using the synthetic tissue as an input. In some cases, though, only a light model is used to construct the forward model. In principle, it is possible to optimise the forward model by carefully characterising a known imaging system. However, conventionally simulated images may still fail to achieve domain alignment with real images, as the construction of sufficiently realistic tissue models requires highly accurate prior knowledge of the way optical properties are typically distributed in real tissues - information that is not readily available. ### _Causes of the domain gap_ Here we outline the causes of the domain gap between simulated training images and test images of real _in vivo_ tissues. #### II-A1 Unequal feature spaces \(\chi_{train}\neq\chi_{test}\) _or_ \(\Upsilon_{train}\neq\Upsilon_{test}\)_:_ If the shapes of the modelled tissue components are not sufficiently realistic, or if not all of the processes involved in simulating image acquisition are modelled, then simulated images may contain features not found in images of real tissues. Consequently, a network trained on these simulated images will not be able to accurately detect tissue components (or other features such as artefacts) in real images.
Ultimately, the network will have inadequate information about image contents, and consequently, sO\({}_{2}\) estimates will have low accuracy. Network accuracy may also suffer if the output feature space of the test set is not equal to the output feature space of the training set. #### II-A2 Unequal data distributions \(P(x)_{train}\neq P(x)_{test}\): The frequency at which certain features representing arrangements of tissue components or artefacts occur in simulated images may differ from the frequency at which they occur in real images. This can happen if the synthetic tissue models in the training set are not sufficiently representative of the tissues depicted in the test set, or if the model used to simulate them does not reflect the imaging system used to acquire the real test images. For example, a network may be trained on a simulated dataset where only a few example tissues have melanin. If this network were applied to a test set of images depicting real tissues with melanin, the network would be unlikely to generalise its performance. This is because it has only learned from a few examples containing melanin, and so will not have learned how its presence may affect the fluence in a wider range of scenarios. #### II-A3 Unequal joint distributions \(P(x,y)_{train}\neq P(x,y)_{test}\): It is possible for the same feature to occur in both the training and test set, but have a different relation to the ground truth depending on which dataset it is found in. For example, wing-like reconstruction artefacts in real images may be mistaken for vessel-like structures by a network that was trained on PA images simulated without an acoustic model. ### _Minimising the domain gap with transfer learning_ Transfer learning may provide a suitable framework for improving a network's performance on out-of-domain test data. Transfer learning refers to a class of techniques that enable a network trained to perform some 'source' task (using data belonging to a source domain) to perform a related 'target' task on data described by a target domain. More specifically, consider a network \(K\) that is trained in a supervised manner on image pairs \(\{\hat{x}_{i},\hat{y}_{i}\}\in\mathcal{D}_{s}\{\chi_{s},\Upsilon_{s},P(\hat{x},\hat{y})\}\) (where \(\mathcal{D}_{s}\) is the source domain, with network inputs given by \(\hat{x}_{1},\hat{x}_{2},\ldots,\hat{x}_{i}\in\hat{\textbf{x}}\), and outputs \(\hat{y}_{1},\hat{y}_{2},\ldots,\hat{y}_{i}\in\hat{\textbf{y}}\)), to predict \(\hat{y}_{i}\) for a corresponding input \(\hat{x}_{i}\). In other words, it learns a mapping \(K:\chi_{s}\rightarrow\Upsilon_{s}\), which is referred to as the source task \(\mathcal{T}_{s}\)[42, 50]. Transfer learning aims to use \(K\) in some form to perform the separate but related target task \(\mathcal{T}_{t}\) of predicting the outputs \(\tilde{y}_{i}\) from inputs \(\tilde{x}_{i}\) belonging to the target domain \(\mathcal{D}_{t}\{\chi_{t},\Upsilon_{t},P(\tilde{x},\tilde{y})\}\), where \(\mathcal{D}_{t}\neq\mathcal{D}_{s}\). We are interested in using a network trained to perform sO\({}_{2}\) quantification on simulated PA images of tissue to perform the same task on images of real tissues. The most generic transfer learning strategies involve some kind of fine-tuning procedure, where a network pretrained on data from the source domain is trained on a small dataset belonging to the target domain.
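To make this fine-tuning idea concrete, the sketch below freezes the early layers of a network pretrained on source-domain data and retrains only its output head on a small labelled target-domain set. The `head` naming convention, the loss, and the data loader are illustrative assumptions; as discussed next, this is not the route taken in this work, because labelled target-domain data are not available.

```python
import torch
from torch import nn, optim

def fine_tune(pretrained_net: nn.Module, target_loader, n_epochs=10, lr=1e-4):
    """Generic fine-tuning: freeze early feature layers, retrain only the output head."""
    for name, p in pretrained_net.named_parameters():
        p.requires_grad = name.startswith("head")   # assumes a submodule called 'head'

    trainable = [p for p in pretrained_net.parameters() if p.requires_grad]
    opt = optim.Adam(trainable, lr=lr)
    loss_fn = nn.MSELoss()                          # image-to-image regression loss

    pretrained_net.train()
    for _ in range(n_epochs):
        for x_t, y_t in target_loader:              # small labelled target-domain set
            opt.zero_grad()
            loss = loss_fn(pretrained_net(x_t), y_t)
            loss.backward()
            opt.step()
    return pretrained_net
```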
However, we are principally interested in unsupervised transductive transfer learning (of the heterogeneous variety) given i) the lack of available ground truths in the target domain, ii) that the source and target tasks are the same, and iii) that the feature spaces and data distributions of the two domains are unequal [45]. Out of several possible strategies within this subset of transfer learning, we pursue the use of adversarial domain adaptation strategies. This is because adversarial networks offer an effective framework for automatically detecting and adapting relevant features to achieve domain alignment [47, 51]. This is advantageous, as it is often the case that we do not know the kinds of features that are relevant to adapt _a priori_, nor do we have a hand-engineered and consistent method to detect these features and perform the required adaptation. ### _Outline of proposed GAN-based domain alignment strategies_ #### II-C1 Introduction to Generative Adversarial Networks Generative Adversarial Networks (GANs) provide a framework for implicitly learning how to randomly sample from a dataset's data distribution. The generic architecture is composed of two modules: a generator \(G\) and a discriminator \(D\). \(G\) takes noise (sampled from a Gaussian or uniform distribution) as an input, and outputs an image that (ideally) is indistinguishable from images found in the network's training set. \(D\) takes an image (either produced by \(G\) or from the training set) and classifies whether the image is synthetically generated (i.e. produced by \(G\)) or not. \(G\) and \(D\) are trained as adversaries; \(G\) attempts to produce images that fool \(D\), while \(D\) is trained to differentiate synthetic and real images as accurately as possible. The task is described by [52, 53] and the objective function is given by Equation 2: \[\min_{G}\max_{D}V(G,D)=\min_{G}\max_{D}\mathbb{E}_{x\sim P_{r}}[\log D(x)]+\mathbb{E}_{z\sim P_{z}}[\log(1-D(G(z)))]. \tag{2}\] The first term represents the discriminator's predictions on real data (\(x\) sampled from the training set's data distribution \(P_{r}\)), and the second refers to its predictions on fake/generated data (generated by inputting noise \(z\) sampled from a uniform or Gaussian distribution \(P_{z}\)). Generic GANs are notoriously challenging/unstable to train. One reason for this is that the distance metric implicitly used to assess the quality of the learned data distribution (Jensen-Shannon Divergence) is not always suitable for data distributions describing natural datasets [54, 55]. In practice, more stable GAN variants such as the Wasserstein-GAN with gradient penalty (WGAN-GP) can be used instead. Some synthetic PA images generated with a WGAN-GP are provided in Appendix A. In this article, we use two variants of the generic GAN: an AmbientGAN and a CycleGAN. #### II-C2 Ambient Generative Adversarial Networks A generic GAN can generate images that appear to belong to its training set. GANs have been used to facilitate the generation of training data for networks trained to estimate chromophore concentrations. In [56], a GAN was used to generate random tissue models that appeared to belong to a set of simulated tissues constructed using the shapes of tissue components depicted in real PA image data. These generated models were then assigned optical properties, and used as an input to a forward model to simulate each tissue's corresponding PA images.
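For reference, the sketch below shows the standard WGAN-GP critic objective mentioned in Section II-C1, with the usual gradient penalty evaluated on interpolates between real and generated batches. The critic and generator modules, the assumed four-dimensional image batches, and the penalty weight are illustrative placeholders rather than the specific architectures or settings used in this work (those are given in the appendices).

```python
import torch

def gradient_penalty(critic, real, fake, gp_weight=10.0):
    """Standard WGAN-GP penalty on interpolates between real and generated batches (B, C, H, W)."""
    eps = torch.rand(real.size(0), 1, 1, 1, device=real.device)
    interp = (eps * real + (1 - eps) * fake).requires_grad_(True)
    scores = critic(interp)
    grads = torch.autograd.grad(outputs=scores, inputs=interp,
                                grad_outputs=torch.ones_like(scores),
                                create_graph=True)[0]
    grads = grads.view(grads.size(0), -1)
    return gp_weight * ((grads.norm(2, dim=1) - 1) ** 2).mean()

def critic_loss(critic, generator, real, z):
    """Critic minimises (fake score - real score) plus the gradient penalty."""
    fake = generator(z).detach()
    return critic(fake).mean() - critic(real).mean() + gradient_penalty(critic, real, fake)
```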
However, because we do not have the ability to acquire detailed information about the optical/acoustic properties of any given real tissue to provide to a GAN as training data, we cannot use the approach of [56] to generate paired datasets of PA images of tissues with highly accurate and realistic ground truth information about their optical properties (i.e. this approach is inherently limited by our prior knowledge about how optical properties are typically distributed in real tissues). So how may we generate realistic images of tissue, as well as its underlying tissue model, without providing the network with any examples of what an underlying tissue model looks like? Here, we describe how ambient generative adversarial networks (AmbientGANs) may provide the framework for learning this task [57, 58]. The architecture of an AmbientGAN is similar to that of a generic GAN; the key difference is that a frozen, differentiable forward model of image generation \(O\) is appended to the generator \(G\). Instead of generating the synthetic image directly, \(G\) now generates an input (a synthetic tissue model) to \(O\) that then outputs an image indistinguishable from those in the training set. The architecture can be trained in the same manner as a generic GAN. \(O\) must be expressed in a way such that it can be computed efficiently (it will be run for every batch) and is differentiable (allowing \(D\) to provide feedback to \(G\) through \(O\)). However, there is no guarantee that a synthetic tissue model generated by \(G\) that produces a realistic image after being processed by \(O\) will itself reflect the properties of a real tissue. For example, esoteric arrangements of scatterers can guide light towards absorbers in a way that produces an image indistinguishable from images of real tissues. Therefore, additional constraints are needed to ensure the generated tissue models are sufficiently realistic. #### II-C3 Cycle consistent generative adversarial networks Cycle consistent generative adversarial networks (CycleGANs) are another class of GAN that can adapt an image that belongs to one domain so that it appears to belong to another while preserving much of the inherent structure in the original image [59]. This can be performed without the use of image pairs, lending itself well to unsupervised domain adaptation tasks. This could be used, for example, to adapt images simulated using conventional approaches to make them appear more similar to images of real tissues (i.e. align their feature spaces) in an unsupervised manner. The architecture is composed of two generators and two discriminators, one for each of the data domains \(X\) and \(Y\). One generator \(G\) adapts an image \(x\in X\) so that it appears to belong to domain \(Y\), as determined by the discriminator \(D_{y}\) that classifies whether an image belongs to \(Y\). The other generator \(F\) adapts an image \(y\in Y\) so that it appears to belong to \(X\), as determined by the discriminator \(D_{x}\) that classifies whether an image belongs to \(X\). Each training iteration involves several steps and loss functions. A cycle loss (the error between an image \(x\) and \(F(G(x))\) and the error between an image \(y\) and \(G(F(y))\)) is used to ensure much of the original structure is preserved in the adapted images.
An identity loss (error between \(x\) and \(F(x)\) and between \(y\) and \(G(y)\)) is used to preserve inter-channel features/structure (e.g. colour for generic image adaptation tasks, or the wavelength dependence of the image amplitude in our case) in the adapted images. Generic discriminator and generator losses are also employed for each module. While training, first, images \(x\) and \(y\) are adapted. Then, the parameters for calculating the cycle loss and identity loss are computed (i.e. \(F(G(x))\) and \(G(F(y))\)). Subsequently, the adapted and original images are fed into their respective discriminators (i.e. \(D_{x}(x)\), \(D_{x}(F(y))\), \(D_{y}(y)\), and \(D_{y}(G(x))\)). All the losses are calculated, and the generators and discriminators are updated. Further details are provided in Appendix C. CycleGANs have been used to improve the performance of networks trained to estimate chromophore concentrations from simulated PA images when applied to real data [60]. A CycleGAN (dubbed SEED-Net) was trained to adapt simulated \(p_{0}\) images (of synthetic tissues modelled after a variety of samples, such as mouse muscle tissue and brain tissue) so that they appeared to belong to the same domain as real images (agar phantoms with absorbing/scattering inclusions of basic shapes, and images of _ex vivo_ tissue). In essence, the CycleGAN was trained to help align the input feature spaces of the experimental and simulated domains. A secondary dual-path network (QOAT-Net) based on the U-Net was then trained on adapted simulated images to estimate each sample's \(\mu_{a}\) distribution, with the (non-adapted) \(\mu_{a}\) distributions used as ground truths. The trained network was then tested on real agar phantom images, and on real images of _ex vivo_ tissue phantoms also not used for training (though only a limited set of examples is presented in the paper). Estimates of \(\mu_{a}\) acquired from the real test images were more accurate compared to those acquired using QOAT-Net trained on non-adapted simulated images alone. When provided images of phantoms constructed from _ex vivo_ porcine liver tissue and tenderloin, the estimated \(\mu_{a}\) values for each tissue type were within the expected range, though these results were not precisely validated. Their QOAT-Net was also applied to an _in vivo_ image of a mouse cross section, where the estimated \(\mu_{a}\) was within the expected range. Although this work shows how adapting network inputs with adversarial training strategies can improve the accuracy of \(\mu_{a}\) estimates acquired from experimentally acquired target data, the successful application of this approach to real tissue remains uncertain for a few reasons. Firstly, SEED-Net only learns to make the input feature spaces of the source and target domains more similar. Consequently, QOAT-Net is still trained with the oversimplified \(\mu_{a}\) distributions of the simulated tissue models as ground truths, and so only learns to output oversimplified \(\mu_{a}\) distributions. Despite receiving a realistic-looking input, it may not be able to output \(\mu_{a}\) distributions with properties typical of those found in the real phantoms. Furthermore, the only constraint on the adaptation performed by SEED-Net is that the input images appear indistinguishable from the real set of images. QOAT-Net would ideally learn to invert the model of image generation associated with the target (real/phantom) images. Therefore, an ideal adaptation would be constrained to ensure that adapted images are those that would have been produced from the same underlying phantom/tissue if the model of image generation for target images were to be applied to it.
Without this constraint, there is no guarantee that QOAT-Net will be learning to invert the forward model underlying the acquisition of the target images. However, it is not clear how such a constraint could be formulated. ## III Outline of experiments Here, we discuss the potential of two GAN-based strategies for improving the quality of simulated training data: 1. First, we describe a toy example showing how CycleGANs can be used to improve the generalisability of an sO\({}_{2}\)-estimating network by aligning the input feature space of a target domain of images simulated with a light propagation model, an acoustic model, and an image reconstruction model with that of a source domain composed of images simulated with a light model alone. 2. We also provide a demonstration of how AmbientGANs can be used to generate paired training data for chromophore quantification using only PA images as inputs. We generate PA images of circular absorbers in a homogeneous absorbing background as well as images of their underlying optical absorption distributions. ## IV Methods and Results ### _Unsupervised domain adaptation with a CycleGAN_ We sought to investigate how CycleGANs could be used to improve the performance of an sO\({}_{2}\)-estimating network on out-of-domain target data with domain adaptation. An ideal test would involve the use of simulated source data with real target data (e.g. multiwavelength PA images of real tissues with their corresponding ground truth sO\({}_{2}\) distributions). However, given the challenges with acquiring information about the ground truth sO\({}_{2}\) distributions in real tissues, we instead opted to conduct a study utilising simulated data. This allowed us to assess whether this approach may be effective even in the highly idealised case where any domain mismatch arises from noise and reconstruction artefacts alone. A summary of the experiment is shown in Fig. 3. Figure 1: An example of a 2D slice of an image with reconstruction artefacts (\(A\)) produced using a simulation pipeline consisting of a light, acoustic, and image reconstruction model along with the same tissue's corresponding non-reconstructed image without artefacts (_NA_) simulated with a light model alone. Images from these two domains were used in the CycleGAN-based domain adaptation experiment. This figure is used to demonstrate the differences in the images from each domain as opposed to showing training examples fed to the CycleGAN. Figure 2: Schematic showing how data flows through the CycleGAN in the case where reconstructed (\(A\)) images (\(x\) belonging to domain \(X\)) will be adapted to appear non-reconstructed (_NA_) (\(y\) belonging to domain \(Y\)). The generator \(G\) adapts \(A\) images from domain \(X\) to appear as though they belong to domain \(Y\) (i.e. appear like an _NA_ image). Generator \(F\) adapts an image belonging to domain \(Y\) so that it appears to belong to domain \(X\). The cycle loss (difference between the input \(A\) image \(x\), and its reconstructed counterpart \(F(G(x))\)) is used to ensure the adaptation preserves much of the original structure found in \(x\). Discriminator \(D_{y}\): determines whether an input is a real or generated example from the \(Y\) domain. Discriminator \(D_{x}\): determines whether an input is a real or generated example from the \(X\) domain.
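Before describing the datasets, the sketch below illustrates a single CycleGAN update of the kind outlined in Section II-C3 and Fig. 2, combining the adversarial, cycle, and identity terms. The loss weights, the least-squares form of the adversarial term, and the module interfaces are illustrative assumptions; the configuration actually used here is described in Appendix C.

```python
import torch
from torch import nn

adv_loss, l1 = nn.MSELoss(), nn.L1Loss()     # least-squares adversarial loss and L1 terms

def cyclegan_generator_step(G, F, D_x, D_y, x, y, lam_cyc=10.0, lam_id=5.0):
    """One generator update combining adversarial, cycle-consistency, and identity losses."""
    fake_y, fake_x = G(x), F(y)              # adapt x towards domain Y, y towards domain X
    pred_y, pred_x = D_y(fake_y), D_x(fake_x)
    loss_adv = adv_loss(pred_y, torch.ones_like(pred_y)) + adv_loss(pred_x, torch.ones_like(pred_x))
    loss_cyc = l1(F(fake_y), x) + l1(G(fake_x), y)     # x ~ F(G(x)) and y ~ G(F(y))
    loss_id = l1(G(y), y) + l1(F(x), x)                # preserves inter-channel structure
    return loss_adv + lam_cyc * loss_cyc + lam_id * loss_id

def cyclegan_discriminator_step(D, real, fake):
    """Discriminator update for one domain: real images -> 1, generated images -> 0."""
    pred_real, pred_fake = D(real), D(fake.detach())
    return adv_loss(pred_real, torch.ones_like(pred_real)) + adv_loss(pred_fake, torch.zeros_like(pred_fake))
```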
We consider two datasets: i) a dataset composed of 2D PA images of vessels immersed in multi-layer tissues simulated using a forward model that considered light propagation, acoustic propagation, and image reconstruction with added noise following the procedure described in [61] (referred to here as artefact '_A_' images) and ii) another set of images simulated with a light model alone (referred to as non-artefact or '_NA_' images). An example of each image type is shown in Fig. 1 (though this figure does not depict a training example). To demonstrate whether this approach may work even in the ideal case where there is partial alignment in the source and target domains, each dataset was simulated to ensure that any domain mismatch would arise from the presence of artefacts and noise in the \(A\) images. _NA_ images were simulated by i) running a light model (as described in [61]) at two illumination wavelengths (784 nm and 820 nm) for all examples in a set of 3D tissue models (dimensions \(40\times 120\times 120\) pixels, corresponding to real dimensions of \(4\times 12\times 12\) mm), then ii) multiplying each fluence distribution by their respective tissue model's corresponding \(\mu_{a}\) distribution, then iii) parsing a 2D slice (slice 60 from the second dimension, resulting in a \(40\times 120\) pixel image) from the resulting image volumes, and lastly iv) cropping the first and last 40 columns from each image, resulting in dimensions of \(40\times 40\) pixels. \(A\) images were simulated by performing steps i)-ii), and then following this with the application of an acoustic model and an image reconstruction model (as described in [61]) to each image volume. A 2D slice (slice 70) was then parsed from each reconstructed image volume and cropped in the same manner as the _NA_ images to share the same dimensions. Parsing \(A\) images from slice 70 of the 3D image volumes ensured that each example from each domain depicted different contents. Slices near the middle of each tissue model were chosen to ensure images were likely to contain a dense arrangement of vessels. The domains describing either set of data were similar in the following ways. Firstly, given the image slices were taken from central regions of each tissue, the ground truths are composed of dense arrangements of vessels and, consequently, the output feature spaces will have a significant degree of alignment. The joint probability distributions and input feature spaces may also share some degree of alignment, as the illumination parameters of the light model were identical for each image set and images were parsed from different (but central) regions of the same underlying 3D tissue models. Despite these similarities, there was considerable mismatch in the input feature spaces and joint probability distributions due to noise and reconstruction artefacts. The potential consequences of this are shown by the poor generalisation achieved by an sO\({}_{2}\)-estimating network trained on \(A\) images when applied to _NA_ images, as shown in Appendix B1. A CycleGAN (see Fig. 2 for a schematic and Fig. 15 for architecture details) was trained on 400 2D images from each set for 6 epochs. The stopping point was determined by visually inspecting a validation set of 10 \(A\) images and terminating training when most of the wing-like reconstruction artefacts,
along with the low-amplitude artefact region commonly found just beneath vessels, appeared to have been removed from the images (further training details are given in Appendix C). Fig. 3: Summary of the CycleGAN domain adaptation experiment. Adapted \(A\) images were used as inputs to an sO\({}_{2}\)-estimating network pretrained on _NA_ images, and the resulting outputs were compared to those produced from using the corresponding unadapted images as inputs. The earliest possible epoch satisfying our heuristic criteria was chosen to prevent the images from being 'overadapted' and therefore less representative of their underlying tissue. A more quantitative quality metric would be preferable to improve the consistency and the robustness of our stopping criteria. Furthermore, a larger validation set would have been preferable. However, due to constraints on the amount of available training data, the use of a larger validation set was not possible. Once trained, 50 \(A\) images not in the training/validation set were adapted using the trained CycleGAN (some examples are shown in Figs. 4 and 19). The adapted images and their unadapted counterparts were fed into an sO\({}_{2}\)-estimating network pretrained on _NA_ images (training details given in Appendix D). The resultant sO\({}_{2}\) estimates are summarised in Fig. 5 and Table I. The mean of all estimated sO\({}_{2}\) values from the adapted images (shown in Table I) was closer to the ground truth than that produced by the unadapted images. For this calculation, the vessels in the original \(A\) test images and their adapted counterparts were segmented by setting to zero all pixels in a given image with values less than 0.4 times the maximum pixel amplitude in the whole image, or less than 0.3 times the maximum value of all the pixels in their row. Vessel pixels were extracted from the _NA_ versions of the images using their known ground truth locations. Observing the bivariate histograms of network outputs in Fig. 5, it is evident that fewer sO\({}_{2}\) estimates acquired from the adapted images had been underestimated, with a much more equal spread of estimates about the true sO\({}_{2}\). These histograms were constructed by only considering non-zero sO\({}_{2}\) estimates that had a non-zero corresponding ground truth value. ### _Generating realistic image data with AmbientGANs_ We sought to provide a basic demonstration of how AmbientGANs could be used to generate paired training data for learning chromophore quantification. We constructed a WGAN-GP with a frozen and pretrained network approximating a light propagation model (that outputs an image of \(\mu_{a}\Phi\)) appended to the generator. The architecture is shown in Fig. 6. Therefore, the generator was trained to generate tissue models that, when fed through the light model network, would produce images of \(\mu_{a}\Phi\) (referred to here as \(p_{0}\) images) indistinguishable from those presented to the critic. The critic (discriminator) was fed 2D PA images (\(40\times 40\) pixels) of circular absorbers immersed in a homogeneous absorbing background medium (i.e. \(p_{0}(x,\lambda)=\Gamma\mu_{a}(x,\lambda)\Phi(x,\lambda)\), where \(x\) is each pixel in the image, \(\lambda\) is the excitation wavelength, and \(\Gamma=1\) is the PA efficiency). Each tissue model contained between 1 and 3 circular absorbers with radii between 3 and 5 pixels. The homogeneous absorbing background was assigned an absorption coefficient between 0 and 0.5 mm\({}^{-1}\).
Each circular absorber was assigned a random absorption coefficient computed by adding a random number between 0 and 1.3 mm\({}^{-1}\) to the background's absorption coefficient (this ensured each absorber had a higher absorption coefficient than the background, as is generally the case with vessels immersed in tissue). Fluence simulations were run in MCXLAB [62]. The tissue was illuminated using a truncated, collimated Gaussian beam centred at pixel 20 with a waist radius of 50 pixels. The pixel length was set to have real dimensions of 0.1 mm, and the refractive index was homogeneous throughout the tissue with a value of 1.4. The timestep was set to \(10^{-11}\) s with a total time of \(10^{-9}\) s. A total of \(10^{7}\) photons were used for each simulation. The light model network (architecture shown in Fig. 6) was trained on 500 image pairs (an image of the tissue's absorption coefficient distribution as an input and the corresponding image of \(\mu_{a}\Phi\) as the ground truth) for 80 epochs. A validation set of 100 examples was used to determine the stopping point. The network was trained with a loss function of the Euclidean norm of the squared difference between the predicted image and the ground truth, along with Adam as the optimisation algorithm. The AmbientGAN was trained with 399 \(p_{0}\) images (simulated in the same way as discussed above) with the same loss functions, critic/generator architectures, and hyperparameters used in the WGAN-GP example shown in Appendix A, for 5000 epochs. The outputs shown indicate that the AmbientGAN has produced plausible underlying absorption distributions for the generated images (e.g. absorbers with high intensity at depth have correspondingly high absorption coefficient values). However, it failed to reproduce the shadowing effect in the generated \(p_{0}\) images (both within the vessels and underneath them), as well as homogeneous \(\mu_{a}\) estimates within the vessels. The generated absorption coefficient images contain fluctuating values at larger depths. This likely occurs because the absorption coefficient in these regions does not have a large effect on the final image amplitude due to the decay in the fluence. Therefore, the network has no incentive to keep absorption values at greater depths constant and the same as the rest of the background. This phenomenon has been found in the results of other inversion schemes, such as [63, 64, 18]. The generated tissue models also contain unrealistic tile/checkerboard artefacts. Such artefacts have been reported to emerge as a consequence of choices in stride/kernel size, though this has not been investigated here [65]. \begin{table} \begin{tabular}{|l|l|l|} \hline \multicolumn{3}{|c|}{Mean Vascular sO\({}_{2}\) estimates} \\ \hline \(A\) Images & Adapted Images & Ground Truth _NA_ Images \\ \hline 30.8\% (\(\sigma\) = 31.0\%) & 39.9\% (\(\sigma\) = 30.8\%) & 49.5\% (\(\sigma\) = 27.3\%) \\ \hline \end{tabular} \end{table} TABLE I: The mean of all sO\({}_{2}\) estimates within vessels produced from 50 \(A\) test images, their adapted counterparts, and the _NA_ versions of these test images when used as inputs to an sO\({}_{2}\)-estimating network pretrained on _NA_ images. The mean vascular sO\({}_{2}\) estimates produced from the adapted images were 9.1 percentage points closer to the ground truth than those produced from the unadapted images.
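To make the coupling of the generator and the frozen light-model network described above concrete, the sketch below shows the generator/forward-model composition and the corresponding WGAN-style generator objective. The module names and interfaces are illustrative assumptions and do not reproduce the exact architectures used in this work.

```python
import torch
from torch import nn

class AmbientGenerator(nn.Module):
    """Generator that outputs a tissue model; a frozen forward model maps it to a p0 image."""
    def __init__(self, generator: nn.Module, forward_model: nn.Module):
        super().__init__()
        self.generator = generator            # noise -> absorption map (tissue model)
        self.forward_model = forward_model    # pretrained light-model network, kept fixed
        for p in self.forward_model.parameters():
            p.requires_grad = False           # frozen, but its operations remain differentiable

    def forward(self, z):
        mu_a = self.generator(z)              # generated absorption coefficient distribution
        p0 = self.forward_model(mu_a)         # corresponding simulated p0 image
        return mu_a, p0

def ambient_generator_loss(critic, ambient_gen, z):
    """WGAN-style generator objective: only the p0 image is ever shown to the critic."""
    _, p0_fake = ambient_gen(z)
    return -critic(p0_fake).mean()
```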
Fig. 4: Example of two unadapted \(A\) images (top row), their adapted counterparts appearing as though they belong to the _NA_ domain (middle row), and the corresponding 'true' _NA_ versions of the images (bottom row). The CycleGAN has adapted the \(A\) images to remove their wing-like artefacts and regions of low amplitude underneath the vessels, while preserving much of the original tissue's structure. Fig. 5: Bivariate histogram showing the accuracy distribution of estimated sO\({}_{2}\) values produced from \(A\) images adapted to appear _NA_ when processed by an sO\({}_{2}\)-estimating network trained on _NA_ images, and when the same unadapted \(A\) images were used as inputs to the network. More of the sO\({}_{2}\) estimates acquired from the unadapted \(A\) inputs are underestimated, while those acquired from the adapted images are spread more evenly around the true sO\({}_{2}\). Fig. 6: Summary of the AmbientGAN experiment, where we sought to generate simulated \(p_{0}\) images of circular absorbers alongside images of their underlying \(\mu_{a}\) distributions by providing the network only with \(p_{0}\) images. ## V Discussion We have described two GAN-based strategies for reducing the domain gap between synthetic PA images used for training and real PA images used as test data. With a toy example, we have shown that unsupervised domain adaptation performed using a CycleGAN can align the feature spaces of images belonging to two different data domains. The adaptation of the target input data (\(A\) images) so that its feature space is more closely aligned with that of the source domain (_NA_ images) can improve the accuracy of the mean of their sO\({}_{2}\) estimates produced by an sO\({}_{2}\)-estimating network trained on data from the source domain. However, even in the idealised case presented here, the approach was only capable of 'shifting' sO\({}_{2}\) estimates to be more equally distributed about the ground truth, and did not appear to reduce the variance in the accuracy of estimates. It is clear that the constraints on the adaptation process are inadequate. An ideal adaptation of an \(A\) image would produce an image identical to one that would be simulated if the forward model used to simulate _NA_ images was applied to the same underlying tissue model. However, no such constraint was applied, and it is not clear how one could be formulated. Another limitation of this technique is that unless there are ground truths available from both the source and target domains, its use is practically limited to cases where the output feature spaces of both domains are similar. We have also illustrated how the AmbientGAN framework can be used to generate paired training data for sO\({}_{2}\)-estimating/chromophore quantification networks, providing it with only PA images for training. However, there are two key challenges with the approach. The first is formulating a differentiable forward model of image generation that is computationally efficient to compute. This was bypassed in this study by training a network to approximate the forward model. However, in practice, a paired dataset of the optical properties of a series of tissues and their corresponding PA images will not be available.
Secondly, even in the case where an efficient and differentiable forward model can be appended to the generator, there is no clear way to constrain it to ensure the generated tissue models are indeed representative of real tissues without the use of prior knowledge of how optical properties are typically distributed _in vivo_. The same limitation applies to recent work described in [66], where conditional generation is used to preserve the spectral features of simulated images adapted to appear more realistic. Indeed, outside of the techniques proposed here, a wide range of generative modelling frameworks could be used to learn the adaptation of network inputs (e.g. improving the quality of adaptations through the use of auxiliary networks [67], or more state-of-the-art frameworks such as diffusion models [68, 69, 70]). But despite possible improvements to adapted image quality, it is not immediately clear how any of these frameworks could improve the quality of the ground truth concentration distributions. With that said, in the event that this information can be drawn from a set of tissue examples, it is not unreasonable to suggest that generative deep learning could accelerate the generation of large amounts of training data. Fig. 7: Top: AmbientGAN-generated tissue models (images of their \(\mu_{a}\) distribution) and their corresponding generated \(p_{0}\) images. Bottom: A few non-generated \(p_{0}\) images fed to the AmbientGAN's critic. Aside from the applications proposed here, adaptive models could also be used for other tasks related to chromophore quantification. For example, it might be possible to use the CycleGAN framework for approximation error modelling [71, 72]. That is, when adapting _NA_ images to appear \(A\), the CycleGAN can be thought of as implicitly learning i) how the inclusion of the acoustic and reconstruction models affects image properties, and ii) how to execute a post-processing step that incorporates them into the outputs of the light model that produces the _NA_ images. ## VI Conclusion It is evident that generative models are versatile tools and that this work has by no means provided a complete discussion of their possible uses for chromophore quantification or photoacoustic imaging more generally. Further investigation is needed to understand the true impact these techniques could have on realising a learned model for _in vivo_ chromophore quantification. ## Acknowledgments The authors would like to thank Andreas Hauptmann, Antonio Stanziola, and Simon Arridge for useful discussions. CB acknowledges funding from the London Interdisciplinary Doctoral Training Programme (LIDo).
2306.12426
On Traczyk's BCK-sequences
BCK-sequences and n-commutative BCK-algebras were introduced by T. Traczyk, together with two related problems. The first one, whether BCK-sequences are always prolongable. The second one, if the class of all n-commutative BCK-algebras is characterised by one identity. W. A. Dudek proved that the answer to the former question is positive in some special cases, e.g. when BCK-algebra is linearly ordered. T. Traczyk showed that the answer to the latter is affirmative for n = 1, 2. Nonetheless, by providing counterexamples, we proved that the answers to both those open problems are negative.
Denis Zelent
2023-03-22T14:36:53Z
http://arxiv.org/abs/2306.12426v1
# On Traczyk's BCK-sequences ###### Abstract BCK-sequences and \(n\)-commutative BCK-algebras were introduced by T. Traczyk, together with two related problems. The first one, whether BCK-sequences are always prolongable. The second one, if the class of all \(n\)-commutative BCK-algebras is characterised by one identity. W. A. Dudek proved that the answer to the former question is positive in some special cases, e.g. when BCK-algebra is linearly ordered. T. Traczyk showed that the answer to the latter is affirmative for \(n=1,2\). Nonetheless, by providing counterexamples, we proved that the answers to both those open problems are negative. 2010 Mathematics Subject Classification: 03G25; 06F35; 08B99. Keywords: BCK-algebra; BCK-sequence; variety. Various types of BCK-algebras - as algebras strongly connected with nonclassical propositional calculi - are studied by many authors. A short survey of basic results on BCK-algebras can be found in the book [8]. The class of BCK-algebras is not a variety, but, for example, the class of finite BCK-algebras is solvable [1]. Such BCK-algebras have important applications in coding theory [5] (see also [4] and [6]). For this reason people are looking for new ways of defining various classes of BCK-algebras to make this study easier, as in e.g. [3], where the method of rooted trees is used to construct commutative BCK-algebras. In these studies, it is important whether a given class of BCK-algebras can be defined with a small number of simple identities. T. Traczyk showed [10] that for \(n=1\) and \(n=2\) the class of all \(n\) -commutative BCK algebras can be defined with only one identity. We will show that for \(n>2\) this is, unfortunately, no longer the case. By a BCK-algebra we mean an algebra of the form \((X,\cdot,0)\), where \(X\) is the non-empty set with a designated element \(0\) and the dot as a binary operation satisfying the following axioms: \[\begin{array}{llll}(1)&((x\cdot y)\cdot(z\cdot y))\cdot(x\cdot z)=0,&(2)&(x \cdot(x\cdot y))\cdot y=0,\\ (3)&x\cdot 0=x,&(4)&0\cdot x=0,\\ (5)&x\cdot y=y\cdot x=0\implies x=y.&\end{array}\] Then also \(x\cdot x=0\). Moreover, any BCK-algebra \((X,\cdot,0)\) is partially ordered by the relation \(\leq\) defined by \[x\leq y\mbox{ iff }x\cdot y=0.\] A congruence \(\rho\) defined on a partially ordered algebra \((X,\cdot,\leq)\) is convex if and only if for all \(x,y,z\in X\) from \(x\leq y\leq z\) and \((x,z)\in\rho\) it follows \((x,y)\in\rho\) (cf. [7]). In the case of BCK-algebras, a congruence \(\rho\) is convex if and only if \((x\cdot y,0)\in\rho\) together with \((y\cdot x,0)\in\rho\) imply \((x,y)\in\rho\) (cf. [9]). H. Yutani proved in [11] that all congruences of a finite BCK-algebra are convex. T. Traczyk obtained (cf. [10]) a more general result: a BCK-algebra in which every strongly decreasing (with respect to \(\leq\)) sequence of elements is finite has only convex congruences. This prompted T. Traczyk to study BCK-algebras in which certain sequences stabilise from a certain point. Such algebras are e.g. \(n\)_-commutative BCK-algebras_, i.e. BCK-algebras, in which \(n\) is a minimal integer for which for every two elements \(x_{0},x_{1}\) such that \(x_{1}\leq x_{0}\) we have \(x_{n}=x_{n+1}\), where \(x_{k}=x_{k-2}\cdot(x_{k-2}\cdot x_{k-1})\) for \(k=2,3,\ldots\) The class \({\bf V}_{n}\) of all \(n\)-commutative BCK-algebras is a variety and \({\bf V}_{n}\neq{\bf V}_{n+1}\) (cf. [10]). 
Moreover, if for arbitrary \(x,y\) in a given BCK-algebra we define two BCK-sequences \(x_{0},x_{1},x_{2},\ldots\) and \(y_{o},y_{1},y_{2},\ldots\) by \[(6) x_{0}=x,\ x_{1}=y\cdot(y\cdot x),\ldots,x_{k}=x_{k-2}\cdot(x_{k-2}\cdot x _{k-1}),\ldots\] \[(7) y_{0}=y,y_{1}=x\cdot(x\cdot y),\ldots,y_{k}=y_{k-2}\cdot(y_{k-2} \cdot y_{k-1}),\ldots\] for \(k=2,3,\ldots\) Then \[(8) x_{0}\geq y_{1}\geq x_{2}\geq y_{3},\] \[(9) y_{0}\geq x_{1}\geq y_{2}\geq x_{3}.\] The variety \({\bf V}_{1}\) is characterised by the identity \(x_{1}=y_{1}\); the variety \({\bf V}_{2}\) by the identity \(x_{2}=y_{2}\) (cf. [10]). Due to this fact, T. Traczyk posed in [10] the following two questions: Question 1.: _Can the sequences \((8)\) and \((9)\) always be prolonged?_ Question 2.: _Is the variety \({\bf V}_{n}\) characterised by the identity \(x_{n}=y_{n}\)?_ As for the first question, a partial answer was given by W.A. Dudek. Namely, he proved in [2] that prolongation of \((8)\) and \((9)\) is possible in BCK-algebras satisfying the identity \(x\cdot(x\cdot y)=y\cdot(y\cdot x)\) and in BCK-algebras that are linearly ordered. He also gave an example of a BCK-algebra with infinite strongly decreasing sequences \((8)\) and \((9)\). Nevertheless, the answer to Question 1 is negative. **Theorem 1**.: _For every \(n\geq 6\) there are at least two BCK-algebras of order \(n\) for which the sequences \((8)\) and \((9)\) cannot be prolonged._ Proof.: Consider two non-isomorphic BCK-algebras: They were found as counterexample to Question 1 using computer program written by the author. The BCK-algebra from Table 1 has two maximal elements (with respect to \(\leq\)): \(x_{0}=4\) and \(y_{0}=5\). For these elements, using (6) and (7), we obtain: \[x_{0}=4\quad x_{1}=3,\ \ x_{2}=2,\ \ x_{k}=1\quad\text{for }k\geq 3\] \[y_{0}=5,\ \ y_{1}=2,\ \ y_{2}=2\ \ \ y_{k}=2\quad\text{for }k\geq 3.\] Thus (8) and (9) have the form \[x_{0}=4\geq 2\geq 2\geq 2,\ \ \ \ \ y_{0}=5\geq 3\geq 2\geq 1\] and cannot be prolonged because \(y_{3}\cdot x_{4}=1\), i.e, \(y_{3}\nleq x_{4}\). The BCK-algebra from Table 2 also has two maximal elements (with respect to the order \(\leq\)): \(x_{0}=3\) and \(y_{0}=5\). For these elements we have \[x_{0}=3,\ \ x_{k}=1\text{ for }k\geq 1,\] \[y_{0}=5,\ \ y_{1}=2,\ y_{2}=1,\ y_{k}=0\text{ for }k\geq 3.\] Since \(x_{3}\cdot y_{4}=1\), these sequences cannot be prolonged. Thus, for \(n=6\), there are two BCK-algebras with BCK-sequences that cannot be prolonged. Now let \((G_{n},\cdot,0)\) be an arbitrary BCK-algebra of order \(n\geq 6\). Consider the set \(G_{n+1}=G_{n}\cup\{n\}\) and the multiplication \[x*y=\left\{\begin{array}{cl}x\cdot y&\text{for }x,y\in G_{n},\\ 0&\text{for }x\in G_{n+1},\ y=n,\\ n&\text{for }x=n,\ y\in G_{n}.\end{array}\right.\] It is not difficult to verify that \((G_{n+1},*,0)\) is a BCK-algebra of order \(n+1\) and \((G_{n},\cdot,0)\) is its BCK-subalgebra. If \(G_{6}\) is a BCK-algebra defined by Table 1 (or by Table 2), then \(G_{7}\) is a BCK-algebra in which the sequences (8) and (9) initiated by \(x_{0}=4\), \(y_{0}=5\) (respectively, by \(x_{0}=3\), \(y_{0}=5\)) cannot be prolonged. By induction, these sequences cannot be prolonged in each BCK-algebra \(G_{n+1}\), \(n\geq 6\). 
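The Cayley tables for the two six-element counterexamples (Tables 1 and 2) are not reproduced in this extract. For any finite table supplied as a 0-indexed list of rows with \(t[x][y]=x\cdot y\), a short script of the following kind can verify the BCK axioms (1)-(5) and compute the sequences (6) and (7), so that the order relations in (8) and (9) and their possible prolongations can be checked term by term; this is a generic sketch, not the author's program.

```python
def is_bck(t):
    """Check axioms (1)-(5) for a finite Cayley table t, where t[x][y] = x.y and 0 is the constant."""
    R = range(len(t))
    ax1 = all(t[t[t[x][y]][t[z][y]]][t[x][z]] == 0 for x in R for y in R for z in R)
    ax2 = all(t[t[x][t[x][y]]][y] == 0 for x in R for y in R)
    ax3 = all(t[x][0] == x for x in R)
    ax4 = all(t[0][x] == 0 for x in R)
    ax5 = all(not (t[x][y] == 0 and t[y][x] == 0 and x != y) for x in R for y in R)
    return ax1 and ax2 and ax3 and ax4 and ax5

def leq(t, a, b):
    """The BCK partial order: a <= b iff a.b = 0."""
    return t[a][b] == 0

def bck_sequences(t, x, y, length=6):
    """Sequences (6) and (7): x0 = x, x1 = y.(y.x), x_k = x_{k-2}.(x_{k-2}.x_{k-1}), and dually for y_k."""
    xs, ys = [x, t[y][t[y][x]]], [y, t[x][t[x][y]]]
    for _ in range(length - 2):
        xs.append(t[xs[-2]][t[xs[-2]][xs[-1]]])
        ys.append(t[ys[-2]][t[ys[-2]][ys[-1]]])
    return xs, ys
```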
**Lemma 1**.: _The set \(X_{n}=\{0,1,2,\ldots,n-1\}\), \(n\geq 5\), with the operation_ \[x*y=\left\{\begin{array}{ll}0&for\ \ \ x\leq y,\\ x&for\ \ \ y=0,\\ 1&for\ \ \ x=y+1,\\ x-y-1&for\ \ \ x-y-1>0\end{array}\right.\] _is a BCK-algebra linearly ordered by the natural order of non-negative integers._ Proof.: Because axioms (3), (4) and (5) are trivial, we will check only axioms (1) and (2). For \(x=0\) or \(y=0\) the condition (1) is valid for each \(z\in X_{n}\). Substituting \(z=0\) we can reduce it to \((x*y)*x=0\), which is true for \(x\leq y\). If \(y>x\), then \((x*y)*x=1*x=0\) for \(x=y+1\), and \((x*y)*x=(x-y-1)*x=0\) otherwise. Thus, it is true for \(z=0\). It is also true when it contains only two different elements. The remaining case is when \(x,y,z\) are three different non-zero elements. The cases \(x<y<z\), \(\ x<z<y\) and \(z<x<y\) are trivial. Let \(A=((x*y)*(z*y))*(x*z)\). If \(z<y<x\), then \(y\geq z+1\), \(x\geq z+2\). Hence \(x*z=x-z-1>0.\) Thus, \(A=(x*y)*(x-z-1)=0\) for \(x=y+1\). For \(x>y+1\) we have \(A=(x-y-1)*(x-z-1)=0\) since \(x-y<x-z\). So, in this case (1) is satisfied. If \(y<x<z\), then \(x\geq y+1\), \(z\geq y+2\) and \(z*y>0\). Thus \(A=1*(y*z)=0\) for \(x=y+1\), and \(A=(x-y-1)*(z-y-1)=0\) for \(x>y+1\) since \(x-y<z-y\). Hence, in this case, (1) is satisfied as well. Now let \(0<y<z<x\), meaning \(x-y-1>0\). For \(z=y+1\), \(A=((x-y-1)*1)*(x*z)=0\) if \(x-y-1=1\) or \(x-y-1=2\). If \(x-y-1=t\geq 3\), then \(x-z=t\). Hence, \(A=(t*1)*(x*z)=(t-3)*(t-1)=0\). For \(z=y+k\), \(k>1\), we have \(x-y-1=k+t-1>0\), \(z-y-1=k-1>0\), \(A=((k+t-1)*(k-1))*(x*z)=0\), if \(t=1\). If \(t>1\), then \(A=((k+t-1)*(k-1))*(t-1)=(t-1)*(t-1)=0\). So (1) is satisfied for every case. To prove (2), let us observe that for any \(x\leq y\) as well as for \(y=0\), the axiom is always satisfied. For \(x=y+1\) we have \((x*(x*y))*y=((y+1)*1)*y=0\). For \(x=y+k\), \(k>1\), we have \(((y+k)*(k-1))*y=y*y=0\). This completes the proof. **Lemma 2**.: _Let \((X_{n},\cdot,0)\) be as in the previous lemma. For every \(n\geq 5\), the algebra \((X^{\prime}_{n},*,0)\), where \(X^{\prime}_{n}=X_{n}\cup\{n\}\) and_ \[x\cdot y=\left\{\begin{array}{ll}x*y&for\ \ \ x,y\in X_{n},\\ n&for\ \ \ x=n,\ y=0,\\ n-y-1&for\ \ \ x=n,\ y\in X_{n}-\{0\},\\ 0&for\ \ \ x\in X^{\prime}_{n}-\{n-1\},\ y=n,\\ 1&for\ \ \ x=n-1,\ y=n.\end{array}\right.\] _is a BCK-algebra of order \(n+1\)._ Two examples of such constructed BCK-algebras are shown below: Proof.: Due to the way the algebra \((X^{\prime}_{n},\cdot.0)\) is defined, it directly follows that \[(10)\ \ x\leq y\implies n*y\leq n*x.\] Additionally, \[(11)\ \ x\leq n\implies x*y\leq n*y\mbox{ for all }y\neq n.\] Indeed, for \(x\leqslant y\) the last implication is trivial. If \(y<x\), then \(n=x+k\), \(x=y+t\), \(n=x+k+t\), \(k,t>0\), which for \(t=1\) gives \(x*y=1\leqslant n*y\) since by the definition \(n*y\geqslant 1\) for all \(y\neq n\). For \(t>1\) we have \(x*y=x-y-1=t-1<k+t-1=n+y\), which completes the proof of (11). In view of Lemma 1, the proof that \((X^{\prime}_{n},\cdot,0)\) is a BCK-algebra can be done by verifying (1) and (2), in the case when at least one element is equal to \(n\). Conditions (3), (4) and (5) are satisfied due to the method of the above definition. If in (1) one element is \(n\) and the second is \(0\), or one is \(n\) and the other two are equal, (1) is satisfied. Now, let \(x=n\). Then \(0<y<z<n\) or \(0<z<y<n\). The first case needs to be divided into two subcases: * \(z=y+1\). 
Then \(((n*y)*(z*y))*(n*z)=((n*y)*1)*(n*z)=0\) if \(y=n-2\) or \(y=n-3\). If \(y<n-3\), then \(((n*y)*1)*(n*z)=((n-y-1)*1)*(n*z)=(n-y-3)*(n*z)=(n*(y+2))*(n*z)=0\), where the last equation follows from (10). * \(z>y+1\). Then \(((n*y)*(z*y))*(n*z)=((n*y)*(z-y-1))*(n*z)=((n-y-1)*(n-y-2))*1=1*1=0\) for \(z=n-1\). For \(z<n-1\) we have \(((n*y)*(z-y-1))*(n*z)=((n-y-1)*(z-y-1))*(n*z)=(n-y-1-(z-y-1)-1)*(n*z)=(n-z-1)*( n*z)=(n*z)*(n*z)=0\). Let \(y=n\). Then \(0<x<z<n\) or \(0<z<x<n\). In the first case \(x*n=0\) and thus \(((x*n)*(z*n))*(x*z)=0\). For the second case, if \(x=n-1\), then \(((x*n)*(z*n))*(x*z)=(1*0)*((n-1)*z)=0\) since \((n-1)*z\neq 0\). Finally, let \(z=n\). Then if \(0<x<y<n\), then \(((x*y)*(n*y))*(x*n)=0\) because \(x*y=0\), and if \(0<y<x<n\), then \(((x*y)*(n*y))*(x*n)=0\) follows from (11). This completes the proof of (1). As for (2), the cases when \(y=n\) or when \(x=n\) and \(y\in\{0,n-1,n\}\) are trivial. The only remaining case is when \(x=n\) and \(y\in\{1,2,\ldots,n-2\}\), but then \((x\cdot(x\cdot y))\cdot y=(n\cdot(n-(y+1))\cdot y=(n-(n-(y+1)+1))\cdot y=y\cdot y=0\). Thus, \((X^{\prime}_{n},\cdot,0)\) is a BCK-algebra. We can now show that the above construction allows us to give a counterexample to Question 2. **Theorem 2**.: _For \(m\geq 3\), the variety \(V_{m}\) is not determined by \(x_{m}=y_{m}\)._ Proof.: We will prove it by showing that for every \(n\geq 5\) the BCK-algebra of order \(n+1\) defined in Lemma 2 belongs to the variety \(\mathbf{V}_{n-2}\), but there exists \(x,y\) such that \(x_{n-2}\neq y_{n-2}\). Firstly, we will show that this BCK-algebra belongs to \(\mathbf{V}_{n-2}\). From Lemma 2, \(X_{n}\) and \(X^{\prime}_{n}-\{n-1\}\) are isomorphic linearly ordered BCK-algebras and thus the longest possible sequence (of different elements) which we can obtain occurs when \(x_{0}=n-1\) and \(x_{1}=n-2\). In that case \(x_{2}=n-3\), \(x_{3}=n-4,\ldots,x_{n-3}=2\), \(x_{n-2}=1=x_{n-1}\). In any other case, we will also have \(x_{n-2}=x_{n-1}\) due to the linearity and the length of those sequences. That shows that this BCK-algebra indeed belongs to \(\mathbf{V}_{n-2}\). Now, let us see what happens with sequences (6) and (7) in case \(x=n-1\), \(y=n\). Then \(x_{1}=y\cdot(y\cdot x)=n\cdot(n\cdot(n-1))=n\cdot 1=n-2\), \(x_{2}=x\cdot(x\cdot x_{1})\)\(=(n-1)\cdot((n-1)\cdot(n-2))=(n-1)\cdot 1=n-3,\ldots,x_{n-3}=2,\ x_{n-2}=1\), but \(y_{1}=x\cdot(x\cdot y)=(n-1)\cdot((n-1)\cdot n)=(n-1)\cdot 1=n-3,\ y_{2}=y\cdot(y \cdot y_{1})=n\cdot(n\cdot(n-3))=n\cdot 2=n-3,\ldots,y_{n-2}=n-3\), and obviously \(n-3\neq 1\) for \(n\geq 5\), meaning \(x_{n-2}\neq y_{n-2}\) for those sequences, which completes the proof. ## Conclusion This paper shows that although prolonging BCK-sequences is possible in some special cases, as shown in [2], it is not possible in general. It also shows that the variety \(V_{n}\) is not generated by the identity \(x_{n}=y_{n}\). This solves both open problems posed by Traczyk in [10]. ## Acknowledgement I would like to extend my deepest gratitude to Wieslaw A. Dudek for bringing those open problems to my attention, as well as for his guidance in writing this paper. Special thanks to Michael Kinyon, who was the first person to give the counterexample to Question 2 in the case of \(\mathbf{V}_{3}\).
2304.05723
Distributed Coverage Control of Constrained Constant-Speed Unicycle Multi-Agent Systems
This paper proposes a novel distributed coverage controller for a multi-agent system with constant-speed unicycle robots (CSUR). The work is motivated by the limitation of the conventional method that does not ensure the satisfaction of hard state- and input-dependent constraints and leads to feasibility issues for multi-CSUR systems. In this paper, we solve these problems by designing a novel coverage cost function and a saturated gradient-search-based control law. Invariant set theory and Lyapunov-based techniques are used to prove the state-dependent confinement and the convergence of the system state to the optimal coverage configuration, respectively. The controller is implemented in a distributed manner based on a novel communication standard among the agents. A series of simulation case studies are conducted to validate the effectiveness of the proposed coverage controller in different initial conditions and with control parameters. A comparison study in simulation reveals the advantage of the proposed method in terms of avoiding infeasibility. The experiment study verifies the applicability of the method to real robots with uncertainties. The development procedure of the method from theoretical analysis to experimental validation provides a novel framework for multi-agent system coordinate control with complex agent dynamics.
Qingchen Liu, Zengjie Zhang, Nhan Khanh Le, Jiahu Qin, Fangzhou Liu, Sandra Hirche
2023-04-12T09:28:51Z
http://arxiv.org/abs/2304.05723v2
# Distributed Coverage Control of Constrained Constant-Speed Unicycle Multi-Agent Systems ###### Abstract This paper proposes a novel distributed coverage controller for a multi-agent system with constant-speed unicycle robots (CSUR). The work is motivated by the limitation of the conventional method that does not ensure the satisfaction of hard state- and input-dependent constraints and leads to feasibility issues for multi-CSUR systems. In this paper, we solve these problems by designing a novel coverage cost function and a saturated gradient-search-based control law. Invariant set theory and Lyapunov-based techniques are used to prove the state-dependent confinement and the convergence of the system state to the optimal coverage configuration, respectively. The controller is implemented in a distributed manner based on a novel communication standard among the agents. A series of simulation case studies are conducted to validate the effectiveness of the proposed coverage controller in different initial conditions and with control parameters. A comparison study in simulation reveals the advantage of the proposed method in terms of avoiding infeasibility. The experiment study verifies the applicability of the method to real robots with uncertainties. The development procedure of the method from theoretical analysis to experimental validation provides a novel framework for multi-agent system coordinate control with complex agent dynamics. This paper gives a novel solution for multiple robots to effectively cover a polygonal area. Compared to the conventional approaches, our method allows the robots to cover a target region using circular orbits, which is suitable for constant-speed unicycle robots (CSUR) like fixed-wing unmanned aerial vehicles (fUAV). The main advantage of this method is to ensure that the coverage is always successful by preventing the robots from departing the target region. Also, the method satisfies common control saturation constraints in practice and can be implemented in a reliable decentralized scheme. The method is validated to be effective for wheeled robots in experiment studies, although it can also be applied to UAVs in theory. multi-agent systems, coverage control, barrier-Lyapunov function, invariance, input-saturation control. ## I Introduction The effective coverage of a target region using robots is an important task for various practical applications in industry, agriculture, and public services. It is also the prototype for more complicated tasks, such as event monitoring, production measuring, and resource allocation. The essential objective of a coverage problem is to effectively allocate the robots in the target region, such that a certain criterion is optimized. Previously, a coverage task is usually executed by a single robot using its trajectory [1]. Nevertheless, multi-agent systems that consist of multiple networked robots are increasingly used. In this case, each agent only dominates a local partition of the target region, which results in high efficiency and superior reliability due to effective collaboration and coordination among the agents. For multi-agent optimal coverage, the most widely used is the closest-distance criterion, i.e., every spot of the target region is dominated by its closest agent [2]. The corresponding solution is depicted as a Centroidal Voronoi Tessellation (CVT) [3], where each agent is positioned in the geometric center or the _centroid_ of a Voronoi partition. 
Solving a multi-agent coverage problem is equivalent to finding a Voronoi partition scheme subject to the optimized criterion. This inspires the design of an optimal coverage controller that drives the agents to move along the negative gradient direction of the coverage criterion and ultimately reach the optimal coverage configuration [3]. However, most of the conventional coverage controllers are only effective for agents that are formulated as single integrators, or single-integrator robots (SIR), such as quadcopters. Optimal coverage control using agents with complex dynamics is still an open and challenging question. A typical agent with complex dynamics is a constant-speed unicycle robot (CSUR) which moves at a constant linear speed and is steered by its angular velocity [4]. Different from SIRs, a CSUR does not freeze in a fixed position but always moves until the power is used up. Our focus on CSURs is motivated by the interest to solve optimal coverage using fixed-wing unmanned aerial vehicles (fUAV), a class of vehicles that are maneuvered by two fixed wings [5]. Compared to quadcopters, an fUAV can carry heavier loads and cruise at a higher speed with less power, offering higher efficiency in terms of longer air-borne time and a larger coverage capability [5]. However, the conventional coverage control methods used for SIRs can not be directly applied to CSURs due to the difference between their dynamic properties. A CSUR is typically controlled to orbit around a fixed point [6]. In this sense, optimal coverage can be realized by treating the orbiting center of a CSUR as a conventional agent and regulating each CSUR to orbit around the geometric center of its Voronoi partition [7]. However, this may lead to a feasibility issue of the CVT when the orbiting centers move out of the target region and the number of CSURs is reduced before optimal coverage is reached. The main reason for the feasibility issue is that the orbiting motion of a CSUR renders an under-actuated dynamic model that brings up an additional state-dependent perturbation term. This term may deflect the desired motion direction of a CSUR and drive it toward the outside of the target region. This issue only shows up in a multi-agent system with complex agent models but not in one with simple and fully-actuated agent dynamics like SIRs. To our best knowledge, the feasibility issue of a coverage control problem has not been well defined and studied by existing work, due to the lack of studies on the coverage control of complex agents. Fixing this problem requires an additional switching law which, however, brings discontinuity to the controller [7]. Another solution that has not been explored is to use several hard constraints to forcibly confine the orbiting centers of the CSURs within the target region. These hard constraints can be embedded in the coverage control problem with barrier functions, such as a control barrier function (CBF) [8] or a barrier Lyapunov function (BLF) [9] which are widely used for safe-critic control. Nevertheless, using them to construct feasible coverage controllers for multi-agent systems with complex dynamic models has not been investigated by previous work. Besides the feasibility issue, distributed realization is also important for coverage control. In practice, it is very common that robots are not fully connected, which brings up challenges to centralized control approaches. 
A distributed controller that only requires the communication locally performed among adjacent agents is more robust to faults and anomalies than a centralized one since the agents are not affected when their non-adjacent agents are defective. The distributed realization has been an essential requirement for many multi-agent coordinate control problems, such as consensus [10], formation [11], and distributed optimization [12]. The conventional coverage controllers for SIRs can also be implemented in a distributed manner [13]. However, whether a multi-CSUR system admits a distributed coverage controller is still an open problem. In general, designing a distributed coverage controller should not only incorporate the complex dynamic models of the agents but also redefine the communication standard among them. In this paper, we propose a novel distributed controller for coverage control of a multi-CSUR system with the feasibility issue fixed. The controller is designed based on a novel coverage cost function which serves as a barrier-Lyapunov function (BLF) that encodes the hard state-dependent constraints to the coverage problem. It ensures that the orbiting centers of the CSURs asymptotically approach the optimal configuration while being confined within the target region. Thus, optimal coverage is ultimately achieved without causing infeasibility. The achievement of optimal coverage and the satisfaction of the state-dependent constraints are proved using a Lyapunov-based method and the controlled invariance theory, respectively. Also, the control-saturation constraints are satisfied via a Sigmoid function. The controller is designed in a distributed manner with the communication standard properly redefined. The rest of this paper is organized as follows. In Sec. II, we briefly review the related work on optimal coverage control and address the main challenges of our study. Sec. III introduces the preliminaries and formulates the problem. Sec. IV proposes the theoretical results of our proposed coverage controller. The simulation and experimental studies conducted to validate the proposed method are presented in Sec. V and Sec. VI, respectively. Finally, Section VIII concludes the paper. _Notations_: \(\mathbb{R}\) is the set of real scalars. \(\mathbb{R}_{+}\) and \(\mathbb{R}_{\geq 0}\) denote the positive and non-negative real scalars. \(\mathbb{N}\) and \(\mathbb{N}^{+}\) are the sets of non-negative and positive integers. For a real scalar \(a\in\mathbb{R}\), \(|a|\in\mathbb{R}_{\geq 0}\) is its absolute value. \(x\in\mathbb{R}^{n}\) represents an \(n\) dimensional vector and \(A\in\mathbb{R}^{n\times m}\) is an \(n\) by \(m\) matrix. \(\|x\|\) is the 2-norm of \(x\) and \(\|x\|_{Q}=\sqrt{x^{\dagger}Qx}\) is its weighted norm, \(Q\in\mathbb{R}^{n\times n}\), \(Q>0\). For a closed compact set \(\Omega\in\mathbb{R}^{n}\), \(\Omega\) represents the interior of \(\Omega\) and \(\partial\Omega\) is its boundary. For a set \(\mathcal{A}\subset\Omega\), \(\Omega-\mathcal{A}\) denotes the set difference of \(\Omega\) and \(\mathcal{A}\). ## II Related Work The optimal coverage problem is originally introduced in [3] based on a facility location problem [14] which also addresses the relation between its solution and a CVT. In [15], optimal coverage is defined as a coordination control problem for multi-agent systems with time-variant network topology and nonsmooth dynamics, based on which a general distributed coverage control law is proposed using nonsmooth gradient flows. 
The stability of the controlled system is analyzed using nonsmooth Lyapunov functions. This work has formed the theoretical foundation of optimal coverage control problems. Then, a general gradient searching law is designed for a team of SIRs [16]. The gradient-based control framework is then extended to generic multi-agent coordination control problems in [17]. In [18], this control framework is further extended to various coverage cost criteria, where the non-convexity of the coverage problem is clarified. All these efforts have provided us with a strong theoretical foundation to analyze the feasibility and stability of the coverage control solutions. Recent work attempts to improve the flexibility of the control methods against imperfect environmental knowledge. In [19], a radius basis function (RBF) is used to approximate the unknown distribution function of the coverage criterion, such that the robots can incrementally learn the environment knowledge during the movement. In [20] and [21], an adaptive controller is proposed for a time-variant coverage criterion. Besides, many efforts are devoted to the optimal coverage over nontrivial geometric manifolds like circles [22, 23], spherical surfaces [24], or arbitrary curves designated by certain vector-fields [25]. Complementary results toward complicated coverage tasks are also introduced. In [26], a control scheme is proposed to ensure a smooth transference between coverage and other coordinate tasks. The work in [27] attempts to seek a global optimal coverage solution. In [28], the coverage control problem is investigated for a team of disk-shaped robots with heterogeneous sizes. Also, [29] studies coverage control of robots with adjustable sensor ranges, which leads to Voronoi partitions with soft margins instead of the conventional ones with clear boundaries. A survey on other recent development of multi-agent coverage control can be referred to in [30]. Compared to SIRs, the coverage control of complex agents attracts less attention. In [7, 31], coverage controllers are developed for CSURs, where the ultimate optimal coverage configuration corresponds to the solution where the orbiting centers of the CSURs coincide with the Voronoi centroids. The feasibility issue is solved using hard switching schemes which have obvious shortcomings. Firstly, they may lead to instability for an oddly shaped region due to the finite discrete-sampling rate. Secondly, they require a large control effort on the boundary of the target region, which is difficult to satisfy considering the practical control limits. Thirdly, the closed-loop system under hard switching is not robust to disturbances. To avoid hard switching in the controller inputs, a feasible solution is to formulate the feasibility requirement as a group of state-dependent constraints and encode them into the coverage controller using barrier functions [32], which may result in a controller subject to the _controlled invariance_ property [33]. Although the barrier functions are widely applied to practical control systems due to the advantage of continuous control inputs, they have not been used for coverage control of complex agents. We recognize them as powerful tools to solve the feasibility issue for multi-CUSR systems. ## III Preliminaries and Formulation This section introduces the preliminaries and the problem formulation. We first recall the classical multi-robot optimal coverage problem and present a conventional distributed coverage controller for SIRs. 
Then, we introduce the dynamic model of a CSUR. Finally, we formulate the multi-CSUR optimal coverage control problem studied in this paper. ### _The Optimal Coverage Problem with Multiple Agents_ Let \(\Omega\in\mathbb{R}^{2}\) be a closed convex polygonal set surrounded by \(M\in\mathbb{N}^{+}\) linear edges, i.e., \[\Omega=\left\{\omega\in\mathbb{R}^{2}\left|h_{j}(\omega)\geq 0\;,\,\forall\,j \in\mathcal{M}\right.\right\}, \tag{1}\] where \(\mathcal{M}=\{1,2,\cdots,M\}\) and \(h_{j}(\omega)\) is defined as \[h_{j}(\omega)=b_{j}-a_{j}^{\top}\omega,\;\omega\in\mathbb{R}^{2},\;j\in \mathcal{M}, \tag{2}\] where \(a_{j}\in\mathbb{R}^{2}\), \(b_{j}\in\mathbb{R}\), are coefficients to denote the edges. Also, we denote the boundary \(\partial\Omega\) and the interior \(\mathrm{int}\,\Omega\) of the region respectively as \[\partial\Omega =\left\{\omega\in\mathbb{R}^{2}\left|h_{j}(\omega)=0,\,\exists \,j\in\mathcal{M}\right.\right\}, \tag{3}\] \[\mathrm{int}\,\Omega =\left\{\omega\in\mathbb{R}^{2}\left|h_{j}(\omega)>0\;,\,\forall \,j\in\mathcal{M}\right.\right\}.\] Note that \(\mathrm{int}\,\Omega\) is open. To simplify the representation, we assume that the origin \(O\) of the coordinate is within \(\Omega\) or on its boundary, i.e., \(O\in\Omega\) without losing generality. Actually, for any other case, we can always apply a coordinate transformation to make it satisfied for the new coordinate frame. Thus, we can regulate \(\|a_{j}\|=1\) and \(b_{j}>0\) for all \(j\in\mathcal{M}\) to uniquely define the edges. \(N\) agents are placed in region \(\Omega\) for coverage. The position of each agent is denoted as \(z_{k}\in\mathbb{R}^{2}\), \(k\in\mathcal{N}\), where \(\mathcal{N}=\{1,2,\cdots,N\}\). We define \(\mathcal{Z}=\{z_{1},z_{2},\cdots,z_{N}\}\), \(z_{i}\neq z_{j}\) for any \(i,j\in\mathcal{N}\), \(i\neq j\), as a _configuration_ which is defined on a joint domain \(\Omega^{N}=\underbrace{\Omega\times\cdots\times\Omega}_{\mathcal{N}}\) with \(\mathcal{Z}\in\Omega^{N}\) denoting \(z_{1}\in\Omega\cap z_{2}\in\Omega\cap\cdots\cap z_{N}\in\Omega\). The objective of the optimal coverage problem is to properly locate the \(N\) agents to minimize the following coverage cost, \[H_{\Omega}(\mathcal{Z})=\int_{\Omega}f(\omega,\mathcal{Z})\Phi(\omega)\mathrm{ d}\omega,\;\mathcal{Z}\in\Omega^{N}, \tag{4}\] where \(\omega\in\Omega\) denotes an event in the region \(\Omega\), \(\Phi:\Omega\rightarrow\mathbb{R}^{+}\) is a function that depicts the distribution of events \(\omega\in\Omega\), and \(f:\Omega\times\Omega^{N}\rightarrow\mathbb{R}^{+}\) is a function that assigns a real weight to an event \(\omega\in\Omega\). In this paper, the weight function is [2], \[f(\omega,\mathcal{Z})=\min_{k\in\mathcal{N}}\frac{1}{2}\left\|\omega-z_{k} \right\|^{2},\;\mathcal{Z}\in\Omega^{N}, \tag{5}\] which calculates the squared Euclidean distance between an event \(\omega\in\Omega\) and its closest agent. This is equivalent to splitting the region \(\Omega\) into \(N\) mutually exclusive Voronoi partitions \(\Omega_{1}\), \(\Omega_{2}\), \(\cdots\), \(\Omega_{N}\) using the \(N\) agents. Each partition is defined as \[\Omega_{k}=\left\{\omega\in\Omega\left|\left\|\omega-z_{k}\right\|\leq\left\| \omega-z_{i}\right\|,\,\forall\,i\neq k,i\in\mathcal{N}\right.\right\}. \tag{6}\] Then, function (5) can be rewritten as \[f(\omega,\mathcal{Z})=\frac{1}{2}\left\|\omega-z_{k}\right\|^{2},\;\mathrm{if }\;\omega\in\Omega_{k} \tag{7}\] which takes off the minimum operator in (5) and converts it to a piece-wise quadratic form. 
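As a concrete illustration of (5)-(7), the short Python sketch below checks whether an event lies in the polygonal region \(\Omega\) and assigns it to the Voronoi cell of its nearest agent. It is only a numerical illustration under assumed data: the rectangle edges, the agent positions, and the identifiers are illustrative and not taken from the paper.

```python
import numpy as np

def in_region(omega, A, b):
    """Check h_j(omega) = b_j - a_j^T omega >= 0 for every edge j (membership in Omega)."""
    return bool(np.all(b - A @ omega >= 0.0))

def nearest_agent(omega, Z):
    """Index of the agent whose Voronoi cell contains the event omega, cf. eq. (6)."""
    d2 = np.sum((Z - omega) ** 2, axis=1)
    return int(np.argmin(d2))

def weight(omega, Z):
    """Weight function f(omega, Z) of eqs. (5)/(7): half squared distance to the closest agent."""
    k = nearest_agent(omega, Z)
    return 0.5 * float(np.sum((omega - Z[k]) ** 2))

# Illustrative 4 m x 2.8 m rectangle (edges written as h_j(w) = b_j - a_j^T w >= 0) and three agents.
A = np.array([[-1.0, 0.0], [1.0, 0.0], [0.0, 1.0], [0.0, -1.0]])
b = np.array([0.0, 4.0, 2.8, 0.0])
Z = np.array([[1.0, 1.0], [3.0, 2.0], [2.0, 0.5]])
omega = np.array([2.5, 1.8])
print(in_region(omega, A, b), nearest_agent(omega, Z), weight(omega, Z))
```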
Substituting the weight function (7) to (4), the coverage cost becomes \[H_{\Omega}(\mathcal{Z})=\sum_{k=1}^{N}\frac{1}{2}\int_{\Omega_{k}}\|\omega-z_{ k}\|^{2}\Phi(\omega)\mathrm{d}\omega \tag{8}\] which transfers the integration over the entire region \(\Omega\) to the summary of the individual integrals on all Voronoi partitions \(\Omega_{k}\), \(k\in\mathcal{N}\). Thus, the optimal coverage problem is solved by placing the agents at the following optimal configuration \[\mathcal{Z}^{*}=\arg\min_{\mathcal{Z}\in\Omega^{N}}H_{\Omega}(\mathcal{Z}). \tag{9}\] Note that the coverage cost (8) is a nonconvex function of which a global minimum solution is difficult to find [18]. Similar to the previous work [7, 34], in this paper, we are only concerned with its local optimal solutions which can be solved using gradient-based control laws [3, 35]. Therefore, we refer to the optimal configuration given by (9) as a _local optimal configuration_ (LOC). It is worth mentioning that there may exist multiple LOCs in the domain \(\Omega^{N}\). Enumerating all LOCs and discussing which is the best is beyond this paper. ### _Distributed Coverage Controller for A Multi-SIR System_ Given the Voronoi partitions defined in (6), we say two partitions are adjacent if they share common boundaries, i.e., \(\exists\omega\in\Omega\), \(\omega\in\Omega_{i}\cap\Omega_{j}\). Based on this, we claim that agents \(i,j\in\mathcal{N}\), \(i\neq j\), are _adjacent_ if their Voronoi partitions \(\Omega_{i}\) and \(\Omega_{j}\) are adjacent. We define an adjacency mapping \(\mathscr{A}:\mathcal{N}\to 2^{\mathcal{N}}\) to depict the adjacency relation between the agents. Specifically, \(\mathscr{A}_{k}\), \(k\in\mathcal{N}\) is the set of all adjacent agents of agent \(k\). Note that the adjacency relation is bidirectional, i.e., for any \(i,j\in\mathcal{N}\), \(i\neq k\), \(i\in\mathscr{A}_{k}\Leftrightarrow k\in\mathscr{A}_{i}\). Also, we define a commonly used set \(\overline{\mathscr{A}_{k}}=\mathscr{A}_{k}\cup k\), \(k\in\mathcal{N}\). The adjacency relation is needed to incorporate a common practical condition that communication can only be effective within a certain range [15, 36]. For the optimal coverage problem, this range refers to the largest distance between adjacent agents, which renders a common and practical assumption that only adjacent agents can conduct bidirectional communication [34]. Then, we proceed with the discussion on the solution to the optimal coverage problem. It is known that a LOC of the coverage cost (8) can be obtained by solving \[\nabla H(\mathcal{Z})=\bigg{[}\,\frac{\partial H(\mathcal{Z})}{\partial z_{1}} \,\,\frac{\partial H(\mathcal{Z})}{\partial z_{2}}\,\,\,\ldots\,\,\,\frac{ \partial H(\mathcal{Z})}{\partial z_{N}}\bigg{]}^{\!\top}=0. 
\tag{10}\] According to [35], the \(k\)-th element of the gradient \(\nabla H(\mathcal{Z})\) is calculated as \[\begin{split}\frac{\partial H(\mathcal{Z})}{\partial z_{k}}& =\int_{\Omega_{k}}\frac{1}{2}\frac{\partial\|\omega-z_{k}\|^{2}} {\partial z_{k}}\Phi(\omega)\mathrm{d}\omega\\ &=M(\mathcal{Z}_{\overline{\mathcal{A}_{k}}})\Big{(}z_{k}-C( \mathcal{Z}_{\overline{\mathcal{A}_{k}}})\Big{)}\,,\end{split} \tag{11}\] where \(\mathcal{Z}_{\overline{\mathcal{A}_{k}}}\in\Omega^{|\overline{\mathcal{A}_{k}}|}\) is the set of all \(z_{j}\) with \(j\in\overline{\mathcal{A}_{k}}\) where \(|\overline{\mathcal{A}_{k}}|\) is the number of elements in the finite set \(\overline{\mathcal{A}_{k}}\), and \(M(\mathcal{Z}_{\overline{\mathcal{A}_{k}}})\in\mathbb{R}\) and \(C(\mathcal{Z}_{\overline{\mathcal{A}_{k}}})\in\mathbb{R}^{2}\) are the geometric mass and the centroid of the Voronoi partition \(\Omega_{k}\), defined as \[M(\mathcal{Z}_{\overline{\mathcal{A}_{k}}})\!=\!\!\int_{\Omega_{k}}\!\!\!\! \Phi(\omega)\mathrm{d}\omega,\,\,C(\mathcal{Z}_{\overline{\mathcal{A}_{k}}}) \!=\!\!\int_{\Omega_{k}}\frac{\omega\Phi(\omega)\mathrm{d}\omega}{M( \mathcal{Z}_{\overline{\mathcal{A}_{k}}})}, \tag{12}\] Here, we refer to \(\mathcal{Z}_{\overline{\mathcal{A}_{k}}}\) as a _partial configuration_ since it only contains the positions of \(z_{k}\) and its adjacent agents. It is noticed in (11) that the computation of gradient \(\frac{\partial H(\mathcal{Z})}{\partial z_{k}}\) only needs the positions of agent \(k\) and its adjacent agents contained in \(\mathcal{Z}_{\overline{\mathcal{A}_{k}}}\), which is an important property for the implementation of a distributed coverage controller to be discussed later. The relation among the agent positions, the Voronoi partition, and the centroids is illustrated in Fig. 1. Since \(M(\mathcal{Z}_{\overline{\mathcal{A}_{k}}})>0\) holds for all \(k\in\mathcal{N}\), by solving \(\nabla H(\mathcal{Z})=0\), we know that a necessary condition for \(\mathcal{Z}\) being a LOC is, \[z_{k}=C(\mathcal{Z}_{\overline{\mathcal{A}_{k}}}),\,\,\forall\,k\in\mathcal{N}. \tag{13}\] Therefore, if a configuration is a LOC, the agent positions and the Voronoi centroids must coincide with each other, but the coinciding condition does not necessarily indicate a LOC. Additional conditions should be used for further judgment. A LOC indicated by condition (13) can be found using the following gradient-based method \[\dot{z}_{k}=-\frac{\partial H(\mathcal{Z})}{\partial z_{k}},\,\,k\in\mathcal{N}, \tag{14}\] which is the main technical point of the conventional methods for the multi-agent coverage problem. For a multi-SIR system with the following single-integrator-based models [15, 36], \[\dot{z}_{k}(t)=u_{k}(t),\,\,k\in\mathcal{N}, \tag{15}\] where \(u_{k}(t)\in\mathbb{R}^{2}\) is the velocity of a SIR as its control input, a trivial optimal coverage controller can be designed as \[u_{k}(t)=-\frac{\partial H(\mathcal{Z})}{\partial z_{k}},\,\,k\in\mathcal{N}. \tag{16}\] This renders a distributed controller since the computation of the gradient \(\frac{\partial H(\mathcal{Z})}{\partial z_{k}}\) only requires the positions of agent \(z_{k}\) and its adjacent agents. 
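The following Python sketch illustrates how the quantities in (11)-(12) and the SIR law (16) can be approximated numerically: \(\Omega\) is sampled on a grid, the mass and centroid of each Voronoi cell are accumulated, and each agent takes an Euler step along \(-\partial H/\partial z_{k}\). The grid resolution, the rectangular bounding box, the uniform density \(\Phi\equiv 1\), and all identifiers are illustrative assumptions rather than the paper's implementation.

```python
import numpy as np

def voronoi_mass_centroid(Z, A, b, phi=lambda w: 1.0, h=0.02, xmax=4.0, ymax=2.8):
    """Grid approximation of the Voronoi mass M(.) and centroid C(.) in eq. (12).

    Assumes Omega fits inside the box [0, xmax] x [0, ymax]; phi is the event density Phi.
    """
    N = Z.shape[0]
    mass = np.zeros(N)
    moment = np.zeros((N, 2))
    for x in np.arange(0.0, xmax + h, h):
        for y in np.arange(0.0, ymax + h, h):
            w = np.array([x, y])
            if np.all(b - A @ w >= 0.0):                          # w lies inside Omega
                k = int(np.argmin(np.sum((Z - w) ** 2, axis=1)))  # Voronoi cell of w, eq. (6)
                dm = phi(w) * h * h
                mass[k] += dm
                moment[k] += dm * w
    return mass, moment / mass[:, None]

def sir_coverage_step(Z, A, b, dt=0.1):
    """One Euler step of the SIR gradient law (14)-(16): z_k <- z_k - dt * M_k * (z_k - C_k)."""
    M, C = voronoi_mass_centroid(Z, A, b)
    return Z - dt * M[:, None] * (Z - C)
```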
Fig. 1: A coverage example of a rectangular region. The red ‘o’ marks are the agent positions. The solid lines define the Voronoi partitions. The blue ‘+’ marks are the Voronoi centroids. The fact that the ‘o’ marks do not coincide with the blue ‘+’ marks indicates that this is not a LOC.

### _The Dynamic Model of A CSUR_

The dynamic model of a CSUR is depicted as follows [37],
\[\begin{split}\dot{\zeta}(t)&=v_{0}r(\theta)\\ \dot{\theta}(t)&=u(t),\end{split} \tag{17}\]
where \(\zeta(t)\in\mathbb{R}^{2}\) and \(\theta(t)\in\mathbb{R}\) are the position and the orientation of the CSUR at time \(t\in\mathbb{R}_{\geq 0}\), respectively, \(v_{0}\in\mathbb{R}^{+}\) is the constant linear speed of the robot, \(u(t)\in\mathbb{R}\) is the angular velocity input of the robot, and \(r(\theta)=[\,\cos(\theta)\,\,\sin(\theta)\,]^{\top}\) is a transformation vector. It is easy to verify that \(r(\theta)\) satisfies
\[\big\|r(\theta)\big\|=1,\ \mathrm{and}\ \frac{\partial^{2}r(\theta)}{\partial\theta^{2}}=-r(\theta),\ \forall\,\theta\in\mathbb{R}. \tag{18}\]
For the CSUR input \(u(t)\) in (17), we adopt the convention that \(u(t)<0\) and \(u(t)>0\) indicate clockwise and anticlockwise orientation directions, respectively. When \(u(t)\equiv 0\), the CSUR moves along a straight line. Note that the robot model (17) is under-actuated since the three-dimensional state \([\,\zeta^{\top}(t)\,\,\theta(t)\,]^{\top}\) is excited by a one-dimensional input signal \(u(t)\). Also, it is not possible to let a CSUR freeze in a fixed position like a SIR since it always moves at a constant speed \(v_{0}\). Following [7, 38], we use the following virtual center of a CSUR, instead of its position \(\zeta(t)\), to perform the coverage task,
\[z(t)=\zeta(t)+\frac{v_{0}}{\omega_{0}}\frac{\partial r(\theta)}{\partial\theta}, \tag{19}\]
where \(\omega_{0}\in\mathbb{R}\), \(\omega_{0}\neq 0\), is a constant parameter that represents the nominal angular velocity of the CSUR. Taking the derivative of (19), the dynamic model of the virtual center is
\[\dot{z}(t)=\dot{\zeta}(t)+\frac{v_{0}}{\omega_{0}}\frac{\partial^{2}r(\theta)}{\partial\theta^{2}}\dot{\theta}(t)=v_{0}r(\theta)-\frac{v_{0}}{\omega_{0}}r(\theta)u(t). \tag{20}\]
The meaning of the virtual center \(z(t)\) is not straightforward for an arbitrary robot trajectory \(\zeta(t)\) but is clear for the special case \(u(t)\equiv\omega_{0}\). Substituting it into (20), we have \(\dot{z}(t)=0\), which means that the virtual center \(z(t)\) is a static point in this case. Then, equation (19) indicates that the robot is moving around \(z(t)\) along a circular orbit with a linear speed \(v_{0}\), an angular velocity \(\omega_{0}\), and orbit radius \(v_{0}/|\omega_{0}|\). Thus, \(z(t)\) can be interpreted as the center of the circular orbit of the CSUR when it is a static point, which is why it is referred to as a _virtual_ center. The relation between the CSUR position \(\zeta(t)\) and its virtual center \(z(t)\) is shown in Fig. 2. Different from the CSUR position \(\zeta(t)\) that has to always move at a constant linear speed, the virtual center \(z(t)\) can remain static at a certain position when the CSUR is controlled with a constant input \(u(t)\equiv\omega_{0}\). This property is similar to the dynamics of a SIR as shown in (15), which provides the possibility to extend the existing results for SIRs to the virtual centers of CSURs. Therefore, in this paper, we refer to the CSUR virtual centers as _CSUR agents_ and use them to conduct the coverage task.
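To make the role of the virtual center concrete, the following minimal Python sketch integrates the CSUR model (17) with the constant input \(u(t)\equiv\omega_{0}\) and checks that the virtual center (19) stays (numerically) fixed while the robot orbits it at radius \(v_{0}/|\omega_{0}|\). The Euler integration, the step size, and the specific parameter values are illustrative assumptions, not part of the original design.

```python
import numpy as np

v0, w0, dt = 0.16, 0.8, 0.001   # linear speed, nominal angular velocity, Euler step (illustrative)

def r(theta):
    """Transformation vector r(theta) of eq. (17)."""
    return np.array([np.cos(theta), np.sin(theta)])

def virtual_center(zeta, theta):
    """Virtual center of eq. (19): z = zeta + (v0/w0) * dr/dtheta."""
    return zeta + (v0 / w0) * np.array([-np.sin(theta), np.cos(theta)])

zeta, theta = np.array([1.0, 1.0]), 0.0
z0 = virtual_center(zeta, theta)
for _ in range(20000):                # 20 s with the constant input u(t) = w0
    zeta = zeta + dt * v0 * r(theta)  # eq. (17)
    theta = theta + dt * w0
print(np.linalg.norm(virtual_center(zeta, theta) - z0))  # ~0: the virtual center stays put
print(np.linalg.norm(zeta - z0))                         # ~v0/|w0| = 0.2 m: the orbit radius
```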
Nevertheless, it is noticed that the dynamic model of a CSUR agent in (20) is more complicated than a SIR model (15), which brings up challenges to this extension. Sec. III-D explains the challenges in detail. ### _The Optimal Coverage Control of Multiple CSUR Agents_ Derived from (20), the dynamic model of each agent in a multi-CSUR system is depicted by \[\dot{z}_{k}(t)=v_{0}r(\theta_{k})-\frac{v_{0}}{\omega_{0}}r(\theta_{k})u_{k}( t),\ k\in\mathcal{N}, \tag{21}\] where \(\theta_{k}(t)\) and \(u_{k}(t)\) are respectively the orientation and the control input of agent \(k\). Here, we assume that all agents have the same speed parameters \(v_{0}\), \(\omega_{0}\), for the simplification of the problem. The nonlinear projection gain \(r(\theta_{k})\) and the additive perturbation term \(v_{0}r(\theta_{k})\) in the dynamic model (21) make the coverage control problem more challenging than SIRs. From (17), we know \(r(\theta_{k})\) has a constant norm \(1\), which means that these nonlinear terms constantly perturb the agent velocity \(\dot{z}(t)\) from the desired gradient-searching direction \(-\frac{\partial H(\mathcal{Z})}{\partial z_{k}}\) and prevent the agent position \(z_{k}(t)\) from converging to the optimal configuration \(\mathcal{Z}^{*}\). In some cases, the CUSR agents may even move out of \(\Omega\), such that the optimization problem (9) becomes infeasible. Note that SIRs do not have the feasibility issue since the closed-loop dynamic model (14) is not twisted by these nonlinear terms and the control law (16) always guides the SIRs toward the interior of the target region \(\Omega\). To ensure feasibility, the CSUR agents must be confined within the target region. Meanwhile, the control inputs should follow certain saturation restrictions for the concern of limited energy or resources. Based on this consideration, we formulate the multi-CSUR optimal coverage control problem as follows. **Problem 1**.: _Given a convex set \(\Omega\subset\mathbb{R}^{2}\) defined in (1) and \(N\) CSUR agents depicted by (21), design a distributed control law \(u_{k}(t)\) for all \(k\in\mathcal{N}\) subject to the adjacency relation \(\mathcal{A}\), such that the following objectives are achieved._ 1. _For all_ \(k\in\mathcal{N}\) _and_ \(t\in\mathbb{R}_{\geq 0}\)_, the control inputs satisfy_ \[|u_{k}(t)|\leq\overline{U},\ \overline{U}\in\mathbb{R}^{+}.\] (22) 2. _For all_ \(t\in\mathbb{R}_{\geq 0}\)_, the agent configuration_ \(\mathcal{Z}(t)\) _satisfies_ \[\mathcal{Z}(t)\in\Omega^{N},\ \forall\mathcal{Z}(0)\in\Omega^{N}.\] (23) 3. _The agent configuration_ \(\mathcal{Z}(t)\) _asymptotically converges to a LOC_ \(\mathcal{Z}^{*}\) _denoted by (_9_)._ The main difference between Problem 1 and the multi-SIR coverage problem in previous work [15, 36] is indicated by objectives 1) and 2), respectively corresponding to the requirements on the input- and state-dependent constraints. Another difference is that the optimal coverage configuration \(\mathcal{Z}^{*}\) is for the CUSR agents or the virtual centers of the CSURs, instead of the positions of the CUSRs. When a LOC is achieved, the CSURs are expected to move along their circular orbits around their static virtual centers assigned by the optimal configuration \(\mathcal{Z}^{*}\). Note that Problem 1 is only concerned with a LOC instead of a globally optimal solution. The LOC solutions may be multiple. To which LOC \(\mathcal{Z}\) converges mainly depends on the initial robot configuration [39]. 
Also, the adjacency relation \(\mathcal{A}\) may not be constant but changes over time [15]. In this paper, we are only concerned with minimizing the coverage cost (8) without incorporating additional requirements like collision avoidance or time limits. These requirements render more nontrivial challenges that can hardly be fully addressed in this paper. In fact, the parameters \(v_{0}\), \(\omega_{0}\) can be changed to reduce the radius of the circular orbits of the CSURs to reduce the chance of collisions. Extensions to these challenging problems will be explored in future work.

### _Positively Invariant Set and Tangent Cone_

In this subsection, we introduce the _positively invariant set_ and the _tangent cone_, which are important to the analysis of the satisfaction of the hard state-dependent constraints addressed by objective 2) of Problem 1.

**Definition 1**.: _[_33_]_ \(\mathcal{S}\subset\mathbb{R}^{n}\) _is a positively invariant set for system \(\dot{x}(t)=f(x(t))\) if \(\forall\,x(0)\in\mathcal{S}\), \(x(t)\in\mathcal{S}\) for all \(t\in\mathbb{R}_{+}\)._

**Definition 2**.: _[_33_]_ _The tangent cone of a convex set \(\mathcal{S}\subset\mathbb{R}^{n}\) at \(x\in\mathbb{R}^{n}\) is the set_
\[\mathscr{C}_{\mathcal{S}}(x)=\left\{z\in\mathbb{R}^{n}\,\middle|\,\lim_{\tau\to 0}\frac{\mathscr{D}(x+\tau z,\mathcal{S})}{\tau}=0\right\}, \tag{24}\]
_where \(\mathscr{D}:\mathbb{R}^{n}\times 2^{\mathbb{R}^{n}}\to\mathbb{R}_{\geq 0}\) is a function that specifies the distance between a vector and a set,_
\[\mathscr{D}(x,\mathcal{S})=\inf_{s\in\mathcal{S}}\lVert x-s\rVert. \tag{25}\]

Fig. 2: The trajectories of the CSUR position \(\zeta(t)\) and its virtual center \(z(t)\), represented as a gray dashed line and a black dotted line, with the arrows pointing out the directions. The black cross and the blue dot are respectively the positions of the CSUR and its virtual center at a certain time \(t\in\mathbb{R}\), where the blue arrow indicates the orientation of the robot. The trajectory of the CSUR converges to a circle ultimately, as \(z(t)\) reaches a static point. The radius of the circle is \(v_{0}/|\omega_{0}|\).

The hard state-dependent constraints addressed in (23) can be formulated in a way that the region \(\Omega\) becomes a _positively invariant set_ of the system. The tangent cone of a closed set \(\mathcal{S}\) is the set of all feasible directions \(\dot{x}\) of the system at state \(x\), such that \(\mathcal{S}\) is a positively invariant set. Whether a closed set is positively invariant is determined by the following lemma.

**Lemma 1**.: _[_33_]_ _Consider a system \(\dot{x}(t)=f(x(t))\) of which each initial condition \(x(0)\in\mathcal{X}\subseteq\mathbb{R}^{n}\) admits a globally unique solution. Then, a closed and convex set \(\mathcal{S}\subseteq\mathcal{X}\) is positively invariant for the system if and only if \(f(x)\in\mathscr{C}_{\mathcal{S}}(x)\), \(\forall\,x\in\partial\mathcal{S}\), where \(\partial\mathcal{S}\) is the boundary of \(\mathcal{S}\)._

Lemma 1 provides an easy approach to validate whether a designed controller achieves objective 2) of Problem 1 by only investigating the tangent cone on the boundary of the set. Note that Lemma 1 only applies to closed and convex sets.

## IV Main Results

In this section, we present the main theoretical results of the proposed optimal coverage controller. We first introduce an off-LOC cost function and a novel BLF-based coverage cost function. Based on these functions, we propose a novel distributed coverage controller.
Then, we validate the objectives in Problem 1 for the proposed controller one by one. ### _The Off-LOC Cost Function_ For any agent \(k\in\mathcal{N}\) and its adjacent agents \(\mathscr{A}_{k}\), we define the following off-LOC cost function, \[W(\mathcal{Z}_{\mathscr{A}_{k}})\!=\!\frac{1}{2}\!\left\|z_{k}(t)\!-\!C( \mathcal{Z}_{\mathscr{A}_{k}})\right\|_{Q}^{2},\,\mathcal{Z}_{\mathscr{A}_{k }}\!\in\!\Omega^{|\overline{\mathscr{A}_{k}}|}, \tag{26}\] where \(Q\in\mathbb{R}^{2\times 2}\) is a symmetrically positive-definite matrix. It can be verified that \(W(\mathcal{Z}_{\mathscr{A}_{k}})\geq 0\) for all \(\mathcal{Z}_{\mathscr{A}_{k}}\in\Omega^{|\overline{\mathscr{A}_{k}}|}\) and \(W(\mathcal{Z}_{\mathscr{A}_{k}})=0\) holds if and only if (13) is satisfied, i.e., \(\mathcal{Z}_{\mathscr{A}_{k}}^{\overline{\mathscr{A}_{k}}}\) belong to a LOC. Therefore, this function measures how close a partial configuration \(\mathcal{Z}_{\mathscr{A}_{k}}\) to a LOC, which is why we refer to it as the _off-LOC cost_. The following proposition is granted. **Proposition 1**.: \(W(\mathcal{Z}_{\mathscr{A}_{k}})\) _has the following properties for all \(\mathcal{Z}_{\mathscr{A}_{k}}\in\Omega^{|\overline{\mathscr{A}_{k}}|}\) and any \(k\in\mathcal{N}\)._ _1). There always exists_ \(\overline{W}\in\mathbb{R}_{+}\)_, such that_ \(W(\mathcal{Z}_{\overline{\mathscr{A}_{k}}})<\overline{W}\)_._ _2)._ \(W(\mathcal{Z}_{\overline{\mathscr{A}_{k}}})>0\) _always holds if_ \(z_{k}\in\partial\Omega\)_._ _3). There always exists_ \(\epsilon\in\mathbb{R}_{+}\)_,_ \(\epsilon<\min\limits_{j\in\mathcal{M}}\sup\limits_{\omega\in\Omega}h_{j}(\omega)\)_, such that_ \(W(\mathcal{Z}_{\overline{\mathscr{A}_{k}}})>0\) _holds for any_ \(x_{k}\in\Omega-\Omega_{\epsilon}\)_, where_ \(\Omega_{\epsilon}\subset\Omega\) _is a closed convex set defined as_ \[\Omega_{\epsilon}=\left\{\omega\in\mathbb{R}^{2}\left|h_{j}(\omega)\geq \epsilon\,\ \forall\,j\in\mathcal{M}\right.\right\}. \tag{27}\] Proof.: For property 1), we know that any configuration defined in the region \(\Omega\), i.e., \(\mathcal{Z}\!\in\!\Omega^{N}\), corresponds to a certain Voronoi partition of \(\Omega\), such that \(\Omega_{k}\neq\varnothing\) and \(M(\mathcal{Z}_{\overline{\mathscr{A}_{k}}})>0\) hold for all \(k\in\mathcal{N}\). As a result, \(z_{k}\) and \(C(\mathcal{Z}_{\overline{\mathscr{A}_{k}}})\) are both bounded, which means that \(W(\mathcal{Z}_{\overline{\mathscr{A}_{k}}})\) always has an upper bound \(\overline{W}\in\mathbb{R}_{+}\), \(\forall\,k\in\mathcal{N}\). For property 2), we consider its negative proposition by supposing that there exists \(z_{k}\in\partial\Omega\), \(\exists\,k\in\mathcal{N}\), such that \(W(\mathcal{Z}_{\overline{\mathscr{A}_{k}}})=0\), which indicates that \(z_{k}=C(\mathcal{Z}_{\overline{\mathscr{A}_{k}}})\). From the definition (12), however, we know \(C(\mathcal{Z}_{\overline{\mathscr{A}_{k}}})\notin\partial\Omega\), which breaks this equality. Thus, the negative proposition does not hold and property 2) is satisfied. For 3), we know that \(W(\mathcal{Z}_{\overline{\mathscr{A}_{k}}})\) is a continuous function of \(z_{k}\) since \(C(\mathcal{Z}_{\overline{\mathscr{A}_{k}}})\) is also continuous to \(z_{k}\), according to (26). 
Then, since property 2) addresses that \(W(\mathcal{Z}_{\overline{\mathscr{A}_{k}}})>0\) holds for any \(z_{k}\in\partial\Omega\), \(k\in\mathcal{N}\), we know that there always exists \(\epsilon\in\mathbb{R}_{+}\), \(\epsilon<\min\limits_{j\in\mathcal{M}}\sup\limits_{\omega\in\Omega}h_{j}(\omega)\), such that \(\Omega_{\epsilon}\neq\varnothing\) and there exists \(z_{k}\in\partial\Omega_{\epsilon}\) that belongs to a LOC, such that \(W(\mathcal{Z}_{\overline{\mathscr{A}_{k}}})=0\), and \(W(\mathcal{Z}_{\overline{\mathscr{A}_{k}}})>0\) holds for all \(z_{k}\in\Omega-\Omega_{\epsilon}\).

Proposition 1 provides several important statements on the off-LOC cost functions \(W(\mathcal{Z}_{\overline{\mathscr{A}_{k}}})\), \(k\in\mathcal{N}\). Property 1) addresses its boundedness and property 2) indicates that a LOC does not occur on the boundary \(\partial\Omega\). Furthermore, property 3) points out that there exists a margin \(\Omega-\Omega_{\epsilon}\) around the convex region \(\Omega\) where no LOC exists. They both address that all LOCs are distributed in the interior of the region and do not show up in the marginal area close to its boundary. This proposition is important for our theoretical results in Sec. IV-D.

Note that \(W(\mathcal{Z}_{\overline{\mathscr{A}_{i}}})\), \(i\in\mathcal{N}\), is also a differentiable function of \(z_{k}\), \(k\in\mathcal{N}\), since \(C(\mathcal{Z}_{\overline{\mathscr{A}_{i}}})\) is differentiable with respect to \(z_{k}\). According to [40], the partial derivative of \(C(\mathcal{Z}_{\overline{\mathscr{A}_{k}}})\) with respect to \(z_{k}\) reads
\[\frac{\partial C^{\top}(\mathcal{Z}_{\overline{\mathscr{A}_{k}}})}{\partial z_{k}}=\frac{D(\mathcal{Z}_{\overline{\mathscr{A}_{k}}},z_{k})}{M(\mathcal{Z}_{\overline{\mathscr{A}_{k}}})}-P(\mathcal{Z}_{\overline{\mathscr{A}_{k}}},z_{k})C^{\top}(\mathcal{Z}_{\overline{\mathscr{A}_{k}}}), \tag{28}\]
where, for \(z_{i},z_{k}\in\Omega\), \(i,k\in\mathcal{N}\), \(i\neq k\), \(z_{i}\neq z_{k}\),
\[D(\mathcal{Z}_{\overline{\mathscr{A}_{i}}},z_{k})=\int_{\partial\Omega_{k}^{i}}\frac{(\omega-z_{k})\omega^{\top}}{\|z_{k}-z_{i}\|}\Phi(\omega)\mathrm{d}\omega, \tag{29a}\]
\[P(\mathcal{Z}_{\overline{\mathscr{A}_{i}}},z_{k})=\int_{\partial\Omega_{k}^{i}}\frac{\omega-z_{k}}{\|z_{k}-z_{i}\|}\Phi(\omega)\mathrm{d}\omega, \tag{29b}\]
where \(\partial\Omega_{k}^{i}\) is the shared boundary of the adjacent partitions \(\Omega_{i}\), \(\Omega_{k}\), \(i,k\in\mathcal{N}\). Then, the partial derivative of \(W(\mathcal{Z}_{\overline{\mathscr{A}_{i}}})\) with respect to \(z_{k}\) is
\[\frac{\partial W(\mathcal{Z}_{\overline{\mathscr{A}_{i}}})}{\partial z_{k}}=\begin{cases}\left(I-\frac{\partial C^{\top}(\mathcal{Z}_{\overline{\mathscr{A}_{i}}})}{\partial z_{k}}\right)Q\left(z_{i}-C(\mathcal{Z}_{\overline{\mathscr{A}_{i}}})\right),&i=k,\\ -\frac{\partial C^{\top}(\mathcal{Z}_{\overline{\mathscr{A}_{i}}})}{\partial z_{k}}Q\left(z_{i}-C(\mathcal{Z}_{\overline{\mathscr{A}_{i}}})\right),&i\neq k.\end{cases} \tag{30}\]

**Proposition 2**.: _For any \(i,k\in\mathcal{N}\), \(i\neq k\), with \(k\notin\mathscr{A}_{i}\), \(\frac{\partial C^{\top}(\mathcal{Z}_{\overline{\mathscr{A}_{i}}})}{\partial z_{k}}=0\) and \(\frac{\partial W(\mathcal{Z}_{\overline{\mathscr{A}_{i}}})}{\partial z_{k}}=0\) hold, i.e., these partial derivatives vanish for non-adjacent agents._

### _A Novel Coverage Cost Function for CSURs_

One of the main technical points of this paper is to design a BLF-based coverage cost for a novel coverage controller for multi-CSUR systems.
A BLF is a non-negative function that reaches zero at the system equilibria but approaches infinity near the boundary of the confined region [41]. It forces the system states to move towards the interior of the region when they tend to violate the constraints defined by the region boundaries. For a multi-CSUR system with the agent model in (21), we define the following BLF-based coverage cost, \[V(\mathcal{Z})=\sum_{i=1}^{N}\sum_{j=1}^{M}\frac{W(\mathcal{Z}_{\overline{ \mathcal{A}_{\overline{\mathcal{A}_{i}}}}})}{h_{j}(z_{i})},\ \mathcal{Z}\in\operatorname{int}\Omega^{N}, \tag{31}\] where \(W(\mathcal{Z}_{\overline{\mathcal{A}_{\overline{\mathcal{A}_{i}}}}})\) is the off-LOC cost function defined in Sec. IV-A and \(\operatorname{int}\Omega^{N}=\underbrace{\operatorname{int}\Omega\times \cdots\times\operatorname{int}\Omega}_{N}\) depicts the product of \(N\) open sets. Note that \(V(\mathcal{Z})\) is defined on an open domain and has the following properties. **Property 1**.: _The coverage cost function \(V(\mathcal{Z})\) satisfies the following conditions for all \(\mathcal{Z}\in\operatorname{int}\Omega^{N}\). 1). \(V(\mathcal{Z})=0\) holds if and only if \(\mathcal{Z}\) is a LOC that satisfies the condition in (13), otherwise \(V(\mathcal{Z})>0\). 2). For any \(\overline{V}\in\mathbb{R}_{+}\), there always exists \(\epsilon\in\mathbb{R}_{+}\), such that for any \(h_{j}(z_{i})<\epsilon\), \(i\in\mathcal{N}\), \(\exists j\in\mathcal{M}\), \(V(\mathcal{Z})>\overline{V}\) holds. 3). For any \(\epsilon\in\mathbb{R}+\), \(\epsilon<\min\limits_{j\in\mathcal{M}}\sup\limits_{\omega\in\Omega}h_{j}(\omega)\), there always exist \(V_{*}\in\mathbb{R}_{+}\), such that \(V(\mathcal{Z})<V_{*}\), for all \(\mathcal{Z}\in\operatorname{int}\Omega_{\epsilon}^{N}\), where \(\Omega_{\epsilon}\) is the closed set defined in (27)._ The proof for Property 1 is not provided since the properties are straightforward to verify using the definition (31) and the boundedness of the off-LOC cost function \(W(\mathcal{Z}_{\overline{\mathcal{A}_{\overline{\mathcal{A}_{i}}}}})\), \(k\in\mathcal{N}\), addressed in Proposition 1. Property 1-1) indicates the equivalence between \(V(\mathcal{Z})=0\) and \(\mathcal{Z}\) being a LOC, which is important to the verification of whether a configuration is a LOC. Property 1-2) addresses that the cost function \(V(\mathcal{Z})\) becomes unbounded when the configuration \(\mathcal{Z}\) approaches any point of the region boundary \(\partial\Omega\). Property 1-3) means that \(V(\mathcal{Z})\) only becomes unbounded when a position in \(\mathcal{Z}\) approaches the region boundary \(\partial\Omega\). It is bounded when \(\mathcal{Z}\) remains in the interior of region \(\Omega\). Therefore, \(V(\mathcal{Z})\) is a BLF. As mentioned in Sec. III-B, solving the necessary condition \(\nabla V(\mathcal{Z})=0\) is important to obtain a LOC. 
The \(k\)-th element of \(\nabla V(\mathcal{Z})=0\), \(k\in\mathcal{N}\), is calculated as \[\frac{\partial V(\mathcal{Z})}{\partial z_{k}}=\sum_{j=1}^{M}\left(\sum_{i=1}^{ N}\frac{1}{h_{j}(z_{i})}\frac{\partial W(\mathcal{Z}_{\overline{\mathcal{A}_{i}}} )}{\partial z_{k}}+a_{j}\frac{W(\mathcal{Z}_{\overline{\mathcal{A}_{i}}})}{h_ {j}^{2}(z_{k})}\right), \tag{32}\] \(\mathcal{Z}\in\operatorname{int}\Omega^{N}\), Substituting (30) to (32), we obtain \[\frac{\partial V(\mathcal{Z})}{\partial z_{k}} =\sum_{j=1}^{M}\left(\frac{Q\Big{(}z_{k}-C(\mathcal{Z}_{ \overline{\mathcal{A}_{i}}})\Big{)}}{h_{j}(z_{k})}+\frac{a_{j}W(\mathcal{Z}_{ \overline{\mathcal{A}_{i}}})}{h_{j}^{2}(z_{k})}\right. \tag{33}\] \[\left.-\sum_{i=1}^{N}\frac{\partial C^{\mathbb{T}}(\mathcal{Z}_{ \overline{\mathcal{A}_{i}}})}{\partial z_{k}}\frac{Q\Big{(}z_{i}-C(\mathcal{Z}_ {\overline{\mathcal{A}_{i}}})\Big{)}}{h_{j}(z_{i})}\right).\] Although \(\nabla V(\mathcal{Z})\) shows a complicated form, it can be calculated in a distributed manner. Applying Proposition 2 to (33), we rewrite it as \[\frac{\partial V(\mathcal{Z}_{\overline{\mathcal{A}_{i}}})}{ \partial z_{k}} =\sum_{j=1}^{M}\left(\frac{Q\Big{(}z_{k}-C(\mathcal{Z}_{\overline{ \mathcal{A}_{i}}})\Big{)}}{h_{j}(z_{k})}+\frac{a_{j}W(\mathcal{Z}_{\overline{ \mathcal{A}_{i}}})}{h_{j}^{2}(z_{k})}\right. \tag{34}\] \[\left.-\sum_{i\in\overline{\mathcal{A}_{i}}}\frac{\partial C^{ \mathbb{T}}(\mathcal{Z}_{\overline{\mathcal{A}_{i}}})}{\partial z_{k}}\frac{Q \Big{(}z_{i}-C(\mathcal{Z}_{\overline{\mathcal{A}_{i}}})\Big{)}}{h_{j}(z_{i} )}\right),\] for \(\mathcal{Z}_{\overline{\mathcal{A}_{i}}}\in\operatorname{int}\Omega^{\overline{ \mathcal{A}_{i}}}\). This indicates that the gradient \(\frac{\partial V(\mathcal{Z}_{\overline{\mathcal{A}_{i}}})}{\partial z_{k}}\) for each agent \(k\in\mathcal{N}\) only needs the information from its own and its adjacent agents, i.e., \(i\in\overline{\mathcal{A}_{i}}\). In Sec. IV-C, we explain how to use this property to implement a distributed coverage controller for multi-CSUR systems based on a proper communication standard. Note that the partial derivative \(\nabla V(\mathcal{Z})\) is continuous, considering the continuity of the linear constraint functions \(h_{j}(z_{i})\), the virtual centers \(z_{i}(t)\), and the Voronoi centroids \(C(\mathcal{Z}_{\overline{\mathcal{A}_{i}}})\), \(i\in\mathcal{N}\), \(j\in\mathcal{M}\). Also, \(\nabla V(\mathcal{Z})\) satisfies the following condition. **Proposition 4**.: \(\nabla V(\mathcal{Z})\!=\!0\) _holds, \(\mathcal{Z}\in\operatorname{int}\Omega^{N}\), if and only if (13) holds._ Proof.: The sufficiency of this proposition is straightforward to verify by substituting (13) to (33). For the necessity, we investigate (32). Since all CSURs have identical dynamic models, the number \(N\) should not affect the equality of (32). Therefore, according to (33), considering \(h_{j}(z_{i})>0\) for all \(z_{i}\in\Omega\), \(i\in\mathcal{N}\) and all \(j\in\mathcal{M}\), we can infer that \(\nabla V(\mathcal{Z})=0\) holds if and only if \(W(\mathcal{Z}_{\overline{\mathcal{A}_{i}}}^{+})=0\) and \(z_{k}=C(\mathcal{Z}_{\overline{\mathcal{A}_{i}}}^{+})\) hold for all \(k\in\mathcal{N}\), which is equivalent to (13). This verifies the necessity of Proposition 4. 
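To make the structure of (31) concrete, the Python sketch below evaluates the off-LOC cost (26) and the BLF-based coverage cost for a given configuration, returning infinity when an agent sits on or outside \(\partial\Omega\), in line with Property 1-2). The Voronoi centroids are assumed to be supplied, e.g., by the grid-based helper sketched in Sec. III-B; the identifiers and the default \(Q=I\) are illustrative assumptions.

```python
import numpy as np

def off_loc_cost(z_k, C_k, Q=np.eye(2)):
    """Off-LOC cost W of eq. (26): 0.5 * ||z_k - C_k||_Q^2."""
    e = z_k - C_k
    return 0.5 * float(e @ Q @ e)

def blf_coverage_cost(Z, C, A, b):
    """BLF-based coverage cost V(Z) of eq. (31).

    Z: agent positions (N x 2); C: Voronoi centroids C(Z_{A_k}) (N x 2);
    A, b: edge parameters of Omega with h_j(w) = b_j - a_j^T w.
    """
    V = 0.0
    for k in range(Z.shape[0]):
        h = b - A @ Z[k]          # h_j(z_k) for every edge j
        if np.any(h <= 0.0):
            return np.inf         # the barrier is undefined on or outside the boundary
        V += float(np.sum(off_loc_cost(Z[k], C[k]) / h))
    return V
```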
### _The Distributed Coverage Controller with Input Saturation_

For the multi-CSUR system (21), we design the following controller for the optimal coverage control Problem 1,
\[u_{k}(t)=\omega_{0}+\gamma\omega_{0}\,\rho\Big(\sigma(\mathcal{Z}_{\overline{\mathscr{A}_{k}}},\theta_{k})\Big), \tag{35}\]
where \(\sigma(\mathcal{Z}_{\overline{\mathscr{A}_{k}}},\theta_{k})=r^{\top}(\theta_{k})\frac{\partial V(\mathcal{Z}_{\overline{\mathscr{A}_{k}}})}{\partial z_{k}}\), \(\gamma\in\mathbb{R}^{+}\) is the control gain, and \(\rho:\mathbb{R}\rightarrow(-1,1)\) is the following sigmoid function,
\[\rho(x)=\frac{x}{|x|+\varepsilon},\ x\in\mathbb{R}, \tag{36}\]
where \(\varepsilon\in\mathbb{R}^{+}\) is a constant scalar. Saturation functions like sigmoid functions are commonly used in previous work to design controllers subject to input constraints [42, 43]. It is straightforward to verify that \(\rho(\cdot)\) is continuous on \(\mathbb{R}\) and \(\big|\rho(x)\big|<1\) holds for any \(x\in\mathbb{R}\). Thus, it is straightforward to propose the following property for the control input \(u_{k}(t)\).

**Property 2**.: _The control input \(u_{k}(t)\) in (35), for all \(k\in\mathcal{N}\), is bounded by \(|u_{k}(t)-\omega_{0}|<\gamma\omega_{0}\), for all \(t\in\mathbb{R}_{\geq 0}\)._

Property 2 indicates that the proposed controller (35) is subject to the input-dependent constraint, which leads to \(|u_{k}(t)|<(1+\gamma)\,\omega_{0}\). To ensure the input saturation constraint (22), we may as well set
\[(1+\gamma)\,\omega_{0}\leq\overline{U}, \tag{37}\]
for which we can adjust the control gain \(\gamma\) or the nominal angular velocity \(\omega_{0}\) to achieve objective 1) of Problem 1.

Note that (35) is a distributed controller. Substituting (28) into (34) and then into (35), we know that the control input \(u_{k}(t)\) of each agent \(k\in\mathcal{N}\) requires the following information.
1). Its own orientation \(\theta_{k}\).
2). The positions \(z_{i}\) and the Voronoi centroids \(C(\mathcal{Z}_{\overline{\mathscr{A}_{i}}})\) of its own and all its adjacent agents \(i\in\overline{\mathscr{A}_{k}}\).
3). The Voronoi mass \(M(\mathcal{Z}_{\overline{\mathscr{A}_{i}}})\) of all adjacent agents \(i\in\mathscr{A}_{k}\).
4). The adjacency relations \(\mathscr{A}_{i}\) of its own and all its adjacent agents \(i\in\overline{\mathscr{A}_{k}}\), which are used to determine \(\partial\Omega_{k}^{i}\) and calculate the Voronoi functions \(D(\mathcal{Z}_{\overline{\mathscr{A}_{i}}},z_{k})\) and \(P(\mathcal{Z}_{\overline{\mathscr{A}_{i}}},z_{k})\).
Based on this, we redefine the communication standard for multi-CSUR systems to realize a distributed optimal coverage controller. That is, every agent \(k\in\mathcal{N}\) should broadcast its position \(z_{k}\), Voronoi centroid \(C(\mathcal{Z}_{\overline{\mathscr{A}_{k}}})\), Voronoi mass \(M(\mathcal{Z}_{\overline{\mathscr{A}_{k}}})\), and adjacency relation \(\mathscr{A}_{k}\) to all its adjacent agents. In the meantime, agent \(k\) also receives the corresponding information from its adjacent agents and uses it to calculate the control input according to (35). This indicates that designing a distributed optimal coverage controller for a multi-CSUR system is feasible and implementable by defining a proper communication standard. Due to the nontrivial dynamic model of the multi-CSUR system, its communication standard is far more complicated than that of a multi-SIR system, which only includes the agent positions.
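A minimal Python sketch of the saturated input (35)-(36) is given below; it also checks the bound stated in Property 2. The numerical values of \(\omega_{0}\), \(\gamma\), \(\varepsilon\), and the sample gradient are illustrative assumptions.

```python
import numpy as np

def rho(x, eps=2.0):
    """Sigmoid of eq. (36); |rho(x)| < 1 for every real x."""
    return x / (abs(x) + eps)

def csur_input(theta_k, gradV_k, w0=0.8, gamma=1.0, eps=2.0):
    """Control law (35): u_k = w0 + gamma*w0*rho(sigma), with sigma = r(theta_k)^T dV/dz_k."""
    r = np.array([np.cos(theta_k), np.sin(theta_k)])
    sigma = float(r @ gradV_k)
    return w0 + gamma * w0 * rho(sigma, eps)

# Property 2: |u_k - w0| < gamma*w0 regardless of how large the gradient is.
u = csur_input(0.3, np.array([5.0, -2.0]))
assert abs(u - 0.8) < 1.0 * 0.8
```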
Note that the adjacency relations of all agents may change during the motion of the agents. The work [44, 45] provided distributed methods to solve the Voronoi partition of a convex region, which can be used to calculate the adjacency relations in a distributed manner.

### _Invariance to State-Dependent Constraints_

Substituting the controller (35) into (21), the closed-loop dynamic model of each CSUR agent is
\[\dot{z}_{k}(t)=-\gamma v_{0}\,r(\theta_{k})\,\rho(\sigma(\mathcal{Z}_{\overline{\mathscr{A}_{k}}},\theta_{k})),\ k\in\mathcal{N}. \tag{38}\]
We use the invariance theory introduced in Sec. III-E to validate whether the closed-loop dynamic model (38) achieves objective 2) in Problem 1. Note that Lemma 1 is only applicable to closed sets. Nevertheless, the closed-loop dynamic model (38) is defined on an open domain \(\mathcal{Z}_{\overline{\mathscr{A}_{i}}}\in\operatorname{int}\Omega^{|\overline{\mathscr{A}_{i}}|}\), which brings up challenges for the invariance analysis. In this paper, we proceed in an indirect manner by investigating the invariance of the closed set \(\Omega_{\epsilon}\) defined in (27) with a small \(\epsilon\), which leads to the following theorem.

**Theorem 1**.: _There always exists \(\epsilon_{0}\in\mathbb{R}_{+}\), such that for all \(\epsilon<\epsilon_{0}\), \(\Omega_{\epsilon}\neq\varnothing\) and \(\Omega_{\epsilon}\) is positively invariant for system (38)._

Proof.: The critical point is to solve the tangent cone \(\mathscr{C}_{\Omega_{\epsilon}}(z_{k})\) for any \(z_{k}\in\Omega_{\epsilon}\), \(k\in\mathcal{N}\), with given \(\epsilon\) and validate whether the trajectory admitted by (38) falls in \(\mathscr{C}_{\Omega_{\epsilon}}(z_{k})\). Inspired by Lemma 1, we just need to calculate \(\mathscr{C}_{\Omega_{\epsilon}}(z_{k})\) for \(z_{k}\in\partial\Omega_{\epsilon}\) since \(\mathscr{C}_{\Omega_{\epsilon}}(z_{k})=\mathbb{R}^{2}\) for all \(z_{k}\in\operatorname{int}\Omega_{\epsilon}\). Without loss of generality, we assume that \(z_{k}\) is closest to the boundary \(\partial\Omega\) among all agent positions \(z_{r}\), \(r\in\mathcal{N}\), i.e., we always assign \(\epsilon\) such that \(z_{k}\in\partial\Omega_{\epsilon}\) while \(z_{r}\in\Omega_{\epsilon}\), \(\forall\,r\in\mathcal{N}\), \(r\neq k\). According to Proposition 1, there always exists an \(\epsilon_{0}\in\mathbb{R}_{+}\), such that \(W(\mathcal{Z}_{\overline{\mathscr{A}_{k}}})>0\) for all \(\epsilon<\epsilon_{0}\). Also, \(\Omega_{\epsilon}\neq\varnothing\) is ensured if \(\epsilon_{0}<\min\limits_{j\in\mathcal{M}}\sup\limits_{\omega\in\Omega}h_{j}(\omega)\). Thus, we define the following function for \(z_{k}\in\partial\Omega_{\epsilon}\), \(\epsilon<\epsilon_{0}\), with an arbitrary vector \(\iota\in\mathbb{R}^{2}\),
\[\mathscr{V}_{\epsilon}(z_{k},\iota)=\frac{\overline{h}^{2}(z_{k})}{W_{k}}\iota^{\top}\frac{\partial V(\mathcal{Z}_{\overline{\mathscr{A}_{k}}})}{\partial z_{k}}, \tag{39}\]
where \(W_{k}\) is the brief form of \(W(\mathcal{Z}_{\overline{\mathscr{A}_{k}}})\), \(k\in\mathcal{N}\), and \(\overline{h}(z_{k})=\min\limits_{j\in\mathcal{M}}h_{j}(z_{k})\).
Substituting (32) into (39), we have
\[\mathscr{V}_{\epsilon}(z_{k},\iota)=\sum_{j=1}^{M}\left(\sum_{i\in\overline{\mathscr{A}_{k}}}\frac{\overline{h}^{2}(z_{k})}{h_{j}(z_{i})}\frac{\iota^{\top}}{W_{k}}\frac{\partial W_{i}}{\partial z_{k}}+\iota^{\top}a_{j}\frac{\overline{h}^{2}(z_{k})}{h_{j}^{2}(z_{k})}\right).\]
According to Proposition 1 and Property 3, we know that both \(\frac{\partial W_{i}}{\partial z_{k}}\), \(\forall\,i\in\overline{\mathscr{A}_{k}}\), and \(W_{k}\) are bounded. Thus, we know that \(\mathscr{V}_{\epsilon}(z_{k},\iota)\) has the following limit as \(\epsilon\to 0\),
\[\mathscr{V}(z_{k},\iota)=\lim\limits_{\epsilon\to 0}\mathscr{V}_{\epsilon}(z_{k},\iota)=\iota^{\top}a_{r}, \tag{40}\]
where \(r=\arg\min\limits_{j\in\mathcal{M}}h_{j}(z_{k})\) is the number of the edge to which \(z_{k}\) is closest. Recall that \(a_{r}\) is the normal vector of not only the \(r\)-th edge of \(\Omega\) but also the \(r\)-th edge of \(\Omega_{\epsilon}\) for all \(\epsilon<\epsilon_{0}\). Moreover, \(a_{r}\) is the outward normal, i.e., it points toward the exterior of \(\Omega\) and \(\Omega_{\epsilon}\). Then, we inspect the case when \(\iota=\dot{z}_{k}\in\mathbb{R}^{2}\),
\[\mathscr{V}(z_{k},\dot{z}_{k})=\dot{z}_{k}^{\top}a_{r} \tag{41}\]
which is the inner product of the system trajectory direction \(\dot{z}_{k}\) and the normal vector \(a_{r}\). The sign of \(\mathscr{V}(z_{k},\dot{z}_{k})\) indicates whether \(\dot{z}_{k}\) points toward the exterior of \(\Omega\) and \(\Omega_{\epsilon}\). Based on this, we can make a relation between the function \(\mathscr{V}(z_{k},\dot{z}_{k})\) and the tangent cone \(\mathscr{C}_{\Omega_{\epsilon}}(z_{k})\). Recalling the definition of the distance function \(\mathscr{D}\) in (25), it is not difficult to find, for any \(z_{k}\in\partial\Omega_{\epsilon}\) with any \(\epsilon<\epsilon_{0}\) and \(\dot{z}_{k}\in\mathbb{R}^{2}\),
\[\lim\limits_{\tau\to 0}\frac{\mathscr{D}(z_{k}+\tau\dot{z}_{k},\Omega_{\epsilon})}{\tau}>0\Leftrightarrow\mathscr{V}(z_{k},\dot{z}_{k})>0, \tag{42}\]
\[\lim\limits_{\tau\to 0}\frac{\mathscr{D}(z_{k}+\tau\dot{z}_{k},\Omega_{\epsilon})}{\tau}=0\Leftrightarrow\mathscr{V}(z_{k},\dot{z}_{k})\leq 0. \tag{43}\]
This indicates that the tangent cone \(\mathscr{C}_{\Omega_{\epsilon}}(z_{k})\) for any \(z_{k}\in\partial\Omega_{\epsilon}\) and \(\epsilon<\epsilon_{0}\) is
\[\mathscr{C}_{\Omega_{\epsilon}}(z_{k})=\left\{\dot{z}_{k}\in\mathbb{R}^{2}\,\middle|\,\mathscr{V}(z_{k},\dot{z}_{k})\leq 0\right\}. \tag{44}\]
Now, let us validate whether the trajectory direction \(\dot{z}_{k}\) admitted by (38) falls in the tangent cone \(\mathscr{C}_{\Omega_{\epsilon}}(z_{k})\). Substituting the closed-loop dynamics (38) into (39), we have
\[\mathscr{V}_{\epsilon}(z_{k},\dot{z}_{k})=-\frac{\gamma v_{0}\overline{h}^{2}(z_{k})}{W_{k}}\,\sigma(\mathcal{Z}_{\overline{\mathscr{A}_{k}}},\theta_{k})\,\rho\Big(\sigma(\mathcal{Z}_{\overline{\mathscr{A}_{k}}},\theta_{k})\Big). \tag{45}\]
Note that, as \(\epsilon\to 0\), i.e., as \(z_{k}\) approaches the boundary \(\partial\Omega\), the barrier term \(a_{r}W_{k}/\overline{h}^{2}(z_{k})\) dominates the gradient in (34), so that
\[\lim\limits_{\epsilon\to 0}\frac{\overline{h}^{2}(z_{k})}{W_{k}}\,\sigma(\mathcal{Z}_{\overline{\mathscr{A}_{k}}},\theta_{k})=r^{\top}(\theta_{k})a_{r} \tag{46}\]
and \(\lim\limits_{\epsilon\to 0}\frac{1}{\left|\sigma(\mathcal{Z}_{\overline{\mathscr{A}_{k}}},\theta_{k})\right|}=0\). Taking the limit of (45), we have
\[\lim\limits_{\epsilon\to 0}\mathscr{V}_{\epsilon}(z_{k},\dot{z}_{k})=\mathscr{V}(z_{k},\dot{z}_{k})=-\gamma v_{0}\left|r^{\top}(\theta_{k})a_{r}\right|\leq 0, \tag{47}\]
which indicates that the dynamic model (38) ensures
\[\dot{z}_{k}\in\mathscr{C}_{\Omega_{\epsilon}}(z_{k}),\ z_{k}\in\partial\Omega_{\epsilon},\ \forall\,\epsilon<\epsilon_{0}.
\tag{48}\]
According to Lemma 1, the condition (48) means that \(\Omega_{\epsilon}\) is invariant for \(z_{k}\), i.e., for any initial condition \(z_{k}(0)\in\Omega_{\epsilon}\), \(z_{k}(t)\in\Omega_{\epsilon}\) holds for all \(t\in\mathbb{R}_{+}\). Note that this generally holds for any \(k\in\mathcal{N}\) that is closest to the boundary \(\partial\Omega\). Thus, we claim that \(\Omega_{\epsilon}\) is positively invariant for system (38).

Theorem 1 indicates that there always exists a cluster of closed sets \(\Omega_{\epsilon}\subset\Omega\) that are positively invariant for the closed-loop system (38). This indicates that there always exists \(\epsilon\), such that for any initial state \(\mathcal{Z}(0)\in\Omega_{\epsilon}^{N}\), \(\mathcal{Z}(t)\in\Omega_{\epsilon}^{N}\subset\Omega^{N}\) holds for all \(t\in\mathbb{R}^{+}\), which satisfies the state-dependent constraint in (23). Therefore, both objectives 1) and 2) of Problem 1 are achieved by the proposed coverage controller (35). An illustration of \(\Omega_{\epsilon}\) being a positively invariant set of the system is shown in Fig. 3. The following subsection interprets the convergence of the multi-CSUR system to a LOC.

### _Convergence of the System Configuration to A LOC_

In Sec. III-D, we have introduced the condition of a LOC for the coverage control of multi-agent systems. A LOC \(\mathcal{Z}^{*}\) subject to (13) can be recognized as an equilibrium of the closed-loop system (38). The stability of this equilibrium is addressed by the following theorem.

**Theorem 2**.: _For the dynamics of the CSUR agents in (21) with the control law as in (35), the equilibrium \(\mathcal{Z}^{*}\) subject to (13) is asymptotically stable._

Proof.: We take the time derivative of the energy function \(V(\mathcal{Z})\) defined in (31) as follows,
\[\dot{V}(\mathcal{Z})=\sum_{k=1}^{N}\dot{z}_{k}^{\top}\frac{\partial V(\mathcal{Z}_{\mathscr{A}_{k}})}{\partial z_{k}}. \tag{49}\]
Substituting (38) into (49), we have
\[\begin{split}\dot{V}(\mathcal{Z})&=-\sum_{k=1}^{N}\gamma v_{0}\,\sigma(\mathcal{Z}_{\mathscr{A}_{k}},\theta_{k})\,\rho\Big(\sigma(\mathcal{Z}_{\mathscr{A}_{k}},\theta_{k})\Big)\\ &=-\sum_{k=1}^{N}\frac{\gamma v_{0}\Big|\sigma(\mathcal{Z}_{\mathscr{A}_{k}},\theta_{k})\Big|^{2}}{\left|\sigma(\mathcal{Z}_{\mathscr{A}_{k}},\theta_{k})\right|+\varepsilon}\leq 0.\end{split} \tag{50}\]
We notice that \(\dot{V}(\mathcal{Z})=0\) holds if and only if
\[\sigma(\mathcal{Z}_{\mathscr{A}_{k}},\theta_{k})=r^{\top}(\theta)\frac{\partial V(\mathcal{Z}_{\mathscr{A}_{k}})}{\partial z_{k}}=0,\ \forall\,k\in\mathcal{N}, \tag{51}\]
which contains the following conditions, namely \(\frac{\partial V(\mathcal{Z}_{\mathscr{A}_{k}})}{\partial z_{k}}=0\), or \(\frac{\partial V(\mathcal{Z}_{\mathscr{A}_{k}})}{\partial z_{k}}\neq 0\) but \(r(\theta)\) and \(\frac{\partial V(\mathcal{Z}_{\mathscr{A}_{k}})}{\partial z_{k}}\) are orthogonal. We use LaSalle's invariance principle [46] to verify under which condition the configurations \(\mathcal{Z}\) are stable. We take the time derivative of both sides of (51) and obtain
\[\dot{r}^{\top}(\theta)\frac{\partial V(\mathcal{Z}_{\mathscr{A}_{k}})}{\partial z_{k}}+r^{\top}(\theta)\frac{\partial}{\partial z_{k}^{\top}}\left(\frac{\partial V(\mathcal{Z}_{\mathscr{A}_{k}})}{\partial z_{k}}\right)\dot{z}_{k}=0,\ k\in\mathcal{N}. \tag{52}\]
A configuration \(\mathcal{Z}\) serving as a stable equilibrium must satisfy both conditions (51) and (52).
Note that a necessary condition for a stable equilibrium is \(\dot{z}_{k}=0\), which reduces (52) to
\[\dot{r}^{\top}(\theta)\frac{\partial V(\mathcal{Z}_{\mathscr{A}_{k}})}{\partial z_{k}}=0,\ \forall\,k\in\mathcal{N}. \tag{53}\]
Then, the only solution to (51) and (53) is \(\frac{\partial V(\mathcal{Z}_{\mathscr{A}_{k}})}{\partial z_{k}}=0\) for all \(k\in\mathcal{N}\), or \(\nabla V(\mathcal{Z})=0\), which is equivalent to condition (13), according to Proposition 4. Thus, any LOC given by (13) is an asymptotically stable equilibrium of the system.

Theorem 2 indicates that the closed-loop dynamic model of the multi-CSUR system in (38) asymptotically converges to a LOC with any initial conditions. Note that there may exist multiple LOCs in the target region and the stability of each LOC may not be global. Nevertheless, Theorem 2 guarantees that any initial condition must converge to some LOC in the end. Therefore, objective 3) of Problem 1 is also achieved. Three control parameters play important roles in our proposed coverage controller. \(\gamma\) is the control gain that adjusts the amplitude of the control input, \(\varepsilon\) is the boundary layer scalar that smooths the control inputs in the vicinity of zero, and \(Q\) is a gain matrix that tunes the coverage cost function. Increasing \(\gamma\) and \(Q\) and decreasing \(\varepsilon\) can improve the convergence speed of the system to the local optimal coverage solutions. A simulation case study on how these control parameters affect the performance of the system will be conducted in Sec. V-B.

Fig. 3: The illustration of \(\Omega_{\epsilon}\) being a positively invariant set of the system. For any states \(z_{1},z_{2}\in\Omega_{\epsilon}\), the directions of the system trajectories \(\dot{z}(t)\) (the colored arrows) are confined by the corresponding tangent cones (the colored regions). The gray arrows in the tangent cones indicate the feasible trajectory directions. For any interior state \(z_{1}\in\Omega_{\epsilon}\), the tangent cone at \(z_{1}\) is \(\mathbb{R}^{2}\) which allows arbitrary trajectory directions \(\dot{z}(t)\). The tangent cone of a state on the boundary \(z_{2}\in\partial\Omega_{\epsilon}\), however, only allows \(\dot{z}(t)\) pointing to the interior of \(\Omega_{\epsilon}\). Theorem 1 ensures that the \(\epsilon\) making \(\Omega_{\epsilon}\) invariant always exists.

## V Simulation Studies

In this section, we validate the performance of the proposed coverage controller in a series of simulation studies. We first test the effectiveness of the coverage controller for six CSUR agents with different initial conditions and control parameters. Then, we apply the controller to a larger system with more agents to verify its scalability. Finally, we conduct a comparison study to address the advantage over the conventional method in terms of avoiding infeasibility. All studies are simulated in MATLAB R2021a at a discrete sampling time of \(0.05\,\)s.

### _Method Test with Initial Conditions_

This study tests the performance of the proposed method for a system with six CSUR agents with different initial conditions. The target region \(\Omega\) is a \(4\,\)m \(\times\)\(2.8\,\)m rectangular region. The boundary functions \(h_{j}(\omega)\), \(j=1,2,3,4\), \(\omega\in\Omega\), are parameterized by \(a_{1}=\left[\,-1\,\,0\,\right]\), \(b_{1}=0\), \(a_{2}=\left[\,\,1\,\,0\,\right]\), \(b_{2}=4\), \(a_{3}=\left[\,\,0\,\,1\,\right]\), \(b_{3}=2.8\), \(a_{4}=\left[\,\,0\,\,-1\,\right]\), \(b_{4}=0\).
The linear speed and the nominal angular velocity of the CSURs are \(v_{0}=0.16\,\)m/s and \(\omega_{0}=0.8\,\)rad/s. Three different initial configurations are randomly generated and assigned to the CSURs, as shown in Tab. I, where \(\left[\,\zeta_{x}\,\,\zeta_{y}\,\right]^{\top}\) and \(\theta\) are the planar coordinate and the orientation of a CSUR. For all cases, the control parameters are selected as \(\gamma=1\), \(Q=I\), and \(\varepsilon=2\). The simulation results of this study are illustrated in Fig. 4. The trajectories of the robot positions, virtual centers, and Voronoi centroids of the three cases are presented in Fig. 4(a), Fig. 4(b), and Fig. 4(c), respectively. It can be seen that the trajectories of all agents (the virtual centers of the CSURs) are confined within the region for all time, which indicates the achievement of objective 2) of Problem 1. All virtual centers and their corresponding Voronoi centroids, both marked as ‘o’ but with different colors, coincide with each other ultimately, which verifies the ultimate achievement of optimal coverage. The coinciding points indicate the corresponding LOC. The CSURs ultimately orbit around these points at a radius of \(v_{0}/|\omega_{0}|=0.2\,\)m, which allows a low likelihood of collisions. The achievement of optimal coverage is also reflected in Fig. 4(d), Fig. 4(e), and Fig. 4(f), where the coverage cost decays to zero within \(100\,\)s for all initial conditions. Besides, the control inputs of all robots shown in Fig. 4(g), Fig. 4(h), and Fig. 4(i) are all strictly confined by \(|u_{k}(t)-\omega_{0}|<\gamma\omega_{0}=0.8\) for all agents and all time, which achieves objective 1) of Problem 1. Note that different initial conditions ultimately lead to different LOCs. They may also affect the convergence speed of the coverage cost. Therefore, we can conclude that the proposed coverage controller (35) achieves all three objectives of Problem 1 with different initial conditions.

### _The Influence of the Control Parameters_

This study evaluates the influence of the control parameters, namely the input gain \(\gamma\), the coverage gain \(Q\), and the boundary layer scalar \(\varepsilon\), on the performance of the proposed coverage controller. The size of the target region and the robot parameters \(v_{0}\), \(\omega_{0}\) are the same as those in Sec. V-A. The initial conditions of the agents are determined as Case # 2 in Tab. I. The simulation results with different control parameters are illustrated in Fig. 5. Note that we also compare the simulation results in Fig. 5 with Case # 2 of Fig. 4 since they have the same initial conditions. Similar to Sec. V-A, Fig. 5 indicates that optimal coverage is achieved for all cases with the trajectories of the virtual centers confined within the target region. All control inputs are restricted by \(|u_{k}(t)-\omega_{0}|<\gamma\omega_{0}\), although the bounds are different due to various \(\gamma\). Therefore, we can conclude that the proposed coverage controller (35) well solves Problem 1 with different control parameters. Comparing Fig. 5 with Case # 2 in Fig. 4, we notice that these parameters affect the control performance differently. Firstly, a large \(\gamma\) increases the convergence rate of the coverage cost but also causes chattering to the control inputs. This is because the system tends to become unstable as the control gain becomes over-large due to the discrete sampling. Secondly, an over-large \(\varepsilon\) may slow down the convergence to a LOC.
Thirdly, a large \(Q\) can effectively increase the convergence rate of the coverage cost without causing chatting to the control inputs. Therefore, we suggest only using \(\gamma\) to restrict the control inputs while increasing the value of the coverage gain \(Q\) to improve the convergence rate. The boundary layer scalar \(\varepsilon\) is suggested to be small to maintain a decent convergence rate while ensuring the smoothness of the control inputs. ### _Optimal Coverage of A Larger-Scale System_ In this study, we test the proposed coverage controller on a larger-scale multi-agent system that contains 100 CSURs. The coverage is performed on a \(800\,\)m\(\times\)\(600\,\)m rectangular region with the same boundary coefficients as Sec. V-A, except that \(b_{2}=800\) and \(b_{3}=600\). The linear speed and the nominal angular velocity of the CSURs are \(v_{0}=10\,\)m/s and \(\omega_{0}=2\,\)rad/s which correspond to a small orbit radius \(5\,\)m such that the CSURs are not likely to collide with each other. The control parameters are selected as \(\gamma=1\), \(Q=10\,I\), and \(\varepsilon=2\). The initial positions of the robots are randomly sampled from the target region and are not listed here. The simulation results are illustrated in Fig. 6. Fig. (a)a shows that the virtual centers of all CSURs ultimately coincide with the Voronoi centroids and Fig. (b)b indicates that the coverage cost decays to zero. Thus, optimal coverage is successfully achieved for this 100-agent system. Fig. (a)a also shows that all virtual centers are strictly confined within the target region. The control inputs are limited by \(|u_{k}(t)-\omega_{0}|<\gamma\omega_{0}=2\) according to Fig. (c)c. Thus, we can conclude that the proposed coverage controller is also effective for a large-scale multi-CSUR system. ### _A Comparison Study With the Conventional Method_ As mentioned in Sec. IV, the main advantage of our proposed coverage controller (35) over the conventional gradient \begin{table} \begin{tabular}{c|c|c c c c c} \hline \# Agent & 1 & 2 & 3 & 4 & 5 & 6 \\ \hline \multirow{3}{*}{\# 1} & \(\zeta_{x}\) & 0.2546 & 0.1247 & 1.793 & 0.3006 & 1.187 & 3.144 \\ & \(\zeta_{y}\) & 1.392 & 2.629 & 0.1781 & 0.4191 & 0.1445 & 0.0658 \\ & \(\theta\) & 3.060 & 3.160 & 4.610 & 3.030 & 4.500 & 4.680 \\ \hline \multirow{3}{*}{\# 2} & \(\zeta_{x}\) & 0.9549 & 0.8286 & 3.148 & 0.2219 & 0.1023 & 3.823 \\ & \(\zeta_{y}\) & 0.0310 & 2.702 & 0.4426 & 2.705 & 0.3783 & 0.7863 \\ & \(\theta\) & 6.130 & 3.690 & 2.610 & 3.370 & 4.060 & 0.8600 \\ \hline \multirow{3}{*}{\# 3} & \(\zeta_{x}\) & 0.8690 & 1.3810 & 3.610 & 0.7773 & 0.3674 & 0.4060 \\ & \(\zeta_{y}\) & 0.1436 & 2.6980 & 0.2723 & 2.726 & 2.610 & 0.2589 \\ \cline{1-1} & \(\theta\) & 4.760 & 4.560 & 4.390 & 4.650 & 1.430 & 1.340 \\ \hline \end{tabular} \end{table} TABLE I: The Initial Configurations of Cases # 1, # 2, and # 3 based controller (16) is the additional state-dependent constraints (23) that are critical to solving the feasibility issue for a multi-CSUR system. This subsection conducts a comparison study between these two methods to address the advantage of the proposed coverage controller. The detailed formulation of the conventional coverage controller is provided in [7], which corresponds to the following closed-loop dynamics, \[\dot{z}_{k}(t)=-\gamma\frac{\partial H(\mathcal{Z})}{\partial z_{k}} \tag{54}\] where \(\frac{\partial H(\mathcal{Z})}{\partial z_{k}}\) is calculated using (11). 
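To make this baseline concrete, the following Python sketch (an illustrative grid-based approximation, not the MATLAB code used in the studies) relaxes each virtual center towards its Voronoi centroid, i.e. along the descent direction that (54) follows for the standard locational cost with a uniform density; the step size and grid resolution are arbitrary illustrative choices.

```python
import numpy as np

def voronoi_centroids(Z, grid):
    """Assign each grid point to its nearest virtual center and return the
    per-agent Voronoi centroids (uniform density assumed)."""
    d = np.linalg.norm(grid[:, None, :] - Z[None, :, :], axis=2)   # (n_points, n_agents)
    owner = np.argmin(d, axis=1)
    return np.array([grid[owner == k].mean(axis=0) if np.any(owner == k) else Z[k]
                     for k in range(len(Z))])

def conventional_coverage(Z0, grid, step=0.05, iters=500):
    """Lloyd-style surrogate of (54): every center moves towards its Voronoi
    centroid, the direction of -dH/dz_k for the standard locational cost.
    Nothing constrains the centers to stay inside the region -- the feasibility
    gap that the state-dependent constraints (23) of the proposed controller close."""
    Z = Z0.astype(float).copy()
    for _ in range(iters):
        Z += step * (voronoi_centroids(Z, grid) - Z)
    return Z

# Usage on the 800 m x 600 m region with the initial positions of Tab. II:
# xs, ys = np.meshgrid(np.linspace(0, 800, 80), np.linspace(0, 600, 60))
# grid = np.column_stack([xs.ravel(), ys.ravel()])
# Z0 = np.array([[60.68, 301.0], [624.4, 43.43], [350.6, 161.5],
#                [579.2, 299.7], [782.5, 408.0], [430.3, 482.4]])
# print(conventional_coverage(Z0, grid))
```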
This study is conducted in a \(800\,\mathrm{m}\times 600\,\mathrm{m}\) rectangular region with six CSURs. For both controllers, we set the same velocity constants \(v_{0}=40\,\mathrm{m}\)/s and \(\omega_{0}=0.8\,\mathrm{rad}\)/s, the same initial positions as shown in Tab. II, and the same control gain \(\gamma=0.1\). For the conventional controller (54), the coverage cost \(H(\mathcal{Z})\) is defined as in (8) with \(\Phi(\omega)=1\), \(\omega\in\Omega\). For the proposed controller (35), the other control parameters are \(Q=I\) and \(\varepsilon=2\). The trajectories of the CSUR positions, virtual centers, and Voronoi centroids are illustrated in Fig. 7. Fig. (a)a clearly shows that one virtual center is about to cross the region boundary and move towards outside of the target region while the optimal coverage is not reached yet. The situation after this is not drawn since the Voronoi partition is no more feasible. Nevertheless, the proposed controller guarantees that all virtual centers are confined within the target region and ultimately coincide with the Voronoi centroids, as shown in Fig. (b)b. This clearly verifies that the proposed controller ensures the feasibility of the optimal coverage problem even though the conventional one does not under the same conditions. ## VI Experiment Validation In this section, we conduct an experimental study on real robot platforms to verify the applicability of the proposed method. The target region is a \(4\,\mathrm{m}\times 2.8\,\mathrm{m}\) indoor area, as shown in Fig. (a)a. We use six two-wheel unicycle mobile robots provided by the Arduino Engineering Kit @, as shown in Fig. (b)b, to serve as the CSURs. Each robot is attached with four infra-tracking markers such that its motion can be tracked by a Qualisys @ motion tracking system which captures the motion of the robots at a frequency of 300 Hz with 16 cameras deployed around the target region. A Lenovo Thinkpad laptop with an Intel core I5-6200U CPU and 8GB RAM, running with the Ubuntu 16.04 operating system, is used to receive the robot motion data from the tracking system and send control commands to the robots. Each robot is encapsulated by an independent thread on the laptop within the robotic operating \begin{table} \begin{tabular}{c|c c c c c} \hline \# Agent & 1 & 2 & 3 & 4 & 5 & 6 \\ \hline \(\zeta_{x}\) & 60.68 & 624.4 & 350.6 & 579.2 & 782.5 & 430.3 \\ \(\zeta_{y}\) & 301.0 & 43.43 & 161.5 & 299.7 & 408.0 & 482.4 \\ \(\theta\) & 2.394 & 0.414 & 1.810 & 5.715 & 1.341 & 2.841 \\ \hline \end{tabular} \end{table} TABLE II: The Initial Condition of the Comparison Study Fig. 4: The simulation results of the proposed coverage controller under different initial conditions: (a)-(c) are the trajectories of the CSUR positions \(\zeta_{i}(t)\) (thin solid lines), virtual centers \(z_{i}(t)\) (thick dotted lines), and the corresponding Voronoi centroids \(C(\mathcal{Z}_{\overrightarrow{\sigma_{x}}})\) (thick dashed lines), \(i\in\mathcal{N}\), where ‘x’ and ‘o’ respectively indicate the starting and ending points of the trajectories. (d)-(f) are the values of the coverage costs as time increases. (g)-(i) are the control inputs of the agents as time changes. system (ROS) framework with an update frequency of 100 Hz for movement control. The control commands include the constant-speed \(v_{0}=0.16\) m/s and the nominal angular velocity \(0.8\) rad/s which are converted to the motor commands for the robot wheels with a simple PD controller. 
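For illustration, the mapping from the commanded pair \((v_{0},u_{k})\) to wheel rates can be sketched as below for a generic differential-drive robot; the wheel radius and track width are hypothetical placeholders, as the kit geometry and the PD gains are not reported here.

```python
def wheel_speeds(v0, u, wheel_radius, track_width):
    """Map the constant linear speed v0 [m/s] and the commanded angular velocity
    u [rad/s] to left/right wheel angular rates [rad/s] (differential drive)."""
    v_left = v0 - 0.5 * track_width * u
    v_right = v0 + 0.5 * track_width * u
    return v_left / wheel_radius, v_right / wheel_radius

# Example with hypothetical geometry (not taken from the paper):
# left, right = wheel_speeds(0.16, 0.9, wheel_radius=0.035, track_width=0.09)
```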
The motion tracking system, the laptop, and the mobile robots are connected via a common wireless network. The adjacency relation among the robots is computed using the distributed algorithm introduced in [44, 45]. It is worth mentioning that the ROS network used to coordinate the control and measurement of the robots is not hard real-time and does not ensure a constant discrete sampling interval. Also, there exists communication delay on the network due to its limited bandwidth. Moreover, the linear speed and the nominal angular velocity of the mobile robots are not ideally constant due to friction forces and the dynamic features of the robot motors. All these factors introduce uncertainties into the experiment. Therefore, the main purpose of this experiment is to investigate the difference between the experiment and simulation results under the same conditions and to evaluate how the uncertainties affect the performance of the proposed coverage controller. For a fair comparison, the initial conditions and control parameters of the experiment study are the same as in the simulation study in Sec. V-A.

The results of this experiment study are illustrated in Fig. 9. We omit the trajectories of the robots, the virtual centers, and the Voronoi centroids in the transient stage in Fig. 9(a), Fig. 9(b), and Fig. 9(c) and only show the ultimate virtual centers, Voronoi centroids, and circular orbits. Besides, the background of these figures is filled with screenshots of the robot positions from a top-down perspective. It is clearly shown that all virtual centers coincide with their Voronoi centroids and all robots orbit around these coinciding positions, which indicates the ultimate achievement of optimal coverage. Fig. 9(d), Fig. 9(e), and Fig. 9(f) show that the coverage costs monotonically decay to zero for all three cases. The control inputs shown in Fig. 9(g), Fig. 9(h), and Fig. 9(i) are strictly confined by \(|u_{k}(t)-\omega_{0}|<\gamma\omega_{0}=0.8\), which indicates the satisfaction of the input saturation constraints. These observations clearly show that the proposed control method can well solve Problem 1 on real robot platforms. Comparing Fig. 9 with Fig. 4, it is noticed that the simulation and the experiment studies have different ultimate Voronoi partitions and LOCs, even under the same initial conditions and with the same control parameters. This is mainly due to the uncertainties of the real robots, such as the network delay, the friction forces, and the system noise. Achieving optimal coverage despite these uncertainties indicates the robustness of the proposed control method, even though the ultimate LOCs may be different. A video of the experiment is available at [https://youtu.be/NAvVDMRWqN8](https://youtu.be/NAvVDMRWqN8).

Fig. 5: The simulation results of the proposed coverage controller subject to different control parameters: (a)-(c) are the trajectories of the robot positions \(\zeta_{i}(t)\) (thin solid lines), virtual centers \(z_{i}(t)\) (thick dotted lines), and the corresponding Voronoi centroids \(C(\mathcal{Z}_{\mathscr{A}_{i}})\) (thick dashed lines) of the CSURs with different control parameters, \(i\in\mathcal{N}\), where ‘x’ and ‘o’ are respectively the starting and ending points of the trajectories. (d)-(f) are the values of the coverage cost functions \(V(\mathcal{Z}(t))\) as time changes. (g)-(i) are the control inputs \(u(t)-\omega_{0}\) as time changes.
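For completeness, a minimal sketch of the 100 Hz per-robot command thread described in the setup above is given below; it relies on the standard rospy API, and the topic name and controller callback are purely illustrative.

```python
import rospy
from geometry_msgs.msg import Twist

def run_robot_thread(robot_id, angular_velocity_cb, v0=0.16, rate_hz=100):
    """Publish (v0, u_k) commands for one robot at 100 Hz; angular_velocity_cb
    returns the saturated angular velocity u_k(t) from the coverage controller."""
    pub = rospy.Publisher("/robot_%d/cmd_vel" % robot_id, Twist, queue_size=1)
    rate = rospy.Rate(rate_hz)
    while not rospy.is_shutdown():
        cmd = Twist()
        cmd.linear.x = v0                       # constant forward speed
        cmd.angular.z = angular_velocity_cb(robot_id)
        pub.publish(cmd)
        rate.sleep()

# rospy.init_node("coverage_control") is assumed to be called once before the
# six per-robot threads are started.
```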
## VII Discussion In general, the coordination control of a multi-agent system with complex agent dynamics is a challenging problem. Additional challenges for optimal coverage control of a multi-CSUR system include the non-convex coverage metric function, the state- and input-dependent constraints, and the distributed realization. This paper provides the first feasible solution that solves all these issues. The main technical points of the proposed coverage controller can be summarized as follows. Firstly, to overcome the limitation of the conventional coverage metric that is intended for multi-SIR systems, we propose a novel coverage metric function for multi-CSUR systems. The gradient-based controller derived from this metric function ensures the ultimate achievement of optimal coverage and the satisfaction of state confinement. Secondly, a new communication standard allows the controller to be designed in a distributed manner. Thirdly, a Sigmoid function is used such that the control inputs satisfy the given saturation constraints. Another remark is about the applicability and generalizability of the proposed method to a wider range of practical cases. Elaborating the studies on every possible configuration is not realistic. Instead, a series of simulation and experiment studies in this paper have validated that the proposed method is effective for various initial conditions, control parameters, and number of agents. Also, the advantage and necessity of this method are addressed via a comparison study with the conventional control method. Besides, the experiment study indicates that the proposed method is still effective with the existence of uncertainties except that the ultimate coverage configuration may be changed. Thus, we can confirm the effectiveness and applicability of the proposed coverage controller in a generic sense. ## VIII Conclusion In this paper, we propose a novel optimal coverage controller for a type of multi-agent system with complex agent dynamics. We solve this non-trivial problem by proposing a novel coverage cost function and comprehensively using several theoretical tools including BLF, Lyapunov asymptotic stability, and the invariance theory. Decentralization of the controller is feasible by redefining the communication standard for the system. The effectiveness of the proposed controller and its advantage over the conventional method are validated via simulation and experiment studies. This work does not only solve a challenging problem but also can inspire the controller design of other coordinate control problems for a multi-agent system with complex agents, which is going to be investigated in future work. Also, our future work will incorporate more flexible collision avoidance into the controller design. Fig. 8: The experimental setup and the mobile robot. Fig. 6: The optimal coverage of a 100-agent robot team. Fig. 7: The trajectories of the CSUR positions \(\zeta_{i}(t)\) (thin solid lines), virtual centers \(z_{i}(t)\) (thick dotted lines), and the Voronoi centroids \(C(\mathcal{Z}_{\mathscr{A}_{i}})\) (thick dashed lines) of the CSUR, \(i\in\mathcal{N}\), where ‘x’ and ‘o’ are the starting and ending points of the trajectories. The orbit radius is determined as small to avoid collisions. ## Acknowledgment Qingchen Liu, Zengjie Zhang, and Nhan Khanh Le contribute equally to this paper. 
Liu led the project, proposed the main idea of involving state- and input-dependent constraints for the feasibility and control saturation problems, and specified the structure of this paper. He also contributed to the resource coordination, the related work review, and the technical solutions. Zhang was responsible for the main technical results and writing of this paper, including the preliminary formulation, the proposed coverage metric function, the stability and invariance proofs, and the distributed algorithm. He also provided the figures and the result analysis of the case studies. Le proposed the concept of using a BLF to address the state constraints and a scaling gain to handle the input constraint. His bachelor thesis was an important foundation of this work. He also carried out the main implementation of this work in terms of both simulation and experiments. The code and data for all simulation and experiment studies are published at [https://zenodo.org/record/7600131](https://zenodo.org/record/7600131).
2307.03602
Depth Estimation Analysis of Orthogonally Divergent Fisheye Cameras with Distortion Removal
Stereo vision systems have become popular in computer vision applications, such as 3D reconstruction, object tracking, and autonomous navigation. However, traditional stereo vision systems that use rectilinear lenses may not be suitable for certain scenarios due to their limited field of view. This has led to the popularity of vision systems based on one or multiple fisheye cameras in different orientations, which can provide a field of view of 180x180 degrees or more. However, fisheye cameras introduce significant distortion at the edges that affects the accuracy of stereo matching and depth estimation. To overcome these limitations, this paper proposes a method for distortion-removal and depth estimation analysis for stereovision system using orthogonally divergent fisheye cameras (ODFC). The proposed method uses two virtual pinhole cameras (VPC), each VPC captures a small portion of the original view and presents it without any lens distortions, emulating the behavior of a pinhole camera. By carefully selecting the captured regions, it is possible to create a stereo pair using two VPCs. The performance of the proposed method is evaluated in both simulation using virtual environment and experiments using real cameras and their results compared to stereo cameras with parallel optical axes. The results demonstrate the effectiveness of the proposed method in terms of distortion removal and depth estimation accuracy.
Matvei Panteleev, Houari Bettahar
2023-07-07T13:44:12Z
http://arxiv.org/abs/2307.03602v1
# Depth Estimation Analysis of Orthogonally Divergent Fisheye Cameras with Distortion- Removal ###### Abstract Stereo vision systems have become popular in computer vision applications, such as 3D reconstruction, object tracking, and autonomous navigation. However, traditional stereo vision systems that use rectilinear lenses may not be suitable for certain scenarios due to their limited field of view. This has led to the popularity of vision systems based on one or multiple fisheye cameras in different orientations, which can provide a field of view of 180x180 degrees or more. However, fisheye cameras introduce significant distortion at the edges that affects the accuracy of stereo matching and depth estimation. To overcome these limitations, this paper proposes a method for distortion-removal and depth estimation analysis for stereovision system using orthogonally divergent fisheye cameras (ODFC). The proposed method uses two virtual pinhole cameras (VPC), each VPC captures a small portion of the original view and presents it without any lens distortions, emulating the behavior of a pinhole camera. By carefully selecting the captured regions, it is possible to create a stereo pair using two VPCs. The performance of the proposed method is evaluated in both simulation using virtual environment and experiments using real cameras and their results compared to stereo cameras with parallel optical axes. The results demonstrate the effectiveness of the proposed method in terms of distortion removal and depth estimation accuracy. Divergent stereo vision, Fisheye camera, Distortion removal, Camera models, VOLUME XX, 2017 ## 1 Introduction Camera arrays systems have been extensively used in various fields, such as robotics, autonomous vehicles, and medical imaging, for depth estimation and 3D reconstruction. For instance, they have been utilized in NASA's rovers for navigation purposes for a considerable period [1, 2]. A conventional rover overview system comprises numerous pairs of cameras that aid in navigation and environmental evaluation. These camera pairs work together to create a depth map that corresponds to the distance and space around the rover. The Yandex autonomous delivery robot developed by Yandex is equipped with 4 ultra-wide-angle cameras located on its front, back, and sides, which offer an extensive panoramic view for the operator and the odometry algorithms[3]. Similarly, various car manufacturers incorporate camera-based surround-view systems to facilitate parking or autonomous driving, with camera positions varying between models and brands. Nonetheless, the general approach is to position cameras to maximize the coverage of the surrounding area. This has led to the popularity of fisheye lenses, which can provide a field of view of 180x180 degrees or more. However, fisheye lenses introduce significant distortion that affects the accuracy of stereo matching and depth estimation, unlike stereovision systems that rely on two images of the same scene, captured by cameras with parallel optical axes. Recently, several methods have been proposed to overcome the distortion issues associated with fisheye lenses. One of the most common methods was proposed for stereo fisheye camera systems aimed in one direction with parallel optical axes [4, 5]. Deep learning techniques have also been used to improve the accuracy and efficiency of stereo vision systems. For instance, convolutional neural networks (CNNs) have been applied to extract depth information from a pair of images [6]. 
Nevertheless, this method requires significant computational resources. Another method involves mounting two 245\({}^{\circ}\) cameras at opposite ends of a rigid rod and aiming them towards each other to achieve a circular depth perception volume with a 65\({}^{\circ}\) vertical field-of-view [7]. While this method produces a panoramic depth view with sufficient quality for autonomous navigation and UAV localization. However, it is difficult to implement for other types of robots [8]. The third method is the least explored so far and involves a "Special Stereo Vision System" developed by Zhang et al [9]. This system uses a system of four fisheye lenses cameras with more than 180\({}^{\circ}\) field of view placed at 90\({}^{\circ}\) angle to each other. The authors discussed calibration and the epipolar rectification method. However, they have not studied the depth estimation nor analyzed the accuracy of their method. Despite the advancements in stereo vision techniques, there is still a need for robust and accurate stereo vision systems using divergent fisheye lenses cameras. This paper proposes a method for distortion-removal and depth estimation analysis for stereovision system using orthogonally divergent fisheye cameras (ODFC). The proposed method uses two virtual pinhole cameras (VPC), each VPC captures a small portion of the original view and presents it without any lens distortions, emulating the behavior of a pinhole camera. The proposed method was validated in both virtual environment and real environment using real cameras. The results demonstrate the effectiveness of the proposed method in terms of distortion removal and depth estimation accuracy. The remainder of the paper is organized as follows: Section II models the Fisheye camera, Section III describes the distortion removal concept, Section IV presents both simulation and experimental results associated with depth analysis. Section V concludes the paper and discusses future work. ## II Fisheye Camera Models The difficulties encountered when using existing stereo vision algorithms in ultrawide-angle cameras are due to the nature of their optical systems. These cameras have in their basis a complex system of lenses. The features of this system make it possible to achieve very high viewing angles, but also cause aberrations and signature image distortions. To describe the projection properties of a wide range of such cameras, researchers have resorted to approximations called camera models. The camera model for a camera is a function that describes the transformations between points in three-dimensional space in the camera's reference frame (\(\mathbf{P}=[x_{c}\quad y_{c}\quad z_{c}]^{T}\)) and points on the image plane (\(\mathbf{p}=[u\quad v]^{T}\)). Most consumer cameras may be modelled with perspective projection, as in Fig.1, but wide field-of-view cameras require a more complicated description owing to the strong radial distortion. As the distortions are considered radially symmetrical, most models work in the domain of distance from pixel to the center of an image \(\mathbf{\rho}\) and the angle of incidence of a point in 3D space \(\mathbf{\theta}\). For the implemented system only front projection function of a fisheye model, commonly denoted as \(\pi_{f}(\cdot)\), is required. ### _Kannala-Brandt Model_ Kannala-Brandt's model [10] for lenses with radially symmetric distortion is implemented in OpenCV and many less popular libraries. 
The authors found that five terms of an odd power polynomial were sufficient to describe typical distortions as a relation between the angle of incidence and distance between projected pixel and the center of the image. Thus, radial relations of this model can be written with the following equation. \[\mathbf{\rho}=k_{1}\theta+k_{2}\theta^{3}+k_{3}\theta^{5}+k_{4}\theta^{7}+k_{5} \theta^{9}, \tag{2}\] where \(k_{1}\dots k_{5}\) - model parameters. This model can be calibrated using OpenCV and CamOdoCal libraries. ### _Mei Model_ The Mei model [11] is a more general version of the Geyer model [12]. It was originally designed to simulate catadioptric cameras more effectively, as it allows the use of different distortion functions to simulate mirrors of different shapes but has also proved to be highly suitable for wide-angle cameras. The model consists of projection rule (3) and distortion terms (4). \[\mathbf{m}_{\mathbf{u}}=\begin{bmatrix}X_{u}\\ Y_{u}\end{bmatrix}=\begin{bmatrix}\frac{X_{s}}{Z_{s}+\xi}\\ \frac{Y_{s}}{Z_{s}+\xi}\end{bmatrix}, \tag{3}\] where \(\xi\) - model parameter, \(X_{s},Y_{s},Z_{s}\) - normalized point coordinates. The distortion terms with \(k_{1},k_{2},p_{1},p_{2}\) as model parameters: \[\mathbf{m}_{\mathbf{d}}=\mathbf{m}_{\mathbf{u}}(k_{1}\rho+k_{2}\rho^{2})+\begin{bmatrix}2p_{1 }X_{u}Y_{u}+p_{2}(\rho+2X_{u}^{2})\\ 2p_{2}X_{u}Y_{u}+p_{1}(\rho+2Y_{u}^{2})\end{bmatrix}. \tag{4}\] Finally, the resulting pixel coordinates can be obtained as the sum of the terms: \[\begin{bmatrix}u\\ v\end{bmatrix}=\mathbf{m}_{\mathbf{u}}+\mathbf{m}_{\mathbf{d}}. \tag{5}\] This model may be calibrated using the CamOdoCal library. ### _Scaramuzza Model_ The Scaramuzza model [13], which is the basis of the Matlab Omnidirectional Camera Calibration Toolbox, is also Figure 1: A 3D point in camera frame expressed in \(z_{c}\) and \(\rho\) coordinates. _f(\(\rho\))_ represents a projection function in case of polynomial models, \(a_{0}\) is a focus distance. widely used. It doesn't have a closed form back projection ruleIt associates points in the image with their corresponding points in the camera coordinates, as follows: \[\begin{bmatrix}x_{c}\\ y_{c}\\ z_{c}\end{bmatrix}=\lambda\begin{bmatrix}u\\ v\\ a_{0}+a_{2}\rho^{2}+a_{3}\rho^{3}+a_{4}\rho^{4}\end{bmatrix}, \tag{6}\] where \(a_{0}\,...\,a_{4}\) - model parameters, \(\lambda=\rho_{c}/\rho_{i}\) is a scaling factor, proportional to the ratio of the distance from the point to the optical axis \(\rho_{c}\) and the distance from its pixel-projection to the center of the image \(\rho_{i}\). As \(\rho_{i}\) depends on yet unknown pixel coordinates, some approximation algorithm (e.g. Newton's method) has to be run on every point to obtain it and calculate the projection \[\begin{bmatrix}u\\ v\end{bmatrix}=\frac{1}{\lambda}\begin{bmatrix}x_{c}\\ y_{c}\end{bmatrix}. \tag{7}\] ### _Atan Model_ This model describes an ideal equidistant fisheye projection. It lacks the flexibility of other models because it has only one parameter - the field of view but serves as a good reference model as it is used in the simulated fisheye camera chosen for the experiments. The projection is expressed as follows: \[\begin{bmatrix}u\\ v\end{bmatrix}=\begin{bmatrix}\frac{f_{x}\theta}{\sqrt{y_{c}^{2}/x_{c}^{2}+1}} \\ \frac{f_{y}\theta}{\sqrt{x_{c}^{2}/y_{c}^{2}+1}}\end{bmatrix}. 
\tag{8}\] ## III Distortion-removal concept The most popular and well-performing [14] stereovision algorithms are based on two images of the same scene, captured by cameras with parallel optical axes. To achieve this with divergent fisheye images, a virtual pinhole camera (VPC) needs to be introduced. A VPC captures only a small region of the original view and presents it as a pinhole camera would - with no lens distortions. By correctly selecting the regions in the two images, it is possible to form a stereo pair compatible with existing stereo matching algorithms using two VPCs (Fig. 2). The process starts with identifying the desired VPC intrinsic parameters such as resolution and field of view. Then a blank image of the chosen resolution is generated and projected using the pinhole camera back projection function \(\pi_{p}^{-1}(\cdot)\) into a 3D space as described in \[[x_{c}\quad y_{c}\quad z_{c}]^{T}=[u\cdot z_{c}/f_{x}\quad v\cdot z_{c}/f_{y} \quad z_{c}]^{T}, \tag{9}\] where \(x_{c},y_{c},z_{c}\) - are coordinates in the camera frame, \(f_{x},\ f_{y}\) are focus distances. The resulting point cloud is then rotated towards the desired direction using a rotation matrix \(\mathbf{R}\). By using a fisheye camera model projection function \(\mathbf{\pi_{f}}(\cdot)\) to project the rotated point cloud, the resulting distorted image can be matched with every pixel in the original image. This process is depicted in Fig. 3. The resulting transformation from a VPC image pixel \(\mathbf{p_{p}}\) to the corresponding fisheye image pixel \(\mathbf{p_{i}}\) is \[\mathbf{p_{i}}=\mathbf{\pi_{f}}\left(\mathbf{R}\cdot\mathbf{\pi_{p}}^{-1}(\mathbf{p_{p}})\right), \tag{10}\] Running the process for every pixel in an image might be a computationally heavy process depending on the used model. To accelerate the initial projection, the process was parallelized and a lookup table was employed to store the computed results for each pixel in a memory structure. Figure 3: **Geometric principle of the non-distortion process. \(\partial_{\nu}X_{c}V_{c}Z_{c}\) is the fisheye camera reference frame; \(\partial_{\nu}X_{c}V_{p}Z_{c}\) is the local frame of a pointcloud obtained pinhole back projection \(\mathbf{v_{p}}\): \(R\) is the rotation of VPC relative to fisheye camera: \(\partial_{\nu}X_{c}^{\nu}Y_{c}Z_{c}\) is the local frame of rotated pointcloud \(\mathbf{v_{p}}\): \(\nu_{i}\) is the patch of a fisheye image \(u_{i}v_{i}\) containing pixels for distortion removal.** Figure 2: **Principle of stereopair forming.** These stored values could be reused for subsequent frames, eliminating the need for repeated calculations and enabling real-time performance. ## IV Results and discussions ### _Depth and depth error estimation_ Using the abovementioned models presented in section II with the proposed concept for distortion removal on fisheye images, it is possible to get a pair of images for stereo matching. The output of a stereo matching is a depth map of the perceived space. The accuracy of stereo cameras can be determined by how closely this depth map conveys real-world information about the distance to surfaces. From the resulting depth maps, 3D reconstruction of the scene can be performed. The result is represented as a point cloud, similar to that shown in Fig.4. For the purpose of accuracy estimation, the points outside the vicinity of the target plane are discarded. Next, 1000 random points are selected from the remaining points, normalizing the sample size for all distances. 
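A minimal Python sketch of this lookup-table construction is given below; it assumes an equidistant (Atan-type) fisheye model for \(\pi_{f}(\cdot)\) and a VPC principal point at the image centre, and the function name and parameter values are illustrative rather than the actual implementation.

```python
import numpy as np
import cv2

def build_vpc_lookup(vpc_size, vpc_focal, R, fisheye_focal, fisheye_center):
    """Precompute, for every VPC pixel, the fisheye pixel it samples from (Eq. (10))."""
    w, h = vpc_size
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    # Pinhole back projection (Eq. (9)) onto the plane z_c = 1.
    pts = np.stack([(u - w / 2.0) / vpc_focal,
                    (v - h / 2.0) / vpc_focal,
                    np.ones_like(u, dtype=float)], axis=-1)
    pts = pts @ R.T                                    # rotate towards the desired direction
    x, y, z = pts[..., 0], pts[..., 1], pts[..., 2]
    theta = np.arctan2(np.hypot(x, y), z)              # angle of incidence
    phi = np.arctan2(y, x)
    # Equidistant fisheye front projection: rho = f * theta.
    map_x = (fisheye_focal * theta * np.cos(phi) + fisheye_center[0]).astype(np.float32)
    map_y = (fisheye_focal * theta * np.sin(phi) + fisheye_center[1]).astype(np.float32)
    return map_x, map_y

# The maps are reused for every frame:
# map_x, map_y = build_vpc_lookup((640, 480), 350.0, R, 330.0, (960.0, 540.0))
# vpc_image = cv2.remap(fisheye_image, map_x, map_y, interpolation=cv2.INTER_LINEAR)
```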
The remaining points are used to calculate the error. It is done by measuring the standard deviation and variance of the depth points from their assumed positions. ### _Simulation results_ Reliably assessing accuracy can be problematic due to the influence of various factors. The primary sources of error in stereo cameras, as stated in [14], include sensor errors, measurement conditions, and properties of the observed surfaces. Sensor errors can result from inaccurate camera parameter selection and imperfections in optics, leading to incorrect distortion correction and systematic error in evaluating point coordinates in space. However, better optics and more precise calibration can minimize these effects. Measurement conditions, such as illumination, distance to the observed surface, and camera position, can also impact accuracy. Occlusion can occur in certain camera and object configurations, hindering depth computation, and increasing the distance between the stereo pair and the object can increase error. Lastly, the observed surfaces should have a pronounced texture and be lambert [15, 16]. To mitigate most of these problems, initial tests were conducted in the virtual environment of the Unity game engine. As a result, it is possible to place virtual cameras and targets precisely, conduct repeatable experiments, and control the environment. The only remaining variables in this setting are the distortion models and the distance to a target. The virtual nature of the experiment makes it possible to precisely know the position of the target object and accurately determine the error. The surface properties are controlled by using different textures on the target object and averaging the results over them. Since the greatest distortion in a fisheye image occurs at the edges of the image, and it's the setting in which the fields of view intersect most frequently, experimental stereo pair were constructed such that the angle between the main axes of the cameras is 90\({}^{\circ}\). Beside other advantages, the virtual environment enables to put several objects at the same coordinates. Thus, a pair of reference cameras with no distortion can be precisely aligned with VPCs and set to use the same parameters (see Fig. 5). Their accuracy will serve as reference for comparison (it is denoted as Reference). In the virtual fisheye camera setup 'lenses' had the same parameters, a field of view of 180\({}^{\circ}\), and a baseline of 20 cm. All the presented models have been calibrated on one set of calibration pattern images captured in the virtual environment. The calibration results are in Table I. As the virtual cameras are identical, only one set of parameters is required. 
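The accuracy measure just described can be summarized by the following Python sketch (a restatement for illustration; the plane tolerance and random seed are arbitrary choices):

```python
import numpy as np

def plane_depth_error(points, true_depth, rel_tolerance=0.05, n_samples=1000, seed=0):
    """RMS error and spread of reconstructed depths against a known target plane.

    points     -- (N, 3) reconstructed point cloud, z along the depth axis
    true_depth -- known distance to the target plane
    """
    z = points[:, 2]
    near = z[np.abs(z - true_depth) < rel_tolerance * true_depth]  # discard points far from the plane
    rng = np.random.default_rng(seed)
    sample = rng.choice(near, size=min(n_samples, near.size), replace=False)
    rms = float(np.sqrt(np.mean((sample - true_depth) ** 2)))
    return rms, float(sample.std()), float(sample.var())
```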
\begin{table} \begin{tabular}{|p{56.9pt}|p{56.9pt}|p{56.9pt}|p{56.9pt}|} \hline \hline \multirow{2}{*}{**Model**} & \multirow{2}{*}{**Software**} & \multicolumn{2}{c|}{**Mean Reprojection Error**} & \multicolumn{1}{p{56.9pt}}{**Parameters**} \\ \cline{3-5} & & & \(\xi\)=1.474, & \\ Mei & CamOdoCal & 0.102 & p1=1.66*10e-4, & p2=8*10e-5, \\ & & & k1=-0.208, k2=0.153 & \\ \hline \multirow{4}{*}{\begin{tabular}{} \end{tabular} } & \multirow{4}{*}{CamOdoCal} & \multirow{4}{*}{0.099} & k2=7.58*10e-4, & \\ & & & k3=-3.26*10e-4, & k4=4.03*10e-5, \\ & & & k5=-1.86*10e-6, & \\ \hline \multirow{4}{*}{ \begin{tabular}{} \end{tabular} } & \multirow{4}{*}{MATLAB} & \multirow{4}{*}{0.121} & a0=-345.1, & \\ & & & a1=-0.0011, & \\ & & & a2=-5.762*10e-7, & \\ & & & & a3=-1.398*10-9 \\ \hline \multicolumn{2}{|p{56.9pt}|}{Atan} & - & - & Fov=180o \\ \hline \end{tabular} \end{table} TABLE I: Calibration results for Mei, Kananala-Brandt, and Scaramuzza and Atan models Figure 4: Example of a raw point cloud Figure 5: Image comparison. Image with removed distortions (left) and a shot from a reference camera (right). Only under close inspection, only small differences can be observed. One way to evaluate the effectiveness of distortion removal was to perform image subtraction. This method involved taking the difference between a reference image (without distortion) and the processed image with removed distortion, resulting in a mask that revealed potential defects, as illustrated in Fig. 6. The mean pixel intensity can be used to quantify this difference. A depth-quality evaluation was performed, and the results are displayed in Fig. 7. The x-axis shows the depth to the plane in terms of the number of stereopair baseline lengths, and the y-axis shows the root-mean-square (RMS) error between the estimated depth and the true depth, known from the target placement in the simulated world. We examined how the RMS error of the depth estimation varied with the depth for different distortion models: Scaramuzza, Mei, and Kannala-Brandt, as well as the case without distortion. We used a second order polynomial regression model to fit the data. The vertical lines show the standard deviation of the depth distribution in the pointcloud. Each line represents a different model and is calculated from the average of all the textures. As can be seen from the figure, the best depth estimation result is naturally shown by the reference stereo pair. This is closely followed by the ideal model embedded in the virtual camera, which shows that even in the case of perfect correspondence between forward and backward projections, some information is lost or degraded. Of the models examined, the Kannala-Brandt model demonstrates the smallest error, almost identical to system with no distortion. Mei model performs substantially worse, while Scaramuzza model demonstrates the worse performance overall. These experiments prove that the proposed stereo vision system design is feasible, and the decrease in quality is not substantial. The best results are shown by the Mei and Kannala-Brandt models. ### Experimental results After the virtual experiments the next step is the testing of the system with real cameras. For this purpose, 2 off-the-shelf cameras were used with the following parameters: 1.45 mm F2.2 1/1.8 FOV 190\({}^{\circ}\) (AC123B0145IRM12MM) mounted at an angle of 90\({}^{\circ}\) in a 3D printed body. The base of the stereo pair is approximately 72 mm. An image of the stereo-vision system module is shown in Fig. 8. 
An example of a fisheye image produced by such a camera module is shown in Fig. 9. In the experiments the measurement condition error is minimized using artificial light, and the distance to the target object is controlled by markings on the experimental table. Similar to the virtual tests, three textures printed on a matte paper sheet were used to reduce the observed surface properties influence. Each camera has been individually calibrated against checkerboard pattern images, and the resulting model parameters are shown in Table II. Example of a fragment of a fisheye image and the distortion-corrected version of it is shown Fig. 10. It is notable that they contain voids caused by the truncated format of the original image. However, these voids did not interfere with further experimentation because they left a sufficient field of view unaffected. In the experimental setup, a stereo vision system module is affixed to the table's edge, and the table is marked with 25 cm increments for positioning the target surface, which is then moved accordingly. To i Figure 6: The difference in images increased the contrast and brightness for visibility. All images demonstrate only small defective areas concentrated around sharp edges. Scaramuzza demonstrates more bright areas than the other models, which can be explained by worse calibration quality. Figure 7: Depth quality evaluation of Mei, Kannala-Brandt and Scaramuzza models as well as, the reference virtual stereo cameras without distortion. surface is situated 50 cm away from the camera, the expected depth is also 50 cm. The depth-quality evaluation was performed, and the results are displayed in Fig.11. The x-axis shows the depth to the plane in terms of the number of stereopair baseline lengths, and the y-axis shows the root-mean-square (RMS) error between the estimated depth and the expected depth. We examined how the RMS error of the depth estimation varied with the depth for different distortion models: Scaramuzza, Mei, and Kannala-Brandt. We used a second order polynomial regression model to fit the data. The vertical lines show the standard deviation of the depth distribution in the pointcloud. Each line represents a different model and is calculated from the average of all the textures. As can be seen in Fig. 11, the best depth estimation result was achieved by Kannala-Brandt model (the smallest error), while Mei and Scaramuzza models perform similarly but with lower performances compared to Kannala-Brandt model. This result demonstrates the effectiveness of the Kannala-Brandt model for our proposed method for distortion removal for divergent stereo cameras. We conducted a simulation (virtual environment) and an experiment using a real divergent stereo camera to evaluate the RMS error of the depth estimation after applying distortion removal with Kannala-Brandt model. We compared these results with the RMS error of the depth estimation of conventional stereo cameras without camera divergence reported in [17], using the same camera parameters as our experiment. After analyzing the results shown in Fig. 12, it was evident that the conventional scenario displayed the lowest depth error estimation, as projected. This is due to the stereo cameras being aligned with the optical axis, producing images with minimal distortion. However, the simulated case, which used divergent stereo cameras, had a lower error rate compared to the experimental setup with the same specifications. 
This indicates that although attempts were made to minimize factors such as camera errors, measurement conditions, and surface properties in the experimental case, they still had an impact. These factors were eliminated in the simulated scenario, resulting in lower error rates. The findings substantiate the proposed method's validity for removing distortion. \begin{table} \begin{tabular}{|l|l|l|l|} \hline \hline Model & Mei & Kannala-Brandt & Scaramuzza \\ \hline Software & CamdoCal & CamdoCal & MATLAB \\ \hline MRE & 0.417 & 0.418 & 0.341 \\ \hline Parameters & \(\xi\)=-2404, & \(\xi\)=-3.78*10e-4, & a0=64,747, \\ (left) & p1-4.76*10e-4, & k34.48*10e-5, & a1=-64*10e-4, \\ & 4, & k4=-5.79*10e-3, & a2=-3.31*10e-7, \\ & p2-2.97*10e-4, & k5=-2.41*10e-3 & a3=-3.02*10-10 \\ & k2-3.01 & & \\ \hline Parameters & \(\xi\)=1.678, & k2=3.80*10e-4, & a0=68, \\ (right) & p1-4.54*10e-4, & k3=1.89*10e-3, & a1=-6.46*10e-4, \\ & 4, & \(\xi\)=-1.20*10e-2, & a3=-3.19*10e-7, \\ & p2-5.15*10e-3, & 3, & \\ & 4, & k5=-1.22*10e-10 \\ & k1=-0.115, & 4 & \\ & k2-3.01 & & \\ \hline \end{tabular} \end{table} TABLE II: Real camera calibration results for Mei, Kannala-Brandt, and Scaramuzza models Figure 8: Orthogonally divergent stereo vision system prototype Figure 10: a) fragment of the original fisheye image, b) the distortion corrected version. Figure 9: An image from one of cameras. The lens circle is not fully inscribed in the frame and is off-center, thereby reducing the effective area available for distortion removal. Vignetting around edges is also noticeable and can interfere with the search for matches. This increases the contribution of the sensor to the overall error. ## V Conclusion In this paper, a method for distortion removal was proposed for orthogonally divergent fisheye cameras. The performance of the proposed method was evaluated in both simulated and experimental environments, and the results were compared with stereo cameras that have parallel optical axes. The findings indicate that the proposed method effectively removes distortion and provides accurate depth estimation. Overall, the proposed method presents a viable solution for addressing lens distortion in stereo vision systems. Further work should focus on optimizing and improving the accuracy of the distortion correction method and developing a method for automatic stereo calibration for different camera placement configurations. There is also a need to test this system for SLAM applications.
2308.13294
Training normalizing flows with computationally intensive target probability distributions
Machine learning techniques, in particular the so-called normalizing flows, are becoming increasingly popular in the context of Monte Carlo simulations as they can effectively approximate target probability distributions. In the case of lattice field theories (LFT) the target distribution is given by the exponential of the action. The common loss function's gradient estimator based on the "reparametrization trick" requires the calculation of the derivative of the action with respect to the fields. This can present a significant computational cost for complicated, non-local actions like e.g. fermionic action in QCD. In this contribution, we propose an estimator for normalizing flows based on the REINFORCE algorithm that avoids this issue. We apply it to two dimensional Schwinger model with Wilson fermions at criticality and show that it is up to ten times faster in terms of the wall-clock time as well as requiring up to $30\%$ less memory than the reparameterization trick estimator. It is also more numerically stable allowing for single precision calculations and the use of half-float tensor cores. We present an in-depth analysis of the origins of those improvements. We believe that these benefits will appear also outside the realm of the LFT, in each case where the target probability distribution is computationally intensive.
Piotr Bialas, Piotr Korcyl, Tomasz Stebel
2023-08-25T10:40:46Z
http://arxiv.org/abs/2308.13294v2
# Training normalizing flows with computationally intensive target probability distributions ###### Abstract Machine learning techniques, in particular the so-called normalizing flows, are becoming increasingly popular in the context of Monte Carlo simulations as they can effectively approximate target probability distributions. In the case of lattice field theories (LFT) the target distribution is given by the exponential of the action. The common loss function's gradient estimator based on the "reparametrization trick" requires the calculation of the derivative of the action with respect to the fields. This can present a significant computational cost for complicated, non-local actions like _e.g._ fermionic action in QCD. In this contribution, we propose an estimator for normalizing flows based on the REINFORCE algorithm that avoids this issue. We apply it to two dimensional Schwinger model with Wilson fermions at criticality and show that it is up to ten times faster in terms of the wall-clock time as well as requiring up to 30% less memory than the reparameterization trick estimator. It is also more numerically stable allowing for single precision calculations and the use of half-float tensor cores. We present an in-depth analysis of the origins of those improvements. We believe that these benefits will appear also outside the realm of the LFT, in each case where the target probability distribution is computationally intensive. + Footnote †: journal: Computer Physics Communications ## 1 Introduction Monte Carlo simulations remain a very important computational tool in many areas ranging from social sciences, Bayesian data analysis, and inference to physics. In many cases to generate samples from a given target distribution one resorts to the construction of an associated Markov chain of consecutive proposals [1]. The only limiting factor of the approach is the statistical uncertainty which directly depends on the number of statistically independent configurations. Hence, the effectiveness of any such simulation algorithm can be linked to its autocorrelation time which quantifies how many configurations are produced before a new, statistically independent configuration appears. For systems close to phase transitions the increasing autocorrelation time, a phenomenon called critical slowing down, is usually the main factor which limits the statistical precision of outputs. The recent interest in machine learning techniques has offered possible ways of dealing with this problem. Ref. [2] proposed normalizing flows based on neural networks as a mechanism for generating independent configurations in lattice field theories (LFT) which can be used as proposals in the construction of the Markov chain. The new algorithm was hence called Neural Markov Chain Monte Carlo (NMCMC). For discrete statistical systems like e.g. the Ising model, autoregressive neural networks were used in the NMCMC sampling algorithm [3; 4; 5; 6; 7; 8]. Once the neural network is sufficiently well trained, one indeed finds that autocorrelation times are significantly reduced as was demonstrated in the context of the two-dimensional Ising model in Ref. [5]. Neural networks that build up the normalizing flows have to be trained, _i.e._ their weights should be tuned so that the model can approximate the desired probability distribution. 
The standard approach for achieving this is using the stochastic gradient descent (SGD) algorithm which requires the estimation of gradients of the loss function with respect to the neural network weights. The most commonly used estimator of the gradient is based on the so-called "reparametrization trick" (r.t.) [9]. It is straightforward to implement but requires the calculation of gradients of the target probability. If this probability is given by a complex formula this may lead to severe degradation of performance. In Ref. [10] we have proposed to use REINFORCE (RE.) algorithm for the gradient estimator. We have shown how it can be implemented in case of reversible1 normalizing flows and that it avoids calculating the derivative of the action. In there, we have applied this estimator to \(\phi^{4}\) LFT and while it had better convergence properties, the \(\phi^{4}\) action is very simple and did not bring out the full capabilities of this approach. The same implementation for RE as in Ref. [10] was later proposed also in Ref. [11]. In this contribution, we apply this estimator to the case of the 2D lattice Schwinger model with Wilson fermions. The fermionic action requires the calculation of the determinant of the Dirac operator, which is represented by a large (\(2L^{2}\times 2L^{2}\) for \(L\times L\) lattice) matrix, so avoiding propagating gradients through those calculations may prove beneficial. This is also probably the simplest model with dynamical fermions so it is often used as a testing ground for algorithms that eventually can be used for lattice QCD [12] making it an interesting model to study. We demonstrate that the RE. estimator is significantly faster than the r.t., which is currently the most commonly used gradient estimator. Already at \(L=12\) RE. outperforms the r.t. estimator and the difference grows quickly with \(L\), reaching a factor of 10 for \(L=24\). In addition, we show that the RE. requires much less memory which plays a role for larger systems sizes. The code used in this paper is available at [13]. Footnote 1: All flows are reversible, what we mean here is that reverse transformation can be efficiently implemented. This paper is organized as follows: in section 2 we present the Neural Markov Chain Monte Carlo algorithm and explain how it can be implemented in terms of normalizing flows. In section 3 we present RE. and r.t. gradient estimators that can be used to approximate the gradient of the loss function with respect to the model parameters. In section 4 we show how the RE. can be implemented in practice. In section 5 we introduce the 2D lattice Schwinger model and in section 6 we present a detailed comparison of both estimators for this model. A gives the details of the implementation. ## 2 Neural Markov Chain Monte Carlo Monte Carlo methods rely on random samples generated from some _target_ distribution \(p(\mathbf{\phi})\). Often, _e.g._ in lattice field theories, that distribution is complicated and depends on all degrees of freedom of the system, hence there are no methods for sampling this distribution directly and independently. Instead, the approach of choice is to construct the associated Markov chain of samples, giving rise to the so-called Markov Chain Monte Carlo approach. To be more precise, each step of the algorithm has two stages: in the first stage for the given configuration \(\mathbf{\phi}_{i}\), a new trial configuration \(\mathbf{\phi}_{trial}\) is proposed from the distribution \(q(\mathbf{\phi}_{trial}|\mathbf{\phi}_{i})\). 
In the second stage, the trial configuration is accepted with probability \(p_{a}(\mathbf{\phi}_{trial}|\mathbf{\phi}_{i})\) usually given by the Metropolis-Hastings acceptance probability [1] \[p_{a}(\mathbf{\phi}_{trial}|\mathbf{\phi}_{i})=\min\left\{1,\frac{p(\mathbf{\phi}_{trial} )}{q(\mathbf{\phi}_{trial}|\mathbf{\phi}_{i})}\frac{q(\mathbf{\phi}_{i}|\mathbf{\phi}_{trial })}{p(\mathbf{\phi}_{i})}\right\} \tag{1}\] In order to keep the acceptance rate high, typically the configuration \(\mathbf{\phi}_{trial}\) differs from \(\mathbf{\phi}_{i}\) only on a small subset of degrees of freedom, _e.g._ single lattice site. By construction, the consecutive samples generated by the MCMC algorithm are highly correlated due to small incremental changes needed at each step. On the contrary, in the Metropolized Independent Sampling (MIS) algorithm discussed in [14] one attempts to generate _independent_ samples from some auxiliary distribution \(q(\mathbf{\phi})\)_i.e._ \[q(\mathbf{\phi}_{trial}|\mathbf{\phi}_{i})=q(\mathbf{\phi}_{trial}) \tag{2}\] and then accept or reject it with the Metropolis-Hastings step, \[p_{a}(\mathbf{\phi}_{trial}|\mathbf{\phi}_{i})=\min\left\{1,\frac{p(\mathbf{\phi}_{trial} )}{q(\mathbf{\phi}_{trial})}\frac{q(\mathbf{\phi}_{i})}{p(\mathbf{\phi}_{i})}\right\}. \tag{3}\] The MIS algorithm also introduces autocorrelations because of non-zero rejection probability, however, they can be controlled by the similarity of the distributions \(q(\mathbf{\phi})\) and \(p(\mathbf{\phi})\), i.e. if \(q(\mathbf{\phi})\) is close enough to \(p(\mathbf{\phi})\) then the acceptance rate is close to one, and subsequently the autocorrelations can be substantially smaller then in the case of MCMC (see Ref. [5] for discussion). The difficulty of the MIS approach lies in the construction of the distribution \(q(\mathbf{\phi})\) which has to, at the same time, be as close to the target distribution \(p(\mathbf{\phi})\) as possible and allow for practical generation of configurations. Neural Markov Chain Monte Carlo proposes to employ machine learning techniques, notably neural networks, to _learn_ the distribution \(q(\mathbf{\phi})\)[2; 3; 4]. Hence, one assumes that \(q(\mathbf{\phi})\) can be represented by some appropriate model parametrized by some (very large) set of parameters \(\mathbf{\theta}\) \[q(\mathbf{\phi})=q(\mathbf{\phi}|\mathbf{\theta}).\] The parameters \(\mathbf{\theta}\) are tuned by minimizing a loss function that measures the difference between \(q(\mathbf{\phi}|\mathbf{\theta})\) and target distribution \(p(\mathbf{\phi})\). A natural choice for such loss function is the Kullback-Leibler divergence [15] \[D_{KL}(q|p)=\int\mathrm{d}\mathbf{\phi}\,q(\mathbf{\phi}|\mathbf{\theta})\left(\log q(\mathbf{ \phi}|\mathbf{\theta})-\log p(\mathbf{\phi})\right)=E[\log q(\mathbf{\phi}|\mathbf{\theta})- \log p(\mathbf{\phi})]_{q(\mathbf{\phi}|\mathbf{\theta})}, \tag{4}\] sometimes called _reversed_ K.-L. divergence in this context because the target probability is given as the second argument. It often happens that the target distribution \(p(\mathbf{\phi})\) is only known up to a normalizing constant, i.e. we only have access to \(P(\mathbf{\phi})\), \[P(\mathbf{\phi})=Z\cdot p(\mathbf{\phi}),\qquad Z=\int\mathrm{d}\mathbf{\phi}P(\mathbf{\phi}). \tag{5}\] The constant \(Z\) is typically called the _partition function_. 
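As an illustration of the accept/reject stage in Eqs. (1)-(3), the following PyTorch sketch runs the Metropolized Independent Sampling chain over a batch of proposals whose log-probabilities \(\log q\) and \(\log P\) have already been evaluated; the names are illustrative and this is not the code of Ref. [13].

```python
import torch

def mis_chain(log_q, log_P):
    """Metropolis-Hastings step for independent proposals, Eq. (3).

    log_q -- log q(phi_i) of each proposal under the model, shape (N,)
    log_P -- unnormalized log P(phi_i) = -S(phi_i), shape (N,)
    Returns, for every step, the index of the configuration retained by the chain.
    """
    n = log_q.shape[0]
    kept = torch.empty(n, dtype=torch.long)
    log_u = torch.log(torch.rand(n))
    current = 0
    kept[0] = 0
    for i in range(1, n):
        # log of the acceptance probability; the unknown constant Z cancels in the ratio
        log_alpha = (log_P[i] - log_q[i]) - (log_P[current] - log_q[current])
        if log_u[i] < log_alpha:   # accept the trial configuration
            current = i
        kept[i] = current
    return kept
```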
Inserting \(P\) instead of \(p\) into the Kullback-Leibler divergence definition we obtain the _variational free energy_ \[F_{q}=\int\mathrm{d}\mathbf{\phi}\,q(\mathbf{\phi}|\mathbf{\theta})\left(\log q(\mathbf{\phi} |\mathbf{\theta})-\log p(\mathbf{\phi})-\log Z\right)=F+D_{KL}(q|p), \tag{6}\] where \(F=-\log Z\) is the _free energy_. Since \(F\) is independent of \(\mathbf{\theta}\), minimizing \(F_{q}\) is equivalent to minimizing the full loss function \(D_{KL}\). We will use \(P\) and \(F_{q}\) instead of \(p\) and \(D_{KL}\) in what follows. Note that the possibility of directly estimating the free energy \(F\) is one of the additional strengths of this approach since \(F\) is very hard to access in the classical Monte Carlo simulation [16]. Normalizing flows are a particular model that allow parametrizing trial probability distributions \(q(\mathbf{\phi})\) over a space of configuration with continuous degrees of freedom. It may be defined as the tuple of functions [2; 17; 18] \[\mathbb{R}^{D}\ni z\longrightarrow(q_{pr}(z),\mathbf{\varphi}(z|\mathbf{\theta}))\in (\mathbb{R},\mathbb{R}^{D}), \tag{7}\] where the function \(q_{pr}(z)\) is the probability density defining a _prior_ distribution of random variable \(z\). \(\mathbf{\varphi}(z|\mathbf{\theta})\) has to be a _bijection_ which implies that if the input \(z\) is drawn from \(q_{pr}(z)\) then the output \(\mathbf{\phi}\) is distributed according to \[q(\mathbf{\phi}|\mathbf{\theta})=q_{z}(z|\mathbf{\theta})\equiv q_{pr}(z)\left|J(z|\mathbf{ \theta})^{-1}\right|,\quad\mathbf{\phi}=\mathbf{\varphi}(z|\mathbf{\theta}), \tag{8}\] where \[J(z|\mathbf{\theta})=\det\left(\frac{\partial\mathbf{\varphi}(z|\mathbf{\theta})}{ \partial z}\right) \tag{9}\] is the determinant of the Jacobian of \(\mathbf{\varphi}(z|\mathbf{\theta})\). For practical reasons, normalizing flows are constructed in such a way that the Jacobian determinant is relatively easy to compute. The variational free energy \(F_{q}\) defined in Eq. (6) can be rewritten in terms of \(q_{pr}(z)\), \(q_{z}(z|\mathbf{\theta})\) and \(\mathbf{\varphi}(z|\mathbf{\theta})\) as \[F_{q}=\int\mathrm{d}z\,q_{pr}(z)\left(\log q_{z}(z|\mathbf{\theta})-\log P(\mathbf{ \varphi}(z|\mathbf{\theta}))\right)=E\left[\log q_{z}(z|\mathbf{\theta})-\log P(\mathbf{ \varphi}(z|\mathbf{\theta}))\right]_{q_{pr}(z)}. \tag{10}\] Eq. (10) is known as the "reparametrization trick" because of the change of variables (reparameterization) from \(\mathbf{\phi}\) to \(z\)[9]. \(F_{q}\) can be approximated as \[F_{q}\approx\frac{1}{N}\sum_{i=1}^{N}\left(\log q_{z}(z_{i}|\mathbf{\theta})-\log P (\mathbf{\varphi}(z_{i}|\mathbf{\theta}))\right),\quad z_{i}\sim q_{pr}(z_{i}), \tag{11}\] where the \(\sim\) symbol denotes that each \(z_{i}\) is drawn from the distribution \(q_{pr}(z_{i})\). ## 3 Gradient estimators The training of the machine learning model is done with the stochastic gradient descent (SGD) method and requires the calculation of the gradient of \(F_{q}\) with respect to \(\mathbf{\theta}\). The gradient is estimated based on a random, finite sample (batch) of \(N\) configurations \(\{\mathbf{\phi}\}=\{\mathbf{\phi}_{1},\ldots,\mathbf{\phi}_{N}\}\). In the case of normalizing flows, this is pretty straightforward. 
We can directly differentiate the expression (11) to obtain the gradient estimator \(\mathbf{g}_{rt}[\{\mathbf{\phi}\}]\), \[\frac{\mathrm{d}F_{q}}{\mathrm{d}\mathbf{\theta}}\approx\mathbf{g}_{rt}[\{\mathbf{\phi}\}]\equiv\frac{1}{N}\sum_{i=1}^{N}\frac{\mathrm{d}}{\mathrm{d}\mathbf{\theta}}\left(\log q_{z}(\mathbf{z}_{i}|\mathbf{\theta})-\log P(\mathbf{\varphi}(\mathbf{z}_{i}|\mathbf{\theta}))\right),\quad\mathbf{z}_{i}\sim q_{pr}(\mathbf{z}_{i}). \tag{12}\] This derivative can be calculated by popular machine learning packages like PyTorch [19] using automatic differentiation [20]. While conceptually simple, this estimator has a considerable drawback, as it requires calculating the gradient of the distribution \(P(\mathbf{\phi})\) with respect to the configuration \(\mathbf{\phi}\), \[\frac{\partial}{\partial\mathbf{\theta}}\log P(\mathbf{\varphi}(\mathbf{z}_{i}|\mathbf{\theta}))=\left.\frac{\partial}{\partial\mathbf{\phi}}\log P(\mathbf{\phi})\right|_{\mathbf{\phi}=\mathbf{\varphi}(\mathbf{z}_{i}|\mathbf{\theta})}\frac{\partial\mathbf{\varphi}(\mathbf{z}_{i}|\mathbf{\theta})}{\partial\mathbf{\theta}}.\] In lattice field theories the probability \(P\) is given by the _action_ \(S(\mathbf{\phi})\), \[\log P(\mathbf{\varphi}(\mathbf{z}|\mathbf{\theta}))=-S(\mathbf{\varphi}(\mathbf{z}|\mathbf{\theta})), \tag{13}\] and so calculating the gradient of \(F_{q}\) requires the gradient of the action \(S\) with respect to the fields \(\mathbf{\phi}\). This may not pose large problems for, _e.g._, the \(\phi^{4}\) theory discussed in [10], where the action is just a polynomial in \(\mathbf{\phi}\). Other lattice field theories, however, notably Quantum Chromodynamics with dynamical fermions, may have much more complicated actions, including some representation of the non-local determinant of the fermionic matrix, and the calculation of the action gradient may be impractical. The REINFORCE algorithm relies on differentiating formula (6) directly, without any reparameterization [3; 21], \[\begin{split}\frac{\mathrm{d}F_{q}}{\mathrm{d}\mathbf{\theta}}&=\int\mathrm{d}\mathbf{\phi}\,\frac{\partial q(\mathbf{\phi}|\mathbf{\theta})}{\partial\mathbf{\theta}}\left(\log q(\mathbf{\phi}|\mathbf{\theta})-\log P(\mathbf{\phi})\right)\\ &\quad+\int\mathrm{d}\mathbf{\phi}\,q(\mathbf{\phi}|\mathbf{\theta})\frac{\partial}{\partial\mathbf{\theta}}\log q(\mathbf{\phi}|\mathbf{\theta}).\end{split} \tag{14}\] The last term in the above expression is zero because it can be rewritten as the derivative of a constant, \[E\left[\frac{\partial\log q(\mathbf{\phi}|\mathbf{\theta})}{\partial\mathbf{\theta}}\right]_{q(\mathbf{\phi}|\mathbf{\theta})}=\int\mathrm{d}\mathbf{\phi}\,\frac{\partial q(\mathbf{\phi}|\mathbf{\theta})}{\partial\mathbf{\theta}}=\frac{\partial}{\partial\mathbf{\theta}}\underbrace{\int\mathrm{d}\mathbf{\phi}\,q(\mathbf{\phi}|\mathbf{\theta})}_{1}=0.
\tag{15}\] The first term in the expression (14) can be further rewritten as \[\begin{split}\frac{\mathrm{d}F_{q}}{\mathrm{d}\mathbf{\theta}}&=\int\mathrm{d}\mathbf{\phi}\,q(\mathbf{\phi}|\mathbf{\theta})\frac{\partial\log q(\mathbf{\phi}|\mathbf{\theta})}{\partial\mathbf{\theta}}\left(\log q(\mathbf{\phi}|\mathbf{\theta})-\log P(\mathbf{\phi})\right)\\ &=E\left[\frac{\partial\log q(\mathbf{\phi}|\mathbf{\theta})}{\partial\mathbf{\theta}}\left(\log q(\mathbf{\phi}|\mathbf{\theta})-\log P(\mathbf{\phi})\right)\right]_{q(\mathbf{\phi}|\mathbf{\theta})},\end{split} \tag{16}\] and approximated as \[\frac{\mathrm{d}F_{q}}{\mathrm{d}\mathbf{\theta}}\approx\mathbf{g}_{RE}[\{\mathbf{\phi}\}]\equiv\frac{1}{N}\sum_{i=1}^{N}\frac{\partial\log q(\mathbf{\phi}_{i}|\mathbf{\theta})}{\partial\mathbf{\theta}}\left(\log q(\mathbf{\phi}_{i}|\mathbf{\theta})-\log P(\mathbf{\phi}_{i})\right), \tag{17}\] which defines another gradient estimator, \(\mathbf{g}_{RE}[\{\mathbf{\phi}\}]\). In practice, this estimator has a huge variance and one has to use some variance-reducing method [3; 10; 11; 21]. Following Ref. [3], we define the final version of this estimator as \[\mathbf{g}_{RE}[\{\mathbf{\phi}\}]=\frac{1}{N}\sum_{i=1}^{N}\frac{\partial\log q(\mathbf{\phi}_{i}|\mathbf{\theta})}{\partial\mathbf{\theta}}\left(s(\mathbf{\phi}_{i}|\mathbf{\theta})-\overline{s(\mathbf{\phi}|\mathbf{\theta})_{N}}\right), \tag{18}\] where \[s(\mathbf{\phi}|\mathbf{\theta})\equiv\log q(\mathbf{\phi}|\mathbf{\theta})-\log P(\mathbf{\phi})\quad\text{and}\quad\overline{s(\mathbf{\phi}|\mathbf{\theta})_{N}}=\frac{1}{N}\sum_{i=1}^{N}s(\mathbf{\phi}_{i}|\mathbf{\theta}). \tag{19}\] Contrary to \(\mathbf{g}_{rt}\), the \(\mathbf{g}_{RE}\) estimator is slightly biased, \[E\left[\mathbf{g}_{RE}[\{\mathbf{\phi}\}]\right]=\frac{N-1}{N}E\left[\mathbf{g}_{rt}[\{\mathbf{\phi}\}]\right]. \tag{20}\] The proof of this fact is presented in [10]. Of course, such a multiplicative bias does not play any role when the estimator is used in the gradient descent algorithm and is very small anyway when \(N\sim 10^{3}\). For all practical purposes, we can treat both estimators as unbiased, so any differences must stem from higher moments, most importantly from the variance. Although not much can be said about the variances of these estimators in general, we can show that for a perfectly trained model, _i.e._ when \(q(\mathbf{\phi}|\mathbf{\theta})=p(\mathbf{\phi})\), \[\text{var}\left[\mathbf{g}_{RE}[\{\mathbf{\phi}\}]\right]_{q(\mathbf{\phi}|\mathbf{\theta})=p(\mathbf{\phi})}=0. \tag{21}\] The proof is presented in [10]. As for the estimator \(\mathbf{g}_{rt}\), we cannot make any general claims as to the value of its variance, but in Ref. [10] we showed that it does not need to vanish even for \(q(\mathbf{\phi}|\mathbf{\theta})=p(\mathbf{\phi})\).

## 4 Eliminating action derivative

We notice that, contrary to \(\mathbf{g}_{rt}\), the estimator \(\mathbf{g}_{RE}\) does not require calculating the derivatives of \(P(\mathbf{\phi})\). This is due to the fact that we can first generate a configuration \(\mathbf{\phi}\) from the distribution \(q(\mathbf{\phi}|\mathbf{\theta})\) and then obtain its probability directly. In the case of normalizing flows, we do not have direct access to the function \(q(\mathbf{\phi}|\mathbf{\theta})\), since the probability of the configuration is determined simultaneously with the generation, by passing \(\mathbf{z}\) through the network (see Figure 1a).
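When \(q(\mathbf{\phi}|\mathbf{\theta})\) is directly available, the recipe of Eqs. (18)-(19) takes only a few lines. The toy sketch below uses a Gaussian trial distribution and a made-up target (all names are illustrative, not code from the paper) and never differentiates \(P\):

```
import torch

mu = torch.tensor(0.5, requires_grad=True)
log_sig = torch.tensor(0.0, requires_grad=True)

def log_P(phi):                 # unnormalized target, only ever evaluated, never differentiated
    return -0.5 * phi**2

with torch.no_grad():           # sample phi ~ q(phi|theta); the samples carry no gradient
    phi = mu + torch.exp(log_sig) * torch.randn(4096)

q = torch.distributions.Normal(mu, torch.exp(log_sig))
log_q = q.log_prob(phi)         # differentiable with respect to mu and log_sig
with torch.no_grad():
    s = log_q - log_P(phi)      # signal s(phi|theta) of Eq. (19)

loss = (log_q * (s - s.mean())).mean()
loss.backward()                 # the gradient of Eq. (18) lands in mu.grad and log_sig.grad
```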
However, by leveraging the reversibility of normalizing flows we can adapt the \(\mathbf{g}_{RE}\) estimator to that case (see Figure 1b). The \(\mathbf{g}_{RE}\) estimator requires the \(q(\mathbf{\phi}|\mathbf{\theta})\) function, and while it is not explicit in the normalizing flows formulation (7), it can be inferred from Eq. (8). Using the fact that the Jacobian determinant of the transformation \(\mathbf{\varphi}^{-1}(\mathbf{\phi}|\mathbf{\theta})\), \[\bar{J}(\mathbf{\phi}|\mathbf{\theta})\equiv\det\left(\frac{\partial\mathbf{\varphi}^{-1}(\mathbf{\phi}|\mathbf{\theta})}{\partial\mathbf{\phi}}\right),\] is the inverse of the Jacobian determinant of \(\mathbf{\varphi}(\mathbf{z}|\mathbf{\theta})\), \[\bar{J}(\mathbf{\phi}|\mathbf{\theta})=J(\mathbf{z}|\mathbf{\theta})^{-1},\] we can write \(q(\mathbf{\phi}|\mathbf{\theta})\) as \[q(\mathbf{\phi}|\mathbf{\theta})=q_{pr}(\mathbf{z}^{\prime})\left|\bar{J}(\mathbf{\phi}|\mathbf{\theta})\right|,\quad\mathbf{z}^{\prime}=\mathbf{\varphi}^{-1}(\mathbf{\phi}|\mathbf{\theta}). \tag{22}\]

Figure 1: Schematic picture of the two algorithms for gradient estimation discussed in the paper: a) reparametrization trick, b) REINFORCE. Double-line arrows represent the flow: upward-pointing arrows represent forward propagation, and downward-pointing arrows represent reversed propagation. Dashed arrows denote propagation which does not require gradient calculations.

Given that, the calculation of \(\mathbf{g}_{RE}\) using the auto-differentiation capabilities of modern machine learning frameworks would proceed as illustrated with the pseudo-code in Algorithm 1 and schematically in Figure 1b. This requires running the flow two times: forward to obtain \(\boldsymbol{\phi}\), then backward to calculate \(\boldsymbol{z}^{\prime}\), but the gradients have to be calculated only on the second pass. The stripped-down version of the actual Python code is presented in Listing 2 in Appendix A.

```
1:  # generate \(\boldsymbol{\phi}\)
2:  Switch off gradient calculations
3:  \(\boldsymbol{z}\sim q_{pr}(\boldsymbol{z})\)   # generate \(z\) from the prior distribution
4:  \(\boldsymbol{\phi}\leftarrow\boldsymbol{\varphi}(\boldsymbol{z}|\boldsymbol{\theta})\)   # forward pass
5:  # calculate the signal
6:  \(s\leftarrow\log q(\boldsymbol{\phi}|\boldsymbol{\theta})-\log P(\boldsymbol{\phi})\)
7:  # calculate \(\mathbf{g}_{RE}\)
8:  Switch on gradient calculations
9:  \(\boldsymbol{z}^{\prime}\leftarrow\boldsymbol{\varphi}^{-1}(\boldsymbol{\phi}|\boldsymbol{\theta})\)   # backward pass
10: \(q\leftarrow q_{pr}(\boldsymbol{z}^{\prime})\left|\det\left(\frac{\partial\boldsymbol{\varphi}^{-1}(\boldsymbol{\phi}|\boldsymbol{\theta})}{\partial\boldsymbol{\phi}}\right)\right|\)
11: \(loss\leftarrow\log q\times(s-\bar{s})\)
```
Algorithm 1: Calculation of the \(\mathbf{g}_{RE}\) estimator for normalizing flows. The resulting \(loss\) can be used for automatic differentiation. The hash symbol denotes comments.

## 5 Schwinger model

We compare the RE and r.t. estimators by simulating the two-dimensional Schwinger model with two flavours of Wilson fermions defined on an \(L\times L\) lattice. The action consists of two parts: the pure gauge plaquette action and the fermionic determinant, which, following Ref. [12], we calculate directly (using a built-in PyTorch function), \[S(U)=-\beta\sum_{x}\text{Re}\,P(x)-\log\det D[U]^{\dagger}D[U] \tag{23}\] where \(P(x)\) is the plaquette \[P(x)=U_{1}(x)U_{0}(x+\hat{1})U_{1}^{\dagger}(x+\hat{0})U_{0}^{\dagger}(x).
\tag{24}\] Here, \(U_{\mu}(x)\) is the link starting from \(x\) in direction \(\mu=0,1\) and \(\hat{\mu}\) is the displacement vector of one lattice site in the direction \(\mu\). The Wilson-Dirac operator is defined as \[D[U](y,x)^{\alpha\beta}=\delta(y-x)\delta^{\alpha\beta}-\kappa\sum_{\mu=0,1}\left\{[1-\sigma^{\mu}]^{\beta\alpha}U_{\mu}(y)\delta(y-x+\hat{\mu})+[1+\sigma^{\mu}]^{\beta\alpha}U_{\mu}^{\dagger}(y-\hat{\mu})\delta(y-x-\hat{\mu})\right\} \tag{25}\] where \(\sigma^{\mu}\) are the Pauli matrices. In our implementation, we have recreated the normalizing flow architecture described in Refs. [12; 22]. We did this on top of the code provided in [23], which implements the pure \(U(1)\) gauge model with non-compact projection as the plaquette coupling layer. We have provided our own implementation of the circular-splines plaquette coupling layer [24; 25], the more complicated masking patterns with \(2\times 1\) loops described in [12], and the fermionic action of the Schwinger model. More details can be found in Appendix A. The full code is provided in [13], which is, to our knowledge, the only open-access code for normalizing-flow sampling of the 2D Schwinger model. Following [12], we concentrate in this paper only on a single point in the phase space: \(\beta=2\) and \(\kappa=0.276\), where the model is expected to be at criticality, as this is where the most severe critical slowing down is expected.

## 6 Results

We start with a \(16\times 16\) lattice for direct comparison with [12]. To monitor the progress of training we have used the effective sample size (ESS), \[ESS\left[\{\boldsymbol{\phi}\}\right]=\frac{E\left[w(\boldsymbol{\phi})\right]_{q(\boldsymbol{\phi}|\boldsymbol{\theta})}^{2}}{E\left[w(\boldsymbol{\phi})^{2}\right]_{q(\boldsymbol{\phi}|\boldsymbol{\theta})}}\approx\frac{\left(\sum_{i=1}^{N}w(\boldsymbol{\phi}_{i})\right)^{2}}{N\sum_{i=1}^{N}w(\boldsymbol{\phi}_{i})^{2}},\qquad\boldsymbol{\phi}_{i}\sim q(\boldsymbol{\phi}_{i}|\boldsymbol{\theta}), \tag{26}\] where \[w(\boldsymbol{\phi})=\frac{P(\boldsymbol{\phi})}{q(\boldsymbol{\phi}|\boldsymbol{\theta})} \tag{27}\] are the so-called unnormalized importance ratios. It is easy to see that \(0\leq ESS\leq 1\) and \(ESS=1\) if and only if \(q(\boldsymbol{\phi}|\boldsymbol{\theta})=p(\boldsymbol{\phi})\), namely for a perfectly trained network. The results are presented in the left panel of Figure 2. The ESS was calculated after each gradient step on a small batch of 1536 configurations. Those values fluctuate wildly for the batch size we used. In order to obtain smoother curves we present an average of 500 consecutive measurements. We made several runs for each estimator and chose to present two of them as most typical. In the right panel of Figure 2 we show the acceptance rate of the Metropolis-Hastings step (3). This was calculated offline: for a given state of the neural network, we generated a Markov chain of \(2^{16}\) configurations and measured the acceptance rate. Measurements were repeated every 1000 gradient steps of the training process. We show only the results of the run with the best ESS. The first thing to notice is that the REINFORCE estimator leads to much more efficient training: the ESS and acceptance rate grow much faster with gradient steps than for the r.t. estimator. The ESS reached after 120k gradient steps is 4-5 times larger for RE than in the r.t. case. The training using r.t. was stopped after 40k gradient steps because no further improvement was observed during the training.
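For reference, the ESS of Eq. (26) can be evaluated from a batch of log-probabilities in a few lines. The following sketch works with log-weights for numerical stability; it is illustrative and not the code used to produce the figures:

```
import torch

def ess(log_p, log_q):
    # Effective sample size of Eq. (26) from log P(phi_i) and log q(phi_i|theta).
    log_w = log_p - log_q                 # log of the unnormalized importance ratios, Eq. (27)
    w = torch.exp(log_w - log_w.max())    # the common shift cancels in the ratio below
    return (w.sum() ** 2 / (len(w) * (w ** 2).sum())).item()
```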
The superiority of RE in terms of training efficiency was already observed in [10] for the \(\phi^{4}\) theory and was attributed to the smaller variance of this estimator. Please note, however, that the training efficiency may depend on the actual physical parameters of the model and may vary. Another possible cause is the numerical instability of the r.t. estimator. In Ref. [12], double precision was used for all but the neural-network evaluations. As our aim was not to reproduce exactly the results of that paper, but to compare two different gradient estimators, we did all our calculations in single precision. We did not encounter any problems while training using the REINFORCE estimator. We also tried the automatic mixed precision (amp) features of the PyTorch library, which enable the use of tensor cores on the GPU with half-float precision. This was not possible for \(\mathbf{g}_{rt}\) as it crashed almost immediately. Crashes also happened when using amp with the \(\mathbf{g}_{RE}\) estimator, but very rarely. Nevertheless, because of the possibility of encountering crashes and the fact that the speed-up was only around 20% (see table 1), it is probably not worthwhile to use amp in this situation. After \(\sim 120k\) gradient updates we achieved an acceptance rate of 22% and an autocorrelation time of \(\sim 9\) Monte Carlo steps for the chiral condensate, \[\langle\bar{\psi}\psi\rangle=\frac{1}{V}\operatorname{Tr}D^{-1}[U]. \tag{28}\] In figure 3 we present an excerpt from the Monte Carlo history for the chiral condensate and \[\sigma=\operatorname{sign}(\operatorname{Re}\det D). \tag{29}\] The value of \(\sigma\) is positive (negative) for even (odd) topological sectors, and the changes in its value are correlated with tunneling events [12]. We include this plot to show that the algorithm is not "frozen" in one of the topological sectors. We can see some small "bridges" characteristic of the metropolized independent sampler with low acceptance, where the algorithm is stuck at a single configuration and new proposals get rejected. These bridges are responsible for the autocorrelation time. Due to the improvements presented above, the REINFORCE estimator allows us to simulate bigger systems, and we have also tried a \(24\times 24\) lattice. We present the results for the ESS and the acceptance rate as a function of gradient steps in figure 4. We do not include \(\mathbf{g}_{rt}\) in the comparison considering its poor performance, which would only be

Figure 3: An excerpt from the Monte Carlo history for \(\sigma\) (upper panel) and the chiral condensate (lower panel) for \(L=16\). Series of non-accepted configurations (bridges) that are over 100 in length are marked in red.

Figure 2: Training history for the Schwinger model on a \(16\times 16\) lattice at criticality \(\beta=2.0\), \(\kappa=0.276\). Each gradient step was calculated on a batch of \(3\times 512\) samples (the batch was split into three parts to fit on the GPU). Left: the effective sample size (ESS) defined in eq. (26) as a function of the number of gradient steps for the two gradient estimators. Red curves were obtained using the REINFORCE estimator with automatic mixed precision (amp), which enables the use of tensor cores on the GPU with half-float precision. We present the history of two different runs for each estimator. Right: the acceptance rate of the MCMC algorithm calculated for a particular state of the network after a given number of gradient steps.
An excerpt from the Monte Carlo history for \(L=24\) is presented in figure 5. Bridges are much longer than for \(L=16\) as the acceptance is smaller \(\sim 8\%\) and, in consequence, the autocorrelation time is much larger \(\sim 68\), but still, the algorithm is able to explore many topological sectors. The significant advantage of the RE estimator over r.t. visible in figure 2 is probably dependent on the parameters of the Schwinger model (\(\beta\) and \(\kappa\)) as well as the hyper-parameters used in training and requires further studies. However, this paper is focused on purely computational advantages of the RE estimator. These are not dependent on neither, the parameters of the Schwinger model, nor the hyper-parameters and are presented in the following sections. ### Timing Due to the fact that the RE. algorithm does not require propagating the gradient through the complicated determinant, we expect it to also reduce the time required for the evaluation of one gradient step. To check this, we compared the wall clock time of 100 gradient steps during the training for r.t. and RE.. The times of the latter estimator were measured separately for amp features on and off. The batch size was 1536, but it was split differently depending on the lattice size to fit on a single GPU. Results are shown in table 1 for different lattice sizes from \(L=8\) to \(L=24\). We see that the REINFORCE algorithm starts to outperform the reparameterization trick at \(L=12\). At \(L=24\) one gradient step with the RE estimator is almost nine times faster than with the r.t.. Activating the amp features we gain an additional \(15-20\%\) speed-up for RE.. In the case of r.t., amp cannot be used due to numerical instability. To see why RE. is faster than r.t. we have performed detailed timing measurements on a single call to loss function and subsequent backward propagation using CUDA event timers. The results are summarized in table 2. The "loss" column presents timings for one call to the loss function grt_loss or gre_loss (see listings 1 and 2 ). The "back." columns present timings of one call to the.backward() method. We see that RE. is slightly slower than r.t. in calculating the loss (which is understandable since it requires an additional pass through the network). On the other hand, the backward propagation part is several times faster in RE. For the r.t. case, the backward propagation is the bottleneck of the algorithm, especially at larger lattice sizes. In the case of RE, the timings of the loss computation and backward propagation are comparable. The difference in backward propagation timing between the two gradient estimators can be attributed to the size of the computational graph constructed during the forward pass. When calculating any expression that involves tensors requiring gradient computations, PyTorch during the forward pass constructs a directed acyclic graph (DAG) containing the information needed to calculate gradients during the subsequent backward pass [26]. The DAG stores not only the operations needed to be performed but also all partial results (tensors) when needed (see next section). Figure 4: Training history for the Schwinger model on a \(24\times 24\) lattice at criticality \(\beta=2.0\), \(\kappa=0.276\). Each gradient step was calculated on a batch of \(4\times 384\) samples (the batch was split into four parts to fit on the GPU). Left: the effective sample size (ESS) defined in eq. (26) as a function of the number of gradient steps for the REINFORCE gradient estimator. 
Red curves were obtained using REINFORCE estimator with automatic mixed precision (amp) which enabled the use of tensor cores on GPU using half-float precision. We present the history of two different runs for each estimator. Right: the acceptance rate of the MCMC algorithm calculated for a particular state of the network after a given number of gradient steps. \begin{table} \begin{tabular}{|c c c|c|c c|c c|} \hline \hline & & & r.t. & \multicolumn{2}{c|}{RE.} & \multicolumn{2}{c|}{amp} \\ \hline L & bs. & nb. & t[s] & t[s] & sp. & t[s] & sp. \\ \hline \hline 8 & 1536 & 1 & 113 & 137 & 0.82 & 120 & 0.94 \\ \hline 12 & 1536 & 1 & 242 & 189 & 1.28 & 159 & 1.52 \\ \hline 16 & 768 & 2 & 784 & 331 & 2.37 & 285 & 2.75 \\ 16 & 1536 & 1 & & 268 & 2.93 & 215 & 3.65 \\ \hline 20 & 512 & 3 & 2411 & 527 & 4.50 & 460 & 5.24 \\ 20 & 768 & 2 & & 450 & 5.36 & 372 & 6.48 \\ \hline 24 & 384 & 4 & 6496 & 786 & 8.26 & 678 & 9.58 \\ 24 & 512 & 3 & & 690 & 9.41 & 598 & 10.86 \\ \hline \end{tabular} \end{table} Table 1: Time (t) in seconds required for 100 gradients steps. Missing entries in the table could not be computed because these \(L\) and batch size combinations did not fit into GPU memory. ”bs.” stands for batch size as sent to GPU, ”nb.” stands for the number of such batches used to calculate the gradient and ”sp.” is the speed-up factor compared to the r.t. estimator. When a r.t. entry is missing the speed-up factor was calculated relative to the best r.t. time on the same lattice size, those sp. factors are underlined. amp denotes the use of automatic mixed precision entailing the use of tensor cores. Timings were measured on NVIDIA A100-SXM4-40GB GPU. Figure 5: An excerpt from the Monte-Carlo history for \(\sigma\) and chiral condensate for \(L=24\). Series of non-accepted configurations (bridges) that are over 100 in length are marked in red. Thus, each operation (function) in PyTorch has a forward and backward implementation. Forward implementation evaluates this function and optionally registers in the DAG a node with backward implementation, storing intermediate results if necessary. Backward implementation calculates the gradient of this function with respect to its inputs using stored intermediate results. For example, if calculating \(\mathbf{x}^{2}\) PyTorch would add a node to the graph calculating \(2*\mathbf{x}\) and store \(\mathbf{x}\) in this node. We have counted the number of nodes in DAG for different values of \(L\) and the results are presented in table 3. As we can see in the case of the REINFORCE algorithm, the number of nodes is constant. This is because all operations relevant to the gradient computation were implemented as calls to PyTorch functions without any loops depending on the size of the lattice. The architecture of the neural networks was also not changed. This resulted in exactly the same graph with the only difference being that each node of the graph would process bigger tensors in case of bigger lattices or bigger batches. In the case of the reparameterization trick, we can see that the size of the tree grows with the lattice size. In fact, the number of nodes is _exactly_\(36L^{2}+12036\). This \(\propto L^{2}\) dependence comes from the gradient calculations with the fermionic determinant. While this is also a single call to a PyTorch function, it first requires an assembly of the Dirac matrix, which is done using for loops. Each iteration of the loops adds its operation to the DAG resulting in the growing size. 
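The mechanism is easy to reproduce in isolation. The snippet below is a schematic illustration and not the actual Dirac-operator assembly: filling a matrix element by element in a Python loop registers one graph node per assignment, while a single vectorized call adds one node regardless of the number of entries.

```
import torch

theta = torch.randn(4, requires_grad=True)

# Element-wise assembly: each slice assignment registers its own CopySlices node,
# so the graph grows with the number of assigned entries (here 4, ~L^2 in the Dirac case).
D_loop = torch.zeros(4, 4)
for i in range(4):
    D_loop[i, (i + 1) % 4] = theta[i]

# Vectorized assembly: a single index_put adds one node independently of the lattice size.
idx = torch.arange(4)
D_vec = torch.zeros(4, 4).index_put((idx, (idx + 1) % 4), theta)

print(type(D_loop.grad_fn).__name__, type(D_vec.grad_fn).__name__)
```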
While the size of the Dirac operator is \(\propto L^{4}\), it has only \(\propto L^{2}\) non-zero elements thus explaining the scaling of the tree growth. Using the PyTorch feature allowing to register callbacks on the nodes of the computational graph we have measured the time taken to execute each individual node. The results are presented in the table 4 where we list the top 10 time-consuming functions for each algorithm. For each function we present the number of times this function was called (num. op.) _i.e._ the number of nodes in the DAG performing this function and the total time (in milliseconds) taken by all these calls together. The functions presented in this table are PyTorch internal (backward) functions implementing gradient computations. One can deduce which function gradient was calculated by omitting the Backward0 suffix. We have found out that most of the time difference between the two algorithms can be attributed to the CopySlices function which takes over 20 times more time in r.t. than in the RE case. The LinalgSlogdetBackward0 function responsible for determinant calculation has a negligible effect. This function is not present at all in RE., as this estimator does not require backpropagating through the determinant. The CopySlices function is the only function in the table that does not have the Backward0 suffix. That, and the name, would again point out to the assembly of the Dirac matrix being the most time-consuming operation. Please note that those measurements were taken on a different GPU so they cannot be directly compared to the results from Tables 1 and 2. This breakdown is of course characteristic of only this particular model but we are of the opinion that such analysis may be beneficial in general. The code for obtaining such measurements is also provided in [13]. ### Memory Another advantage of the REINFORCE estimator is that it utilizes less memory. This is again due to the workings of the torch.autograd, the module responsible for automatic differentiation. As mentioned before, the computational graph created during the forward pass stores in each node partial results required for gradient computation on \begin{table} \begin{tabular}{|r r|r r r|r r r|} \hline \hline L & b.s. & \multicolumn{3}{c|}{r.t.} & \multicolumn{3}{c|}{RE.} \\ \hline & & total & loss & back. & total & loss & back. \\ \hline \hline 8 & 1536 & 0.97 & 0.34 & 0.63 & 1.20 & 0.61 & 0.59 \\ \hline 12 & 1536 & 2.25 & 0.48 & 1.76 & 1.70 & 0.81 & 0.89 \\ \hline 16 & 768 & 3.85 & 0.53 & 3.32 & 1.57 & 0.82 & 0.75 \\ 16 & 1536 & & & & 2.48 & 1.14 & 1.35 \\ \hline 20 & 512 & 7.98 & 0.67 & 7.03 & 1.69 & 0.93 & 0.75 \\ 20 & 768 & & & & 2.14 & 1.10 & 1.05 \\ \hline 24 & 384 & 16.27 & 0.84 & 15.43 & 1.85 & 1.08 & 0.77 \\ 24 & 512 & & & & 2.23 & 1.24 & 0.99 \\ \hline \hline \end{tabular} \end{table} Table 2: Timings in seconds for one call to loss function and backward propagation. ”loss” column presents timings for one call to the loss function grt_loss or gre_loss (see listings 1 and 2 ). ”back.” presents timings to one call to.backward() method. Timings were measured on NVIDIA A100-SXM4-40GB GPU. Missing entries could not be computed because of the memory constraints. the backward pass. Using the hook mechanism of PyTorch we have measured the number of different tensors stored in the graph and their total size. 
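One way to carry out such a measurement is through PyTorch's saved-tensor hooks; the sketch below illustrates the idea on a stand-alone example and is not necessarily the exact mechanism used for the numbers reported below.

```
import torch

saved = []

def pack(t):
    saved.append(t)   # record every tensor stashed for the backward pass
    return t

def unpack(t):
    return t

with torch.autograd.graph.saved_tensors_hooks(pack, unpack):
    x = torch.randn(256, 256, requires_grad=True)
    y = (x @ x).sin().sum()

n_bytes = sum(t.numel() * t.element_size() for t in saved)
print(f"{len(saved)} tensors saved for backward, {n_bytes / 2**20:.2f} MiB")
```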
The results are presented in table 5, \(M_{1}\) is the total amount of memory taken by tensors stored in the computational graph, \(M_{2}\) is the memory usage as reported by the torch.cuda.max_memory_allocated function and n.t. is the number of different tensors stored in the graph. We can see that in the case of the REINFORCE algorithm, the number of tensors does not grow with the lattice size, contrary to the reparameterization trick. This is consistent with what we have said already about the DAG in the previous section. The difference in memory usage is of the order of 20% for \(L=16\) and over 50% for \(L=24\). The numbers reported in table 5 are only a lower bound on the memory usage, in practice total allocated memory as reported by nvidia-smi utility can be substantially higher. This reduction in the allocated memory translates directly into the gains in speed. This can be seen in table 1, lower memory usage resulted in larger batches fitting on the GPU which in turn resulted in faster gradient evaluation. ## 7 Summary In this paper, we have advocated the use of the REINFORCE type gradient estimator while training the normalizing flows. The advantage of this estimator is that it avoids calculating gradients of the target distribution which may be \begin{table} \begin{tabular}{|r r r|r r r|} \hline \hline & \multicolumn{2}{c|}{r.t} & \multicolumn{4}{c|}{REINF.} \\ \hline time[ms] & num. op. & name & time[ms] & num. op. & name \\ \hline 2743.56 & 2027 & CopySlices & 270.27 & 1342 & IndexBackward0 \\ 189.55 & 960 & IndexBackward0 & 47.78 & 144 & ConvolutionBackward0 \\ 146.63 & 144 & ConvolutionBackward0 & 129.15 & 1003 & CopySlices \\ 110.57 & 3406 & SliceBackward0 & 105.70 & 2378 & SliceBackward0 \\ 82.90 & 1 & LinalgSlogdetBackward0 & 42.19 & 1813 & MulBackward0 \\ 48.68 & 3677 & MulBackward0 & 18.39 & 1475 & SubBackward0 \\ 25.67 & 3925 & SelectBackward0 & 11.70 & 96 & LeakyReluBackward0 \\ 11.72 & 96 & LeakyReluBackward0 & 8.55 & 848 & SelectBackward0 \\ 10.47 & 1002 & SubBackward0 & 6.60 & 144 & DivBackward0 \\ 7.97 & 192 & DivBackward0 & 5.60 & 283 & WhereBackward0 \\ \hline \hline \end{tabular} \end{table} Table 4: Top ten operations, in terms of time used, performed while backpropagating through the computational graph for a \(16\times 16\) lattice and batch size of 512. The table presents the number of times this function was called (num. op.) and the total time (in milliseconds) taken by all those calls together. The names are internal PyTorch names for gradient computing functions. Measurements were taken on NVIDIA GeForce RTX 3090 24GB GPU. \begin{table} \begin{tabular}{|r|r r r r r r r r|} \hline \hline L & batch & \multicolumn{4}{c|}{r.t.} & \multicolumn{4}{c|}{REINF.} \\ \hline \hline & & \(M_{1}\) & \(M_{2}\) & n.t. & \(M_{1}\) & \(M_{2}\) & n.t. \\ \hline 16 & 128 & 3.09 & 3.48 & 5296 & 2.6 & 2.76 & 3401 \\ 16 & 256 & 6.16 & 6.87 & 5292 & 5.19 & 5.45 & 3401 \\ 24 & 128 & 7.88 & 9.41 & 7847 & 5.37 & 5.65 & 3401 \\ 24 & 256 & 15.72 & 18.72 & 7827 & 10.74 & 11.25 & 3401 \\ \hline \hline \end{tabular} \end{table} Table 5: GPU memory (in GB) allocated during a single call to loss function, \(M_{1}\) is the total amount of memory taken by tensors stored in computational graph, \(M_{2}\) is the memory usage as reported by the torch.cuda.max_memory_allocated function and n.t. is the number of different tensors stored in the graph. beneficial if this distribution is computationally intensive. 
We have applied this estimator to the 2D Schwinger model with two flavours of Wilson fermions, whose action contains the determinant of the Dirac operator. We have found that this estimator has better convergence properties, at least for the parameters used in this study, and is more numerically stable. We demonstrated that it is much faster and takes less memory. Of course, one needs to check if those benefits can be obtained for other models, especially those outside of the lattice field theory domain. A comparison with estimators other than r.t., notably path gradients [11], is also in order. This would be the subject of further work.

## Acknowledgment

The computer time allocation grant plnglft on the Ares and Athena supercomputers hosted by AGH Cyfronet in Krakow, Poland, was used through the Polish PLGRID consortium. T.S. kindly acknowledges the support of the Polish National Science Center (NCN) Grants No. 2019/32/C/ST2/00202 and 2021/43/D/ST2/03375 and the support of the Faculty of Physics, Astronomy and Applied Computer Science, Jagiellonian University Grant No. LM/23/ST. This research was partially funded by the Priority Research Area Digiworld under the program Excellence Initiative - Research University at the Jagiellonian University in Krakow.

## Appendix A Implementation

As mentioned before, we have implemented the same model architecture as in Ref. [12]. The source code for our implementation can be found in [13]. We have used gauge-equivariant coupling layers, with each layer updating a subset of links given by \[M_{\mu\nu}^{k}=\{U_{\mu}((4n+k)\hat{\mu}+2m\hat{\nu})|\;\forall n,m\in\mathbb{Z}\}\cup\{U_{\mu}((4n+2+2k)\hat{\mu}+(2m+1)\hat{\nu})|\;\forall n,m\in\mathbb{Z}\} \tag{16}\] All links were updated in eight layers by first iterating over \(k\) with \(\mu=0,\nu=1\) and then again with \(\mu=1,\nu=0\). We used 48 layers, so each link was updated six times. Plaquette couplings were given by eight-knot _circular splines_ [24; 25]. The parameters of the splines were set by a neural network in each coupling layer. The network took as input not only the inactive plaquettes but also \(2\times 1\) and \(1\times 2\) Wilson loops. For the \(U(1)\) theory, each plaquette/loop is given by a single angle \(\theta\), and we used the \((\cos\theta,\sin\theta)\) pairs as the input to the network. The neural network was built from three convolutional layers with kernel size three and dilation factors \(1,2,3\). We used 64 channels between convolutions and a LeakyReLU activation after each layer but the last. For more details, please consult Ref. [12] and/or the source code [13]. The gradient estimators described in sections 3 and 4 were implemented as _loss functions_. Each loss function took as input a batch of generated prior configurations, the model, and the action, and returned the overall loss on this batch, as well as the logarithms of the probabilities \(\log q\) and \(\log P\). The returned loss could be used for automatic gradient calculations by calling .backward() on it. Stripped-down versions of the loss functions for both estimators are presented in listings 1 and 2. Those loss functions could be subsequently used in a generic training step presented in listing 3, so the whole difference between the r.t. and RE. implementations was confined to those two functions.

## References

* (1) N. Metropolis, A. W. Rosenbluth, M. N. Rosenbluth, A. H. Teller, E.
Teller, Equation of state calculations by fast computing machines, The Journal of Chemical Physics 21 (1953) 1087-1092. doi:10.1063/1.1699114. W. K. Hastings, Monte Carlo sampling methods using Markov chains and their applications, Biometrika 57 (1970) 97-109. doi:10.1093/biomet/57.1.97.
* (2) M. S. Albergo, G. Kanwar, P. E. Shanahan, Flow-based generative models for Markov chain Monte Carlo in lattice field theory, Phys. Rev. D 100 (2019) 034515.
* (3) D. Wu, L. Wang, P. Zhang, Solving statistical mechanics using variational autoregressive networks, Phys. Rev. Lett. 122 (2019) 080602.
* (4) K. A. Nicoli, S. Nakajima, N. Strodthoff, W. Samek, K.-R. Müller, P. Kessel, Asymptotically unbiased estimation of physical observables with neural samplers, Phys. Rev. E 101 (2020) 023304.
* (5) P. Białas, P. Korcyl, T. Stebel, Analysis of autocorrelation times in neural Markov chain Monte Carlo simulations, Phys. Rev. E 107 (2023) 015303. doi:10.1103/PhysRevE.107.015303. arXiv:2111.10189.
* (6) P. Białas, P. Korcyl, T. Stebel, Hierarchical autoregressive neural networks for statistical systems, Comput. Phys. Commun. 281 (2022) 108502. doi:10.1016/j.cpc.2022.108502. arXiv:2203.10989.

```
def grt_loss(z, log_prob_z, *, model, action, use_amp):
    layers = model['layers']
    with autocast(enabled=use_amp):
        x, logq = nf.apply_flow(layers, z, log_prob_z)   # forward pass: configurations and log q
        logp = -action(x)                                # log P(phi) = -S(phi)
        loss = nf.calc_dkl(logp, logq)                   # variational free energy, Eq. (11)
    return loss, logq.detach(), logp.detach()
```
Listing 1: Loss function for the reparameterization-based \(\mathbf{g}_{rt}\) estimator.

```
def gre_loss(sub_mean, z_a, log_prob_z_a, *, model, action, use_amp):
    # sub_mean is unused in this stripped-down version
    layers, prior = model['layers'], model['prior']
    with torch.no_grad():                                # generation: no gradients needed
        with autocast(enabled=use_amp):
            phi, logq = nf.apply_flow(layers, z_a, log_prob_z_a)
            logp = -action(phi)
            signal = logq - logp                         # s(phi|theta) of Eq. (19)
    with autocast(enabled=use_amp):                      # reversed pass, with gradients
        z, log_q_phi = nf.reverse_apply_flow(layers, phi,
                                             torch.zeros_like(log_prob_z_a, device=phi.device))
        prob_z = prior.log_prob(z)
        log_q_phi = prob_z - log_q_phi                   # log q(phi|theta) via Eq. (22)
    loss = torch.mean(log_q_phi * (signal - signal.mean()))
    return loss, logq, logp
```
Listing 2: Loss function for the REINFORCE-based \(\mathbf{g}_{RE}\) estimator.

```
def train_step(*, model, action, loss_fn, batch_size, optimizer,
               scheduler=None, n_batches=1, use_amp):
    optimizer.zero_grad(set_to_none=True)
    prior = model['prior']
    for i in range(n_batches):
        with autocast(enabled=use_amp):
            z = prior.sample_n(batch_size=batch_size)
            log_prob_z = prior.log_prob(z)
            l, logq, logp = loss_fn(z, log_prob_z, model=model,
                                    action=action, use_amp=use_amp)
        l.backward()
        # gradient clipping here
    optimizer.step()
    if scheduler is not None:
        scheduler.step()
```
Listing 3: Single training step. It accumulates the gradient over n_batches of size batch_size. The difference between the gradient estimators is encapsulated in the loss function loss_fn.
2301.10296
Rewritable Photonic Integrated Circuits Using Dielectric-assisted Phase-change Material Waveguides
Photonic integrated circuits (PICs) have the potential to drastically expand the capabilities of optical communications, sensing, and quantum information science and engineering. However, PICs are commonly fabricated using selective material etching, a subtractive process. Thus, the chip's functionality cannot be substantially altered once fabricated. Here, we propose to exploit wide-bandgap non-volatile phase-change materials (PCMs) to create a rewritable PIC platform. A PCM-based PIC can be written using a nano-second pulsed laser without removing any material, akin to rewritable compact disks. The whole circuit can then be erased by heating, and a completely new circuit can be rewritten. We designed a dielectric-assisted PCM waveguide consisting of a thick dielectric layer on top of a thin layer of wide-bandgap PCMs Sb2S3 and Sb2Se3. The low-loss PCMs and our engineered waveguiding structure lead to a negligible optical loss. Furthermore, we analyzed and specified the spatio-temporal laser pulse shape to write the PCMs. Our proposed platform will enable low-cost manufacturing and have a far-reaching impact on the rapid prototyping of PICs, validation of new designs, and photonic education.
Forrest Miller, Rui Chen, Johannes E. Froech, Hannah Rarick, Sarah Geiger, Arka Majumdar
2023-01-24T20:34:36Z
http://arxiv.org/abs/2301.10296v2
# Rewritable Photonic Integrated Circuits Using Dielectric-assisted Phase-change Material Waveguides ###### Abstract Photonic integrated circuits (PICs) have the potential to drastically expand the capabilities of optical communications, sensing, and quantum information science and engineering. However, PICs are commonly fabricated using selective material etching, a subtractive process. Thus, the chip's functionality cannot be substantially altered once fabricated. Here, we propose to exploit wide-bandgap non-volatile phase-change materials (PCMs) to create a rewritable PIC platform. A PCM-based PIC can be written using a nano-second pulsed laser without removing any material, akin to rewritable compact disks. The whole circuit can then be erased by heating, and a completely new circuit can be rewritten. We designed a dielectric-assisted PCM waveguide consisting of a thick dielectric layer on top of a thin layer of wide-bandgap PCMs \(Sb_{2}S_{3}\) and \(Sb_{2}Se_{3}\). The low-loss PCMs and our engineered waveguiding structure lead to a negligible optical loss. Furthermore, we analyzed and specified the spatio-temporal laser pulse shape to write the PCMs. Our proposed platform will enable low-cost manufacturing and have a far-reaching impact on the rapid prototyping of PICs, validation of new designs, and photonic education. ## 1 Introduction Photonic Integrated Circuits (PICs) are becoming essential for various applications, including optical communication [1], sensing [2], and quantum information processing [3]. While PICs can significantly expand and enhance the performance of these systems, the fabrication methodology for the PICs is complex and expensive: they require high-resolution lithography and etch processes, which must take place in a sophisticated nanofabrication facility. Additionally, these processes are inherently subtractive, _i.e._, once fabricated, the wafer cannot be used to fabricate other structures. A low-cost method to fabricate PICs and the ability to rewrite the PIC in the same wafer can help with rapid prototyping. Chalcogenide-based non-volatile phase change materials (PCMs) provide a promising route to create such rewritable PICs [4]. These PCMs exhibit large changes in their refractive index (\(\Delta\)n \(>\) 0.5) when they undergo structural phase transition between the amorphous (aPCM) and crystalline states (cPCM) [5]. Crystallization, the process of switching PCMs from the amorphous to the crystalline phase, can be actuated by holding the PCM above its glass transition temperature (\(T_{g}\)) but below the melting temperature (\(T_{mp}\)) until a crystal lattice forms. Amorphization, the reverse process of crystallization, is achieved by melting and rapidly quenching the PCM. Notably, this micro-structural phase transition is non-volatile, _i.e._, no external power is required to maintain the state after the material phase is changed. These PCMs have been cycled thousands of times without degradation [6] and can potentially be switched for more than \(10^{12}\) times [7]. Consequently, PCMs are widely used in rewritable compact disks (CDs) to store information. A writing laser fires pulses to heat segments of the PCMs which, though amorphization or crystallization, write or erase the stored information. The information is then read out using a probing CW laser. However, rewritable CDs are fundamentally different from rewritable PICs. 
In a CD, the prob ing light is reflected off the surface, but in PICs, it is confined in an optical waveguide and propagates along the surface. Therefore, to build PICs, PCMs must provide enough contrast to guide the light in-plane while simultaneously avoiding significant optical loss. While researchers have already experimentally demonstrated laser-written rewritable meta-optics [8], [9] in PCM, an expensive femto-second laser was used, and the light did not propagate in a waveguide over a long path. In another work, researchers demonstrated only one-way writing of photonic circuits in PCM [10], which lacks the rewritable functionality. Some PIC structures have been written in GST using nano-second lasers [11], but a low-loss operation has yet to be demonstrated. Here, we present a design of a rewritable PIC based on PCMs, and theoretically analyze the spatio-temporal shape of the laser pulses to write that PIC. As the probing light is guided in the high-index crystalline PCM, we must ensure near-zero absorptive loss in the PCM. At 1.55 \(\mu\)m, \(Ge_{2}Sb_{2}Te_{5}\) (GST) is too lossy with an extinction coefficient of cGST \(\kappa_{cGST}\approx 1\)[12]. However, wide-bandgap PCMs, such as \(Sb_{2}S_{3}\) and \(Sb_{2}Se_{3}\), are suitable thanks to their negligible loss in the amorphous phase and low loss in the crystalline phase [6]. These wide-bandgap PCMs also exhibit large enough index contrast between their amorphous and crystalline states (\(\Delta n_{SbS}=0.6\) and \(\Delta n_{SbSe}=0.77\) at 1.55\(\mu\)m [6] to confine an optical mode. To further reduce loss, we designed a dielectric-assisted PCM structure (Fig. 1b). The propagation loss is estimated at 0.0100 dB/\(\mu\)m (0.0086 dB/\(\mu\)m) using \(Sb_{2}S_{3}\) (\(Sb_{2}Se_{3}\)). We envision that the probing light will be coupled in and out of the chip using pre-fabricated grating couplers, akin to input/ output pins in an electronic field-programmable gate array (Fig. 1a). Finally, we simulate switching dynamics to optimize the spatio-temporal beam shape of the writing laser to achieve a complete and reversible phase transition. Specifically, we show that a nano-second pulsed laser can actuate the phase transition with a spatially Gaussian and temporally rectangular shape. Our proposed rewritable PIC platform could democratize PIC prototyping thanks to its lower cost than lithography and its reusable capability. This frugal innovation can help with educating students about PIC and rapid prototyping of circuits to validate designs. ## 2 Low-Loss PCM Waveguides A major challenge for a PCM-based waveguide is to ensure a low propagation loss. Additionally, thick PCMs are generally harder to fully switch due to the temperature gradient. Based on the literature, the maximum thickness of the switched \(Sb_{2}S_{3}\) is about 70 nm [13]. Moreover, thinner PCM layers are necessary to ensure high endurance[4], [14]. While wide-bandgap PCMs have a negligible loss in the amorphous state, a small but non-negligible loss is still present in the crystalline state. We investigated prior works on PCM-integrated ring resonators and estimate the extinction coefficient in the crystalline state to be \(\kappa_{cSbS}=0.016\)[15] for \(cSb_{2}S_{3}\), and \(\kappa_{cSbSe}=0.0043\)[16], [17] for \(cSb_{2}Se_{3}\). We obtained these values by simulating the reported experi Figure 1: a) Schematic of the proposed rewritable PIC platform. 
A nano-second pulsed laser switches the PCM from the crystalline (light grey) to the amorphous (dark grey) state. The chip can then be heated to reset the PCM to the crystalline phase, erasing the written PICs. Subsequently, a different PIC can be written in the same region. b) The proposed dielectric-assisted PCM waveguide geometry for low-loss waveguiding. A simulated guided mode is overlayed, depicting a well-confined mode in the dielectric silicon nitride layer for a low propagation loss. mental structures and adjusting the extinction coefficient \(\kappa\) of the PCMs until our simulated loss matches the reported experimental results. The loss in the crystalline state is critical since the probing light is more efficiently confined in crystalline PCM, which has a higher refractive index than the amorphous phase but also higher loss. As a rough estimation, we assume the light is perfectly confined in a \(cSb_{2}S_{3}\) (\(cSb_{2}Se_{3}\)) core, then the unit propagation loss at 1.55 \(\mu m\) can be obtained: \(-20\log_{10}[\exp(-\frac{2\pi}{1.55\mu m}\cdot\kappa_{cSbS(cSbSe)}\cdot 1\mu m)]\). The calculated unit loss for \(cSb_{2}S_{3}\) (\(cSb_{2}Se_{3}\)) is \(\approx 0.56(0.15)dB/\mu m\), which is still relatively high. We note that this overestimates the loss due to the perfect confinement assumption, but the result suggests the necessity of a careful waveguide design to achieve a low propagation loss. Reducing the PCM thickness decreases the interaction between the optical mode and the cPCMs, leading to lower loss. Additionally, a thinner PCM also improves switching uniformity and endurance. However, an extremely thin PCM layer does not provide enough index contrast to establish a guided mode. We mitigate this trade-off by exploiting a dielectric-assisted PCM waveguide architecture [18], where a thick dielectric layer of \(Si_{3}N_{4}\) is deposited on a thin PCM layer (Fig. 1a). The optical mode is mainly confined in the dielectric layer due to the geometry of the PCM layer, mitigating the absorptive loss of crystalline PCMs. The \(Si_{3}N_{4}\) layer accounts for \(\sim\)10% of the loss, but since it contains most of the mode, it offers a significant improvement over a purely PCM based waveguide. Such a waveguide will be written from the chip's "erased" state, where the PCM layer is uniformly crystalline. A PCM waveguide is created by selectively switching the PCM to the amorphous state. We assume a layer of PCM with a thickness \(T_{PCM}\) will be deposited on 2 \(\mu\)m of thermal oxide (\(SiO_{2}\)) film on a silicon wafer. Our previous work has shown that a conformal capping material prevents material reflowing and oxidation during switching [19]. Therefore, we plan to encapsulate the PCM with 20 nm of atomic layer deposited (ALD) \(Al_{2}O_{3}\) and then deposit a \(Si_{3}N_{4}\) layer with thickness \(T_{clad}\) to enable low-loss waveguiding (Fig. 1b). It is worth noting that extending the thickness of the alumina layer to \(T_{clad}\) instead of adding the \(Si_{3}N_{4}\) layer can also lead to a low-loss operation. However, growing such thick alumina will be challenging in practice using oxidation, ALD, or Evaporation. We optimize the waveguide geometry, including the thickness for the PCM and the dielectric \(Si_{3}N_{4}\) layers and the width for the waveguide core, using the Finite Element Eigenmode (FEM) solver in Ansys Lumerical. Here both \(Sb_{2}S_{3}\) and \(Sb_{2}Se_{3}\) waveguides are designed for a low-loss operation. 
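As a quick numerical check of the rough estimate above (and assuming, as stated, perfect confinement in the crystalline PCM core), the quoted unit losses follow directly from the extinction coefficients; the short sketch below reproduces the arithmetic:

```
import numpy as np

wavelength_um = 1.55
for name, kappa in [("cSb2S3", 0.016), ("cSb2Se3", 0.0043)]:
    loss_db_per_um = -20 * np.log10(np.exp(-2 * np.pi / wavelength_um * kappa * 1.0))
    print(f"{name}: {loss_db_per_um:.2f} dB/um")   # ~0.56 and ~0.15 dB/um
```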
We start by sweeping the \(Sb_{2}S_{3}\) thickness \(T_{PCM}\), and the \(cSb_{2}S_{3}\) core width \(W_{core}\) (Fig. 2a). Unsurprisingly, a thinner \(T_{PCM}\) yields a lower loss. However, below a thickness of 12 nm, the structure cannot contain a physical mode. The values for \(T_{PCM}<12\) nm in Fig. 2a are numerical artificial modes that arise from the finite dimensions of the simulation region. Requiring physical solutions, we find a compromise between loss and mode confinement at a layer thickness of 15 nm and a core width of 2.0 \(\mu\)m. The thickness of the \(Si_{3}N_{4}\) layer is designed around 0.4 \(\mu\)m as shown in Fig. 2b. Our optimized structure exhibits a loss of 0.0100 dB/\(\mu\)m. The loss in the \(Sb_{2}Se_{3}\) waveguide exhibits much weaker dependence on \(W_{core}\) (Fig. 2c). This is due to the higher real refractive index of \(Sb_{2}Se_{3}\), which gives tighter mode confinement. We choose 1.5\(\mu\)m as the optimized core width to improve integration density. Similar to the \(Sb_{2}S_{3}\) design, the loss increases with increasing \(T_{PCM}\). Here we choose 20 nm as the layer thickness. This \(T_{PCM}\) is thin enough to switch and provide low loss while guiding a mode reliably. The \(Si_{3}N_{4}\) thickness was set to 400 nm to minimize loss while still confining the mode as measured by the mode area (Fig. 2d). This geometry demonstrates a loss of 0.0086 \(dB/\mu m\) at 1.55 \(\mu m\). While neither the \(Sb_{2}S_{3}\) or the \(Sb_{2}Se_{3}\) losses are negligible, they are a significant improvement over the losses we expect from a purely PCM based waveguide. ## 3 Fixed In/out ports using grating couplers The PICs presented here must interface with free-space optics to couple and read the probe laser. We propose to accomplish this using fixed grating couplers on the chip, forming optical input/output ports, between which designs can be written, erased, and re-written (Fig. 1a). These grating couplers are formed by etching the \(Si_{3}N_{4}\) layer since the optical modes for both PCM designs are primarily confined in the \(Si_{3}N_{4}\) layer (Fig. 1b). While this etching step is exactly what we intend to eliminate with this platform, we note that after this one etch, thousands of PIC designs can be written and tested on this platform without further etching. It is possible to write grating couplers into the PCM, but such couplers suffer from low coupling efficiencies and must be re-written after each anneal. Consequently we propose fixed gratings for this platform. We optimize the geometry for these gratings using Lumerical's Finite Difference Time Domain (FDTD) simulation. In the \(Sb_{2}S_{3}\) design, a grating pitch of 1.01 \(\mu\)m, a duty cycle of 0.8, and an etch depth of 180 nm resulted in a coupling efficiency of 21%. In the \(Sb_{2}Se_{3}\) design, a grating pitch of 0.97 \(\mu\)m, a duty cycle of 0.56, and an etch depth of 240 nm also resulted in a coupling efficiency of 21%. Connecting the gratings and the waveguides is a taper that matches the waveguide's mode. We chose a taper end width of 4 \(\mu\)m for both designs, which yields a mode overlap of more than 95% with the guided modes. Figure 2: a) Waveguide loss (dB/\(\mu\)m) for a joint parameter sweep of the \(Sb_{2}S_{3}\) thickness and waveguide core (\(cSb_{2}S_{3}\)) width. b) The loss and effective mode area as a function of the \(Si_{3}N_{4}\) thickness in the \(Sb_{2}S_{3}\) design. 
c) Waveguide loss (dB/\(\mu\)m) for a joint parameter sweep of the \(Sb_{2}Se_{3}\) thickness and waveguide core (\(cSb_{2}Se_{3}\)) width. d) The loss and effective mode area as a function of the \(Si_{3}N_{4}\) thickness in the \(Sb_{2}Se_{3}\) design. ## 4 Spatio-temporal shapes of amorphization pulses The arbitrary patterning for the PICs can be achieved using a pulsed writing laser paired with a three-axis translation stage. We numerically optimize the spatio-temporal profile of the laser pulse to switch the PCM from the crystalline to the amorphous states. To erase any previous writing, one can crystallize the full PCM layer by annealing the wafer on a hot plate. We simulated four pulse schemes with different spatio-temporal pulse shapes (4a-d) in COMSOL Multiphysics for the amorphization step. We expect the ideal shape as a rectangular spatial shape to provide a smooth sidewall and an increasing temporal shape to produce uniformly switching in the depth direction. However, later we show that a natural Gaussian spatial shape and rectangular temporal shape can switch thin \(Sb_{2}S_{3}\) entirely, offering a simple experimental realization. The pulses were delivered directly to the 15 nm film of \(Sb_{2}S_{3}\) in Fig. 1b, so reflection off the alumina and \(Si_{3}N_{4}\) layers is not considered. The pulses had either a Gaussian or uniform spatial distribution and either a rectangular or exponentially decaying temporal distribution. All pulses last approximately 14 ns in duration and the power was adjusted to achieve a maximum temperature of approximately 900 \({}^{o}C\). A successful amorphization must satisfy the following criteria. First, a large enough thermal energy must be applied to melt the PCM completely. This requires the delivered energy to heat PCM to its melting temperature Figure 3: A grating coupler and the two-dimensional cross-sectional view cutting along the dotted line. P, the optimal pitch, is 1.01 (0.97) \(\mu\)m for the \(Sb_{2}S_{3}\) (\(Sb_{2}Se_{3}\)) design. The optimal duty cycle, \(\frac{a}{b}\), is 0.8 (0.56) for the \(Sb_{2}S_{3}\) (\(Sb_{2}Se_{3}\)) design. The etch depth was 180 (240) nm for \(Sb_{2}S_{3}\) (\(Sb_{2}Se_{3}\)). In both cases, a taper end width, w, of 4 \(\mu\)m provides ¿95% mode overlap between the coupler and the waveguide. and further overcome the latent heat, given by the multiplication of enthalpy of fusion (\(H_{f}\)) and mass of the PCM. Second, the most crucial factor is a cooling rate exceeding \(\sim 1\)\(K/ns\)[5]. This ensures the PCM is frozen in the meta-stable amorphous state [20] and does not undergo unintentional recrystallization. Third, the upper bound of the absorbed thermal energy is heating the PCM above its boiling point \(T_{b}\), which irreversibly ablates the material. We obtain \(Sb_{2}S_{3}\) parameters from the literature for the simulations: the melting point is \(T_{m}=547\)\({}^{o}C\)[6], the enthalpy of fusion is \(H_{f}=47.9\) kJ/mol [21], and the boiling point is \(T_{b}=1149\)\({}^{o}C\)[22]. Lastly, The boundary morphology between switched and non-switched regions defines the waveguide edge. This boundary contains material heated to its melting point but not enough to exceed its enthalpy of fusion. This implies a partial amorphization, where a portion of the crystalline structure remains. This is undesirable as a more abrupt index change at the boundary could lead to better mode confinement. We examine different pulse conditions against these criteria. As shown in Fig. 
4a-d, all pulse conditions exceed the melting point and achieve a faster cooling rate than \(1K/ns\). Our hypothesis with temporal modulation was that the exponential ramp would more uniformly heat the PCM layer along its thickness in the vertical direction. This statement is supported by the more linear average temperature curve in 4c,d, but since the \(Sb_{2}S_{3}\) layer is so thin, there was little variation in temperature from the top to the bottom of the PCM film. In a thicker PCM layer, we anticipate that temporally modulating the beam power could enable more uniform heating in the vertical direction. However, such modulation was not significant in this thin \(Sb_{2}S_{3}\) case. Therefore, we proceed in this discussion considering pulses with rectangular temporal modulation. The performance difference between the spatial beam shapes is pronounced in the feature size and the boundary region width. A Gaussian mode is better if the desired circuit requires fine features. This is a direct result of the narrower intensity distribution of a Gaussian compared to a uniform beam. With a Gaussian beam, the width of the switched PCM could be smaller than the laser's spot size if the laser power is tuned such that the beam's full-width-half-maximum is lower than the amorphization threshold. If the intensity is decreased, a smaller area of PCM will reach the amorphization threshold. A uniform beam lacks this property of a smaller feature size. Its advantage is a narrower boundary region in the PCM. We define the boundary region width as the partially amorphized region width. The boundary Figure 4: a-d) The transient thermal dynamics for a) temporally rectangular, spatially Gaussian beam pulse, b) temporally rectangular, spatially uniform pulse, c) temporally exponentially decaying, spatially Gaussian pulse, d) temporally exponentially decaying, spatially uniform pulse. The temperature profiles in c,d increase more linearly than those in a,b, but due to the 15nm thickness of the \(Sb_{2}S_{3}\), this did not result in a more uniform heating through the thickness of the film. e, f) Static temperature profile along the spatio-temporal line cut at y = \(T_{SbS/2}\) and t = 14 ns for e) a spatially Gaussian beam and f) a spatially uniform beam. The Gaussian beam switches a smaller area of \(Sb_{2}S_{3}\) than the uniform beam, but the uniform beam has a shorter boundary region (the distance between \(T_{mp}\) and full amorphization). Thus a Gaussian beam can write finer features, but a uniform beam will likely have better mode confinement. region for the Gaussian beams traverses a distance of 105 pm (Fig. 4e). In comparison, the boundary is only 27 pm (Fig. 4f) for a spatially uniform beam. This shorter boundary resembles the step-index profiles used in our previous simulations and leads to better mode confinement. Our thermal simulation verifies that the thin PCM layer can be switched entirely with nano-second laser pulses and offers a simple experimental realization with a natural laser beam with a Gaussian spatial and rectangular temporal shape. Improved mode confinement is likely from a uniform beam, but such a beam would have a lower resolution than a Gaussian beam. ## 5 Conclusion In conclusion, we have proposed and numerically verified a rewritable, cost-efficient PIC platform using wide-bandgap PCMs and a cost-efficient nano-second pulse laser. 
PICs are envisioned to be written (amorphized) by nano-second laser pulses and erased (crystallized) by rapid thermal annealing or even a simple hotplate. We have designed a dielectric-assisted PCM waveguide configuration, allowing low propagation loss for both \(Sb_{2}Se_{3}\) and \(Sb_{2}S_{3}\). Efficient grating couplers, working as optical I/O ports, were optimized for both types of waveguides. Comprehensive thermal transfer dynamic simulations were used to verify and optimize the spatio-temporal pulse conditions to ensure complete amorphization with nano-second pulses. This etch-free platform could accelerate PIC fabrication and testing, potentially democratizing PIC fabrication. ## 6 Backmatter ### Funding The research is funded by a DARPA-YFA Award. F.M. is supported by a Draper Scholarship. ### Author Contributions A.M. and F.M. conceived the project. F.M. performed the simulations. R.C., J.F., and H.R. assisted with the simulations. A.M. and S.G. supervised the project. F.M. wrote the manuscript with input from all the authors. ### Disclosures The authors declare no conflicts of interest. ### Data Availability Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon request.
2303.11097
Time- versus event-triggered consensus of a single-integrator multi-agent system
Event-triggered control has shown the potential for providing improved control performance at the same average sampling rate when compared to time-triggered control. While this observation motivates numerous event-triggered control schemes, proving it from a theoretical perspective has only been achieved for a limited number of settings. Inspired by existing performance analyses for the single-loop case, we provide a first fundamental performance comparison of time- and event-triggered control in a multi-agent consensus setting. For this purpose, we consider undirected connected network topologies without communication delays, a level-triggering rule for event-triggered control, and the long-term average of the quadratic deviation from consensus as a performance measure. The main finding of our analysis is that time-triggered control provably outperforms event-triggered control beyond a certain number of agents in our particular setting. We thereby provide an illustrative distributed problem setup in which event-triggered control results in a performance disadvantage when compared to time-triggered control in the case of large networks. Moreover, we derive the asymptotic order of the performance measure under both triggering schemes which gives more insights into the cost relationship for large numbers of agents. Thus, by presenting an analysis for a particular setup, this work points out that transferring an event-triggering scheme from the single-loop to the multi-agent setting can lead to a loss of the often presumed superiority of event-triggered control over time-triggered control. In particular, the design of performant decentralized event-triggering schemes can therefore pose additional challenges when compared to the analogue single-loop case.
David Meister, Frank Aurzada, Mikhail A. Lifshits, Frank Allgöwer
2023-03-20T13:32:57Z
http://arxiv.org/abs/2303.11097v3
# Time- versus Event-Triggered Consensus of a Single-Integrator Multi-Agent System1 ###### Abstract Event-triggered control has shown the potential for providing improved control performance at the same average sampling rate when compared to time-triggered control. While this observation motivates numerous event-triggered control schemes, proving it from a theoretical perspective has only been achieved for a limited number of settings. Inspired by existing performance analyses for the single-loop case, we provide a first fundamental performance comparison of time- and event-triggered control in a distributed multi-agent consensus setting. For this purpose, we consider undirected connected network topologies and the long-term average of the quadratic deviation from consensus as a performance measure. The main finding of our analysis is that time-triggered control provably outperforms event-triggered control beyond a certain number of agents in our particular setting. We thereby provide an exemplary distributed problem setup in which event-triggered control results in a performance disadvantage when compared to time-triggered control in the case of large networks. Moreover, we derive the asymptotic orders of the performance measure under both triggering schemes which give more insights into the cost relationship for large numbers of agents. Thus, by presenting an analysis for a particular setup, this work points out that the often presumed superiority of event-triggered control over time-triggered control might not generally be provided if we consider distributed settings. keywords: Event-triggered control, Multi-agent systems, Networked control systems, Sampled-data systems + Footnote †: journal: Nonlinear Analysis: Hybrid Systems ## 1 Introduction Event-triggered control (ETC) schemes have shown the potential to be more performant than time-triggered control (TTC) schemes when communication channels are loss- and delay-free, and average triggering rates are equal, as demonstrated in [1] for single-integrator systems. In ETC, the system only initiates communication when a triggering condition is met, while in TTC, it establishes communication at fixed time intervals. Findings like the one from [1] have led to a variety of ETC schemes aiming at the reduction of the sampling frequency while still fulfilling a certain control goal, such as maintaining a performance level. The reduction in "unnecessary" communication often appears as an argument for ETC also being advantageous for communication channels with limited bandwidth. The idea of using ETC to decrease shared medium utilization has been adopted from the field of networked control systems (NCS), e.g., [2; 3], to the field of multi-agent systems (MAS), as seen in works such as [4; 5]. To distinguish between NCS that are only coupled through their usage of a shared communication medium and MAS in which agents also cooperate to achieve a common goal, the former will be referred to as non-cooperative NCS throughout this paper. The setup from [1] with impulsive inputs has been extended in various ways in order to find ETC schemes that are optimal with respect to a defined performance measure. Most works in this paragraph use a performance measure that is quadratic in the system state and linear in the triggering rate including a scalar trade-off factor. For first-order linear systems, [2] introduces a minimum inter-event time and aims to find the optimal triggering condition. 
The work [6, Paper II] establishes a closed-form solution for optimal triggering rules for the multidimensional integrator case and shows simulation-based results for the generalization to linear time-invariant systems. The authors in [7] provide a numerical design method for optimal triggering rules in an LQG setting with output feedback. Their work builds upon [8; 9; 10] which provide an \(\mathcal{H}_{2}\)-optimal controller design method for any given uniformly bounded sampling pattern in a linear system setup. Moreover, they prove that the design of optimal triggering rule and optimal controller are separable, which allows [7] to focus on the former. For discrete time systems, [11] proposes an optimal periodic ETC design method for linear time-invariant systems. Since finding optimal ETC schemes remains challenging, [12; 13; 14; 15; 16] have introduced and evaluated a so-called consistency property of ETC schemes in various LQ- and \(\mathcal{L}_{2}/\ell_{2}\)-settings. In short, an ETC scheme is considered consistent with respect to the chosen performance criterion (LQ, \(\mathcal{L}_{2}/\ell_{2}\)) if it guarantees the same performance level as any periodic TTC scheme while having a smaller (or equal) average triggering rate. An analogous definition of consistency is that the ETC scheme results in a better (or equal) performance level compared to any periodic TTC scheme while having the same average triggering rate. Thus, one can consider the work in [1] as an evaluation of LQ-consistency in a single-integrator setup with a particular choice of cost matrices. Moreover, considering the optimality perspective from the previous paragraph together with the consistency viewpoint, [17] presents an \(\mathcal{H}_{\infty}\)-optimal ETC design method for continuous-time LTI systems. Their method co-designs the controller and triggering rule according to their \(\mathcal{H}_{\infty}\)-performance and guarantees consistency with respect to the corresponding optimal periodic sampled-data controller. Following another research direction, [18] extended the results from [1] to incorporate also network effects such as packet loss in the comparison of TTC and ETC for the non-cooperative single-integrator NCS case. They point out that ETC can perform worse than TTC above a certain packet loss probability. In [19] and [20], transmission delays are included into the comparison and the packet loss probability is determined based on the medium access protocol. At last, [21] provides a performance comparison of TTC and ETC schemes for single-integrator systems considering various medium access protocols. The authors demonstrate the impact of the network load on the performance of the single-integrator NCS for various triggering schemes and medium access protocols. Thereby, they establish the importance of taking the properties of the communication network into consideration when designing triggering schemes for NCS. Another performance comparison in this realm is presented in [16] which analyzes linear discrete time systems and contrasts purely stochastic with stochastic event-based triggering rule performance. Analyzing more general NCS and their behavior under (periodic) ETC schemes is an active field of research, e.g., [22; 23]. Although some fundamental considerations have shown that TTC can sometimes outperform ETC if network effects are taken into account, ETC is still very popular for NCS. 
As discussed previously in this section, many settings not suffering from network losses provably yield a performance improvement under ETC when compared to TTC. This also led to various ETC approaches for MAS while there exists no work on the fundamental characteristics of TTC compared to ETC in this case. As pointed out by [5], the event-triggered consensus literature is still missing performance analyses that quantify the benefit of ETC over TTC schemes. This work aims to close this gap in order to understand whether qualitative results are the same for MAS as in the non-cooperative NCS case, or whether new effects might arise. With the previously discussed works in mind, we provide a first theoretical evaluation of ETC and TTC performance by analyzing a simple MAS problem. Our main contribution is the finding that, for this particular setup, ETC is not always superior to TTC even without considering packet loss or transmission delays. The performance relationship turns out to depend on the number of participating agents. Moreover, we provide the asymptotic order of the performance measure for ETC and TTC as a function of the number of agents. This gives further insights into the relationship between ETC and TTC in our particular MAS setup. Compared to the corresponding conference paper [24], we provide extensive proof details for all our results. In addition, we extend our statements to more general communication topologies than all-to-all networks. What is more, we generalize our setup to a class of provably optimal control inputs. Firstly, this allows us to show that our results hold for a broader class of problems. Secondly, further potential performance improvements via a better choice of the control input can be ruled out in our analysis. Furthermore, we improve our simulation results by increased sample numbers, simulations for more network sizes and by complementing the obtained results with a confidence interval. Our paper is structured as follows: We start by stating some preliminaries on background knowledge, especially graph theory, and on our notation in Section 2. In Section 3, we introduce the setup and formulate the considered problem. After that, we present our theoretical results in Section 4, while we demonstrate our findings in a numerical simulation in Section 5. We conclude this work in Section 6 and provide additional proofs and details in the appendix. ## 2 Preliminaries In this section, we introduce relevant notation, especially but not exclusively regarding graph theory. A graph \(\mathcal{G}=(\mathcal{V},\mathcal{E})\) consists of a set of vertices \(\mathcal{V}=\{1,\ldots,N\}\), also referred to as nodes, and a set of edges \(\mathcal{E}\subset\mathcal{V}\times\mathcal{V}\). The graph \(\mathcal{G}\) is called undirected if \((i,j)\in\mathcal{E}\Leftrightarrow(j,i)\in\mathcal{E}\) for any \(i,j\in\mathcal{V}\). We will focus on definitions for undirected graphs throughout the rest of this section. If an edge \((i,j)\in\mathcal{E}\) exists between two nodes, they are called adjacent. All nodes that are adjacent to node \(i\) are also referred to as node \(i\)'s neighbors \(j\in\mathcal{N}_{i}=\{j\in\mathcal{V}\mid(i,j)\in\mathcal{E}\}\). Note that we generally exclude self-loops, i.e., \((i,i)\notin\mathcal{E}\). In the MAS context, the adjacency of nodes \(i\) and \(j\) indicates that agents \(i\) and \(j\) are able to communicate with each other. 
Nodes \(i\) and \(j\) are referred to as connected if there exists a path between those nodes, i.e., a sequence of distinct nodes, starting at \(i\) and ending at \(j\), such that each pair of consecutive nodes is adjacent. If all pairs of nodes in graph \(\mathcal{G}\) are connected, the graph is called connected. Furthermore, the adjacency matrix \(A\) consists of elements \(a_{ij}=1\) if \(i\) and \(j\) are adjacent and \(a_{ij}=0\) otherwise. For an undirected graph \(\mathcal{G}\), the adjacency matrix is symmetric. In addition, the degree \(d_{i}\) of a node \(i\) denotes the number of neighbors of node \(i\), i.e., the cardinality of the neighbor set \(|\mathcal{N}_{i}|\). The degree matrix \(D\) of graph \(\mathcal{G}\) is the diagonal matrix \(D=\operatorname{diag}(d_{1},\ldots,d_{N})\). With the definitions of adjacency and degree matrix, the Laplace matrix \(L\) of graph \(\mathcal{G}\) is defined as \(L=D-A\). For an undirected graph \(\mathcal{G}\), the Laplace matrix \(L\) is symmetric and positive semi-definite. Moreover, it is column and row stochastic and, thus, has the eigenvector of all ones corresponding to the eigenvalue \(0\). If \(\mathcal{G}\) is also connected, the Laplace matrix \(L\) has exactly one zero eigenvalue. Note that we can compute the cardinality of the edge set as \(|\mathcal{E}|=\operatorname{tr}(L)=\operatorname{tr}(D)\). Due to the definition via directed edges, the cardinality of \(\mathcal{E}\) is twice as large as the number of undirected edges in the graph \(\mathcal{G}\). Beyond graph theory, we utilize two notation alternatives regarding the series of transmission events in this paper: On the one hand, we refer to the series of triggering time instants with the notation \((t_{k}^{j})_{k\in\mathbb{N}}\) for agent \(j\in\{1,\ldots,N\}\) where \(\mathbb{N}\) represents the set of positive integers. On the other hand, we denote the event series of the complete MAS by \((t_{k})_{k\in\mathbb{N}}\). For some formulations in this work, let us additionally define \(t_{0}=0\). Naturally, ordering the event series \((t_{k}^{j})_{k\in\mathbb{N}}\) for all agents \(j\in\{1,\ldots,N\}\) in an increasing fashion yields the sequence \((t_{k})_{k\in\mathbb{N}}\). If any elements in \((t_{k}^{j})_{k\in\mathbb{N}}\) for all agents \(j\in\{1,\ldots,N\}\) should be equal, we subsume them in a single \(t_{k}\) in the series \((t_{k})_{k\in\mathbb{N}}\). Finally, let us denote the expected value and the variance of a random variable by \(\mathbb{E}[\cdot]\) and \(\mathbb{V}[\cdot]\), respectively. In addition, let \(\delta(\cdot)\) refer to the Dirac delta impulse and \(\mathds{1}_{(\cdot)}\) denote the indicator function. Moreover, let \(\mathbb{R}\) abbreviate the set of all real numbers and \(\lim_{\epsilon\downarrow 0}\) indicate the right-sided limit. ## 3 Problem Formulation In this section, we introduce the considered setup and derive the optimal control input for the formulated problem. ### Setup We consider an MAS consisting of \(N\) single-integrator agents that are perturbed by noise \[\mathrm{d}x_{i}=u_{i}\mathrm{d}t+\mathrm{d}v_{i}, \tag{1}\] starting in consensus, i.e., initial states \(x_{i}(0)=0\) for all \(i\in\{1,\ldots,N\}\), and with \(v_{i}(t)\) referring to a standard Brownian motion and \(u_{i}(t)\) to the control input. Let the agents be able to communicate according to an undirected connected communication graph with \(N\) nodes representing the agents and Laplacian \(L\). 
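As a concrete illustration of these definitions, the adjacency, degree, and Laplace matrices and the properties listed above can be checked numerically. The sketch below uses NumPy; the function name and the example graph are our own choices.

```python
# Build A, D, and L = D - A for an undirected graph given by an edge list, and verify
# the properties stated above (symmetry, kernel, positive semi-definiteness, |E| = tr(D)).
import numpy as np

def laplacian_from_edges(n_nodes, undirected_edges):
    A = np.zeros((n_nodes, n_nodes))
    for i, j in undirected_edges:           # each undirected edge {i, j}, no self-loops
        A[i, j] = A[j, i] = 1.0
    D = np.diag(A.sum(axis=1))              # degrees d_i = |N_i|
    return A, D, D - A

# Example: a path graph on 4 nodes, which is undirected and connected
A, D, L = laplacian_from_edges(4, [(0, 1), (1, 2), (2, 3)])

assert np.allclose(L, L.T)                          # L is symmetric
assert np.allclose(L @ np.ones(4), 0)               # all-ones vector lies in the kernel
eigvals = np.linalg.eigvalsh(L)
assert np.all(eigvals >= -1e-12)                    # positive semi-definite
assert np.sum(np.isclose(eigvals, 0)) == 1          # exactly one zero eigenvalue (connected)
print("|E| (directed-edge count) =", int(np.trace(D)))   # twice the number of undirected edges
```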
Therefore, an agent \(i\) is able to communicate with its neighbors \(j\in\mathcal{N}_{i}\). Furthermore, we presume that the agents can continuously monitor their own state and trigger discrete transmission events in order to share information with their neighbors. The shared information is then used to preserve consensus between the agents as well as possible. Thus, the control inputs \(u_{i}(t)\) are required to be causal in the sense that they only depend on information transmitted up to time \(t\). As explained in the introduction, we aim at comparing TTC and ETC schemes for triggering transmissions. For that purpose, let us consider the cost functional \[J\coloneqq\limsup_{M\to\infty}\frac{1}{M}\int_{0}^{M}\mathbb{E}\Big{[}x(t)^{ \top}Lx(t)\Big{]}\,\mathrm{d}t \tag{2}\] as a performance measure where \(x(t)=[x_{1}(t),\ldots,x_{N}(t)]^{\top}\). It quantifies the expected quadratic deviation from consensus and can also be written as \[J=\limsup_{M\to\infty}\frac{1}{M}\int_{0}^{M}\mathbb{E}\Bigg{[}\frac{1}{2} \sum_{(i,j)\in\mathcal{E}}\Big{(}x_{i}(t)-x_{j}(t)\Big{)}^{2}\Bigg{]}\,\mathrm{ d}t.\] **Remark 1**.: The quadratic term \(x^{\top}Lx\) is a typical measure for the deviation of an MAS from consensus and, for example, also often used as a Lyapunov function, see, e.g., [25]. From an optimal control viewpoint, we consider a quadratic state cost with positive semi-definite weight matrix and no input cost. **Remark 2**.: We do not incorporate a cost term on the triggering rate in (2) since we will compare TTC and ETC under equal average triggering rates, cf. Section 4.3 and, e.g., [14]. Any cost component related to the average triggering rate can therefore be neglected for the comparison. We are well aware that the considered setup is simple and does not cover the vast variety of practically relevant settings available in the literature on cooperative control. However, the simplicity of the setup allows for its detailed analysis and understanding. The motivation behind this work is to provide theoretical results for the performance comparison between TTC and ETC in such a simple cooperative setup and, thereby, uncover new phenomena and differences in the outcome when compared to existing results. For that purpose, note additionally that we study the same setup as in [1] except for the fact that we consider a cooperative control goal in a distributed setting. This will allow us to contrast the findings later on. ### Optimal Control Input Given the setup described in the previous section, we can now also characterize the optimal control input with respect to (2). As it turns out to be independent of the deployed triggering scheme, we consider it to be part of the problem formulation. **Proposition 1**.: _Given the performance measure (2), the agents (1) are optimally controlled with a causal impulsive control input \(u(t)=[u_{1}(t),\ldots,u_{N}(t)]^{\top}\) that ensures_ \[\lim_{\epsilon\downarrow 0}x(t_{k}+\epsilon)^{\top}Lx(t_{k}+\epsilon)=0\quad \forall(t_{k})_{k\in\mathbb{N}},\] _and is zero otherwise, i.e., \(u(t)\) resets all agents instantaneously to consensus at each triggering time instant._ Proof.: Can be found in the appendix. The resulting consensus point is irrelevant for our analysis. The key point is that any impulsive control input that resets the agents to any consensus configuration instantaneously when an event is triggered is optimal under the considered performance measure (2). 
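For reference, the equivalence between the quadratic form \(x^{\top}Lx\) in (2) and the edge sum stated below it follows from a standard identity for the Laplace matrix:
\[
x^{\top}Lx=x^{\top}(D-A)x=\sum_{i=1}^{N}d_{i}x_{i}^{2}-\sum_{(i,j)\in\mathcal{E}}x_{i}x_{j}=\frac{1}{2}\sum_{(i,j)\in\mathcal{E}}\left(x_{i}^{2}-2x_{i}x_{j}+x_{j}^{2}\right)=\frac{1}{2}\sum_{(i,j)\in\mathcal{E}}\left(x_{i}-x_{j}\right)^{2},
\]
where we used that \(\mathcal{E}\) contains both orientations of every undirected edge, so that \(\sum_{(i,j)\in\mathcal{E}}x_{i}^{2}=\sum_{(i,j)\in\mathcal{E}}x_{j}^{2}=\sum_{i=1}^{N}d_{i}x_{i}^{2}\) and \(x^{\top}Ax=\sum_{(i,j)\in\mathcal{E}}x_{i}x_{j}\).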
Between those triggering time instants, the agents behave according to standard Brownian motions. Let us give a few examples of control schemes that belong to the class in Proposition 1 to provide some intuition: 1. _All-to-all communication topology and one-to-all broadcast:_ The agents are controlled by the impulsive control input \[u_{i}(t)=\sum_{k\in\mathbb{N}}\sum_{j\in\mathcal{N}_{i}}\delta(t-t_{k}^{j})(x _{j}(t_{k}^{j})-x_{i}(t_{k}^{j})),\] (3) where \(\mathcal{N}_{i}=\{1,\ldots,N\}\backslash\{i\}\) and \(t_{k}^{j}\) denotes the transmission time instant of packet \(k\) from agent \(j\). Thus, the system is reset to consensus by transmitting an agent's state to all other agents. We can therefore also consider this as a leader-follower consensus problem, potentially with time-varying leader assignment. 2. _Multi-hop communication and network flooding:_ The same scheme also works for arbitrary connected graphs if all agents pass on received messages to their respective neighbors. This is referred to as multi-hop communication and distributes transmitted information within the network beyond local neighbor clusters, cf. [26] for a multi-hop protocol involving continuous communication. As long as the information is spread throughout the complete network of agents, we are able to apply (3) analogously in this case. A related method is the flooding algorithm in networks which refers to nodes passing on received information until it is known to all network participants, see, e.g., [27] and, for an advanced scheme, [28]. Note that the proposed methods usually induce a significant communication delay. In this work, we consider an idealized setup without delays such that consensus can be achieved instantaneously. We leave the incorporation of communication delays in the analysis for future research. 3. _Reset to the origin:_ The setup in which all agents are reset to the origin at each triggering instant also belongs to the defined control input class. In this special case, only the reset time instants need to be communicated to all agents in the network. In Fig. 1, we provide an exemplary state evolution of an MAS with 3 agents under a TTC scheme. The MAS is reset to consensus at two time instants. The specified examples are not an exhaustive list of possible schemes that belong to the described control input class. The following performance analysis holds for any scheme that fits into this class of control inputs. **Remark 3**.: Note that the performance analysis even remains the same if we have an unconnected graph since the performance measure would then also only require cluster consensus for minimal cost. ## 4 Main Results In this section, we introduce the two triggering schemes and derive and compare the related cost according to (2). ### Preliminaries Let us first establish some facts on the considered problem which we can build upon in the following analysis. Similar to [18], we find **Fact 1**.: _If the sequence of inter-event times is independent and identically distributed, it suffices to evaluate the cost over the first sampling interval_ \[J(T)=\frac{\mathbb{E}\Big{[}\frac{1}{2}\sum_{(i,j)\in\mathcal{S}}\int_{0}^{T} \left(x_{i}(t)-x_{j}(t)\right)^{2}\mathrm{d}t\Big{]}}{\mathbb{E}[T]},\] _where \(T=t_{1}\) is the inter-event time determined by the respective triggering scheme introduced in Sections 4.2 and 4.3._ Figure 1: Exemplary MAS state evolution under TTC with constant inter-event time \(T_{\mathrm{TT}}=1\). Proof.: Can be found in the appendix. 
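To make this control class concrete, the following Monte Carlo sketch simulates such an impulsive reset-to-consensus scheme under periodic triggering and compares the resulting time-averaged cost with the closed-form time-triggered cost derived below. It is our own illustration; the network, step size, horizon, and number of runs are arbitrary choices.

```python
# N single-integrator agents driven by Brownian noise, reset instantaneously to a consensus
# configuration (here: their current average, as in the one-to-all broadcast example) at
# prescribed triggering instants, with the running cost x(t)^T L x(t) accumulated.
import numpy as np

rng = np.random.default_rng(0)

def simulate_cost(L, reset_times, horizon=20.0, dt=1e-3):
    n = L.shape[0]
    x = np.zeros(n)                                  # agents start in consensus
    events = iter(sorted(reset_times))
    next_reset = next(events, np.inf)
    cost, t = 0.0, 0.0
    while t < horizon:
        cost += x @ L @ x * dt                       # integrate the deviation from consensus
        x += rng.normal(scale=np.sqrt(dt), size=n)   # dx_i = dv_i between events (u_i = 0)
        t += dt
        if t >= next_reset:                          # impulsive input: reset to consensus
            x[:] = x.mean()
            next_reset = next(events, np.inf)
    return cost / horizon                            # time-averaged cost, cf. (2)

# Complete graph on N = 4 agents and periodic (time-triggered) resets with T_TT = 1
N, T_TT = 4, 1.0
L = N * np.eye(N) - np.ones((N, N))
estimate = np.mean([simulate_cost(L, np.arange(T_TT, 20.0, T_TT)) for _ in range(10)])
print(f"empirical cost {estimate:.2f} vs. |E|*T_TT/2 = {N*(N-1)*T_TT/2:.1f}")  # TTC cost derived below
```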
Denoting \(Q(T)\coloneqq\mathbb{E}\!\left[\frac{1}{2}\sum_{(i,j)\in\mathcal{E}}\int_{0}^{T} \left(x_{i}(t)-x_{j}(t)\right)^{2}\mathrm{d}t\right]\), we can express the numerator of the cost as follows. **Fact 2**.: _Let \(T\) be a symmetric stopping time, i.e., if one replaces \(v_{i}\) by \(-v_{i}\) for any \(i\in\{1,\ldots,N\}\) the value of \(T\) does not change, as well as independent of the direction, i.e., \(T\) does not change if \(v_{i}\) is interchanged with \(v_{j}\) for any \(i,j\in\{1,\ldots,N\}\). Then, given Fact 1 being applicable, we can establish_ \[Q(T)=|\mathcal{E}|\cdot\mathbb{E}\!\left[\int_{0}^{T}v_{1}(t)^{2}\,\mathrm{d} t\right].\] Proof.: We start with the expression \[Q(T)=\mathbb{E}\!\left[\frac{1}{2}\sum_{(i,j)\in\mathcal{E}}\int_{0}^{T}(v_{i }(t)-v_{j}(t))^{2}\,\mathrm{d}t\right]=\mathbb{E}\!\left[\frac{1}{2}\sum_{(i, j)\in\mathcal{E}}\int_{0}^{T}(v_{i}(t)^{2}-2v_{i}(t)v_{j}(t)+v_{j}(t)^{2})\, \mathrm{d}t\right].\] By assumption, the stopping time \(T\) is symmetric. Observe that the distribution of the random variable \(\int_{0}^{T}v_{i}(t)v_{j}(t)\,\mathrm{d}t\) is symmetric as well since replacing \(v_{i}\) by \(-v_{i}\) only changes the sign of the integrand. Therefore, the expectation of the mixed term is zero for any \(i\neq j\). This shows \[Q(T)=\mathbb{E}\!\left[\int_{0}^{T}\sum_{(i,j)\in\mathcal{E}:\atop t<j}(v_{i} (t)^{2}+v_{j}(t)^{2})\,\mathrm{d}t\right]=\mathbb{E}\!\left[\int_{0}^{T}\sum _{i=1}^{N}d_{i}v_{i}(t)^{2}\,\mathrm{d}t\right]=|\mathcal{E}|\cdot\mathbb{E} \!\left[\int_{0}^{T}v_{1}(t)^{2}\,\mathrm{d}t\right],\] using that \(T\) is independent of the direction. ### Time-Triggered Control As a comparison benchmark for ETC, we choose a TTC scheme in which the transmission events are scheduled periodically with a constant inter-event time \(T_{\mathrm{TT}}=t_{k+1}-t_{k}=\mathrm{const.}\) for all \(k\in\mathbb{N}_{0}\). This is in line with the consistency definition for ETC introduced and considered in [12; 13; 14; 15; 16]. Given the first setup example from Section 3.2, an all-to-all communication topology and one-to-all broadcast, this implies that the transmission of one agent's state to all the others takes place with a fixed frequency. This state information is then used by all other agents to reset their states to consensus. How the transmitting agent is chosen in this particular setting plays no role for the performance analysis to come. An exemplary MAS state evolution under TTC is shown in Fig. 1. Deploying this triggering scheme in the considered setup leads to the following theorem. **Theorem 1**.: _Suppose agents (1) are controlled by the impulsive input from Proposition 1 with constant inter-event times \(T_{\mathrm{TT}}\). Then, the cost (2) is given by_ \[J_{\mathrm{TT}}(T_{\mathrm{TT}})=|\mathcal{E}|\cdot\frac{T_{\mathrm{TT}}}{2}.\] Proof.: Since the inter-event times \(T_{\mathrm{TT}}\) are identical and constant, it suffices to analyze the interval between two transmissions and Facts 1 and 2 hold. Thus, we can write (2) as \(J_{\mathrm{TT}}(T_{\mathrm{TT}})=Q(T_{\mathrm{TT}})/T_{\mathrm{TT}}\) with \[Q(T_{\mathrm{TT}})=|\mathcal{E}|\cdot\int_{0}^{T_{\mathrm{TT}}}\mathbb{E}\! \left[v_{1}(t)^{2}\right]\mathrm{d}t=|\mathcal{E}|\cdot\int_{0}^{T_{\mathrm{ TT}}}t\,\mathrm{d}t=|\mathcal{E}|\cdot\frac{T_{\mathrm{TT}}^{2}}{2},\] as required. **Remark 4**.: The result for the cost in the TTC case is the same as in [1] but scaled by twice the number of connected agent pairs \(|\mathcal{E}|\). 
This is also related to the results for non-cooperative NCS in [19] and related papers where the cost scales with the number of network participants \(N\). ### Event-Triggered Control In ETC, the necessity to communicate is captured by a continuously evaluated triggering condition. Once the condition is fulfilled, a transmission event is initiated by the respective agent. Since we are operating in a distributed setting, each agent evaluates its triggering condition locally. Consequently, only local information is to be used in the respective triggering rule. As the agents incorporate, for example, local state information in the triggering decision, ETC is often argued to lower the communication rate while maintaining the same performance level as TTC, see, e.g., [4]. For this work, we use \[|x_{i}(t)-x_{i}(t_{\hat{k}})|\geq\Delta \tag{4}\] as the triggering condition where \(\hat{k}=\max\left\{k\in\mathbb{N}_{0}\mid t_{k}\leq t\right\}\) and \(\Delta>0\). It compares the local state deviation from the state at the last event \(x_{i}(t_{\hat{k}})\) to a threshold \(\Delta\). This form of triggering rule is quite common in distributed setups, see, for example, [25]. Note that we use a triggering condition that is analogous to the one in [18; 21; 1]. **Remark 5**.: We choose the same threshold \(\Delta\) for all agents. Note that strictly speaking this is only the best choice if the contribution of each agent's state to the cost is equal, namely if all agents \(i\) have the same degree \(d_{i}\). For heterogeneous degrees \(d_{i}\), a heterogeneous choice of \(\Delta_{i}\) might be advantageous. Deriving the optimal choice for \(\Delta_{i}\) in this case is beyond the scope of this paper and, thus, the analysis for heterogeneous \(\Delta_{i}\) is not considered in the remainder of this paper. Considering the first setup example from Section 3.2 with an all-to-all communication topology and one-to-all broadcast, the described ETC scheme leads to one agent broadcasting its state to the others once the respective local triggering condition is fulfilled. As in the time-triggered case, this state information is then used by all other agents to reset their states to consensus. The triggering agent is thus chosen as the transmitting agent, resulting in a distributed scheme. An exemplary MAS state evolution under the analyzed ETC scheme is shown in Fig. 2. Utilizing Facts 1 and 2 again allows us to analyze the cost on the first sampling interval, also in the ETC case. In contrast to the TTC analysis, the length of this time interval is described by a probabilistic stopping time \(T_{\text{ET}}(\Delta)=\inf\{t>0\mid\exists i\in\{1,\ldots,N\}:|x_{i}(t)|=\Delta\}\). While we are not able to derive an explicit expression for the cost \(J_{\text{ET}}(\Delta):=J(T_{\text{ET}}(\Delta))\) for the ETC case, we can still arrive at results on its relationship to the TTC cost \(J_{\text{TT}}(T_{\text{TT}})\) derived in Section 4.2. Note that the latter relationship is also what we are primarily interested in for this work. In order to facilitate a fair comparison between \(J_{\text{ET}}(\Delta)\) and \(J_{\text{TT}}(T_{\text{TT}})\), we require \(T_{\text{TT}}=\mathbb{E}[T_{\text{ET}}(\Delta)]\) which results in the same average triggering frequency for both schemes. This is again inspired by the line of thought for the consistency property of ETC schemes considered in [12; 13; 14; 15; 16].
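As an illustration of how condition (4) could be evaluated online, the sketch below implements the level-triggering rule for a single agent in a discretized setting. The class and method names are our own choices and are not part of the analysis.

```python
# Local evaluation of the level-triggering rule (4) by one agent: the agent keeps the state
# recorded at the last (global) event and fires once its deviation reaches the threshold Delta.

class LevelTrigger:
    def __init__(self, delta: float):
        self.delta = delta
        self.x_at_last_event = 0.0        # agents start in consensus at 0

    def should_trigger(self, x_i: float) -> bool:
        """Triggering condition (4): |x_i(t) - x_i(t_khat)| >= Delta."""
        return abs(x_i - self.x_at_last_event) >= self.delta

    def notify_event(self, x_i: float) -> None:
        """Called whenever any agent triggers and the network is reset to consensus."""
        self.x_at_last_event = x_i

# Usage sketch: trig = LevelTrigger(delta=1.0); if trig.should_trigger(x[i]): broadcast and reset.
```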
Note that this constraint embodies the bridge between the triggering threshold \(\Delta\) determining \(\mathbb{E}[T_{\text{ET}}(\Delta)]\) for the ETC scheme and the constant inter-event time \(T_{\text{TT}}\) in the TTC case. Let us first establish the following fact and lemma, which will enable us to focus on the case \(\Delta=1\) for the derivations to come. Figure 2: Exemplary MAS state evolution under ETC with \(\Delta=2\). **Fact 3**.: _We can show that the following scaling relationships hold true_ \[Q_{\mathrm{ET}}(\Delta)=\Delta^{4}Q_{\mathrm{ET}}(1),\qquad\mathbb{E}[T_{\mathrm{ET}}(\Delta)]=\Delta^{2}\mathbb{E}[T_{\mathrm{ET}}(1)],\qquad\mathbb{V}[T_{\mathrm{ET}}(\Delta)]=\Delta^{4}\mathbb{V}[T_{\mathrm{ET}}(1)],\] _where \(Q_{\mathrm{ET}}(\Delta):=Q(T_{\mathrm{ET}}(\Delta))\)._ Proof.: Let us show the first equality. Indeed, \[\begin{aligned}Q_{\mathrm{ET}}(\Delta)&=|\mathcal{E}|\cdot\mathbb{E}\left[\int_{0}^{T_{\mathrm{ET}}(\Delta)}v_{1}(s)^{2}\,\mathrm{d}s\right]\\&=|\mathcal{E}|\cdot\mathbb{E}\left[\int_{0}^{\inf\{t>0\,\mid\,\exists i:\,|v_{i}(t)|=\Delta\}}v_{1}(s)^{2}\,\mathrm{d}s\right]\\&=|\mathcal{E}|\cdot\mathbb{E}\left[\int_{0}^{\inf\{t>0\,\mid\,\exists i:\,|\Delta v_{i}(t/\Delta^{2})|=\Delta\}}\Delta^{2}v_{1}(s/\Delta^{2})^{2}\,\mathrm{d}s\right]\\&=|\mathcal{E}|\cdot\mathbb{E}\left[\int_{0}^{\Delta^{2}\inf\{t^{\prime}>0\,\mid\,\exists i:\,|v_{i}(t^{\prime})|=1\}}\Delta^{2}v_{1}(s/\Delta^{2})^{2}\,\mathrm{d}s\right]\\&=\Delta^{2}|\mathcal{E}|\cdot\mathbb{E}\left[\int_{0}^{\inf\{t^{\prime}>0\,\mid\,\exists i:\,|v_{i}(t^{\prime})|=1\}}v_{1}(s^{\prime})^{2}\,\Delta^{2}\,\mathrm{d}s^{\prime}\right]\\&=\Delta^{4}|\mathcal{E}|\cdot\mathbb{E}\left[\int_{0}^{\inf\{t^{\prime}>0\,\mid\,\exists i:\,|v_{i}(t^{\prime})|=1\}}v_{1}(s^{\prime})^{2}\,\mathrm{d}s^{\prime}\right]\\&=\Delta^{4}Q_{\mathrm{ET}}(1).\end{aligned}\] In the third step, we used the scaling property of Brownian motions, i.e., that \((v_{i}(t))_{t\geq 0}\) and \((\Delta v_{i}(t/\Delta^{2}))_{t\geq 0}\) are equal in distribution, and, in the fifth step, we applied the linear integral substitution \(s=\Delta^{2}s^{\prime}\). All other formulas are proved similarly. Thus, we can derive relevant quantities for the considered setup for \(\Delta=1\) and use Fact 3 to generalize the found expressions to arbitrary choices of \(\Delta\). Moreover, we obtain the following lemma as a direct consequence of Fact 3. **Lemma 1**.: _Let \(J_{\mathrm{TT}}(\mathbb{E}[T_{\mathrm{ET}}(\Delta)])\) denote the cost under constant inter-event times \(T_{\mathrm{TT}}=\mathbb{E}[T_{\mathrm{ET}}(\Delta)]\). Then, the cost comparison between \(J_{\mathrm{ET}}(\Delta)\) and \(J_{\mathrm{TT}}(\mathbb{E}[T_{\mathrm{ET}}(\Delta)])\) is not influenced by the choice of \(\Delta\)._ Proof.: Due to Fact 3 together with Fact 1 and Theorem 1, we have \[J_{\mathrm{ET}}(\Delta)=\Delta^{2}J_{\mathrm{ET}}(1),\qquad J_{\mathrm{TT}}(\mathbb{E}[T_{\mathrm{ET}}(\Delta)])=\Delta^{2}J_{\mathrm{TT}}(\mathbb{E}[T_{\mathrm{ET}}(1)]).\] Thus, we can neglect the scaling factor \(\Delta^{2}\) when comparing the two costs or computing their ratio. In summary, Fact 3 and Lemma 1 allow us to concentrate on the case \(\Delta=1\) for the remainder of this section. In addition, they enable us to focus on \(\Delta=1\) in the simulation in Section 5. Before arriving at the main result of this section, we need to characterize the asymptotic order of the moments of \(T_{\mathrm{ET}}(1)\).
**Lemma 2**.: _We have_ \[\mathbb{E}[T_{\mathrm{ET}}(1)] \sim\frac{1}{2\ln N}, \tag{5}\] \[\mathbb{E}\Big{[}T_{\mathrm{ET}}(1)^{2}\Big{]} \sim\frac{1}{(2\ln N)^{2}},\] (6) \[\mathbb{V}[T_{\mathrm{ET}}(1)] \sim\frac{\pi^{2}/24}{(\ln N)^{4}}, \tag{7}\] _where \(a_{n}\sim b_{n}\) means that \(\lim_{n\to\infty}a_{n}/b_{n}=1\) for arbitrary series \((a_{n})_{n\in\mathbb{N}}\), \((b_{n})_{n\in\mathbb{N}}\)._ Proof.: Throughout the proof, we drop the arguments indicating \(\Delta=1\) to simplify notation. Let \(T_{j}\coloneqq\inf\{t>0:|x_{j}(t)|=1\}\) for all \(j\in\{1,\ldots,N\}\) and, thus, \(T_{\mathrm{ET}}=\inf_{1\leq j\leq N}T_{j}\). Using the tail behavior derived from [29], Theorem 7.45, \[\mathbb{P}(T_{j}\leq w)=\mathbb{P}(\sup_{0\leq j\leq w}|v_{j}(t)|\geq 1)= \mathbb{P}(\sup_{0\leq t\leq 1}|v_{j}(t)|\geq w^{-1/2})\stackrel{{ w \to 0}}{{\sim}}\frac{\kappa}{w^{-1/2}}\,\exp(-w^{-1}/2),\] for \(\kappa=\sqrt{2/\pi}\), and the independence of the exit times \(T_{j}\), one can derive the limit theorem \[2(\ln N)^{2}\left(T_{\mathrm{ET}}-a_{N}\right)\Rightarrow G,\qquad\text{as }N \rightarrow\infty, \tag{8}\] with \[a_{N}\coloneqq\frac{1}{2\ln N}-\frac{\ln\frac{\kappa}{(2\ln N)^{1/2}}}{2(\ln N )^{2}},\] and where \(\Rightarrow\) stands for convergence in distribution. Moreover, \(G\) is a Gumbel-distributed random variable, \[\mathbb{P}(G\geq r)=\exp(-\exp(r)).\] Equation (8) can be derived from [30], Theorem 2.1.6. A direct proof is given here: Indeed, for any \(r\in\mathbb{R}\), we have \[\mathbb{P}(2(\ln N)^{2}\left(T_{\mathrm{ET}}-a_{N}\right)\geq r) =\mathbb{P}(T_{\mathrm{ET}}\geq\frac{r}{2(\ln N)^{2}}+a_{N})= \mathbb{P}(\forall j=1,\ldots,N:T_{j}\geq\frac{r}{2(\ln N)^{2}}+a_{N})\] \[=\mathbb{P}(T_{1}\geq\frac{r}{2(\ln N)^{2}}+a_{N})^{N}=\left(1- \mathbb{P}(T_{1}<\frac{r}{2(\ln N)^{2}}+a_{N})\right)^{N}\] \[\sim\left(1-\frac{\kappa}{c_{N}}\exp\!\left(-\frac{1}{2}\!\left( \frac{r-\ln\frac{\kappa}{c_{N}}}{2(\ln N)^{2}}+\frac{1}{2\ln N}\right)^{\!-1 }\right)\right)^{N}\] \[=\left(1-\frac{\kappa}{c_{N}}\exp\!\left(-\ln N\left(\frac{r-\ln \frac{\kappa}{(2\ln N)^{1/2}}}{\ln N}+1\right)^{\!-1}\right)\right)^{N}\] \[\sim\left(1-\frac{\kappa}{c_{N}}\exp\!\left(-\ln N\left(1-\frac{r -\ln\frac{\kappa}{(2\ln N)^{1/2}}}{\ln N}\right)\right)\right)^{N}\] \[=\left(1-\frac{\kappa}{c_{N}}\frac{1}{N}\exp\!\left(r-\ln\frac{ \kappa}{c_{N}}\right)\right)^{N}=\left(1-\frac{1}{N}\exp(r)\right)^{N}\sim e^{ -r},\] as required and with \(c_{N}=(2\ln N)^{1/2}\). The limit theorem (8) is accompanied by the convergence of the first and second moment. The proof for this is provided in the appendix to allow for a more concise presentation of the results. It builds upon Lebesgue's dominated convergence theorem where we need to show that \(\mathbb{P}(2(\ln N)^{2}\left(T_{\mathrm{ET}}-a_{N}\right)\geq r)\) and \(2r\mathbb{P}(2(\ln N)^{2}\left(T_{\mathrm{ET}}-a_{N}\right)\geq r)\) are upper bounded by integrable functions. 
Taking expectations in (8) gives \[2(\ln N)^{2}(\mathbb{E}[T_{\mathrm{ET}}]-a_{N})\rightarrow\mathbb{E}[G]\,.\] This shows \[\mathbb{E}[T_{\mathrm{ET}}] =a_{N}+\frac{\mathbb{E}[G]}{2(\ln N)^{2}}(1+o(1)) \tag{9}\] \[=\frac{1}{2\ln N}+\mathcal{O}\!\left(\frac{\ln\ln N}{(\ln N)^{2} }\right).\] Similarly, taking second moments in (8) gives \[4(\ln N)^{4}\mathbb{E}\!\left[\left(T_{\mathrm{ET}}-a_{N}\right)^{2}\right] \rightarrow\mathbb{E}\!\left[G^{2}\right].\] This shows \[\mathbb{E}\Big{[}T_{\mathrm{ET}}^{2}\Big{]}-2a_{N}\mathbb{E}[T_{\mathrm{ET}}]+a_{ N}^{2}=\frac{\mathbb{E}\Big{[}G^{2}\Big{]}}{4(\ln N)^{4}}(1+o(1)),\] which, together with (9), yields \[\mathbb{E}\Big{[}T_{\mathrm{ET}}^{2}\Big{]}=a_{N}^{2}+2a_{N}\frac{\mathbb{E}[G ]}{2(\ln N)^{2}}(1+o(1))+\frac{\mathbb{E}\Big{[}G^{2}\Big{]}}{4(\ln N)^{4}}(1+o (1))=\frac{1}{4(\ln N)^{2}}+\mathcal{O}\bigg{(}\frac{1}{(\ln N)^{3}}\bigg{)}.\] Finally, the limit theorem can be re-written as \[2(\ln N)^{2}(T_{\mathrm{ET}}-\mathbb{E}[T_{\mathrm{ET}}])+2(\ln N)^{2}( \mathbb{E}[T_{\mathrm{ET}}]-a_{N})\Rightarrow G.\] Squaring, taking expectations, and dividing by \(4(\ln N)^{4}\) gives \[\mathbb{E}\Big{[}(T_{\mathrm{ET}}-\mathbb{E}[T_{\mathrm{ET}}])^{2}\Big{]}+( \mathbb{E}[T_{\mathrm{ET}}]-a_{N})^{2}=\frac{\mathbb{E}\Big{[}G^{2}\Big{]}}{4 (\ln N)^{4}}(1+o(1)).\] This implies \[\mathbb{V}[T_{\mathrm{ET}}]=\frac{\mathbb{E}\Big{[}G^{2}\Big{]}}{4(\ln N)^{4 }}(1+o(1))-(\mathbb{E}[T_{\mathrm{ET}}]-a_{N})^{2}=\frac{\mathbb{E}\Big{[}G^{ 2}\Big{]}}{4(\ln N)^{4}}(1+o(1))-\frac{\mathbb{E}[G]^{2}}{4(\ln N)^{4}}(1+o(1) )=\frac{\mathbb{V}[G]}{4(\ln N)^{4}}(1+o(1)),\] which proves (7) because \(\mathbb{V}[G]=\pi^{2}/6\). With this result, we have also shown a logarithmic dependence of the moments of \(T_{\mathrm{ET}}(\Delta)\) on the number of agents. This is a crucial difference to the non-cooperative NCS case, e.g., studied in [1, 18, 21], and caused by the distributed nature of the considered problem. Leveraging the previous lemma, we arrive at the following main theorem. **Theorem 2**.: _Suppose agents (1) are controlled by the impulsive input from Proposition 1 with inter-event times \(T_{\mathrm{ET}}(\Delta)=\inf\{t>0\mid\exists t\in\{1,\ldots,N\}:|x_{i}(t)|=\Delta\}\). Then, there exists an \(N_{0}\) such that for all \(N\geq N_{0}\), we have_ \[J_{\mathrm{ET}}(\Delta)>J_{\mathrm{TT}}(\mathbb{E}[T_{\mathrm{ET}}(\Delta)]),\] _i.e., TTC outperforms ETC for all \(N\geq N_{0}\) under equal average triggering rates._ Proof.: Due to Lemma 1, we can concentrate on the case \(\Delta=1\). Thus, we again use simplified notation and do not state \(\Delta=1\) explicitly in our formulas. As before, let \(T_{j}\coloneqq\inf\{t>0:|x_{j}(t)|=1\}\) for all \(j\in\{1,\ldots,N\}\) and, thus, \(T_{\mathrm{ET}}=\inf_{1\leq j\leq N}T_{j}\). Moreover, let \(\tau\coloneqq\inf_{2\leq j\leq N}T_{j}\geq T_{\mathrm{ET}}\). The key estimate is \[\int_{0}^{T_{\mathrm{ET}}}v_{1}(t)^{2}\,\mathrm{d}t\geq\int_{0}^{\tau}v_{1}(t) ^{2}\,\mathrm{d}t\,(1-\mathds{1}_{\tau\neq T_{\mathrm{ET}}}). \tag{10}\] Let us evaluate the expectations. 
By independence of \(\tau\) and \(v_{1}\), we have \[\mathbb{E}\bigg{[}\int_{0}^{\tau}v_{1}(t)^{2}\,\mathrm{d}t\bigg{]}=\int_{0}^{ \infty}\mathbb{E}\Big{[}\mathds{1}_{t\leq\tau}v_{1}(t)^{2}\Big{]}\,\mathrm{d}t =\int_{0}^{\infty}\mathbb{E}[\mathds{1}_{t\leq\tau}]\,\mathbb{E}\Big{[}v_{1}(t )^{2}\Big{]}\,\mathrm{d}t=\mathbb{E}\bigg{[}\int_{0}^{\tau}t\,\mathrm{d}t \bigg{]}=\frac{\mathbb{E}\Big{[}\tau^{2}\Big{]}}{2}>\frac{\mathbb{E}\Big{[}T_{ \mathrm{ET}}^{2}\Big{]}}{2}, \tag{11}\] while since \(\tau\leq T_{2}\) and using the Cauchy-Schwarz inequality \[\mathbb{E}\bigg{[}\int_{0}^{\tau}v_{1}(t)^{2}\,\mathrm{d}t\cdot \mathds{1}_{\tau\neq T_{\mathrm{ET}}}\bigg{]} \leq\mathbb{E}\bigg{[}\int_{0}^{T_{2}}v_{1}(t)^{2}\,\mathrm{d}t \cdot\mathds{1}_{\tau\neq T_{\mathrm{ET}}}\bigg{]}\leq\mathbb{E}\bigg{[}\left( \int_{0}^{T_{2}}v_{1}(t)^{2}\,\mathrm{d}t\right)^{2}\bigg{]}^{1/2}\cdot\mathbb{E }\Big{[}\mathds{1}_{\tau\neq T_{\mathrm{ET}}}^{2}\Big{]}^{1/2}\] \[=C\cdot\mathbb{P}(\tau\neq T_{\mathrm{ET}})^{1/2}=C\,N^{-1/2}, \tag{12}\] where \(C=\mathbb{E}[(\int_{0}^{T_{2}}v_{1}(t)^{2}\,\mathrm{d}t)^{2}]^{1/2}\) does not depend on the number of agents \(N\). The last step holds because \(\tau\neq T_{\mathrm{ET}}\) if and only if the process \(v_{1}\) is the first to exit \([-1,1]\). By symmetry, this has probability equal to \(1/N\). Putting (10), (11), and (12) together, one obtains \[\mathbb{E}\!\left[\int_{0}^{T_{\mathrm{ET}}}v_{1}(t)^{2}\,\mathrm{d}t\right] >\frac{\mathbb{E}\!\left[T_{\mathrm{ET}}^{2}\right]}{2}-C\,N^{-1/2}=\frac{ \mathbb{E}\!\left[T_{\mathrm{ET}}\right]^{2}}{2}+\frac{\mathbb{V}\!\left[T_{ \mathrm{ET}}\right]}{2}-C\,N^{-1/2}. \tag{13}\] Next, the definition of the limit shows that (7) implies \[\frac{\mathbb{V}\!\left[T_{\mathrm{ET}}\right]}{(\pi^{2}/24)/(\ln N)^{4}}> \frac{1}{2}\] for all \(N\geq N_{1}\). Furthermore, let \(N_{2}\) be such that \(\frac{1}{4}\cdot\frac{\pi^{2}/24}{(\ln N)^{4}}-C\,N^{-1/2}>0\) for all \(N\geq N_{2}\) and set \(N_{0}\coloneqq\max(N_{1},N_{2})\). Plugging the inequalities into (13), we see that for \(N\geq N_{0}\) \[\frac{1}{\left|\mathcal{E}\right|}\,Q_{\mathrm{ET}}=\mathbb{E}\!\left[\int_{0 }^{T_{\mathrm{ET}}}v_{1}(t)^{2}\,\mathrm{d}t\right]>\frac{\mathbb{E}\!\left[T _{\mathrm{ET}}\right]^{2}}{2}+\frac{1}{4}\cdot\frac{\pi^{2}/24}{(\ln N)^{4}}-C \,N^{-1/2}>\frac{\mathbb{E}\!\left[T_{\mathrm{ET}}\right]^{2}}{2}=\frac{1}{ \left|\mathcal{E}\right|}\,Q(T_{\mathrm{TT}}=\mathbb{E}\!\left[T_{\mathrm{ET} }\right]),\] where we also used Fact 2 in the first step and Theorem 1 in the last step. Multiplying both sides with \(\left|\mathcal{E}\right|\!/\mathbb{E}\!\left[T_{\mathrm{ET}}\right]\) gives the desired inequality. Thus, we have proved that ETC is not necessarily outperforming TTC if we consider a distributed setup with cooperative agents. The common result supported by works like [1], that ETC schemes outperform TTC, can therefore not be simply transferred to this distributed setting. We have also shown that this finding holds for any undirected connected communication topology in the considered setting. Note that the independence from the topology is rooted in the infinitely fast control capabilities by impulsive control inputs and instantaneous communication. The communication topology might indeed play a role under more realistic assumptions like bounded inputs or non-zero communication delays. Building upon the results so far, we are also able to characterize the asymptotic order of the performance measure in the following corollary. 
**Corollary 1**.: _The asymptotic order of the cost (2) as a function of the number of agents under both triggering schemes can be expressed as_ \[J_{\mathrm{ET}}(1)\sim J_{\mathrm{TT}}(\mathbb{E}\!\left[T_{\mathrm{ET}}(1) \right])\sim\frac{\left|\mathcal{E}\right|}{4\ln N}.\] Proof.: Let us again omit the arguments indicating \(\Delta=1\). Utilizing Theorem 1 and plugging in (5) shows the relationship for \(J_{\mathrm{TT}}(\mathbb{E}\!\left[T_{\mathrm{ET}}\right])\). The lower bound for \(J_{\mathrm{ET}}\) follows from Theorem 2. For the upper bound, observe that \[\mathbb{E}\!\left[\int_{0}^{T_{\mathrm{ET}}}v_{1}(t)^{2}\,\mathrm{d}t\right] \leq\mathbb{E}\!\left[\int_{0}^{\tau}v_{1}(t)^{2}\,\mathrm{d}t\right]=\frac{ \mathbb{E}\!\left[\tau^{2}\right]}{2}\sim\frac{1}{2(2\ln(N-1))^{2}}\sim\frac{ 1}{2(2\ln N)^{2}},\] where we used the notation from the last proof and the fact that \(\tau\) has the same distribution as \(T_{\mathrm{ET}}\) for the dimension \(N-1\). Utilizing \(J_{\mathrm{ET}}=\left|\mathcal{E}\right|\cdot\mathbb{E}\!\left[\int_{0}^{T_{ \mathrm{ET}}}v_{1}(t)^{2}\,\mathrm{d}t\right]\!/\mathbb{E}\!\left[T_{\mathrm{ ET}}\right]\) yields the desired result. Thus, on the one hand, the cost for ETC and TTC grows with the same order for large numbers of agents. On the other hand, this does not imply that the difference between \(J_{\mathrm{ET}}(\Delta)\) and \(J_{\mathrm{TT}}(\mathbb{E}\!\left[T_{\mathrm{ET}}(\Delta)\right])\) vanishes for large \(N\). Given the results in this section, especially Theorem 2, we can thus conclude that, in the considered distributed setting, the ETC scheme is inferior to the TTC scheme for large numbers of agents and equal average triggering rates. This result is in contrast with the findings of similar analyses for non-cooperative setups under the assumption of delay- and loss-free communication such as [1]. Moreover, the relationship between TTC and ETC performance might well depend on the number of agents or network participants \(N\) when considering cooperative settings. While it remains unclear how these results generalize to other distributed problems, they still point out that performance advantages of ETC can behave differently in some distributed settings when compared to their non-cooperative counterpart. This work might therefore serve as a starting point for a careful evaluation of ETC performance for a wider range of distributed settings. ## 5 Simulation In this section, we perform simulations to support our theoretical findings. A simulative performance comparison involving ETC schemes is in general challenging since closed-form expressions for the expected inter-event time are rarely obtainable. Therefore, the constraint of equal expected inter-event times for different triggering schemes is hard to enforce exactly in simulation. In our case though, we can resolve this problem by estimating the expected inter-event time of the ETC scheme from simulation results and then using this estimate as inter-event time in the closed-form expression for the cost of the TTC scheme. Consequently, we are able to enforce the constraint \(T_{\text{TT}}=\mathbb{E}[T_{\text{ET}}(\Delta)]\) exactly in our setup when comparing costs of the triggering schemes. Thus, we simulate the described MAS including the impulsive control law under the ETC scheme. 
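The sketch below mirrors this estimation strategy at a much smaller scale: per Monte Carlo run it samples one first sampling interval, records \(T_{\text{ET}}(1)\) and \(\int_{0}^{T_{\text{ET}}}v_{1}(t)^{2}\,\mathrm{d}t\), and forms the costs via Facts 1 and 2 and Theorem 1. The step size, run counts, and network sizes are our own reduced choices and differ from the study reported below; the printed reference value \(1/(2\ln N)\) is the asymptotic mean from Lemma 2.

```python
# Monte Carlo sketch of the ETC-vs-TTC cost-ratio estimation described above.
# The ratio J_ET / J_TT(E[T_ET]) does not depend on |E|, so |E| is not needed here.
import numpy as np

rng = np.random.default_rng(2)

def sample_first_interval(n_agents, dt=1e-3, delta=1.0):
    """Return (T_ET(1), int_0^{T_ET} v_1(t)^2 dt) for one realization (Euler-Maruyama)."""
    v = np.zeros(n_agents)
    t, integral = 0.0, 0.0
    while True:
        integral += v[0] ** 2 * dt
        v += rng.normal(scale=np.sqrt(dt), size=n_agents)
        t += dt
        if np.any(np.abs(v) >= delta):     # first agent to reach the threshold triggers
            return t, integral

for N in (2, 10, 40):
    samples = [sample_first_interval(N) for _ in range(500)]
    T = np.array([s[0] for s in samples])
    I = np.array([s[1] for s in samples])
    ratio = (I.mean() / T.mean()) / (T.mean() / 2.0)   # (J_ET/|E|) / (J_TT/|E|), cf. Facts 1, 2 and Thm. 1
    print(f"N = {N:2d}:  E[T_ET(1)] ~ {T.mean():.3f}  (1/(2 ln N) = {1 / (2 * np.log(N)):.3f}),"
          f"  J_ET/J_TT ~ {ratio:.3f}")
```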
With the aforementioned strategy, we can estimate the cost ratio \(J_{\text{ET}}(\Delta)/J_{\text{TT}}(\mathbb{E}[T_{\text{ET}}(\Delta)])\) for a varying number of agents \(N\) and thereby relate the two schemes in terms of performance. To be precise, we can use the simulation estimate of \(\mathbb{E}[T_{\text{ET}}(\Delta)]\) for a given \(N\) with the result from Theorem 1 to compute \(J_{\text{TT}}(\mathbb{E}[T_{\text{ET}}(\Delta)])\) exactly. Therefore, only the cost \(J_{\text{ET}}(\Delta)\) needs to be estimated based on simulation results. We refer the reader to the appendix for additional information on how to achieve this. For the simulation, we set \(\Delta=1\). We can do so without loss of generality due to Lemma 1. Moreover, we simulate the MAS with the Euler-Maruyama method for \(N\in\{2,3,\ldots,9,10,12,15,20,25,\ldots,80\}\) with a step size of \(10^{-4}\) s. For each \(N\), we perform \(10\,000\) Monte Carlo runs for estimating \(\mathbb{E}[T_{\text{ET}}(1)]\) and \(250\,000\) Monte Carlo runs for estimating \(Q_{\text{ET}}(1)\). As an analysis of the first sampling interval suffices for the cost, we can terminate a Monte Carlo run when the first event occurs. The resulting cost ratios with a minimum \(95\%\)-confidence interval are shown in Fig. 3. The derivations regarding the confidence interval can also be found in the appendix. Figure 3: Cost ratio of ETC over TTC. As predicted by our theoretical results, we find that the ETC scheme is outperformed by TTC for larger numbers of agents. For low numbers of agents \(N\), we observe a clear performance advantage of the ETC scheme in simulation. Note that this advantage is not guaranteed by the theoretical findings in this paper. In addition, the simulation results indicate that the critical number of agents \(N_{0}\) from Theorem 2 is likely between \(30\) and \(55\) agents, also supported by the depicted confidence intervals. Thus, for this particular setting, the critical number of agents \(N_{0}\) beyond which the TTC scheme outperforms the ETC scheme might well lie in a practically relevant range. ## 6 Conclusion In this work, we examined TTC and ETC performance in an MAS consensus setup with single-integrator agents, optimal control inputs and arbitrary undirected connected communication topologies. For this particular setting, we provided a complete proof that TTC outperforms ETC beyond a certain number of agents given any communication topology in the considered class. This is in striking contrast to the outcome of similar analyses for non-cooperative NCS. In addition, we characterized the asymptotic order of the performance measure in the number of agents and evaluated our results in a numerical simulation. This work points out that consistency considerations for ETC of MAS can be influenced by additional factors when compared to the non-cooperative NCS case. In particular, performance advantages of ETC over TTC might provably vanish in a distributed setting if the number of agents is large enough. The transfer of experience from the non-cooperative NCS to the MAS field and the creation of new event-triggering schemes should therefore be accompanied by a careful consideration of the impact of the number of agents on the performance relationship between TTC and ETC. In future work, we plan to investigate the root of the discovered performance disadvantage of ETC compared to TTC for sufficiently large agent numbers more closely.
Only this will allow examining and arguing about options to overcome or alleviate the found phenomenon. Moreover, we aim to incorporate network effects such as communication delays and packet loss in the analysis. Previous work in the non-cooperative NCS field has shown the relevance of such considerations. ## Appendix A Proof of Proposition 1 First, note that the considered performance measure (2) is minimized if \(\mathbb{E}[x(t)^{\top}Lx(t)]\) is minimized for all \(t\geq 0\). Second, the control input \(u(t)\) can only utilize state information up to the last triggering time instant, i.e., up to time \(t_{k}\) with \(\hat{k}=\max\{k\in\mathbb{N}_{0}\mid t_{k}\leq t\}\). Consequently, we can compute as follows \[\mathbb{E}\Big{[}x(t)^{\top}Lx(t)\Big{]} =\mathbb{E}\Bigg{[}\Bigg{(}v(t)+\int_{0}^{t}u(s)\,\mathrm{d}s\Bigg{)} ^{\top}L(*)\Bigg{]}\] \[=\mathbb{E}\Big{[}(v(t)-v(t_{k}))^{\top}L(*)\Big{]}+\sum_{k\in \mathbb{N}}H(t-t_{k})\mathbb{E}\Big{[}(v(t_{k})-v(t_{k-1}))^{\top}L(*)\Big{]}\] \[\quad+2\,\mathbb{E}\Bigg{[}(v(t)-v(t_{k}))^{\top}L\int_{0}^{t}u(s )\,\mathrm{d}s\Bigg{]}+2\sum_{k\in\mathbb{N}}H(t-t_{k})\mathbb{E}\Bigg{[}(v(t_ {k})-v(t_{k-1}))^{\top}L\int_{0}^{t}u(s)\,\mathrm{d}s\Bigg{]}\] \[\quad+\mathbb{E}\Bigg{[}\left(\int_{0}^{t}u(s)\,\mathrm{d}s\right) ^{\top}L(*)\Bigg{]}\] \[=\mathbb{E}\Big{[}(v(t)-v(t_{k}))^{\top}L(*)\Big{]}+\mathbb{E} \Bigg{[}\Bigg{(}v(t_{k})+\int_{0}^{t}u(s)\,\mathrm{d}s\Bigg{)}^{\top}L(*)\Bigg{]}, \tag{10}\] where \((*)\) abbreviates the respective counterpart in the quadratic form, \(v(t)=[v_{1}(t),\ldots,v_{N}(t)]\), \(H(\cdot)\) denotes the Heaviside step function, and \(v(t_{k})=\sum_{k\in\mathbb{N}}H(t-t_{k})(v(t_{k})-v(t_{k-1}))\). Moreover, we utilized the integrated version of the agent dynamics (1), the fact that the increments of a standard Brownian motion are independent, and that \(u(s)\) must be independent of \(v(s)\) for \(s,\tilde{s}\in(t_{k},t]\). The latter yields \(\mathbb{E}\Big{[}(v(t)-v(t_{k}))^{\top}L\int_{0}^{t}u(s)\,\mathrm{d}s\Big{]}=0\). As both terms in (A) are non-negative, we arrive at the following optimality condition \[\mathbb{E}\Bigg{[}\Bigg{(}v(t_{k})+\int_{0}^{t}u(s)\,\mathrm{d}s\Bigg{)}^{ \top}L(*)\Bigg{]}=\mathbb{E}\Bigg{[}\Bigg{(}x(t_{k})+\int_{t_{k}}^{t}u(s)\, \mathrm{d}s\Bigg{)}^{\top}L(*)\Bigg{]}\overset{!}{=}0\quad\forall t\geq 0. \tag{11}\] Evaluating the condition at \((t_{k})_{k\in\mathbb{N}}\) yields \(\mathbb{E}[x(t_{k})^{\top}Lx(t_{k})]=0\) which can be satisfied by ensuring \[x(t_{k})^{\top}Lx(t_{k})=0\quad\forall(t_{k})_{k\in\mathbb{N}} \tag{12}\] through the control inputs \(u(t_{k})\). As explained in Section 2, the Laplace matrix \(L\) is positive semi-definite in the considered setup. Thus, we can decompose it according to \(L=L^{\top}L\) which allows us to satisfy (11) by requiring \(L\left(x(t_{k})+\int_{t_{k}}^{t}u(s)\,\mathrm{d}s\right)=0\) for all \(t\geq 0\) or, equivalently, \[L\left(x(t_{k})+\int_{t_{k}}^{t}u(s)\,\mathrm{d}s\right)=0\quad\forall t\geq 0.\] Evaluating at \((t_{k})_{k\in\mathbb{N}}\) gives us \(Lx(t_{k})=0\). Thus, for any \(t\in\{t>0\mid t\notin(t_{k})_{k\in\mathbb{N}}\}\), we have \(L\int_{t_{1}}^{t}u(s)\,\mathrm{d}s=0\), and, consequently, \[Lu(t)=0\quad\forall t\in\{t>0\mid t\notin(t_{k})_{k\in\mathbb{N}}\}. \tag{10}\] While we have now characterized a class of optimal control inputs with conditions (11) and (10), we decide to fulfill the latter by choosing \(u(t)=0\) for all \(t\in\{t>0\mid t\notin(t_{k})_{k\in\mathbb{N}}\}\). 
This appears as the practically most relevant case and reduces the complexity in Section 3.2. Given this special case, we arrive at the result stated in Proposition 1. Examples for such control inputs are provided in Section 3.2. Note that all results shown in this paper generally hold for the class of optimal control inputs characterized by (11) and (10). ## Appendix B Proof of Fact 1 First, we compute as follows \[\mathbb{E}\!\left[\int_{0}^{M}\frac{1}{2}\sum_{(i,j)\in\mathcal{ E}}\left(x_{i}(t)-x_{j}(t)\right)^{2}\mathrm{d}t\right] =\frac{1}{2}\sum_{(i,j)\in\mathcal{E}}\mathbb{E}\!\left[\int_{0} ^{M}\left(x_{i}(t)-x_{j}(t)\right)^{2}\mathrm{d}t\right]\] \[=\frac{1}{2}\sum_{(i,j)\in\mathcal{E}}\!\left[\mathbb{E}\!\left[ \sum_{k=1}^{m(M)}\int_{t_{k-1}}^{t_{k}}\left(x_{i}(t)-x_{j}(t)\right)^{2} \mathrm{d}t\right]+\mathbb{E}\!\left[\int_{t_{m\in\mathbb{N}}}^{M}\left(x_{i}( t)-x_{j}(t)\right)^{2}\mathrm{d}t\right]\right], \tag{11}\] where \((m(M))_{M\in[0,\infty)}\) is a renewal process for the renewal time series \((t_{k})_{k\in\mathbb{N}}\). As the sequence of inter-event times is independent and identically distributed, the quantities \[y_{k}^{(i,j)}\coloneqq\int_{t_{k-1}}^{t_{k}}\left(x_{i}(t)-x_{j}(t)\right)^{2} \mathrm{d}t\] are independent and identically distributed as well. To see this, let \(\bar{v}_{i}(t)=v_{i}(t)-v_{i}(t_{k})\) for all \(t\in[t_{k},t_{k+1})\) and \(i\in\{1,\ldots,N\}\). Note that \(x_{i}(t)=x_{i}(t_{k})+\bar{v}_{i}(t)\) for all \(t\in[t_{k},t_{k+1})\), \(i\in\{1,\ldots,N\}\) and \(x_{i}(t_{k})=x_{j}(t_{k})\) for all \(i\), \(j\in\{1,\ldots,N\}\). By Wald's equation, we have \(\mathbb{E}\!\left[\sum_{k=1}^{m(M)}y_{k}^{(i,j)}\right]=\mathbb{E}\!\left[m(M )\right]\mathbb{E}\!\left[y_{1}^{(i,j)}\right]\). In addition, the second term in (11) has the upper bound \[\int_{t_{m\in\mathbb{N}}}^{M}\left(x_{i}(t)-x_{j}(t)\right)^{2}\mathrm{d}t \leq y_{m(M)+1}^{(i,j)}.\] Dividing by \(M\) and letting \(M\to\infty\) yields \[J=\frac{1}{2}\sum_{(i,j)\in\mathcal{E}}\lim_{M\to\infty}\frac{\mathbb{E}\! \left[m(M)\right]}{M}\cdot\mathbb{E}\!\left[y_{1}^{(i,j)}\right]=\frac{1}{ \mathbb{E}\!\left[T\right]}\cdot\frac{1}{2}\sum_{(i,j)\in\mathcal{E}}\mathbb{ E}\!\left[\int_{0}^{T}\left(x_{i}(t)-x_{j}(t)\right)^{2}\mathrm{d}t\right],\] since, by the renewal theorem, \(\lim_{M\to\infty}\frac{\mathbb{E}\!\left[m(M)\right]}{M}=\frac{1}{\mathbb{E} \!\left[T\right]}\). ## Appendix C Convergence of moments - Proof of Lemma 2 We still have to show the following lemma to complete the proof of Lemma 2. **Lemma 3**.: _The limit theorem (8) also implies the convergence of the first and second moment, namely_ \[\mathbb{E}\!\left[X_{N}\right]\to\mathbb{E}\!\left[G\right]\quad\text{and} \quad\mathbb{E}\!\left[X_{N}^{2}\right]\to\mathbb{E}\!\left[G^{2}\right],\] _where_ \[X_{N}\coloneqq 2(\ln N)^{2}\left(T_{\mathrm{ET}}-a_{N}\right)\] _and \(G\) is a Gumbel-distributed random variable. Furthermore, \(\kappa\) and \(a_{N}\) are defined as in (8)._ In the language of Lemma 3, the limit theorem (8) says that \(X_{N}\) converges weakly to \(G\). Proof.: The proof builds upon Lebesgue's dominated convergence theorem where we need to show that \(\mathbb{P}(X_{N}\geq r)\) and \(2r\mathbb{P}(X_{N}\geq r)\) are upper bounded by integrable functions. We will show the existence of integrable majorants for \(r\in\mathbb{R}\) by showing their existence on different ranges in \(r\) whose union covers \(\mathbb{R}\). 
Thereby, we arrive at a majorant for \(r\in\mathbb{R}\) as the sum of the majorants for the subregions in \(r\). _Preliminaries_ As in the beginning of the proof for Lemma 2, we have \[\mathbb{P}(T_{j}<w)=\mathbb{P}(\sup_{0\leq t\leq w}|v_{j}(t)|\geq 1)=\mathbb{P}( \sup_{0\leq t\leq 1}|v_{j}(t)|\geq w^{-1/2}).\] Using \[\mathbb{P}(\sup_{0\leq t\leq 1}v_{j}(t)\geq w^{-1/2})\leq\mathbb{P}(\sup_{0\leq t \leq 1}|v_{j}(t)|\geq w^{-1/2})\leq 2\mathbb{P}(\sup_{0\leq t\leq 1}v_{j}(t) \geq w^{-1/2}),\] as well as the fact that \(\sup_{0\leq t\leq 1}v_{j}(t)\) has the same distribution as \(|v_{j}(1)|\), and the standard Gaussian tail estimate, we see that there exist \(0<\kappa_{1}<\kappa_{2}<\infty\) and \(w_{0}>0\) such that, for all \(0<w<w_{0}\), we have \[\frac{\kappa_{1}}{w^{-1/2}}\,e^{-w^{-1}/2}\leq\mathbb{P}(T_{j}<w)\leq\frac{ \kappa_{2}}{w^{-1/2}}\,e^{-w^{-1}/2}.\] (C.1) Furthermore, we will use the inequalities \[1-x\leq(1+x)^{-1}\leq 1-x+x^{2},\qquad x\geq 0,\] (C.2) and \[\left(1-\frac{\alpha}{N}\right)^{N}\leq e^{-\alpha},\qquad\alpha\in\mathbb{R}.\] (C.3) _The range \(0\leq r\leq\frac{1}{2}\ln N\)_ For \(r>0\), we have for large \(N\) (uniformly in \(r\)) \[w\coloneqq\frac{r}{2(\ln N)^{2}}+a_{N}=\frac{r-\ln\frac{\kappa}{(2\ln N)^{1/2 }}}{2(\ln N)^{2}}+\frac{1}{2\ln N}\geq\frac{1}{2\ln N},\] (C.4) i.e., \(w^{-1/2}\leq(2\ln N)^{1/2}\). Furthermore, since \(r\leq\frac{1}{2}\ln N\), we have for large \(N\) (uniformly in \(r\)) \[(r-\ln\frac{\kappa}{(2\ln N)^{1/2}})^{2}\leq(r+\ln\ln N)^{2}=r^{2}+2r\ln\ln N +(\ln\ln N)^{2}\leq\frac{3}{4}r\ln N+(\ln\ln N)^{2}.\] (C.5) This shows that \[\mathbb{P}(X_{N}\geq r) =\mathbb{P}(T_{1}\geq\frac{r}{2(\ln N)^{2}}+a_{N})^{N}=\mathbb{P }(T_{1}\geq w)^{N}\] (C.6) \[=\,(1-\mathbb{P}(T_{1}<w))^{N}\] \[\stackrel{{\eqref{eq:preliminaries}}}{{\leq}}\left(1- \frac{\kappa_{1}}{w^{-1/2}}\,\exp\!\left(-\frac{w^{-1}}{2}\right)\right)^{N}\] \[\stackrel{{\eqref{eq:preliminaries}}}{{\leq}}\left(1- \frac{\kappa_{1}}{c_{N}}\,\exp\!\left(-\frac{1}{2}\left(\frac{r-\ln\frac{ \kappa}{c_{N}}}{2(\ln N)^{2}}+\frac{1}{2\ln N}\right)^{-1}\right)\right)^{N} =\left(1-\frac{\kappa_{1}}{c_{N}}\,\exp\!\left(-(\ln N)\left(\frac{r-\ln\frac{ \kappa}{c_{N}}}{\ln N}+1\right)^{-1}\right)\right)^{N}\] \[\stackrel{{\eqref{eq:preliminaries}}}{{\leq}}\left(1- \frac{\kappa_{1}}{c_{N}}\,\exp\!\left(-(\ln N)(1-b_{N}+b_{N}^{2})\right) \right)^{N}=\left(1-\frac{\kappa_{1}}{c_{N}}\,\frac{1}{N}\exp\!\left(r-\ln \frac{\kappa}{c_{N}}-\frac{(r-\ln\frac{\kappa}{c_{N}})^{2}}{\ln N}\right) \right)^{N}\] \[=\left(1-\frac{\kappa_{1}/\kappa}{N}\exp\!\left(r-\frac{(r-\ln \frac{\kappa}{c_{N}})^{2}}{\ln N}\right)\right)^{N}\] \[\stackrel{{\eqref{eq:preliminaries}}}{{\leq}}\exp\! \left(-\frac{\kappa_{1}}{\kappa}\,\exp\!\left(r-\frac{(r-\ln\frac{\kappa}{c_{N }})^{2}}{\ln N}\right)\right)\] \[\stackrel{{\eqref{eq:preliminaries}}}{{\leq}}\exp\! \left(-\frac{\kappa_{1}}{\kappa}\,\exp\!\left(\frac{r}{4}-1\right)\right),\] (C.7) where \(b_{N}=\frac{r-\ln(\kappa/c_{N})}{\ln N}\) and \(c_{N}=(2\ln N)^{1/2}\). In addition, we utilized \(r\in[0,\frac{1}{2}\ln N]\) for applying (C.1), (C.2), (C.4) and (C.5). _The range \(r>(\ln N)^{2}\)_ On this range, we have that for \(N\) large enough (uniformly in \(r\)) \[w=\frac{r-\ln\frac{\kappa}{(2\ln N)^{1/2}}}{2(\ln N)^{2}}+\frac{1}{2\ln N}\geq \frac{r}{2(\ln N)^{2}},\] (C.8) and \(r>(\ln N)^{2}\) implies \(w\geq 1/2\). 
Here, we can use the standard small deviation estimate as in [31, (1.3)]: For a constant \(c>0\) and all \(w\geq 1/2\), we have \[\mathbb{P}(T_{1}\geq w)=\mathbb{P}(\sup_{0\leq r\leq w}|v_{1}(t)|\leq 1)\leq e^ {-cw}.\] (C.9) Therefore, continuing in (C.6), we obtain \[\mathbb{P}(X_{N}\geq r)=\mathbb{P}(T_{1}\geq w)^{N}\stackrel{{ \eqref{eq:N}}}{{\leq}}e^{-cwN}\stackrel{{\eqref{eq:N}}}{{ \leq}}\exp\left(-c\frac{rN}{2(\ln N)^{2}}\right)\leq e^{-r},\] (C.10) where we utilized \(r>(\ln N)^{2}\) to apply (C.9). _The range \(\frac{1}{2}\ln N\leq r\leq(\ln N)^{2}\)_ Here, for \(N\) large enough (uniformly in \(r\)), we have \[w=\frac{r-\ln\frac{\kappa}{(2\ln N)^{1/2}}}{2(\ln N)^{2}}+\frac{1}{2\ln N} \geq\frac{3}{4\ln N}.\] (C.11) Therefore, we obtain \[\mathbb{P}(X_{N}\geq r)=\mathbb{P}(T_{1}\geq w)^{N} \stackrel{{\eqref{eq:N}}}{{\leq}}\mathbb{P}\left(T_{1 }\geq\frac{3}{4\ln N}\right)^{N}=\left(1-\mathbb{P}\left(T_{1}<\frac{3}{4\ln N }\right)\right)^{N}\] \[\stackrel{{\eqref{eq:N}}}{{\leq}}\left(1-\frac{ \kappa_{1}}{(\frac{4}{3}\ln N)^{1/2}}e^{-\frac{3}{3}\ln N}\right)^{N}=\left(1- \frac{\kappa_{1}}{N^{2/3}(\frac{4}{3}\ln N)^{1/2}}\right)^{N}=\left(1-\frac{ \kappa_{1}N^{1/3}}{N(\frac{4}{3}\ln N)^{1/2}}\right)^{N}\] \[\stackrel{{\eqref{eq:N}}}{{\leq}}\exp\left(-\frac{ \kappa_{1}N^{1/3}}{(\frac{4}{3}\ln N)^{1/2}}\right)\leq\exp\left(-N^{1/6} \right)\leq\exp\left(-e^{\sqrt{r}/6}\right),\] (C.12) where we utilized \(\frac{1}{2}\ln N\leq r\) to apply (C.11) and \(r\leq(\ln N)^{2}\) for the last step. Putting the three estimates (C.7), (C.10) and (C.12) together shows that we can find an integrable majorant for \(r\mapsto\mathbb{P}(X_{N}\geq r)\). Therefore, using the limit theorem (8) \[\mathbb{E}[X_{N}\mathds{1}_{X_{N}\geq 0}]=\int_{0}^{\infty}\mathbb{P}(X_{N} \geq r)\,\mathrm{d}r\to\int_{0}^{\infty}\mathbb{P}(G\geq r)\,\mathrm{d}r= \mathbb{E}[G\mathds{1}_{G>0}]\,.\] (C.13) Similarly, the three estimates above show that an integrable majorant for \(r\mapsto 2r\mathbb{P}(X_{N}\geq r)\) can be found giving \[\mathbb{E}\big{[}X_{N}^{2}\mathds{1}_{X_{N}\geq 0}\big{]}=\int_{0}^{\infty}2r \mathbb{P}(X_{N}\geq r)\,\mathrm{d}r\to\int_{0}^{\infty}2r\mathbb{P}(G\geq r) \,\mathrm{d}r=\mathbb{E}\Big{[}G^{2}\mathds{1}_{G>0}\Big{]}\,.\] (C.14) _The range \(r<0\)_ We finally handle \(\mathbb{E}[X_{N}\mathds{1}_{X_{N}<0}]\). Note that \[-\mathbb{E}[X_{N}\mathds{1}_{X_{N}<0}]=\int_{0}^{\infty}\mathbb{P}(-X_{N}>r)\, \mathrm{d}r=\int_{-\infty}^{0}\mathbb{P}(X_{N}<r)\,\mathrm{d}r,\] and since \(X_{N}\geq-\ln N+\ln(\kappa/(2\ln N)^{1/2})=:r_{\min}\), the integral is actually on the range \([r_{\min},0]\). This time, we will find an integrable majorant for \(r\mapsto\mathbb{P}(X_{N}<r)\) on this range. Recall from (C.6) that \[\mathbb{P}(X_{N}<r)=1-\mathbb{P}(X_{N}\geq r)=1-\mathbb{P}(T_{1}\geq w )^{N}=1-(1-\mathbb{P}(T_{1}<w))^{N},\] where, as above, \[w=w_{r}=\frac{r-\ln\frac{\kappa}{(2\ln N)^{1/2}}}{2(\ln N)^{2}}+ \frac{1}{2\ln N}\in\left[0,\,\frac{1}{2\ln N}-\frac{\ln\frac{\kappa}{(2\ln N)^{ 1/2}}}{2(\ln N)^{2}}\,\right].\] (C.15) Thus, it suffices to find an integrable majorant for \(r\mapsto 1-(1-\mathbb{P}(T_{1}<w_{r}))^{N}\). Recall that for \(x\in[0,1]\) we have \(1-(1-x)^{N}\leq Nx\). 
Therefore, we obtain \[1-(1-\mathbb{P}(T_{1}\leq w))^{N} \leq N\mathbb{P}(T_{1}<w)\] \[\stackrel{{\eqref{eq:x_1}}}{{\leq}}N\frac{\kappa_{2 }}{w^{-1/2}}\ \exp\left(-\frac{w^{-1}}{2}\right)=N\frac{\kappa_{2}}{w^{-1/2}}\ \exp\left(-\frac{1}{2}\left(\frac{r-\ln\frac{\kappa}{c_{N}}}{2(\ln N)^{2}}+ \frac{1}{2\ln N}\right)^{-1}\right)\] \[=N\frac{\kappa_{2}}{w^{-1/2}}\ \exp\left(-\ln N\left(\frac{r-\ln \frac{\kappa}{c_{N}}}{\ln N}+1\right)^{-1}\right)\] \[\stackrel{{\eqref{eq:x_1}}}{{\leq}}N\frac{\kappa_{2 }}{w^{-1/2}}\ \exp\left(-\ln N\left(1-\frac{r-\ln\frac{\kappa}{c_{N}}}{\ln N}\right) \right)=\frac{\kappa_{2}}{w^{-1/2}}\ \exp\left(r-\ln\frac{\kappa}{c_{N}}\right)\] \[=\frac{\kappa_{2}}{w^{-1/2}}\,e^{r}\,\frac{c_{N}}{\kappa}=e^{r} \,\frac{\kappa_{2}}{\kappa}\ (w\cdot 2\ln N)^{1/2}\] \[\stackrel{{\eqref{eq:x_1}}}{{\leq}}e^{r}\,\frac{ \kappa_{2}}{\kappa}\ \sqrt{2},\] for large \(N\) (uniformly in \(r\)) and \(c_{N}=(2\ln N)^{1/2}\). This gives an integrable majorant for \(r\mapsto\mathbb{P}(X_{N}<r)\) for \(r\in(-\infty,0]\). Therefore, we have \[-\mathbb{E}[X_{N}\mathds{1}_{X_{N}<0}]=\int_{-\infty}^{0}\mathbb{P}(X_{N}<r) \,\mathrm{d}r\to\int_{-\infty}^{0}\mathbb{P}(G<r)\,\mathrm{d}r=-\mathbb{E}[G \mathds{1}_{G<0}]\,.\] The same argument applies to \(\mathbb{E}\big{[}X_{N}^{2}\mathds{1}_{X_{N}<0}\big{]}\) since the above estimate can also be used for \(r\mapsto 2(-r)\mathbb{P}(X_{N}<r)\). This together with (C.13) and (C.14) shows that \(\mathbb{E}[X_{N}]\to\mathbb{E}[G]\) and \(\mathbb{E}\big{[}X_{N}^{2}\big{]}\to\mathbb{E}\big{[}G^{2}\big{]}\), as required. ## Appendix D Estimation of the cost ratio in simulation Let us first provide details on the utilized approach to estimate the event-triggered cost in the cost ratio and, subsequently, to obtain the confidence interval depicted in Fig. 3. Consider the Bessel process \(R(t):=\sqrt{\sum_{i=1}^{N}v_{i}(t)^{2}}\) of dimension \(N\) started at \(R(0)=0\). **Fact 4**.: _For any stopping time \(T\) satisfying the assumptions in Facts 1 and 2, we have_ \[Q(T)=\frac{|\mathcal{E}|}{2N(N+2)}\,\mathbb{E}\Big{[}R(T)^{4}\Big{]}\,.\] Proof.: The stochastic differential equation solved by \((R(t))_{t\in[0,\infty)}\) is given by \[\,\mathrm{d}R(t)=\frac{N-1}{2R(t)}\,\mathrm{d}t+\,\mathrm{d}v(t),\] where \(v(t)\) is a standard Brownian motion. Let \(Af(x):=\frac{N-1}{2x}f^{\prime}(x)+\frac{1}{2}f^{\prime\prime}(x)\) be the infinitesimal generator of the Markov process \((R(t))_{t\in[0,\infty)}\). Then, Dynkin's formula says that, for any stopping time \(T\), \[\mathbb{E}\big{[}f(R(T))\big{]}=\mathbb{E}\Bigg{[}\int_{0}^{T}Af(R(t))\, \mathrm{d}t\Bigg{]}\,.\] Let \(f(x)=x^{4}\). Then, \(Af(x)=\frac{N-1}{2x}\,4x^{3}+\frac{4.3}{2}\,x^{2}=2(N+2)x^{2}\). This gives \[\mathbb{E}\Big{[}R(T)^{4}\Big{]}=2(N+2)\mathbb{E}\Bigg{[}\int_{0}^{T}R(t)^{2}\, \mathrm{d}t\Bigg{]}\,.\] Utilizing \[\mathbb{E}\Bigg{[}\int_{0}^{T}R(t)^{2}\,\mathrm{d}t\Bigg{]}=N\mathbb{E}\Bigg{[} \int_{0}^{T}v_{1}(t)^{2}\,\mathrm{d}t\Bigg{]}=\frac{N}{|\mathcal{E}|}Q(T),\] based on Fact 2 gives the stated formula for \(Q(T)\). We can leverage this fact to estimate \(Q_{\mathrm{ET}}(\Delta)\) in simulation and thereby arrive at an estimate for \(J_{\mathrm{ET}}(\Delta)\). This is required to obtain the cost ratio results in Section 5. In particular, note that this fact allows for the explicit cancellation of \(|\mathcal{E}|\) from the cost ratio. Thus, the obtained simulation results hold for any graph topology in the given framework. 
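To make the procedure concrete, a possible Monte Carlo sketch along these lines is given below. It assumes the event trigger of the preceding analysis, i.e., \(T_{\mathrm{ET}}\) is the first time any single-agent deviation \(|v_{i}(t)|\) reaches the threshold (set to \(1\) here, as in Appendix C); the number of agents, the step size, the sample count and the edge count are illustrative choices.

```python
import numpy as np

# Monte Carlo sketch of the estimation described above: sample T_ET and
# R(T_ET)^4, then apply Fact 4 and Fact 1 to estimate Q(T_ET) and
# J_ET = Q(T_ET) / E[T_ET].  The trigger assumed here fires when any single
# |v_i| first reaches the threshold delta; all numbers are illustrative.
rng = np.random.default_rng(1)
N, delta, dt, n_samples, n_edges = 4, 1.0, 1e-3, 2000, 4   # e.g. |E| = N for a cycle

R4_samples, T_samples = [], []
for _ in range(n_samples):
    v, t = np.zeros(N), 0.0
    while np.max(np.abs(v)) < delta:        # discrete monitoring slightly overestimates T
        v += np.sqrt(dt) * rng.standard_normal(N)
        t += dt
    R4_samples.append(np.sum(v**2) ** 2)    # R(T)^4 with R(t)^2 = sum_i v_i(t)^2
    T_samples.append(t)

g1, g2 = np.mean(R4_samples), np.mean(T_samples)
Q_ET = n_edges / (2 * N * (N + 2)) * g1     # Fact 4
J_ET = Q_ET / g2                            # Fact 1
print(f"E[T_ET] ~ {g2:.3f},  E[R(T_ET)^4] ~ {g1:.3f},  J_ET estimate ~ {J_ET:.3f}")
```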
At last, let us present the methodology to obtain the confidence interval in Fig. 3. **Fact 5**.: _Let the estimates for \(\mathbb{E}\Big{[}R(T_{\mathrm{ET}})^{4}\Big{]}\) and \(\mathbb{E}[T_{\mathrm{ET}}]\) be_ \[g_{1}=\frac{1}{n_{1}}\sum_{i=1}^{n_{1}}R(T_{\mathrm{ET}})_{i}^{4},\quad g_{2}= \frac{1}{n_{2}}\sum_{i=1}^{n_{2}}T_{\mathrm{ET},i},\] _respectively, where \(n_{1},n_{2}\in\mathbb{N}\) denote the number of samples \(R(T_{\mathrm{ET}})_{i}^{4}\) and \(T_{\mathrm{ET},i}\) obtained from independent Monte Carlo simulations. Moreover, let \([g_{1}^{\mathrm{L}},g_{1}^{\mathrm{R}}]\), \([g_{2}^{\mathrm{L}},g_{2}^{\mathrm{R}}]\) be the corresponding confidence intervals with confidence level \(\gamma\). Then, we can estimate the confidence interval \([\alpha^{\mathrm{L}},\alpha^{\mathrm{R}}]\) with_ \[\mathbb{P}\left(\alpha^{\mathrm{L}}\leq\frac{J_{\mathrm{ET}}(\Delta)}{J_{ \mathrm{TT}}(\mathbb{E}[T_{\mathrm{ET}}(\Delta)])}\leq\alpha^{\mathrm{R}} \right)\geq\gamma^{2}\] _according to_ \[\alpha^{\mathrm{L}}=\frac{g_{1}^{\mathrm{L}}}{N(N+2)(g_{2}^{\mathrm{R}})^{2}}, \quad\alpha^{\mathrm{R}}=\frac{g_{1}^{\mathrm{R}}}{N(N+2)(g_{2}^{\mathrm{L}})^ {2}}.\] Proof.: With Fact 4, we can estimate the cost ratio as \[\frac{J_{\mathrm{ET}}(\Delta)}{J_{\mathrm{TT}}(\mathbb{E}[T_{\mathrm{ET}}( \Delta)])}\approx\frac{g_{1}}{N(N+2)g_{2}^{2}}.\] Given the confidence intervals \([g_{1}^{\mathrm{L}},g_{1}^{\mathrm{R}}]\), \([g_{2}^{\mathrm{L}},g_{2}^{\mathrm{R}}]\), we find \[\mathbb{P}\left(\alpha^{\mathrm{L}}\leq\frac{J_{\mathrm{ET}}( \Delta)}{J_{\mathrm{TT}}(\mathbb{E}[T_{\mathrm{ET}}(\Delta)])}\leq\alpha^{ \mathrm{R}}\right) \geq\mathbb{P}(g_{1}\in[g_{1}^{\mathrm{L}},g_{1}^{\mathrm{R}}]\text { and }g_{2}\in[g_{2}^{\mathrm{L}},g_{2}^{\mathrm{R}}])\] \[=\mathbb{P}(g_{1}\in[g_{1}^{\mathrm{L}},g_{1}^{\mathrm{R}}]) \cdot\mathbb{P}(g_{2}\in[g_{2}^{\mathrm{L}},g_{2}^{\mathrm{R}}])=\gamma^{2},\] which finishes the proof. Since we have large numbers of samples \(n_{1},n_{2}\), we can assume the samples \(R(T_{\mathrm{ET}})_{i}^{4},T_{\mathrm{ET},i}\) to be normally distributed. This allows us to apply standard statistical methods to determine the confidence intervals \([g_{1}^{\mathrm{L}},g_{1}^{\mathrm{R}}]\), \([g_{2}^{\mathrm{L}},g_{2}^{\mathrm{R}}]\), e.g., for a confidence level \(\gamma=0.975\). Consequently, the confidence interval \([\alpha^{\mathrm{L}},\alpha^{\mathrm{R}}]\) according to Fact 5 corresponds to a confidence level of at least \(\gamma^{2}=0.975^{2}\approx 0.95\). ## Acknowledgment D. Meister thanks the Stuttgart Center for Simulation Science (SimTech) for supporting him.
2307.03164
Induced Gravitational Waves from Ultra Slow-Roll Inflation and Pulsar Timing Arrays Observations
The stochastic gravitational wave background (SGWB) detected recently by pulsar timing array (PTA) observations may have cosmological origins. In this work we consider a model of single field inflation containing an intermediate phase of ultra slow-roll. Fixing the amplitude of the peak of the curvature perturbations by the PBH bounds, we calculate the gravitational waves (GWs) induced from the curvature perturbations enhanced during USR. The spectrum of the induced GWs depends on the sharpness of the transition from the USR phase to the final attractor phase as well as on the duration of the USR period. While the model can accommodate the current PTA data, it has non-trivial predictions for the induced GWs at higher frequencies which can be tested by future observations.
Hassan Firouzjahi, Alireza Talebian
2023-07-06T17:46:58Z
http://arxiv.org/abs/2307.03164v3
# Induced Gravitational Waves from Ultra Slow-Roll Inflation ###### Abstract The stochastic gravitational wave background (SGWB) detected recently by the pulsar timing arrays (PTAs) observations may have cosmological origins. In this work we consider a model of single field inflation containing an intermediate phase of ultra slow-roll. Fixing the amplitude of the peak of curvature perturbations by the PBHs bounds we calculate the gravitational waves (GWs) induced from the curvature perturbations enhanced during USR. The spectrum of the induced GWs depends on the sharpness of the transition from the USR phase to the final attractor phase as well as to the duration of the USR period. While the model can accommodate the current PTAs data but it has non-trivial predictions for the induced GWs on higher frequency ranges which can be tested by future observations. Introduction There are indications of detection of stochastic gravitational waves background (SGWB) from recent pulsar timing arrays (PTAs) around the frequency range \(\sim 10\) nHz as reported in NANOGrav [1], Parkers PTA [2], European PTA [3] and the China PTA [4]. These signals may have cosmological origins as well as astrophysical interpretations. A natural astrophysical interpretation of the observed SGWB is the superpositions of gravitational waves (GWs) signals from the merging of binary supermassive black holes [1]. On the other hand, if the observed signal has cosmological origins, this can open a new window into observing the primordial universe and to probe physics beyond the Standard Model of particle physics. Among possible cosmological interpretations of the SGWB are the GWs induced from the enhanced scalar perturbations on small scales generated during inflation, first order cosmological phase transitions [5, 6, 7], domain walls or cosmic strings [8, 9, 10, 11, 12], see [13, 14] for further review. It should be noted that the previous NANOGrav 12.5 year data [15] also indicated traces of SGWB with a flat spectrum in a narrow range of nHz frequency which initiated interests to look for the origins of this signal. Scalar induced gravitational waves (SIGW) by the enhancement of scalar perturbations on small scale generated during inflation [16, 17, 18, 19, 20, 21, 22, 23, 24, 25] is a mechanism which can explain the observed SGWBs [13, 14]. In this mechanism, the GWs are sourced at the second order in perturbation theory via their interaction with the scalar sector generated during inflation. Typically, this setup requires that the amplitude of scalar perturbations to grow by about seven orders of magnitude compared to the observed CMB scale. Consequently, this mechanism can yield to primordial black holes (PBHs) formation which may comprise all or parts of dark matter energy density [26, 27, 28, 29, 30]. The setup of ultra slow-roll (USR) inflation has been employed as a model in which the primordial power spectrum can be enhanced to induce large SIGWs and PBHs [31, 32, 33, 34, 35, 36, 37, 38, 39], for a review see [29, 30]. The USR setup is a single field model of inflation in which the potential is flat [40, 41, 42] and the inflaton velocity falls off exponentially so the curvature perturbations grow on superhorizon scales [43]. Since the curvature perturbations grow on superhorizon scales the USR setup violates the Maldacena non-Gaussianity consistency condition [44, 45, 46, 47, 48, 49, 50, 51, 52, 53]. 
Originally, it was shown in [43] that the amplitude of local-type non-Gaussianity in USR setup is \(f_{NL}=\frac{5}{2}\). This question was further examined in [54] in which it was shown that the amplitude of \(f_{NL}\) crucially depends on the sharpness of the transition from the USR phase to the final slow-roll (SR) phase. In an extreme sharp transition from the USR phase to the SR phase, which was assumed in [43], \(f_{NL}\) reaches its maximum value \(\frac{5}{2}\). However, if the transition to the final stage is mild then the curvature perturbations evolve after the USR phase before it reaches to its final attractor value. Correspondingly, in a mild transition, the amplitude of \(f_{NL}\) is washed out in the subsequent evolution and it ends up with a value at the order of the slow-roll parameters. Another important point is the issue of loop corrections in this setup. This question was studied in various recent works [55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67]. Originally, it was argued in [55], see also [56], that loop corrections from small scale modes which leave the horizon during the USR phase can induce large corrections on CMB scale modes. This was criticized in [57, 58] arguing, among other things, that for a mild transition the loop corrections will be less significant and the standard PBHs formation within the single field USR scenario is applicable. This question was studied in some details in [61] with emphasis on the effects of the sharpness of the transition from the intermediate USR phase to the final attractor SR phase. It was shown in [61] that for an arbitrarily sharp transition the one loop corrections can be very large, in line with the results advocated in [55, 56]. However, it was speculated in [61] that for a mild transition, the dangerous one-loop corrections are washed out during the subsequent evolution of the mode function after the USR phase. This conclusion was further examined in [62] confirming the physical expectations of [57, 58]. In summary, in order for the one-loop corrections on CMB scale modes to be harmless one needs a mild enough transition from the USR phase to the final attractor phase. In this paper we employ the USR setup as a working mechanism to generate large SIGW as a possible explanations for the observed SGWB in the NANOGrav data [1]. For various recent works on SIGWs as an explanation of the the PTAs data see [68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78]. ## 2 The Setup The setup which we use to enhance the primordial power spectrum to induce large GWs at the range of scales observed by the PTA observations contains an intermediate phase of USR in single field inflation. We have a three-stage model of inflation in which the large CMB scale leaves the horizon at the early stage of inflation. The first stage of inflation proceeds say in about 16 e-folds or so. Then the USR phase takes over in which the potential is very flat and the curvature perturbations experience a growth on superhorizon scales. In order for the curvature power spectrum to be under perturbative controls the USR phase has to be terminated followed by a final phase of SR inflation. A realistic setup requires that the transition from the first SR to the USR stage and from the USR phase to the final SR phase to be smooth. However, in order to follow the dynamics analytically, we consider an idealized situation in which the transition from the SR to USR and then to final SR phase are instantaneous. 
Assuming that USR phase is extended during the time interval \(t_{i}\leq t\leq t_{e}\), we assume that the transitions at the starting point \(t=t_{i}\) and at the final point \(t=t_{e}\) to be instantaneous. While the transition to the final SR phase is instantaneous, but it will take time for the system to relax to its final attractor phase. We control this relaxation period by a sharpness parameter which plays important role in our analysis below. It it important that the instantaneous gluing of the solutions should not be confused with the sharpness of the transition to the final attractor solution. With the above discussions in mind, let us elaborate on the dynamics of our setup. During the first stage of inflation, \(t<t_{i}\), the system follows an attractor phase and the dynamics of the inflaton field \(\phi\) is given by the usual SR dynamics. The Hubble expansion rate \(H\equiv\frac{\dot{a}}{a}\) is nearly constant in which \(a(t)\) is the FLRW scale factor. The small evolution of \(H\) during the first stage of inflation is measured by the first SR parameter \(\epsilon\equiv-\frac{\dot{H}}{H^{2}}\) which is nearly constant and small. During the USR phase the dynamics of the system is given by \[\ddot{\phi}+3H\dot{\phi}=0\,,\qquad 3M_{P}^{2}H^{2}\simeq V_{0}\,, \tag{2.1}\] where \(M_{P}\) is the reduced Planck mass. As the potential is flat during the USR phase, \(H\) approaches a fixed value and from the field equation we obtain \(\dot{\phi}\propto a(t)^{-3}\). Correspondingly, the slow-roll parameter \(\epsilon\) falls off exponentially during the USR phase as well, \(\epsilon\propto a(t)^{-6}\). On the other hand, the second slow-roll parameter \(\eta\equiv\frac{\dot{\epsilon}}{H\epsilon}\simeq-6\) which is the hallmark of the USR phase. It is convenient to work with the number of e-fold \(N\) as the clock, \(dN=H(t)dt\). We choose the convention that the USR phase starts at \(N=0\) so for the first stage of inflation, \(N<0\) In particular, the CMB scale modes leave the horizon at around \(N\sim-16\). The duration of the USR phase is denoted by \(\Delta N\) which is a free parameter of our setup. Going to conformal time, \(d\tau=dt/a(t)\), the USR phase is extended during \(\tau_{i}\leq\tau\leq\tau_{e}\) and the duration of USR phase is given by \(\Delta N=\ln\big{(}\frac{\tau_{i}}{\tau_{e}}\big{)}=\ln\big{(}\frac{k_{e}}{k_{ i}}\big{)}\) in which \(k_{i}(k_{e})\) represents the mode which leaves the horizon at the start (end) of USR phase. The slow-roll parameter at the end of USR phase \(\epsilon_{e}\) is related to its value at the start of USR phase \(\epsilon_{i}\) via \(\epsilon_{e}=\epsilon_{i}e^{-6\Delta N}\). As explained above, we assume the USR phase is followed by a SR phase. Therefore, we need to investigate the evolution of the slow-roll parameters \(\epsilon(\tau)\) and \(\eta(\tau)\) after the USR phase. This was studied in more details in [54], see also [79, 80] for similar studies. Let us denote the final SR parameters with their attractor values by \(\epsilon_{\rm V}\) and \(\eta_{\rm V}\) which are expressed in terms of the first and the second derivatives of the potential in the final SR phase. To simplify the analysis, here, as in [61], we assume that the potential in the final stage is such that \(\epsilon_{\rm V}\gg|\eta_{\rm V}|\) though this assumption can be relaxed with no important changes in the results. A key parameter in our setup is the sharpness parameter \(h\) which controls how quickly the system reaches to its final attractor limit. 
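Before defining \(h\) explicitly, it may help to see the above scalings numerically. The short sketch below only uses \(\epsilon\propto a^{-6}\) during the USR window, i.e., \(\epsilon(N)=\epsilon_{i}e^{-6N}\), together with the familiar slow-roll estimate \(\mathcal{P}_{\mathcal{R}}\sim H^{2}/(8\pi^{2}M_{P}^{2}\epsilon)\) for intuition only (the precise spectrum is computed from the mode functions below); \(\epsilon_{i}\) and \(\Delta N\) are illustrative values, not fits.

```python
import numpy as np

# Illustrative e-fold bookkeeping for the USR window: eps(N) = eps_i * e^{-6N}
# (from phi_dot ∝ a^{-3}), and the corresponding rough enhancement of the
# curvature power via P_R ~ H^2 / (8 pi^2 M_P^2 eps).  Values are illustrative.
eps_i, dN = 1e-2, 2.0                         # assumed eps at USR onset, USR duration
N = np.array([0.0, 0.5 * dN, dN])
for n, e in zip(N, eps_i * np.exp(-6.0 * N)):
    print(f"N = {n:4.2f} e-folds into USR:  eps = {e:.2e}")
print(f"eps_e/eps_i = {np.exp(-6*dN):.2e}  ->  power enhanced by roughly e^(6 dN) = {np.exp(6*dN):.2e}")
```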
Following [54], we define \(h\) as follows \[h\equiv\frac{6\sqrt{2\epsilon_{\rm V}}}{\dot{\phi}(t_{e})}=-6 \sqrt{\frac{\epsilon_{V}}{\epsilon_{e}}}\,. \tag{2.2}\] With this definition, the slow-roll parameters \(\epsilon(\tau)\) and \(\eta(\tau)\) after the USR transition are given by \[\epsilon(\tau)=\epsilon_{e}\Big{[}\frac{h}{6}-(1+\frac{h}{6}) \big{(}\frac{\tau}{\tau_{e}}\big{)}^{3}\Big{]}^{2}\qquad(\tau>\tau_{e})\,, \tag{2.3}\] and \[\eta(\tau)=-\frac{6(6+h)}{(6+h)-h\big{(}\frac{\tau_{e}}{\tau} \big{)}^{3}}\qquad(\tau>\tau_{e}). \tag{2.4}\] As discussed previously, the above results are obtained in the limit of an instant transition from the USR phase to the final SR phase. Even in this limit it will take some time for the system to reach to its attractor phase which is measured by the sharpness parameter \(h\). In the limit \(h\to-\infty\), the system reaches its final attractor phase immediately after \(\tau_{e}\) in which the mode functions become frozen. On the other hand, for other values of \(h\) the system keeps evolving after \(\tau_{e}\) until \(\epsilon(\tau)\) approaches its final attractor value \(\epsilon_{\rm V}\). A particular case of transition is when \(h=-6\) in which \(\epsilon(\tau)\) is frozen to its value at the end of USR, \(\epsilon(\tau)=\epsilon_{e}\) with \(\eta(\tau)=0\) for \(\tau>\tau_{e}\). This limit was mostly studied in recent literature concerning the loop correction such as in [55]. In the following analysis, as in [61], we consider a general value of \(h\) including the spacial case of \(h=-6\). Another important point is that the larger the value of \(|h|\) is the larger \(\epsilon_{V}\) is compared to \(\epsilon_{e}\). Correspondingly, the final power spectrum scales somewhat inversely with \(|h|\). As a result, a larger (smaller) value of \(|h|\) yields to a smaller (larger) final power spectrum. We work with the comoving curvature perturbation \(\mathcal{R}\) which in spatially flat gauge is related to inflaton perturbation via \(\mathcal{R}\equiv-\frac{H}{\dot{\phi}}\delta\phi\). Going to Fourier space, we define the quantum mode function in terms of the annihilation and creation operators as usual via \[\mathcal{R}(t,\mathbf{x})=\int\frac{d^{3}k}{(2\pi)^{3}}e^{i \mathbf{k}\cdot\mathbf{x}}\Big{(}\mathcal{R}_{k}(t)a_{\mathbf{k}}+\mathcal{ R}_{k}^{*}(t)a_{-\mathbf{k}}^{\dagger}\Big{)}\,, \tag{2.5}\] in which \(a_{\bf k}\) and \(a_{\bf k}^{\dagger}\) satisfy the usual commutation relation associated to the annihilation and creation operators, \([a_{\bf k},a_{\bf k^{\prime}}^{\dagger}]=\delta({\bf k}-{\bf k^{\prime}})\). Starting with the Bunch-Davies (Minkowski) initial condition and imposing the continuity of \({\cal R}\) and \(\dot{\cal R}\) at the transition points \(\tau=\tau_{i}\) and \(\tau=\tau_{e}\), the mode function at each stage of \(SR\to USR\to SR\) can be obtained [61]. The outgoing curvature perturbation \({\cal R}_{k}^{(3)}(t)\) in the final USR phase (third phase) is given by [61], \[{\cal R}_{k}^{(3)}=\frac{H}{M_{P}\sqrt{4\epsilon(\tau)k^{3}}}\Big{[}\alpha_{k} ^{(3)}(1+ik\tau)e^{-ik\tau}+\beta_{k}^{(3)}(1-ik\tau)e^{ik\tau}\Big{]}\,, \tag{2.6}\] with \(\epsilon(\tau)\) given by Eq. 
(2.3) and the coefficients \((\alpha_{k}^{(3)},\beta_{k}^{(3)})\) are as follow, \[\alpha_{k}^{(3)}=\frac{1}{8k^{6}\tau_{i}^{3}\tau_{e}^{2}}\Big{[}3h(1-ik\tau_{ e})^{2}(1+ik\tau_{i})^{2}e^{2ik(\tau_{e}-\tau_{i})}-i(2k^{3}\tau_{i}^{3}+3ik^{2} \tau_{i}^{2}+3i)(4ik^{3}\tau_{e}^{3}-hk^{2}\tau_{e}^{2}-h)\Big{]},\] and \[\beta_{k}^{(3)}=\frac{1}{8k^{6}\tau_{i}^{3}\tau_{e}^{3}}\Big{[}3(1+ik\tau_{i} )^{2}(h+hk^{2}\tau_{e}^{2}+4ik^{3}\tau_{e}^{3})e^{-2ik\tau_{i}}+ih(1+ik\tau_{e} )^{2}(3i+3ik^{2}\tau_{i}^{2}+2k^{3}\tau_{i}^{3})e^{-2ik\tau_{e}}\Big{]}.\] The physical quantities are calculated at the end of inflation \(\tau=0\) when the system has reached to its attractor phase with \(\epsilon(\tau)\to\epsilon_{V}\). The curvature perturbation power spectrum \({\cal P}_{\cal R}\) from Eq. (2.6) is obtained to be \[{\cal P}_{\cal R}(k,\tau=0)=\frac{H^{2}}{8M_{P}^{2}\pi^{2}\epsilon_{\rm V}} \big{|}\alpha_{k}^{(3)}+\beta_{k}^{(3)}\big{|}^{2}\,. \tag{2.7}\] The behaviour of \({\cal P}_{\cal R}(k,\tau=0)\) are plotted in Fig. 1. There are a number of common features as can be seen in this figure. First, we have the plateau on large scales associated to the modes which leave the horizon long before the USR phase starts. The amplitude of the power spectrum for these perturbations is fixed by the COBE normalization on \({\rm k}_{\rm CMB}=0.05\,{\rm Mpc}^{-1}\) with \({\cal P}_{\cal R}\simeq 2.1\times 10^{-9}\). Second, prior to the USR phase there is a dip in power spectrum followed by a universal scaling \({\cal P}\propto k^{4}\). Third, there are oscillations superimposed on the USR plateau after the maximum. Fourth, there is a plateau for the modes which leave the horizon at the final stage of inflation. As discussed previously, the larger is the value of \(|h|\), the smaller is the final power spectrum which we will demonstrate bellow. Let us look at the parameter dependence of the the power spectrum given in Eq. (2.7). Two important parameters are the sharpness of the transition \(h\) and the duration of the USR phase \(\Delta N\). In addition, we have the energy scale of inflation \(H\) and the final slow-roll parameter \(\epsilon_{V}\). As can be seen from Eq. (2.7) the latter two parameters appear in a combination which is fixed by the overall COBE normalization at CMB scales. This leaves the scale of inflation or the duration of the observed inflation, \(N_{\rm tot}\), to be another free parameter. In our analysis we consider various cases of \(N_{\rm tot}\) in the range \(50\lesssim N_{\rm tot}\lesssim 60\). Another independent variable may be considered to be the starting time of USR phase, \(\tau_{i}\). However, in order to obtain the enhancement in power for PTAs observations, we need the starting time of USR to be when the mode of interest which leaves the horizon have the nano-Hertz frequency. This requires the starting time of USR phase compared to the CMB scales to be separated by about 16 e-folds. Finally, the spectral index \(n_{s}\) is fixed by its best fit value from Planck observation, i.e. \(n_{s}\simeq 0.96\)[81, 82]. In summary, at the end we have three main independent parameters: the sharpness parameter \(h\), duration of USR, \(\Delta N\), and the total e-fold number of inflation \(N_{\rm tot}\) which we will vary. A crucial point is that models with the intermediate USR phase can generate significant PBHs which are constrained by observations. 
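For concreteness, the spectrum (2.7) can be evaluated directly from the coefficients above. The sketch below transcribes them with the conventions \(\tau_{i}=-1/k_{i}\), \(\tau_{e}=-1/k_{e}\), \(k_{e}=k_{i}e^{\Delta N}\) and \(\epsilon_{\rm V}=h^{2}\epsilon_{e}/36\) from Eq. (2.2); the prefactor of \(\alpha_{k}^{(3)}\) is read here as \(1/(8k^{6}\tau_{i}^{3}\tau_{e}^{3})\), which matches \(\beta_{k}^{(3)}\) and gives \(\alpha_{k}^{(3)}\to 1\) for modes deep inside the horizon. The parameter choice \((h,\Delta N)=(-1,1.3)\) mirrors the right panel of Fig. 1 and is purely illustrative.

```python
import numpy as np
import matplotlib.pyplot as plt

# Sketch of Eq. (2.7): P_R(k, tau=0) from the quoted alpha_k^(3), beta_k^(3).
# Assumptions (see text above): tau_i = -1/k_i, tau_e = -1/k_e with
# k_e = k_i e^{dN}; eps_V = h^2 eps_e / 36, so eps_i/eps_V = 36 e^{6 dN}/h^2
# and P_R = P_cmb * (eps_i/eps_V) * |alpha + beta|^2.  Illustrative parameters.
P_cmb, h, dN = 2.1e-9, -1.0, 1.3
k_i = 1.0                                  # k measured in units of k_i
tau_i, tau_e = -1.0 / k_i, -1.0 / (k_i * np.exp(dN))

def alpha_beta(k, ti=tau_i, te=tau_e):
    a = (3*h*(1 - 1j*k*te)**2*(1 + 1j*k*ti)**2*np.exp(2j*k*(te - ti))
         - 1j*(2*k**3*ti**3 + 3j*k**2*ti**2 + 3j)*(4j*k**3*te**3 - h*k**2*te**2 - h)
         ) / (8*k**6*ti**3*te**3)
    b = (3*(1 + 1j*k*ti)**2*(h + h*k**2*te**2 + 4j*k**3*te**3)*np.exp(-2j*k*ti)
         + 1j*h*(1 + 1j*k*te)**2*(3j + 3j*k**2*ti**2 + 2*k**3*ti**3)*np.exp(-2j*k*te)
         ) / (8*k**6*ti**3*te**3)
    return a, b

# very small k*|tau_i| suffers catastrophic cancellation in double precision;
# that regime is covered analytically by Eqs. (2.13)-(2.15) discussed below.
k = np.geomspace(3e-2, 1e2, 500) * k_i
a, b = alpha_beta(k)
P_R = P_cmb * (36.0 * np.exp(6*dN) / h**2) * np.abs(a + b)**2

print(f"peak: P_R = {P_R.max():.2e} at k/k_i = {k[np.argmax(P_R)]:.2f}")
plt.loglog(k / k_i, P_R); plt.xlabel(r"$k/k_i$"); plt.ylabel(r"$\mathcal{P}_\mathcal{R}$"); plt.show()
```

The peak amplitude obtained this way is precisely the quantity that the PBH bounds constrain.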
Therefore, in order to meet the various bounds on PBHs formation, we need to impose an additional bound on \((h,\Delta N)\) parameter space. These PBH constraints leave only one of them free which we take to be \(h\). A view of the power spectrum for various values of \(h\) and the bound from PBHs are shown in Fig. 1. Schematically, we see that the PBHs bound is roughly translated into \(\mathcal{P}_{\mathcal{R}}<10^{-2}\). More precisely, in order to consider the PBHs bound on the curvature power spectrum, we need to know about the probability distribution function (PDF) of the primordial perturbations. In a common approach [28], the mass fraction parameter \(\beta\) is related to statistics of \(\mathcal{R}\) as [83] \[\beta\simeq\int_{\mathcal{R}_{c}}^{\infty}\ f_{\mathcal{R}}(x)\ \mathrm{d}x\simeq\frac{1}{2}\mathrm{ Erfc}\left(\frac{\mathcal{R}_{c}}{\sqrt{2\mathcal{P}_{\mathcal{R}}}}\right) \tag{2.8}\] where \(f_{\mathcal{R}}\) is PDF of \(\mathcal{R}\) and \(\mathcal{R}_{c}\sim\mathcal{O}(1)\)[29, 84]. The second estimation comes from the fact that we can consider a Gaussian PDF for \(\mathcal{R}\) with zero mean and variance at the order of power spectrum. After PBH production, it is crucial to determine the fraction of PBH abundance in dark matter density at the present epoch. It is roughly given by [28] \[f_{\rm PBH}(M_{\rm PBH})\simeq 2.7\times 10^{8}\Big{(}\frac{M_{\rm PBH}}{M_{ \odot}}\Big{)}^{-\frac{1}{2}}\beta(M_{\rm PBH})\,, \tag{2.9}\] where \(M_{\odot}\) and \(M_{\rm PBH}\) are the solar mass and the PBH mass respectively. Assuming an instant reheating at the end of inflation [83], PBH mass can be estimated by \[\frac{M_{\rm PBH}}{M_{\odot}}\simeq 10^{-13}\left(\frac{10^{-6}M_{\rm P}}{H} \right)e^{2(N_{\rm tot}-N_{p}-22.25)}\,, \tag{2.10}\] where \(N_{\rm tot}\) is the total number of e-fold of inflation and \(H\) is the Hubble rates during inflation. Moreover, \(N_{p}\) is the location of the maximum of the power spectrum. Considering a fixed value for the location of the peak of power spectrum, the fraction \(f_{\rm PBH}\) depends on the total number of e-fold of inflation which is related to the reheating temperature. In Fig. 2, we have illustrated the mass function \(f_{\rm PBH}\) for various values of \(N_{\rm tot}\). Now let us look at the various limits of the power spectrum Eq. (2.7). We have two dimensionless numbers, \(x\equiv-k\tau_{i}\) and \(e^{-\Delta N}\). First consider the limit \(e^{-\Delta N}\ll x\) so we expand Eq. (2.7) to leading order in \(e^{-\Delta N}\), obtaining \[\mathcal{P}_{\mathcal{R}}(k,\tau=0)\simeq \frac{e^{6\Delta N}}{2}\mathcal{P}_{{}_{\rm CMB}}\big{(}\frac{h-6} {h}\big{)}^{2} \tag{2.11}\] \[\times\Big{[}2x^{6}+9x^{4}+18x^{2}+9+(21x^{4}-9)\cos(2x)+(6x^{5}-2 4x^{3}-18x)\sin(2x)\Big{]}\,,\] in which \(\mathcal{P}_{{}_{\rm CMB}}\) is the CMB scale power spectrum given by \[\mathcal{P}_{{}_{\rm CMB}}=\frac{H^{2}}{8\pi^{2}M_{P}^{2}\epsilon_{i}}\,. \tag{2.12}\] From Eq. (2.11) we see that \(\mathcal{P}_{\mathcal{R}}\propto e^{6\Delta N}\) which is the hallmark of USR inflation for the modes which leave the horizon during the USR phase. Second, we see that \(\mathcal{P}_{\mathcal{R}}\propto\big{(}\frac{h-6}{h}\big{)}^{2}\). This is clearly seen in Fig. 1 as cases with higher value of \(|h|\) have lower power in the final plateau. The physical reason is as follows. Models with larger \(|h|\) reach the final attractor phase more quickly. 
For this to happen, \(\epsilon(\tau)\) should assume its final value \(\epsilon_{\rm V}>\epsilon_{e}\) quickly as well. This means that the mode becomes frozen quickly after the USR phase but with a final amplitude \({\cal P}_{\cal R}(k,\tau=0)<{\cal P}_{\cal R}(k,\tau_{e})\). To understand the scaling behaviour of the power spectrum prior to USR phase and close to the USR peak, let us consider the \(x\ll 1\) limit of the expression (2.11), obtaining \[{\cal P}_{\cal R}(k,\tau=0)\simeq\frac{2}{25}e^{6\Delta N}{\cal P}_{\mbox{\tiny CMB }}\big{(}\frac{h-6}{h}\big{)}^{2}\,(k\tau_{i})^{4}\,. \tag{2.13}\] It shows that the power spectrum scales like \({\cal P}_{\cal R}(k)\propto k^{4}\) prior to and after the USR phase starts, a phenomenon which was observed previously in [85, 86, 87, 88, 89] as well. As we see in Fig. 1, there is a dip in power spectrum prior to USR phase where the above mentioned \(k^{4}\) scaling starts. To understand the nature of this dip, note that the expression (2.11) is obtained assuming that \(e^{-\Delta N}\ll x\). However, this limit is violated for very long modes which become superhorizon much earlier than the USR phase starts. In particular, the CMB scale modes belong to this limit. Considering the \(x\ll e^{-\Delta N}\) limit of the power spectrum we obtain \[{\cal P}_{\cal R}(k,\tau=0)\simeq{\cal P}_{\mbox{\tiny CMB}}\Big{(}1-\frac{4}{5 }\frac{h-6}{h}\,(k\tau_{i})^{2}\Big{)}\,,\qquad(k\tau_{i}\to 0)\,. \tag{2.14}\] The position of the dip \(k=k_{\rm d}\) is where the two expressions (2.14) and (2.13) become comparable, yielding to the approximate value (see also [89]) \[k_{\rm d}\tau_{i}\simeq\sqrt{\frac{5h}{4(h-6)}}\,e^{-\frac{3}{2}\Delta N}\,. \tag{2.15}\] Figure 1: The plot of \({\cal P}_{\cal R}\) vs. \(k\) for various values of \(h\) with \(h=-0.1\) (red), \(h=-1\) (orange), \(h=-6\) (green) and \(h=-12\) (blue). Left: The values of \((h,\Delta N)\) are fixed such that the peak of \({\cal P}_{\cal R}\) does not violate the PBHs bounds shown by the black (top) curve. All four curves very nearly share the same values of the USR peak and the dip prior to the USR phase. Right: We have fixed \(\Delta N=1.3\). As \(\Delta N\) is fixed for all curves, the power is controlled by \(h\) such that the larger \(|h|\) is the smaller the final plateau is. In addition, the position of the dip moves towards the right by increasing \(|h|\). From the above formula we see that for a fixed value of \(\Delta N\), as \(|h|\) increase the value of \(k_{\rm d}\) increases as well, i.e. the dip moves towards the right, as seen in the right panel of Fig. 1. As we mentioned previously the USR model can generate non-Gaussianities. However, the magnitude of \(f_{NL}\) depends on \(k\) as well. For the mode which leaves the horizon during the early stage of inflation and prior to USR phase, then the Maldacena consistency condition does hold and for these modes \(f_{NL}\) is basically very small. On the other hand, for the modes which leave the horizon during the USR phase, i.e. for \(k_{\rm i}<k<k_{e}\), the consistency condition is violated. The final value of \(f_{NL}\) for these modes crucially depends on the parameter \(h\). This was studied in details in [54] and [61] in which it is shown that up to slow-roll corrections, \[f_{NL}=\frac{5h^{2}}{2(h-6)^{2}}\,. \tag{2.16}\] For an infinitely sharp transition with \(h\to-\infty\) in which the system assumes its final attractor phase immediately after the USR transition, we obtain the maximum value \(f_{NL}=\frac{5}{2}\). 
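As a quick numerical cross-check of Eq. (2.16) and of the Gaussian estimate (2.8), the snippet below evaluates \(f_{NL}\) for the sharpness values used in Fig. 1 and the mass fraction \(\beta\) for an illustrative peak amplitude \(\mathcal{P}_{\mathcal{R}}\sim 10^{-2}\) with threshold \(\mathcal{R}_{c}=1\) (both assumed values for illustration).

```python
import numpy as np
from scipy.special import erfc

# f_NL from Eq. (2.16) and the Gaussian mass fraction beta from Eq. (2.8).
# P_peak ~ 1e-2 and R_c = 1 are illustrative/assumed values.
def f_NL(h):
    return 5.0 * h**2 / (2.0 * (h - 6.0)**2)             # Eq. (2.16)

def beta(P_R, R_c=1.0):
    return 0.5 * erfc(R_c / np.sqrt(2.0 * P_R))           # Eq. (2.8)

P_peak = 1e-2
for h in (-0.1, -1.0, -6.0, -12.0):
    print(f"h = {h:6.1f}:  f_NL = {f_NL(h):.4f},  f_NL * P_R = {f_NL(h) * P_peak:.1e}")
print(f"beta(P_R = {P_peak}) = {beta(P_peak):.2e}   # exponentially sensitive to P_R")
```

In all four cases \(f_{NL}\mathcal{P}_{\mathcal{R}}\ll 1\), consistent with the discussion below.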
However, lowering \(|h|\) reduces \(f_{NL}\) accordingly. For example, for the standard scenario in which \(h=-6\) as studied in [55] one obtains \(f_{NL}=\frac{5}{8}\simeq 0.63\). For milder transitions with \(|h|\lesssim 1\), from Eq. (2.16) we typically obtain \(f_{NL}\ll 1\). For example for \(h=-1\) and \(h=-0.1\) which we will study below, we obtain \(f_{NL}\simeq 0.051\) and \(f_{NL}\simeq 0.0007\) respectively. Therefore, to very good approximation one can employ the Gaussian bound on PBH's formation. To be more specific, to neglect the non-Gaussianity effects in PBH formation, we need that \(f_{NL}\mathcal{P}_{\mathcal{R}}\ll 1\)[90]. In our model with the maximum value Figure 2: Fraction \(f_{PBH}\) as a function of the mass of the formed PBHs in unit of solar mass for USR models in which the power spectrum has a peak around nano Hertz frequencies. The observational constraints are taken from Refs. [91, 92, 93]. The parameters \((h,\Delta N)\) are fixed such that the curve for each \(N_{\rm tot}\) reaches the maximum value of power allowed by the PBHs bound. For example, for \(N_{\rm tot}=55\) (green curve), we have \((h,\Delta N)=(-0.1,\,1.45),(-1,\,2.17),(-6,\,2.59)\) and \((-12,\,2.68)\). These values of \((h,\Delta N)\) are used to generate the results in Figs. 3, 4 and 5. \(f_{NL}=\frac{5}{2}\), we can easily satisfy the Gaussianity approximation if \(\mathcal{P}_{\mathcal{R}}\) is small. In our analysis, as can be seen in Fig. 1, the PBHs bound typically require that \(\mathcal{P}_{\mathcal{R}}\lesssim 10^{-2}\) so we easily meet the condition \(f_{NL}\mathcal{P}_{\mathcal{R}}\ll 1\) for all practical values of \(h\). As mentioned in Introduction section, the loop correction is an open question in this setup [55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67]. The effects of the sharpness of the transition on the loop corrections were studied in [61]. It was shown that for an arbitrarily sharp transition the one-loop corrections become very large. More specifically, it was shown in [61] that for \(|h|\gg 1\), the one-loop corrections scale linearly with \(h\), yielding to a large loop correction in line with the results of [55, 56]. However, it was shown in [62] that for a mild transition the one-loop corrections to CMB scale modes are slow-roll suppressed and are harmless. In our current study, in order to trust the Gaussian approximation for PBHs formation and to neglect the large loop corrections, one requires \(|h|\lesssim 1\) which we shall assume in our analysis. However, in order to compare the predictions of the setup with both sharp and mild transitions, we also present the results for SIGWs for the cases of sharp transition as well. In our numerical plots below, the examples of sharp transitions correspond to the cases \(h=-6\) and \(h=-12\). ## 3 SIGWs and Constraints from PTAs Observations The curvature perturbations, which have been generated during inflation, can re-enter the horizon during the radiation-dominated (RD) era in which the metric (in conformal Newtonian gauge) reads \[\mathrm{d}s^{2}= -a^{2}\left[(1+2\Phi)\mathrm{d}\tau^{2}+\Big{(}(1-2\Psi)\delta_{ ij}+\frac{1}{2}h_{ij}\Big{)}\mathrm{d}x^{i}\mathrm{d}x^{j}\right]. \tag{3.1}\] Here \(\tau\) is the conformal time during RD era, \(\Phi\) and \(\Psi\) are the Bardeen potentials and \(h_{ij}\) is the transverse-traceless tensor perturbations. 
Using the Einstein's field equations, the evolution of Fourier modes of \(h_{ij}\), denoted by \(h_{\mathbf{k}}^{\lambda}\), are given by \[h_{\mathbf{k}}^{\lambda\prime\prime}(\eta)+2\mathcal{H}h_{\mathbf{k}}^{ \lambda^{\prime}}(\eta)+k^{2}h_{\mathbf{k}}^{\lambda}(\eta)=4S_{\mathbf{k}}^{ \lambda}(\eta), \tag{3.2}\] where \(\lambda\) represents the two polarizations. The primes denote the derivative with respect to the conformal time, \(\mathcal{H}=a^{\prime}/a\) is the conformal Hubble parameter, and the source term \(S_{\mathbf{k}}^{\lambda}\) is transverse and traceless which is second order in scalar perturbations, given by \[S_{\mathbf{k}}^{\lambda}=\int\!\frac{\mathrm{d}^{3}q}{(2\pi)^{3} }\,\varepsilon_{ij}^{\lambda}(\tilde{\mathbf{k}})\ q^{i}q^{j}\bigg{[}2\Phi_{ \mathbf{q}}\Phi_{\mathbf{k}-\mathbf{q}}+\left(\mathcal{H}^{-1}\Phi_{\mathbf{ q}}^{\prime}+\Phi_{\mathbf{q}}\right)\left(\mathcal{H}^{-1}\Phi_{\mathbf{k}- \mathbf{q}}^{\prime}+\Phi_{\mathbf{k}-\mathbf{q}}\right)\bigg{]}\,, \tag{3.3}\] where \(\varepsilon_{ij}^{\lambda}\) is the polarization tensor. Note that here we have neglected the vector perturbations and the anisotropic stress (\(\Phi\simeq\Psi\)). In Fourier space, the Bardeen potential is related to \(\mathcal{R}_{\mathbf{k}}\) through transfer function \(\mathcal{T}(\mathbf{k}\tau)\) as \[\Phi_{\mathbf{k}}=\frac{2}{3}\mathcal{T}(\mathbf{k}\tau)\mathcal{R}_{\mathbf{ k}}\,. \tag{3.4}\] The transfer function encodes the linear evolution of the Newtonian potential after horizon reentry which has a oscillatory behaviour. Solving the equation of motion (3.2), taking the late-time limit during a RD era (\(\tau\to\infty\) at the matter-radiation equality), the power spectrum of tensor fluctuations is given by [94] \[\mathcal{P}_{h}(\tau,k)=4\int_{0}^{\infty}\mathrm{d}v\int_{|1-v|}^{|1+v|} \mathrm{d}u\ \ \mathcal{K}\left(u,v,k\tau\right)\ \mathcal{P}_{\mathcal{R}}\left(ku\right)\mathcal{P}_{\mathcal{R}}\left(kv \right)\,, \tag{3.5}\] For further details about the integration kernel \(\mathcal{K}\) and how to perform the integrations see [94]. The produced GW contributes to the total energy of Universe and dilutes like radiation. Taking into account the following matter-dominated and dark-energy-dominated eras, the current value of \(\Omega_{\rm GW}\), the fraction of the GW energy density per logarithmic wavelength, is obtained to be \[\Omega_{\rm GW}h_{0}^{2}=\Omega_{\rm r}h_{0}^{2}\ \left(\frac{g_{*}}{g_{*,e}} \right)^{1/3}\Omega_{\rm GW,e}(f)\,. \tag{3.6}\] Here \(\Omega_{\rm r}\) is the present-day abundance of radiation, \(g_{*}\) is the number of relativistic degrees of freedom in energy density, and the subscripts \(e\) denotes the time of emission. Note that \(\Omega_{\rm r}h_{0}^{2}\simeq 4.2\times 10^{-5}\) with \(h_{0}=H_{0}/100\,{\rm km\,s^{-1}\,Mpc^{-1}}\). Here \(f=c\,k/(2\pi)\) is the frequency of the GW which has appeared due to the \(k\)-dependence of the spectrum of curvature perturbations (2.7) during inflation. We have used the curvature perturbations power spectrum (2.7) generating during the USR phase and calculated the convolution integral (3.5) numerically to find the current fractional energy density of GWs (3.6). The results are shown in Fig. 3 for various values of \(h\) and \(\Delta N\) in nano-Hertz bound. The results have been presented alongside the posteriors of an Hellings-Downs (HD) correlated free spectral reconstruction of the NANOGrav signal [13]. 
The values of \((h,\Delta N)\) are fixed such that the peak of \(\mathcal{P}_{\mathcal{R}}\) does not violate the PBHs bounds. As seen, the spectra of our model follow the HD pattern expected for a stochastic gravitational wave background. Interestingly, we see that within the observable window the predictions of all models, mild (\(|h|\lesssim 1\)) or sharp (\(|h|\gg 1\)), follow the same pattern and are not significantly different from each other. However, outside the NANOGrav observed window (\(f>10^{2}\,{\rm nHz}\)) the curves deviate from each other noticeably. This pattern is similar to the plots of power spectrum of curvature perturbations presented in Fig. 1. The reason is that all curves are obtained after imposing the PBHs bounds. However, the starting time of the USR and the value of the peak of the USR plateau are very similar for all curves as seen Figure 3: The prediction of the model compared to the NANOGrav data [13] for the mild transition \(h=-0.1\) for various values of \(N_{\rm tot}\). \(\Delta N\) is fixed by PBHs bounds allowed in Fig. 2. As \(f_{\rm PBH}\) is exponentially sensitive to \(\Delta N\), the values of \(\Delta N\) are nearly equal to each other: \(\Delta N=1.44,1.45,1.46\) for \(N_{\rm tot}=51,55\) and \(N_{\rm tot}=59\) respectively. in Fig. 1. This is the reason why all curves, sharp or mild, follow close trajectories on the observable window. However, crucial to our setup is that outside the NANOGrav window, the curves have distinct predictions for SIGWs on frequencies much higher than \(\sim 10^{2}\,\)nHz. More specifically, the final tail of the power spectrum scales somewhat inversely with the sharpness parameter \(h\) such that milder (sharper) transitions have higher (lower) tails. In Fig. 4 we we have shown the SIGWs spectra for a larger frequency range. In this figure, the quantity \(\Omega_{\text{GW}}h^{2}\) was plotted against the frequency together with the sensitivity of the various existing and forthcoming GW experiments such as LISA, SKA, BBO, DECIGO etc. As seen, the tails of SIGW spectrums for different \((h,\Delta N)\) can fall into the sensitivity bounds of these observations. It means that different values of \((h,\Delta N)\) are distinguishable from each other in future GWs observations. ## 4 Summary and Discussions The stochastic gravitational wave background detected by various PTAs observations can open a new window to the dynamics of early universe. In particular, this signal can be generated by GWs induced by scalar perturbations at second order in perturbation theory. The SIGWs can be used as a tool to distinguish various inflationary scenarios. A key criteria is that the models which are employed to explain the SGWB observed by the PTAs observations should not generate too much of PBHs which are constrained in various frequency ranges. In this work we have considered a single field model of inflation containing an intermediate USR phase. This setup has been used extensively in the past to generate PBHs and for the induced GWs studies. We have paid particular attention to the sharpness parameter of the model which play significant roles in loop corrections and for the amplitude of non-Gaussianity. In order to be away Figure 4: The prediction of the model compared to the NANOGrav data [13] for the fixed \(N_{\text{tot}}=55\) but for various values of \(h\). The value of \(\Delta N\) is fixed by the PBHs fraction allowed in Fig. 2. More specifically, \((h,\Delta N)=(-0.1,\,1.45),(-1,\,2.17),(-6,\,2.59)\) and \((-12,\,2.68)\). 
from the dangerous one-loop corrections we require a mild transition with \(|h|\lesssim 1\), which we shall assume in our analysis. This is also the limit where the amplitude of non-Gaussianity is small and one can employ the Gaussian predictions for PBH formation. We have calculated the spectrum of SIGWs and compared it to the NANOGrav results. The predictions of the model are consistent with the observed data. However, a careful data analysis is required to contrast the predictions of the model with the PTA data and to put constraints on the model parameters. While our setup can qualitatively explain the origin of the NANOGrav observations, it has specific predictions for the spectrum in higher frequency ranges. Our model predicts that a setup with a mild (sharp) transition has a higher (lower) tail of SIGWs once fit to the current NANOGrav data. As a result, the predictions of our model for the amplitude of induced GWs can be tested in future GW observations, which may put constraints on the model parameters or rule the model out. **Acknowledgments:** We are grateful to Antonio Riotto, Sina Hooshangi, and Seyed Ali Hosseini Mansoori for useful discussions and comments on the draft. We are partially supported by the "Saramadan" Federation of Iran. A. T. would like to thank University of Rwanda, EAIFR, and ICTP for their kind hospitality during the 17th international workshop on the "Dark Side of the Universe" where this work was in its final stages. Figure 5: The same as in Fig. 4 but for an extended range of the frequency. As we see, depending on the value of \(h\), the tails of the SIGWs differ noticeably for each model, which can be tested in future observations. The higher the value of \(|h|\), the lower the SIGW tail.
2306.04522
A remark on a conjecture on the symmetric Gaussian Problem
In this paper we study the functional given by the integral of the mean curvature of a convex set with Gaussian weight under a Gaussian volume constraint. It was conjectured that the ball centered at the origin is the only minimizer of such a functional for certain values of the mass. We give a positive answer in dimension two, while in higher dimensions the situation is different. In fact, for small values of the mass the ball centered at the origin is a local minimizer, while for large values the ball is a maximizer among convex sets with a uniform bound on the curvature.
Nicola Fusco, Domenico Angelo La Manna
2023-06-07T15:32:58Z
http://arxiv.org/abs/2306.04522v2
# A remark on a conjecture on the symmetric Gaussian problem ###### Abstract. We prove that the Gaussian problem is a well-known and well-known problem. We show that the Gaussian problem is a well-known and well-known problem. We show that the Gaussian problem is a well-known problem. We also show that the Gaussian problem is a well-known problem. We also show that the Gaussian problem is a well-known problem. We also show that the Gaussian problem is a well-known problem. We also show that the Gaussian problem is a well-known problem. We also show that the Gaussian problem is a well-known problem. We also show that the Gaussian problem is a well-known problem. We also show that the Gaussian problem is a well-known problem. We also show that the Gaussian problem is a well-known problem. We also show that the Gaussian problem is a well-known problem. We also show that the Gaussian problem is a well-known problem. We also show that the Gaussian problem is a well-known problem. We also show that the Gaussian problem is a well-known problem. We also show that the Gaussian problem is a well-known problem. We also show that the Gaussian problem is a well-known problem. We also show that the Gaussian problem is a well-known problem. We also show that the Gaussian problem is a well-known problem. We also show that the Gaussian problem is a well-known problem. We also show that the Gaussian problem is a well-known problem. We also show that the Gaussian problem is a well-known problem. We also show that the Gaussian problem is a well-known problem. We also show that the Gaussian problem is a well-known problem. We also show that the Gaussian problem is a well-known problem. We also show that the Gaussian problem is a well-known problem. We also show that the Gaussian problem is a well-known problem. We also show that the Gaussian problem is a well-known problem. We also show that the Gaussian problem is a well-known problem. We also show that the Gaussian problem is a well-known problem. We also show that the Gaussian problem is a well-known problem. We also show that the Gaussian problem is a well-known problem. We also show that the Gaussian problem is a well-known problem. We also show that the Gaussian problem is a well-known problem. We also show that the Gaussian problem is a well-known problem. We also show that the Gaussian problem is a well-known problem. We also show that the Gaussian problem is a well-known problem. We also show that the Gaussian problem is a well-known problem. We also show that the Gaussian problem is a well-known problem. We also show that the Gaussian problem is a well-known problem. We also show that the Gaussian problem is a well-known problem. We also show that the Gaussian problem is a well-known problem. We also show that the Gaussian problem is a well-known problem. We also show that the Gaussian problem is a well-known problem. We also show that the Gaussian problem is a well-known problem. We also show that the Gaussian problem is a well-known problem. We also show that the Gaussian problem is a well-known problem. We also show that the Gaussian problem is a well-known problem. We also show that the Gaussian problem is a well-known problem. We also show that the Gaussian problem is a well-known problem. We also show that the Gaussian problem is a well-known problem. We also show that the Gaussian problem is a well-known problem. We also show that the Gaussian problem is a well-known problem. We also show that the Gaussian problem is a well-known problem. 
## 2. Preliminaries

Let \(E\subset\mathbb{R}^{n}\) be an open set of class \(C^{2}\) and let \(X:\partial E\to\mathbb{R}^{n}\) be a \(C^{1}\) vector field. For any \(x\in\partial E\), denoting by \(\tau_{1},\dots,\tau_{n-1}\) an orthonormal basis for the tangent space \(T_{x}\partial E\), the tangential divergence of \(X\) is given by \[\operatorname{div}_{\tau}X=\sum_{i=1}^{n-1}\langle\nabla_{\tau_{i}}X,\tau_{i}\rangle,\] where \(\nabla_{\tau_{i}}X\) is the derivative of \(X\) in the direction \(\tau_{i}\). Note that if we still denote by \(X\) a \(C^{1}\) extension of the vector field to a tubular neighborhood of \(\partial E\), then \[\operatorname{div}_{\tau}X=\operatorname{div}X-\langle DX\nu_{\partial E},\nu_{\partial E}\rangle,\] where \(\nu_{\partial E}\) is the exterior normal to \(E\). We recall also that the mean curvature of \(\partial E\) (actually the sum of the principal curvatures) is given by \[H_{\partial E}=\operatorname{div}_{\tau}\nu_{\partial E}. \tag{2.2}\] If we extend \(\nu_{\partial E}\) to a tubular neighborhood of \(\partial E\) so that the resulting vector field is still of class \(C^{1}\), then \(H_{\partial E}=\operatorname{div}\nu_{\partial E}\) on \(\partial E\). Observe that with this definition it turns out that if \(E\) is locally the subgraph of a \(C^{2}(\mathbb{R}^{n-1})\) function \(u\), then \[H_{\partial E}=-\operatorname{div}\left(\frac{Du}{\sqrt{1+|Du|^{2}}}\right). \tag{2.3}\] We recall that if \(E\) is a bounded open set of class \(C^{2}\) and \(X\in C^{1}(\partial E,\mathbb{R}^{n})\), the divergence theorem for manifolds states that \[\int_{\partial E}\operatorname{div}_{\tau}X\,d\mathcal{H}^{n-1}=\int_{\partial E}H_{\partial E}\langle X,\nu_{\partial E}\rangle\,d\mathcal{H}^{n-1}.\] In particular, if \(X\) is a tangent vector field it holds \[\int_{\partial E}\operatorname{div}_{\tau}X\,d\mathcal{H}^{n-1}=0.\] Note that if \(E\) is an open set of class \(C^{1,1}\), so that it is locally the subgraph of a \(C^{1,1}\) function \(u\), the mean curvature of \(\partial E\) can be defined using (2.3). With this definition the above divergence theorem still holds.
Finally, the Laplace-Beltrami operator on \(\partial E\) is defined for any \(h\in C^{2}(\partial E)\) as \[\Delta_{\partial E}h=\operatorname{div}_{\tau}\nabla h,\] where \(\nabla h\) denotes the tangential gradient of \(h\).

## 3. Two dimensional case: a two-sided estimate of the integral of the curvature in weighted spaces

In this section we provide an estimate for the weighted integral of the curvature under suitable assumptions on the weight. Let \(f:[0,\infty)\to(0,\infty)\) be a \(C^{1}\) not increasing function and let \(w:(0,+\infty)\to[0,\infty)\) be defined as \[w(r)=-\frac{f^{\prime}(r)}{r}. \tag{3.1}\] We define the weighted area \(|E|_{w}\) of a set \(E\) as \[|E|_{w}=\int_{E}w(|x|)\,dx.\] Note that if \(f=e^{-\frac{r^{2}}{2}}\) then \(w=e^{-\frac{r^{2}}{2}}\). Hence the results given in this section apply to the particular case of the Gaussian weight. We start by proving an isoperimetric type inequality concerning a weighted integral of the curvature. To this aim, here and in the following we denote by \(B_{r}\) the ball centered at the origin with radius \(r\).

**Proposition 3.1**.: _Let \(r>0\), let \(f:[0,\infty)\to(0,\infty)\) be a \(C^{1}\) not increasing function and let \(w\) be defined as in (3.1). For any convex set \(E\subset\mathbb{R}^{2}\) of class \(C^{1,1}\) with \(|E|_{w}=|B_{r}|_{w}\) containing the origin it holds_ \[\int_{\partial E}H_{\partial E}f(|x|)\,d\mathcal{H}^{1}\leq\int_{\partial B_{r}}H_{\partial B_{r}}f(|x|)\,d\mathcal{H}^{1}. \tag{3.2}\] _If \(w\) is not increasing, then (3.2) holds for any convex set \(E\) of class \(C^{1,1}\) with \(|E|_{w}=|B_{r}|_{w}\)._

Proof.: For a convex set \(E\) of class \(C^{1,1}\) containing the origin we denote by \(\rho:\mathbb{R}\to(0,\infty)\) a \(C^{1,1}\) periodic function such that \(\partial E=\{\rho(\theta)(\sin\theta,\cos\theta):\theta\in[0,2\pi]\}\). Note that for almost every \(\theta\in[0,2\pi]\) the curvature at \(\rho(\theta)(\sin\theta,\cos\theta)\) is given by \[H_{\partial E}=\frac{\rho^{2}+2\rho^{\prime 2}-\rho\rho^{\prime\prime}}{(\rho^{2}+\rho^{\prime 2})^{\frac{3}{2}}}.\] Thus we compute \[|E|_{w}=\int_{0}^{2\pi}\int_{0}^{\rho(\theta)}tw(t)\,dt\,d\theta\] and \[\int_{\partial E}H_{\partial E}f(|x|)\,d\mathcal{H}^{1}=\int_{0}^{2\pi}\frac{\rho^{2}+2\rho^{\prime 2}-\rho\rho^{\prime\prime}}{\rho^{2}+\rho^{\prime 2}}f(\rho)\,d\theta.\] Since \(|E|_{w}=|B_{r}|_{w}\) we have, recalling (3.1), \[2\pi f(0)-\int_{0}^{2\pi}f(\rho)\,d\theta=\int_{0}^{2\pi}\int_{0}^{\rho(\theta)}tw(t)\,dt\,d\theta=2\pi\int_{0}^{r}tw(t)\,dt=2\pi(f(0)-f(r)),\] which gives \[\int_{0}^{2\pi}f(\rho)\,d\theta=2\pi f(r).\] Hence, integrating by parts, \[\begin{split}\int_{\partial E}H_{\partial E}f(|x|)\,d\mathcal{H}^{1}&=\int_{0}^{2\pi}\frac{\rho^{\prime 2}-\rho\rho^{\prime\prime}}{\rho^{2}+\rho^{\prime 2}}f(\rho)\,d\theta+\int_{0}^{2\pi}f(\rho)\,d\theta\\ &=-\int_{0}^{2\pi}\frac{d}{d\theta}\arctan\left(\frac{\rho^{\prime}}{\rho}\right)f(\rho)\,d\theta+2\pi f(r)\\ &=\int_{0}^{2\pi}\rho\rho^{\prime}\arctan\left(\frac{\rho^{\prime}}{\rho}\right)f^{\prime}(\rho)\,d\theta+2\pi f(r)\\ &\leq 2\pi f(r)=\int_{\partial B_{r}}H_{\partial B_{r}}f(r)\,d\mathcal{H}^{1},\end{split} \tag{3.3}\] where in the last inequality we used that \(t\arctan t\geq 0\) for all \(t\in\mathbb{R}\) and \(f^{\prime}(\rho)\leq 0\). When \(0\in\partial E\), given \(\varepsilon>0\) small we may translate \(E\) to get a set \(E_{\varepsilon}=E+x_{\varepsilon}\) with \(|x_{\varepsilon}|<\varepsilon\) and such that \(0\in\operatorname{int}E_{\varepsilon}\).
Then the validity of (3.2) for \(E\) follows by applying the same inequality to \(E_{\varepsilon}\) and then letting \(\varepsilon\to 0\). In the case that the set \(E\) does not contain the origin let \(x_{0}\) be the nearest point of \(\overline{E}\) to the origin. Hence, since \(E\) is convex we have that \(\langle x,x_{0}\rangle\geq|x_{0}|^{2}\) for all \(x\in\overline{E}\), which in turn implies that \(|x-x_{0}|<|x|\) for all \(x\in\overline{E}\). Thus \(|E-x_{0}|_{w}>|E|_{w}\). Let \(s>0\) such that \(|B_{s}|_{w}=|E-x_{0}|_{w}\). Since \(|E-x_{0}|_{w}>|E|_{w}\) we have that \(s>r\) and using that \(E-x_{0}\) passes through the origin we find \[\int_{\partial E}H_{\partial E}f(|x|)\,d\mathcal{H}^{1}\leq\int_{\partial(E-x_ {0})}H_{\partial E}f(|x|)\,d\mathcal{H}^{1}\leq 2\pi f(s)\leq 2\pi f(r).\] We note that Proposition 3.1 remains true if we replace the convexity assumption on \(E\) with the assumption that \(E\) is starshaped with respect to the origin. Next result shows that when \(E\) is a convex set containing the origin, the inequality above can be given in a stronger quantitative form. To this aim, given any sufficiently smooth set \(E\subset\mathbb{R}^{2}\), we introduce the following positive quantities \[\alpha_{f}(E)=-\int_{\partial E}\left(|x|-\frac{\langle x,\nu\rangle^{2}}{|x|} \right)f^{\prime}(|x|)\,d\mathcal{H}^{1}, \tag{3.4}\] and \[\beta_{f}(E)=\int_{\partial E}\left(|x|-\frac{\langle x,\nu\rangle^{2}}{|x|} \right)\left(\frac{f(|x|)-|x|f^{\prime}(|x|)}{|x|^{2}}\right)\,d\mathcal{H}^{1}. \tag{3.5}\] **Theorem 3.2**.: _Let \(E\) be a convex set of class \(C^{1,1}\) containing the origin such that \(|E|_{w}=|B_{r}|_{w}\). Then_ \[\alpha_{f}(E)\leq\int_{\partial B_{r}}H_{\partial B_{r}}f(|x|)\,d\mathcal{H}^ {1}-\int_{\partial E}H_{\partial E}f(|x|)\,d\mathcal{H}^{1}\leq\beta_{f}(E). \tag{3.6}\] Proof.: Denote by \(\rho:\mathbb{R}\to(0,\infty)\) a \(C^{1,1}\) periodic function such that \(\partial E=\{\rho(\theta)(\sin\theta,\cos\theta):\theta\in[0,2\pi]\}\). To prove the first inequality we observe that \[|x|-\frac{\langle x,\nu\rangle^{2}}{|x|}=\frac{\rho\rho^{\prime 2}}{\rho^{2} +\rho^{\prime 2}}. \tag{3.7}\] Using that \[t\arctan t\geq\frac{t^{2}}{\sqrt{1+t^{2}}}\] for all \(t\in\mathbb{R}\), using (3.3) and arguing as in the proof of Proposition 3.1 we get \[\int_{\partial E}H_{\partial E}f(|x|)\,d\mathcal{H}^{1}= \int_{0}^{2\pi}\rho^{2}\frac{\rho^{\prime}}{\rho}\arctan\left( \frac{\rho^{\prime}}{\rho}\right)\,\,f^{\prime}(\rho)d\theta+2\pi f(r)\] \[\leq \int_{0}^{2\pi}\frac{\rho^{\prime}\rho^{2}}{\sqrt{\rho^{\prime 2 }+\rho^{2}}}\,\,f^{\prime}(\rho)d\theta+2\pi f(r)\] \[= \int_{\partial E}\left(|x|-\frac{\langle x,\nu\rangle^{2}}{|x|} \right)f^{\prime}(|x|)\,d\mathcal{H}^{1}+\int_{\partial B_{r}}H_{\partial B_{ r}}f(|x|)\,d\mathcal{H}^{1}.\] To prove the second inequality we first recall that up to a constant \(u(x)=\log|x|\) is the fundamental solution of the laplacian in two dimensions. As a consequence of this fact we claim that if \(E\) contains the origin and \(|E|_{w}=|B_{r}|_{w}\) \[\int_{\partial E}\frac{1}{|x|}f(|x|)\,d\mathcal{H}^{1}\geq\int_{\partial B_{ r}}\frac{1}{|x|}f(|x|)\,d\mathcal{H}^{1}. 
\tag{3.8}\] In fact, using the divergence theorem in \(E\setminus B_{\varepsilon}\), where \(\overline{B}_{\varepsilon}\subset E\) and letting \(\varepsilon\to 0^{+}\), with some elementary calculations we get \[\begin{split}\int_{\partial E}\frac{1}{|x|}f(|x|)\,d\mathcal{H}^{ 1}&\geq\int_{\partial E}\frac{\langle x,\nu\rangle}{|x|^{2}}f(|x| )\,d\mathcal{H}^{1}&=\int_{E}\operatorname{div}\left(\frac{x}{|x |^{2}}f(|x|)\right)dx\\ &=2\pi f(0)+\int_{E}\frac{f^{\prime}(|x|)}{|x|}\,dx\\ &=\int_{\partial B_{r}}\frac{1}{|x|}f(|x|)\,d\mathcal{H}^{1}, \end{split} \tag{3.9}\] where in the last equality we used the assumption that \(|E|_{w}=|B_{r}|_{w}\). Note that the above inequality is strict, unless \(E=B_{r}\) and it can be actually written in a quantitative form arguing, see for instance [14]. To prove the proposition we now use (3.9) and the diverge theorem on a manifold. \[\int_{\partial E}H_{\partial E}f(|x|)\,d\mathcal{H}^{1}\geq\int_{ \partial E}H_{\partial E}\left\langle\frac{x}{|x|},\nu\right\rangle f(|x|)\,d \mathcal{H}^{1}\] \[=\int_{\partial E}\operatorname{div}_{\tau}\left(\frac{x}{|x|}f( |x|)\right)\,d\mathcal{H}^{1}\] \[=\int_{\partial E}\frac{1}{|x|}f(|x|)\,d\mathcal{H}^{1}+\int_{ \partial E}\left(|x|-\frac{\langle x,\nu\rangle^{2}}{|x|}\right)\left(\frac{| x|f^{\prime}(|x|)-f(|x|)}{|x|^{2}}\right)\,d\mathcal{H}^{1}\] \[\geq\int_{\partial B_{r}}\frac{1}{|x|}f(|x|)\,d\mathcal{H}^{1}+ \int_{\partial E}\left(|x|-\frac{\langle x,\nu\rangle^{2}}{|x|}\right)\left( \frac{|x|f^{\prime}(|x|)-f(|x|)}{|x|^{2}}\right)\,d\mathcal{H}^{1}\] \[=\int_{\partial B_{r}}H_{\partial B_{r}}f(|x|)\,d\mathcal{H}^{1} +\int_{\partial E}\left(|x|-\frac{\langle x,\nu\rangle^{2}}{|x|}\right)\left( \frac{|x|f^{\prime}(|x|)-f(|x|)}{|x|^{2}}\right)\,d\mathcal{H}^{1},\] thus proving the second inequality in (3.6). **Remark 3.3**.: Observe that the above proof shows that inequality (3.8) holds for any set \(E\) of finite perimeter containing the origin in the interior. Note however that this latter assumption can not be weakened as it is shown by an example in [9]. Note that the above theorem essentially says that one may control the gap \(\mathscr{H}(E)-\mathscr{H}(B_{r})\) with the oscillation of the normals to \(E\) and \(B_{r}\). To be more precise, let us denote by \(\pi\) the projection of \(\partial E\) on \(\partial B_{r}\). Observe that \[\frac{1}{2}|x||\nu_{\partial E}(x)-\nu_{\partial B_{r}}(\pi(x))|^{2}\leq|x|- \frac{\langle x,\nu\rangle^{2}}{|x|}\leq|x||\nu_{\partial E}(x)-\nu_{\partial B _{r}}(\pi(x))|^{2}.\] Then it is clear that under the assumption of Theorem 3.2 and if \(E\subset B_{R}\) for some \(R>0\), then \[|\int_{\partial E}H_{\partial E}f(|x|)\,d\mathcal{H}^{1}-\int_{\partial B_{r} }H_{\partial B_{r}}f(|x|)\,d\mathcal{H}^{1}|\leq C(f,R)\|\nu_{\partial E}(x)- \nu_{\partial B_{r}}(\pi(x))\|_{L^{2}(\partial E)}^{2}\] where the constant \(C(f,f^{\prime},R)\) depends only on the function \(f\) and its derivative and \(R\). Note that the above Theorem applies in particular to the Gaussian weight \(\gamma(r)=e^{-\frac{r^{2}}{2}}\). As a consequence of this we get **Remark 3.4**.: We note that inequalities (3.2) and (3.6) can be immediately extended to any bounded convex set \(E\) contained in the plane with not empty interior. 
To see this we recall (see [17, Section 4.2]) that for such \(E\) there exists a _curvature measure_ \(\mu_{E}\) supported on \(\partial E\) such that if \(E_{h}\) is a sequence of smooth convex sets converging in the Hausdorff distance to \(E\), then \(H_{\partial E_{h}}\,\mathcal{H}^{1}\llcorner\,\partial E_{h}\rightharpoonup\mu_{E}\). Using this measure, for instance (3.2) becomes \[\int_{\partial E}f(|x|)\,d\mu_{E}\leq\int_{\partial B_{r}}H_{\partial B_{r}}f(|x|)\,d\mathcal{H}^{1},\] whenever \(E\) is such that \(|E|_{w}=|B_{r}|_{w}\). A similar extension also holds for (3.6). We conclude this section by proving another consequence of Theorem 3.2. More precisely, if \(E_{h}\to B_{r}\) in the Hausdorff distance then the corresponding weighted curvature integrals converge with a speed controlled by the distance of \(E_{h}\) from \(B_{r}\). To this aim, given two closed sets \(E,F\subset\mathbb{R}^{2}\) we denote by \(d_{\mathcal{H}}(E,F)\) the Hausdorff distance between \(E\) and \(F\). We will also use the following lemma, which is the two dimensional version of a more general statement proved in [8] (see proof of Lemma 3.3).

**Lemma 3.5**.: _Let \(E\subset\mathbb{R}^{2}\) be a convex body containing the origin and let \(\rho:[0,2\pi]\to(0,2)\) be such that \(\partial E=\{\rho(\theta)(\cos\theta,\sin\theta):\theta\in[0,2\pi]\}\). Then_ \[\|\rho^{\prime}\|_{L^{\infty}}\leq 2\sqrt{\|\rho-1\|_{L^{\infty}}}\frac{1+\|\rho-1\|_{L^{\infty}}}{1-\|\rho-1\|_{L^{\infty}}}.\]

**Theorem 3.6**.: _Let \(f:[0,\infty)\to(0,\infty)\) be a \(C^{1}\) not increasing function and let \(E_{h}\subset\mathbb{R}^{2}\) be a sequence of convex sets converging to \(B_{r}\) in the Hausdorff distance. Then there exists a constant \(C\) depending only on \(r\) and \(f\) such that for \(h\) large_ \[\left|\int_{\partial E_{h}}f(|x|)\,d\mu_{E_{h}}-\int_{\partial B_{r}}f(|x|)H_{\partial B_{r}}\,d\mathcal{H}^{1}\right|\leq Cd_{\mathcal{H}}(E_{h},B_{r}).\]

Proof.: Let \(w\) be the function defined as in (3.1) and for all \(h\) let \(r_{h}\) be the unique positive number such that \(|B_{r_{h}}|_{w}=|E_{h}|_{w}\). We have \[\begin{split}\left|\int_{\partial B_{r}}f(|x|)H_{\partial B_{r}}\,d\mathcal{H}^{1}-\int_{\partial E_{h}}f(|x|)\,d\mu_{E_{h}}\right|\leq&\left|\int_{\partial B_{r}}f(|x|)H_{\partial B_{r}}\,d\mathcal{H}^{1}-\int_{\partial B_{r_{h}}}f(|x|)H_{\partial B_{r_{h}}}\,d\mathcal{H}^{1}\right|\\ &+\int_{\partial B_{r_{h}}}H_{\partial B_{r_{h}}}f(|x|)\,d\mathcal{H}^{1}-\int_{\partial E_{h}}f(|x|)\,d\mu_{E_{h}}.\end{split} \tag{3.10}\] Setting \(d_{h}=d_{\mathcal{H}}(E_{h},B_{r})\), since for \(h\) large \(B_{r-d_{h}}\subset E_{h}\subset B_{r+d_{h}}\) we have \(r-d_{h}\leq r_{h}\leq r+d_{h}\), that is \(|r-r_{h}|\leq d_{h}\). Hence for \(h\) large we may estimate the first integral on the right-hand side of (3.10) as follows. \[\left|\int_{\partial B_{r}}f(|x|)H_{\partial B_{r}}\,d\mathcal{H}^{1}-\int_{\partial B_{r_{h}}}f(|x|)H_{\partial B_{r_{h}}}\,d\mathcal{H}^{1}\right|\leq 2\pi|f(r_{h})-f(r)|\leq 2\pi\max_{[r/2,2r]}|f^{\prime}|\,|r_{h}-r|\leq Cd_{h}.\] To estimate the second integral we denote by \(\rho_{h}\) the Lipschitz function such that \(\partial E_{h}=\rho_{h}(\cos\theta,\sin\theta)\).
Then we use the second inequality in (3.6), (3.7) and Lemma 3.5 applied to \(\frac{1}{r}E_{h}\) to get for \(h\) large \[\int_{\partial B_{r_{h}}}H_{\partial B_{r_{h}}}f(|x|)\,d\mathcal{ H}^{1}- \int_{\partial E_{h}}f(|x|)\,d\mu_{E_{h}}\] \[\leq\max_{\rho\in[r/2,2r]}\left(\frac{f(\rho)-\rho f^{\prime}( \rho)}{\rho^{\prime 2}}\right)\int_{\partial E_{h}}\left(|x|-\frac{\langle x,\nu_{ \partial E_{h}}\rangle^{2}}{|x|}\right)\,d\mathcal{H}^{1}\] \[\leq C\int_{0}^{2\pi}\rho_{h}^{\prime 2}\,d\theta\leq 8\pi Cr\|\rho_{h} -r\|_{\infty}\left(\frac{r+\|\rho_{h}-r\|_{\infty}}{r-\|\rho_{h}-r\|_{\infty }}\right)^{2}\] \[\leq C^{\prime}d_{h}.\] This last estimates concludes the proof. Observe that arguing as in final part of the above proof under the assumption of Theorem 3.2 if \(d_{\mathcal{H}}(E,B_{r})<1\), we have \[\left|\int_{\partial E}H_{\partial E}f(|x|)\,d\mathcal{H}^{1}-\int_{\partial B_{ r}}H_{\partial B_{r}}f(|x|)\,d\mathcal{H}^{1}\right|\leq C(f)d_{\mathcal{H}}(E,B_{r})\] for some constant depending only on \(f\). ## 4. Higher dimension The isoperimetric inequality proved in Proposition 3.1 is false in higher dimension, as shown by the following example. **Example 4.1**.: Let \(n=3\), \(r>0\). If \(r\) is sufficiently small, there exists a smooth convex body \(E\) such that \(\gamma(E)=\gamma(B_{r})\) but \[\mathscr{H}(E)>\mathscr{H}(B_{r}).\] Proof.: Denote by \(C(t)\) the cylinder in \[C(s)=\{(x^{\prime},x_{3})\in\mathbb{R}^{2}\times\mathbb{R}:\,|x^{\prime}|\leq s\}\] For any \(r>0\) let \(s(r)\) the unique positive number such that \(\gamma(C_{s(r)})=\gamma(B_{r}).\) Note that \[\gamma(C_{s})=\int_{0}^{s}te^{-\frac{r^{2}}{2}}\,dt=1-e^{-\frac{s^{2}}{2}}\] while \[(2\pi)^{\frac{1}{2}}\gamma(B_{r})=2\int_{0}^{r}t^{2}e^{-\frac{t^{2}}{2}}\,dt= 2\left(-re^{-\frac{r^{2}}{2}}+\int_{0}^{r}e^{-\frac{t^{2}}{2}}\,dt\right).\] Since \(\gamma(B_{r})=\gamma(C_{s})\) we get \[e^{\frac{-s^{2}}{2}}=1-\frac{2}{(2\pi)^{\frac{1}{2}}}\left(-re^{-\frac{r^{2}}{ 2}}+\int_{0}^{r}e^{-\frac{t^{2}}{2}}\,dt\right)\] Moreover, we also have \[\mathscr{H}(C_{s})=(2\pi)^{\frac{3}{2}}e^{-\frac{s^{2}}{2}},\qquad\mathscr{H} (B_{r})=8\pi re^{-\frac{r^{2}}{2}}.\] Hence \[\mathscr{H}(C_{s}) =(2\pi)^{\frac{3}{2}}+4\pi(re^{-\frac{r^{2}}{2}}-\int_{0}^{r}e^{- \frac{t^{2}}{2}}\,dt)\] \[=\mathscr{H}(B_{r})+(2\pi)^{\frac{3}{2}}-4\pi(re^{-\frac{r^{2}}{ 2}}+\int_{0}^{r}e^{-\frac{t^{2}}{2}}\,dt)\] \[\geq\mathscr{H}(B_{r})+1,\] provided \(r\) is sufficiently small. Let \(C_{T,s(r)}\) the convex body obtained as the union of the cylinder \(C_{s(r)}\cap\{|x_{3}|<T\}\) with the two half balls of radius \(s(r)\) placed on the upper and lower basis of the cylinder. Since \[\gamma(C_{T,s(r)})\to\gamma(C_{s(r)})\] as \(T\to\infty\), we conclude that \(\mathscr{H}(C_{T,s(r)})>\mathscr{H}(B_{r^{\prime}})\) with \(r^{\prime}\) such that \(\gamma(C_{T,s(r)})=\gamma(B_{r^{\prime}})\), provided \(r\) is small and \(T\) is sufficiently large. **Lemma 4.2**.: _Let \(E\) be a bounded open set of class \(C^{2}\) starshaped with respect to the origin and let \(h:\mathbb{S}^{n-1}\to(0,\infty)\) a \(C^{2}\) function such that_ \[\partial E=\{y=xh(x),\,x\in\mathbb{S}^{n-1}\}.\] _Then_ \[H_{\partial E}(xh(x))=\frac{-\frac{1}{h}\Delta_{\mathbb{S}^{n-1}}h+n-1}{\sqrt {|\nabla h|^{2}+h^{2}}}+\frac{h\frac{1}{2}\langle\nabla|\nabla h|^{2},\nabla h \rangle+h^{2}|\nabla h|^{2}}{h^{2}\sqrt{(|\nabla h|^{2}+h^{2})^{3}}}. \tag{4.1}\] Proof.: First, we extend \(h\) to \(\mathbb{R}^{n}\setminus\{0\}\) as a homogeneous function of degree \(0\) still denoted by \(h\). 
Note that with this definition for any \(x\in\mathbb{S}^{n-1}\) the tangential gradient of \(h\) at \(x\) coincides with the gradient of \(h\) at the same point. Note also that the exterior normal to \(\partial E\) at \(xh(x)\), for \(x\in\mathbb{S}^{n-1}\), is given by \[\nu(xh(x))=\frac{xh(x)-\nabla h(x)}{\sqrt{h^{2}(x)+\nabla h(x)|^{2}}},\] where we have set \(\nu=\nu_{\partial E}\). Thus, setting \(y=xh(x)\) and recalling (2.2) we have \[H_{\partial E}(y)=\operatorname{div}\nu(y)=\frac{\partial\nu_{i}}{\partial y _{i}}(y)=\frac{\partial\nu_{i}}{\partial x_{j}}(y)\frac{\partial x_{j}}{ \partial y_{i}}(y),\] where we have adopted the standard convention of summation over repeated indexes. Since the derivatives of \(h\) are homogeneous of degree \(-1\) we have \[\frac{\partial x_{j}}{\partial y_{i}}=\frac{\partial}{\partial y_{i}}\frac{y_ {j}}{h(y)}=\frac{\delta_{ij}}{h(x)}-\frac{x_{j}}{h^{2}(x)}\frac{\partial h}{ \partial x_{i}}(x).\] Hence \[H_{\partial E}(xh(x))=\frac{1}{h(x)}\operatorname{div}\left(\frac{xh(x)- \nabla h(x)}{\sqrt{h^{2}+|\nabla h|^{2}}}\right)-\frac{\partial\nu_{i}}{ \partial x_{j}}\frac{x_{j}}{h^{2}}\frac{\partial h}{\partial x_{i}}.\] Denoting by \(\operatorname{div}_{\tau}\) the tangential divergence on \(\mathbb{S}^{n-1}\) we have \[H_{\partial E}(xh(x)) =\frac{1}{h(x)}\operatorname{div}\left(\frac{xh(x)-\nabla h(x)}{ \sqrt{h^{2}+|\nabla h|^{2}}}\right)=\frac{1}{h(x)}\operatorname{div}_{\tau} \nu(x)+\frac{1}{h}\frac{\partial\nu_{i}}{\partial x_{j}}x_{i}x_{j}-\frac{ \partial\nu_{i}}{\partial x_{j}}\frac{x_{j}}{h^{2}}\frac{\partial h}{ \partial x_{i}}\] \[=\frac{1}{h(x)}\operatorname{div}_{\tau}\nu(x)+\frac{1}{h^{2}}(h ^{2}+|\nabla h|^{2})^{\frac{1}{2}}\frac{\partial\nu_{i}}{\partial x_{j}}\nu_{ i}x=\frac{1}{h(x)}\operatorname{div}_{\tau}\nu(x).\] Then, since \(\langle x,\nabla h\rangle=0\), an easy calculation gives (4.1). We conclude by proving that in higher dimension if \(r>\sqrt{n-2}\) then the ball \(B_{r}\) is a local maximizer of the integral of the weighted mean curvature with respect to \(C^{2}\) perturbations. Quite surprisingly, the ball \(B_{r}\) is a local minimizer if \(r\) is small enough. **Theorem 4.3**.: _For all \(r>0\) there exist \(\varepsilon_{0}(r),C(r)>0\) with the property that if \(u\in W^{2,\infty}(\mathbb{S}^{n-1})\), \(\|u\|_{W^{2,\infty}}\leq\varepsilon<\varepsilon_{0}\) and \(E=\{trx(1+u(x)),\,x\in\mathbb{S}^{n-1},\,t\in(0,1)\}\) is such that \(\gamma(E)=\gamma(B_{r})\) then_ \[\mathscr{H}(B_{r})-\mathscr{H}(E)\geq r^{n-2}e^{-\frac{r^{2}}{2}}\left(r^{2}- n+2-C\varepsilon_{0}\right)\|u\|_{W^{1,2}(\mathbb{S}^{n-1})}. \tag{4.2}\] _Moreover, if \(E=-E\)_ \[\mathscr{H}(B_{r})-\mathscr{H}(E)\leq r^{n-2}e^{-\frac{r^{2}}{2}}\left(C \varepsilon+r^{2}-(n-2)\frac{n-1}{2n}\right)\|u\|_{W^{1,2}(\mathbb{S}^{n-1})}. 
\tag{4.3}\] Proof.: To prove our statement we use (4.1) with \(h\) replaced by \(r(1+u)\), thus getting \[\mathscr{H}(E)=\int_{\partial E}H_{\partial E}\,d\mathcal{H}^{n-1}\] \[=\int_{\mathbb{S}^{n-1}}\left[(n-1)-\left(\frac{\Delta u}{1+u} \right)+\left(\frac{\langle\nabla^{2}u\nabla u,\nabla u\rangle+(1+u)|\nabla u| ^{2}}{(1+u)(|\nabla u|^{2}+(1+u)^{2})}\right)\right]\frac{r^{n-2}e^{-\frac{r^{ 2}}{2}(1+u)^{2}}}{((1+u))^{2-n}}\,d\mathcal{H}^{n-1}.\] \[=r^{n-2}\left[(n-1)I-J+K\right].\] By Taylor expansion and the smallness of \(u\) we get \[I= \int_{\mathbb{S}^{n-1}}(1+u)^{n-2}e^{-\frac{r^{2}}{2}(1+u)^{2}}d \mathcal{H}^{n-1}\] \[= e^{-\frac{r^{2}}{2}}\left(n\omega_{n}+((n-2)-r^{2})\int_{ \mathbb{S}^{n-1}}u\,d\mathcal{H}^{n-1}\right)\] \[+e^{-\frac{r^{2}}{2}}\left(\frac{(n-2)(n-3)-(2n-3)r^{2}+r^{4}}{2 }\int_{\mathbb{S}^{n-1}}u^{2}d\mathcal{H}^{n-1}\right)+o(\|u\|_{L^{2}(\mathbb{ S}^{n-1})}^{2})\] and \[e^{\frac{r^{2}}{2}}\int_{\mathbb{S}^{n-1}}(1+u)^{n-3}\Delta ue^{ -\frac{r^{2}(1+u)^{2}}{2}}\,d\mathcal{H}^{n-1}\] \[= \int_{\mathbb{S}^{n-1}}\Delta ud\mathcal{H}^{n-1}+(n-3-r^{2}) \int_{\mathbb{S}^{n-1}}u\Delta u\,d\mathcal{H}^{n-1}\] \[+\int_{\mathbb{S}^{n-1}}u^{2}\Delta uG(u)\,d\mathcal{H}^{n-1},\] where \(G(u)\) contains the remainder in the Taylor expansion. Using that \(\int\Delta u\,d\mathcal{H}^{n-1}=0\) and integrating by parts the terms involving the Laplace-Beltrami operator we infer \[J=\int_{\mathbb{S}^{n-1}}(1+u)^{n-3}\Delta ue^{-\frac{r^{2}(1+u)^{2}}{2}}=-e^ {-\frac{r^{2}}{2}}(n-3-r^{2})\int_{\mathbb{S}^{n-1}}|\nabla u|^{2}d\mathcal{H }^{n-1}+o(\|u\|_{W^{1,2}(\mathbb{S}^{n-1})}^{2}).\] The last term is actually easier to treat since we are assuming also the smallness of the Hessian of \(u\). Thus we have \[K= \int_{\mathbb{S}^{n-1}}\frac{\langle\nabla^{2}u,\nabla u,\nabla u \rangle+(1+u)|\nabla u|^{2}}{(1+u)^{1-n}((1+u)^{2}+|\nabla u|^{2})}e^{-\frac{r ^{2}}{2}(1+u)^{2}}\,d\mathcal{H}^{n-1}\] \[= e^{-\frac{r^{2}}{2}}(1+o(\|u\|_{W^{1,2}(\mathbb{S}^{n-1})}^{2}) )\int_{\mathbb{S}^{n-1}}\langle\nabla^{2}u,\nabla u,\nabla u\rangle+|\nabla u |^{2}\,d\mathcal{H}^{n-1}.\] Collecting all the previous equalities we then get \[\mathscr{H}(E) -\mathscr{H}(B_{r})=r^{n-2}e^{-\frac{r^{2}}{2}}(n-1)\left(((n-2)- r^{2})\int_{\mathbb{S}^{n-1}}u\,d\mathcal{H}^{n-1}\right)\] \[+r^{n-2}e^{-\frac{r^{2}}{2}}(n-1)\left(\frac{(n-2)(n-3)-(2n-3)r^{ 2}+r^{4}}{2}\int_{\mathbb{S}^{n-1}}u^{2}d\mathcal{H}^{n-1}\right)\] \[+r^{n-2}e^{-\frac{r^{2}}{2}}\left((n-2-r^{2})\int_{\mathbb{S}^{n-1 }}|\nabla u|^{2}d\mathcal{H}^{n-1}+\int_{\mathbb{S}^{n-1}}\langle\nabla^{2}u, \nabla u,\nabla u\rangle\,d\mathcal{H}^{n-1}\right). \tag{4.4}\] To estimate the integral of \(u\) in the previous equation we need to exploit the assumption that the Gaussian measures of \(E\) and \(B_{r}\) are equal. 
In fact, since \[\gamma(B_{r})=\gamma(E)=\frac{r^{n}}{(2\pi)^{n/2}}\int_{B}(1+u(x))^{n}e^{-\frac{r^{2}|x|^{2}(1+u(x))^{2}}{2}}\,dx, \tag{4.5}\] where \(B\) denotes the unit ball, we can expand the integral via the Taylor formula to find \[\int_{0}^{1}t^{n-1}\,dt\int_{\mathbb{S}^{n-1}}\Big{[}(1+u)^{n}e^{-\frac{r^{2}t^{2}(1+u)^{2}}{2}}-e^{-\frac{r^{2}t^{2}}{2}}\Big{]}\,d\mathcal{H}^{n-1}=0.\] Using again the Taylor expansion, we then easily get \[\begin{split}0&=\int_{0}^{1}t^{n-1}e^{-\frac{r^{2}t^{2}}{2}}\,dt\int_{\mathbb{S}^{n-1}}\big{[}(1+u)^{n}e^{-r^{2}t^{2}(u+u^{2}/2)}-1\big{]}\,d\mathcal{H}^{n-1}\\ &=\int_{0}^{1}t^{n-1}e^{-\frac{r^{2}t^{2}}{2}}\,dt\int_{\mathbb{S}^{n-1}}\Big{[}(n-r^{2}t^{2})u+\Big{(}\frac{n(n-1)}{2}-\frac{(2n+1)r^{2}t^{2}}{2}+\frac{r^{4}t^{4}}{2}\Big{)}u^{2}\Big{]}\,d\mathcal{H}^{n-1}+o(\|u\|_{L^{2}(\mathbb{S}^{n-1})}^{2}). \end{split} \tag{4.6}\] Therefore if we write \[u=\sum_{k=0}^{\infty}\sum_{i=1}^{G(n,k)}a_{k,i}y_{k,i},\quad\text{where}\quad a_{k,i}=\int_{\mathbb{S}^{n-1}}uy_{k,i}\,d\mathcal{H}^{n-1},\] with \(\{y_{k,i}\}\) an orthonormal basis of spherical harmonics of degree \(k\) on \(\mathbb{S}^{n-1}\) and \(G(n,k)\) its dimension, we have \[||u||^{2}_{L^{2}(\mathbb{S}^{n-1})}=\sum_{k=0}^{\infty}\sum_{i=1}^{G(n,k)}a_{k,i}^{2},\quad||D_{\tau}u||^{2}_{L^{2}(\mathbb{S}^{n-1})}=\sum_{k=1}^{\infty}k(k+n-2)\sum_{i=1}^{G(n,k)}a_{k,i}^{2}\,. \tag{4.9}\] Note that (4.6) implies \[|a_{0}|^{2}=\left|\int_{\mathbb{S}^{n-1}}u\,d\mathcal{H}^{n-1}\right|^{2}=o(\|u\|^{2}_{L^{2}(\mathbb{S}^{n-1})}).\] Since \(E=-E\) we also have that \(u\) is an even function, hence \(a_{2k+1,i}=0\) for all \(k\in\mathbb{N}\) and \(i\in\{1,\dots,G(n,2k+1)\}\). Hence we can write \[\|\nabla u\|^{2}_{L^{2}(\mathbb{S}^{n-1})}\geq 2n\|u\|^{2}_{L^{2}(\mathbb{S}^{n-1})}-o(\|u\|^{2}_{L^{2}(\mathbb{S}^{n-1})}),\] which finally gives \[\int_{\partial E}H_{\partial E}e^{-\frac{|x|^{2}}{2}}\,d\mathcal{H}^{n-1}-\int_{\partial B_{r}}H_{\partial B_{r}}e^{-\frac{|x|^{2}}{2}}\,d\mathcal{H}^{n-1}\geq r^{n-2}e^{-r^{2}}\left((n-2)\frac{n+1}{2n}-r^{2}-\varepsilon_{0}\right)\|\nabla u\|_{L^{2}(\mathbb{S}^{n-1})}.\] The next result is a local maximality result under weaker assumptions. To this aim we introduce the function \[\psi(s)=\frac{1}{\sqrt{2\pi}}\int_{-\infty}^{s}e^{-\frac{t^{2}}{2}}\,dt,\] which is the value of the Gaussian volume of the half space \(H_{s}=\{x\in\mathbb{R}^{n}:x_{1}\leq s\}\).

**Theorem 4.4**.: _Let \(n\geq 3\), \(M>0\) and \(m\geq\max\{\psi(2M),\psi(\sqrt{n-2})\}\). For any \(C^{2}\) convex set \(E\) containing the origin with \(\gamma(E)=\gamma(B_{r})=m\) and \(\|H_{\partial E}\|_{L^{\infty}}\leq M\) it holds_ \[\mathscr{H}(E)\leq\mathscr{H}(B_{r}).
\tag{4.10}\] _Moreover, if \(m\geq\psi(\sqrt{n-2})\)_ \[\int_{\partial E}\frac{\langle x,\nu\rangle}{|x|}H_{\partial E}e^{-\frac{|x|^ {2}}{2}}\leq\int_{\partial B_{r}}\frac{\langle x,\nu\rangle}{|x|}H_{\partial B _{r}}e^{-\frac{|x|^{2}}{2}} \tag{4.11}\] _for any convex set \(E\) containing the origin with \(\mathscr{H}(E)<\infty\) and \(\gamma(E)=\gamma(B_{r})=m\)._ Proof.: Let \(E\) as in the statement and let \(r_{E}\) the radius of the largest ball centered at the origin and contained in \(E\), i.e. \[r_{E}=\sup\{r:B_{r}\subset E\}. \tag{4.12}\] Let \(x\in\partial B_{r_{E}}\cap\partial E\) and let \(H\) be the halfspace containing the origin and such that the hyperplane \(\partial H\) is tangent to \(E\) at \(x\). Since by convexity \(E\subset H\) we have \(\psi(r_{E})=\gamma(H)\geq m\) hence \(r_{E}\geq\psi^{-1}(m)\). Therefore, our assumption on \(m\) implies \(r_{E}\geq\max\{2M,\sqrt{2(n-2)}\}\). Now, using the divergence theorem on mainfolds we infer \[\mathscr{H}(E) =\int_{\partial E}H_{\partial E}\frac{\langle x,\nu\rangle}{|x|}e^ {-\frac{|x|^{2}}{2}}\,d\mathcal{H}^{n-1}+\int_{\partial E}H_{\partial E}\left(1 -\frac{\langle x,\nu\rangle}{|x|}\right)e^{-\frac{|x|^{2}}{2}}\,d\mathcal{H}^{n -1}\] \[=\int_{\partial E}\operatorname{div}_{\tau}\left(\frac{x}{|x|}e^{ -\frac{|x|^{2}}{2}}\right)\,d\mathcal{H}^{n-1}+\int_{\partial E}H_{\partial E }\left(1-\frac{\langle x,\nu\rangle}{|x|}\right)e^{-\frac{|x|^{2}}{2}}\,d \mathcal{H}^{n-1}.\] We compute the tangential divergence to find \[\operatorname{div}_{\tau}\left(\frac{x}{|x|}e^{-\frac{|x|^{2}}{2}}\right)= \frac{n-1}{|x|}e^{-\frac{|x|^{2}}{2}}-\left(1-\frac{\langle x,\nu\rangle^{2}} {|x|^{2}}\right)\left(\frac{1}{|x|}+|x|\right)e^{-\frac{|x|^{2}}{2}}.\] This gives \[\mathscr{H}(E)= \int_{\partial E}\frac{n-1}{|x|}e^{-\frac{|x|^{2}}{2}}\,d \mathcal{H}^{n-1}\] \[= \int_{\partial E}\frac{n-1}{|x|^{2}}\langle x,\nu\rangle e^{- \frac{|x|^{2}}{2}}\,d\mathcal{H}^{n-1}\] \[+\int_{\partial E}\left(1-\frac{\langle x,\nu\rangle}{|x|}\right) \left(H_{\partial E}+\frac{n-1}{|x|}-\left(\frac{1}{|x|}+|x|\right)\left(1+ \frac{\langle x,\nu\rangle}{|x|}\right)\right)e^{-\frac{|x|^{2}}{2}}\,d \mathcal{H}^{n-1} \tag{4.13}\] Note that, differently from the two dimensional case, the integral quantity \(\int_{\partial E}\frac{n-1}{|x|^{2}}\langle x,\nu\rangle e^{-\frac{|x|^{2}}{2 }}\,d\mathcal{H}^{n-1}\) is maximized by the ball centered at the origin with the same Gaussian volume of \(E\). Indeed, using the divergence theorem \[\int_{\partial E}\frac{1}{|x|^{2}}\langle x,\nu\rangle e^{-\frac{ |x|^{2}}{2}}\,d\mathcal{H}^{n-1}= \int_{E}\operatorname{div}\left(\frac{x}{|x|^{2}}e^{-\frac{|x|^{2} }{2}}\,dx\right)\] \[= \int_{E}\frac{n-2}{|x|^{2}}e^{-\frac{|x|^{2}}{2}}\,dx-(2\pi)^{ \frac{n}{2}}\gamma(E)\] \[\leq \int_{B_{r}}\frac{n-2}{|x|^{2}}e^{-\frac{|x|^{2}}{2}}\,dx-(2\pi)^ {\frac{n}{2}}\gamma(B_{r})\] \[= \int_{\partial B_{r}}\frac{1}{|x|}e^{-\frac{|x|^{2}}{2}}\,d \mathcal{H}^{n-1}=\frac{1}{n-1}\mathscr{H}(B_{r}). \tag{4.14}\] Since \(E\) is a convex set containing the origin, we have that \(\langle x,\nu\rangle\geq 0\) for all \(x\in\partial E\). This fact together with (4.13) and (4.14) leads to \[\mathscr{H}(E)\leq\mathscr{H}(B_{r})+\int_{\partial E}\left(1-\frac{\langle x,\nu\rangle}{|x|}\right)\left(H_{\partial E}+\frac{n-2}{|x|}-|x|\right)e^{- \frac{|x|^{2}}{2}}\,d\mathcal{H}^{n-1}. 
\tag{4.15}\] Since \(H_{\partial E}(x)\leq M\leq r_{E}/2\) and \(r_{E}\leq|x|\) for all \(x\in\partial E\), the assumption \(r_{E}\geq 2\sqrt{n-2}\) implies that the integrand on the right-hand side is negative, which in turn gives (4.10). Inequality (4.11) is also a consequence of (4.15). Indeed, (4.15) implies that if \(m\geq\psi(\sqrt{n-2})\) then \[\int_{\partial E}\frac{\langle x,\nu\rangle}{|x|}H_{\partial E}e^{-\frac{|x|^{2}}{2}}\,d\mathcal{H}^{n-1}\leq\mathscr{H}(B_{r})=\int_{\partial B_{r}}\frac{\langle x,\nu\rangle}{|x|}H_{\partial B_{r}}e^{-\frac{|x|^{2}}{2}}\,d\mathcal{H}^{n-1}.\]
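The quantities appearing in Example 4.1 admit closed forms, so the comparison between the cylinder and the ball can also be checked numerically. The following short Python script is an illustration added here (the function names are ours, not part of the paper); it evaluates \(\mathscr{H}(C_{s})\) and \(\mathscr{H}(B_{r})\) for sets with the same Gaussian volume in \(\mathbb{R}^{3}\), confirming that for small \(r\) the cylinder has the larger weighted curvature integral, while for \(r\) of order one it does not.

```python
from math import erf, exp, pi, sqrt

def weighted_curvature_ball(r):
    # H(B_r) = (2/r) * 4*pi*r^2 * e^{-r^2/2} = 8*pi*r*e^{-r^2/2}, as in Example 4.1
    return 8 * pi * r * exp(-r ** 2 / 2)

def weighted_curvature_cylinder_same_volume(r):
    # Choose s so that gamma(C_s) = gamma(B_r), then evaluate
    # H(C_s) = (2*pi)^{3/2} * e^{-s^2/2}, using the identity derived in Example 4.1.
    integral = sqrt(pi / 2) * erf(r / sqrt(2))            # \int_0^r e^{-t^2/2} dt
    exp_term = 1 - (2 / sqrt(2 * pi)) * (-r * exp(-r ** 2 / 2) + integral)  # = e^{-s^2/2}
    return (2 * pi) ** 1.5 * exp_term

for r in (0.2, 0.5, 1.0):
    cyl = weighted_curvature_cylinder_same_volume(r)
    ball = weighted_curvature_ball(r)
    print(f"r = {r}: H(C_s) = {cyl:.3f}, H(B_r) = {ball:.3f}, cylinder larger: {cyl > ball}")
```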
2303.01284
NeU-NBV: Next Best View Planning Using Uncertainty Estimation in Image-Based Neural Rendering
Autonomous robotic tasks require actively perceiving the environment to achieve application-specific goals. In this paper, we address the problem of positioning an RGB camera to collect the most informative images to represent an unknown scene, given a limited measurement budget. We propose a novel mapless planning framework to iteratively plan the next best camera view based on collected image measurements. A key aspect of our approach is a new technique for uncertainty estimation in image-based neural rendering, which guides measurement acquisition at the most uncertain view among view candidates, thus maximising the information value during data collection. By incrementally adding new measurements into our image collection, our approach efficiently explores an unknown scene in a mapless manner. We show that our uncertainty estimation is generalisable and valuable for view planning in unknown scenes. Our planning experiments using synthetic and real-world data verify that our uncertainty-guided approach finds informative images leading to more accurate scene representations when compared against baselines.
Liren Jin, Xieyuanli Chen, Julius Rückin, Marija Popović
2023-03-02T14:08:09Z
http://arxiv.org/abs/2303.01284v2
# NeU-NBV: Next Best View Planning Using Uncertainty Estimation ###### Abstract Autonomous robotic tasks require actively perceiving the environment to achieve application-specific goals. In this paper, we address the problem of positioning an RGB camera to collect the most informative images to represent an unknown scene, given a limited measurement budget. We propose a novel mapless planning framework to iteratively plan the next best camera view based on collected image measurements. A key aspect of our approach is a new technique for uncertainty estimation in image-based neural rendering, which guides measurement acquisition at the most uncertain view among view candidates, thus maximising the information value during data collection. By incrementally adding new measurements into our image collection, our approach efficiently explores an unknown scene in a mapless manner. We show that our uncertainty estimation is generalisable and valuable for view planning in unknown scenes. Our planning experiments using synthetic and real-world data verify that our uncertainty-guided approach finds informative images leading to more accurate scene representations when compared against baselines. ## I Introduction Active perception and exploration is a core prerequisite for embodied robotic intelligence. In many applications, including robotic manipulation, inspection, and vision-based navigation, the ability to autonomously collect data is crucial for scene understanding and further downstream tasks [1]. A key challenge in this procedure is planning a view sequence for sensors to obtain the most useful information given platform-specific constraints [2]. In this work, we present a new framework for iteratively planning the next best view (NBV) for an RGB camera to explore an unknown scene. Given a limited measurement budget, our goal is to actively position the sensor to gather the most informative data of the scene online, i.e. during a robotic mission. To address this problem, traditional NBV planning methods rely on explicit global map representations of the scene, such as point clouds [3], volumes [4, 5, 6], or surface meshes [7, 8], as a basis for planning. However, due to map discretisation, these approaches scale poorly to larger scenes and have limited representation ability [9]. Implicit neural representations, such as neural radiance fields (NeRFs) [10], are drawing immense interest as an alternative approach for complex scene understanding. Given posed 2D images, NeRFs synthesise novel views by optimising an underlying continuous function encoding the scene appearance and geometry. In the context of active perception, emerging works [11, 12, 13, 14, 15] incorporate uncertainty estimation into NeRFs and exploit it to guide NBV planning. While showing promising results, these studies follow an active learning [16] paradigm to collect the most informative, i.e. most uncertain, images for periodically re-training a NeRF to improve the scene representation with minimal data. Since re-training NeRF models is computationally expensive, such methods are impractical for online robotic applications. To overcome the inefficiency and per-scene optimisation requirements of NeRF models, another line of work focuses on image-based neural rendering [17, 18, 19, 20]. Image-based approaches exploit a shared encoder to map nearby 2D reference images into latent feature space, upon which the local implicit representation is conditioned. 
This allows training the network across multiple scenes to learn scene priors, enabling it to generalise to new scenes without test time optimisation. Previous works in image-based neural rendering [17, 18, 19, 20] mainly study improving network performance assuming pre-recorded image data. However, exploiting the benefits of image-based neural rendering for active view planning in robotics has not yet been considered. The main contribution of this paper is a novel NBV planning framework bridging the gap between active perception and image-based neural rendering for online robotic applications.

Fig. 1: Our novel NBV planning framework exploits uncertainty estimation in image-based neural rendering to guide measurement acquisition. Given reference images from the current image collection of the scene (black frustums), our network outputs per-pixel uncertainty estimates at sampled view candidates (coloured frustums). Brighter frustums indicate higher average uncertainty from the view. Zoom-in boxes illustrate per-pixel uncertainty estimates at the most certain and uncertain views. By selecting the most informative, i.e. most uncertain, view candidate at which to take the next measurement, our approach efficiently explores the unknown scene in a mapless manner.

A key aspect of our framework is a new technique for uncertainty estimation in image-based neural rendering, which enables us to quantify the informativeness of view candidates without relying on ground truth images or global scene representations. Intuitively, high uncertainty indicates where scene information provided by the closest reference images is insufficient to render the novel view, due to under-sampling or more complex scene details in these areas. Therefore, we utilise view uncertainty as an informative exploration objective. As shown in Fig. 1, based on the predicted uncertainty, we actively select the most uncertain view candidate to maximise the information acquired during a data collection process in an unknown scene. To the best of our knowledge, our work is the first to address active perception using image-based neural rendering. We make the following three claims: (i) our uncertainty estimation technique generalises to unknown scenes and provides an informative proxy for rendering quality at novel views; (ii) our uncertainty-guided NBV planning strategy outperforms baseline approaches in finding more informative images to represent an unknown scene given a limited measurement budget; (iii) the informative images collected using our approach also improve the offline training quality of NeRF models. To support reproducibility, our implementation and simulation dataset will be released at: [https://github.com/dmar-bonn/neu-nbv](https://github.com/dmar-bonn/neu-nbv).

## II Related Work

### _Next Best View Planning_

View planning for robot active perception is an area of active research [2]. In initially unknown scenes, a common approach is to iteratively select the NBV from a set of view candidates using an acquisition function capturing their expected utility based on the current map state. Isler et al. [4] build a probabilistic volumetric map and select the NBV by calculating the information gain composed of visibility and the likelihood of seeing new parts of an object. Bircher et al. [6] find the NBV in a receding-horizon fashion by generating a random tree and selecting the branch maximising the amount of unmapped space in a volumetric map from view candidates. Zaenker et al.
[5] maintain a voxel map of the scene and select the NBV among candidates obtained by targeted region-of-interest sampling and frontier-based exploration sampling. Zeng et al. [3] propose a point cloud-based deep neural network to directly predict the information gain of view candidates from the current raw point cloud of the scene. Song and Jo [8] evaluate the completeness of reconstructed surfaces and extract low-confidence surfaces to guide NBV planning. All these approaches require explicit discretised 3D map representations to maintain current information about the scene, which limits their scalability and representation ability. In contrast, our approach utilises a compact implicit neural representation conditioned only on 2D image inputs for NBV planning. ### _Implicit Neural Representations_ Implicit neural representations parameterise a continuous differentiable signal with a neural network [9]. For example, NeRFs [10] learn a density and radiance field supervised only by 2D images. To render a novel view, NeRFs sample points densely along a camera ray, then predict radiance and density from the position and view direction of each point. The final RGB and depth estimate of the ray is calculated by differentiable volume rendering. As the scene information is encoded in the network parameters, NeRFs overfit to a single scene and require significant training time. Instead of memorising a specific scene, image-based neural rendering, e.g. PixelNeRF [17], leverages an encoder to map nearby reference images into latent feature space. After aggregating features from reference images, a multilayer perceptron (MLP) is trained to interpret the aggregated feature into appearance and geometry information at a novel view. By training across different scenes, image-based approaches generalise well to new scenes without test time optimisation. We exploit the generalisation ability of image-based neural rendering to achieve online NBV planning for efficient data collection in an unknown scene. ### _Uncertainty Estimation in Neural Representations_ Estimating uncertainty in learning-based computer vision tasks is a long-standing problem [21]. Several recent works address uncertainty quantification in NeRF models. S-NeRF [22] proposes learning a probability distribution over all possible radiance fields modelling the scene. To this end, it treats radiance and density as stochastic variables and uses variational inference to approximate their posterior distribution after training. W-NeRF [23] directly learns to predict RGB variance as an uncertainty measure in rendering transient objects in the scene. For image-based neural rendering, Rosu and Behnke [18] introduce a loss function to learn confidence estimation in the rendered images. However, they only consider a fixed number of reference images with small view changes as inputs, which limits the applicability of their approach in robotics. Emerging works use uncertainty-guided NBV selection to address NeRF training with a constrained measurement budget. Pan et al. [11] and Ran et al. [13] model the emitted radiance as Gaussian distribution and learn to predict the variance by minimising negative log-likelihood during training. These works add the view candidate with the highest information gain, i.e. the highest uncertainty reduction, to the existing training data. Instead of learning uncertainty in parallel to radiance and density, Lee et al. [15] and Zhan et al. 
[12] propose calculating the entropy of the density prediction along the ray as an uncertainty measure with respect to the scene geometry. The entropy is used to guide measurement acquisition towards less precise parts. Sunderhauf et al. [14] exploit the recent development of fast rendering of Instant-NGP [24] to train an ensemble of NeRF models for a single scene, and measure uncertainty using the variance of the ensemble's prediction, which is utilised for NBV selection. The above-mentioned approaches address uncertainty-guided NBV selection based on NeRFs. Although these approaches show NeRF model refinement with limited input data, deploying such methods in robotic applications is not straightforward. As the scene information is entirely encoded in the network weights, after each planning step, the uncertainty estimation must be re-optimised to account for newly added measurements, which is time- and compute-consuming. In contrast, our novel approach incorporates uncertainty estimation in image-based neural rendering to actively select informative images, which are incrementally added to our image collection. This way, we explore an unknown scene without the need to maintain an explicit map representation or re-train an implicit neural representation. ## III Our Approach We propose a novel mapless NBV planning framework for robotic exploration tasks. An overview of our framework is shown in Fig. 2. We first sample view candidates and query their corresponding closest reference images from the current image collection. Based on the scene information provided by the reference images, our image-based neural rendering network predicts per-pixel uncertainty at these view candidates. The NBV planning strategy selects the most uncertain view candidate corresponding to the next measurement, which we add to the image collection. Our image-based neural rendering network retrieves scene information in a purely image-based manner. This enables us to achieve efficient autonomous exploration without maintaining an explicit map or iteratively re-training an implicit neural representation. In the following subsections, we describe our network architecture, training procedure for uncertainty estimation, and NBV planning scheme. ### _Network Architecture_ Our network follows the design choices of PixelNeRF [17] regarding the architecture of encoder and MLP. However, PixelNeRF uses a volume rendering technique requiring dense sampling along the ray at predefined intervals, which is inefficient and limits its online applicability. Inspired by Rosu and Behnke [18] and Sitzmann et al. [25], we adopt a long short-term memory (LSTM) module [26] to adaptively predict the jumping distance to the next sampling point, therefore speeding up the inference of neural rendering. The network is illustrated in Fig. 3. Given a novel view, we query our current image collection to find the \(N\) closest reference images \(\mathbf{I}_{n\in\{1,2,\ldots,N\}}\). We use a shared convolutional-based encoder \(E\) to extract latent feature volume \(\mathbf{F}_{n}=E(\mathbf{I}_{n})\in\mathbb{R}^{H\times W\times L}\) from each reference image. \(H\) and \(W\) are feature volume's spatial resolution and \(L\) is the channel dimension. We parameterise a ray emitted from the novel view as \(r(t)=\mathbf{o}+t\mathbf{d}\), where \(\mathbf{o}\in\mathbb{R}^{3}\) is the camera centre position and \(t\) is the distance along view direction \(\mathbf{d}\in\mathbb{R}^{3}\). 
Starting from the close end of the ray \(t=t_{s}\), we transform the sampling point's position \(\mathbf{x}=r(t)\) and view direction \(\mathbf{d}\) into each reference view coordinate using known relative camera poses to get \(\mathbf{x}_{n}\) and \(\mathbf{d}_{n}\), respectively. To recover high-frequency details of the scene, the point position \(\mathbf{x}_{n}\) is mapped into higher-dimensional space by the positional encoding operation \(\gamma\) proposed by Mildenhall et al. [10]. By combining it with its view direction, we compose pose feature \(\mathbf{p}_{n}=(\gamma(\mathbf{x}_{n}),\mathbf{d}_{n})\) for sampling point expressed in \(n^{th}\) reference view coordinate. To retrieve the latent image feature from reference images, we project \(\mathbf{x}_{n}\) onto the corresponding reference image plane using known camera intrinsics to get image coordinates \(\phi(\mathbf{x}_{n})\), which we use to query the image feature \(\mathbf{f}_{n}=\mathbf{F}_{n}(\phi(\mathbf{x}_{n}))\in\mathbb{R}^{L}\) by grid sampling with bilinear interpolation [17]. The acquired pose feature \(\mathbf{p}_{n}\) and image feature \(\mathbf{f}_{n}\) from each reference image are processed individually by MLP\({}_{\text{feat}}\). For aggregating features from all reference images, we use the predicted weight \(w_{n}\in\left[0\,,1\right]\) and processed feature \(\mathbf{f}_{n}^{\prime}\) to calculate the weighted mean \(\mathbf{f}_{\mathrm{mean}}\) and variance \(\mathbf{f}_{\mathrm{var}}\). This operation downweights the feature from less informative reference images. Conditioning on the aggregated feature \((\mathbf{f}_{\mathrm{mean}},\mathbf{f}_{\mathrm{var}})\), our LSTM module adaptively predicts the jumping distance \(\Delta t\) to the next sampling point \(\mathbf{x}=r(t+\Delta t)\), thus mitigating the sampling inefficiency commonly seen in volume rendering [10, 17]. We iterate this process a fixed number of times to let the sampling point approach the surface in the scene and acquire depth prediction. We then use MLP\({}_{\text{out}}\) to interpret the aggregated feature queried at the final sampling point into colour and uncertainty information, as detailed in the following subsection. ### _Uncertainty Estimation in Image-based Neural Rendering_ Our uncertainty estimation quantifies the uncertainty inherited from the input data, due to the varying quality of the information provided by the reference images. For example, we expect reference images with large view differences and self-occlusions with respect to the novel view to lead to blurry rendering and thus high uncertainty. An illustration of input-dependent uncertainty estimated using our new approach is shown in Fig. 4. Given supervision using only posed 2D images, we incorporate input-dependent uncertainty estimation in the image-based neural rendering training process. Considering that the predicted RGB value is normalised between \(\left[0\,,1\right]\), we model each channel value of the RGB prediction \(c_{i}\in\left[0\,,1\right]\), where Fig. 2: Overview of our mapless NBV planning framework. We leverage uncertainty estimation in image-based neural rendering to actively guide measurement acquisition in unknown scenes. 
\(i\in\{1,2,3\}\), as an independent logistic normal distribution described by: \[p(c_{i};\mu_{i},\sigma_{i})=\frac{1}{\sigma_{i}\sqrt{2\pi}}\,\frac{1}{c_{i}(1-c_{ i})}\,e^{-\frac{(\text{logit}(c_{i})-\mu_{i})^{2}}{2\sigma_{i}^{2}}}\,, \tag{1}\] where \(\text{logit}(c_{i})=\ln(\frac{c_{i}}{1-c_{i}})\sim\mathcal{N}(\mu_{i},\,\sigma _{i}^{2})\) follows a normal distribution, with the mean \(\mu_{i}\) and variance \(\sigma_{i}^{2}\) predicted by our network. To train the network, following Kendall and Gal [21], we minimise the negative log-likelihood \(-\log p\left(c_{i}=y_{i}\mid\mu_{i},\sigma_{i}\right)\) given ground truth RGB channel values \(y_{i}\in\left[0\,,1\right]\). For a single pixel RGB prediction, this leads to our photometric loss function formulated as: \[\mathcal{L}=\sum_{i=1}^{3}\frac{1}{2}\log(\sigma_{i}^{2})+\log(y_{i}(1-y_{i}) )+\frac{(\text{logit}(y_{i})-\mu_{i})^{2}}{2\sigma_{i}^{2}}\,. \tag{2}\] For calculating the loss, the ground truth RGB channel value is mapped into logit space by \(\text{logit}(y_{i})\). We clamp \(y_{i}\) at \(\left[0.001\,,0.999\right]\) to ensure numerical stability. During deployment in unknown scenes, given a novel view and its reference images, our network predicts mean \(\mu_{i}\) and variance \(\sigma_{i}^{2}\) assuming each RGB channel of a pixel is normally distributed in logit space. We then sample \(100\) times from the normal distribution and pass all samples through a sigmoid function to acquire a valid RGB channel value. We treat the mean and variance of the \(100\) channel values as our final channel-wise RGB prediction \(c_{i}\in\left[0\,,1\right]\) and uncertainty estimate \(u_{i}\in\left[0\,,0.25\right]\) of the respective pixel. ### _Uncertainty Guided Next Best View Planning_ Our novel NBV planning framework exploits uncertainty estimation in image-based neural rendering to guide efficient data collection. Given a limited measurement budget, our uncertainty-guided approach is effective at finding more informative images to better represent an unknown scene. For view planning, we consider a scene-centric hemisphere action space. First, our planning procedure initialises the image collection with image measurements at two random views. For planning the next camera view, we uniformly sample a fixed number of view candidates \(k\in\{1,2,\dots,K\}\) within allowable view changes around the current camera view. For each view candidate, we find at maximum \(N\) closest reference images in our current image collection. Given the novel view and corresponding reference images, our network renders per-pixel uncertainty estimate \(\mathbf{U}_{k}\in\left[0\,,0.25\right]^{H_{r}\times W_{r}\times 3}\) following the approach in Sec. IV-B, where \(H_{r}\) and \(W_{r}\) are the desired rendering resolution. In this setup, we propose a simple utility function defined as: \[g(k)=\frac{1}{H_{r}\times W_{r}\times 3}\,\left\|\mathbf{U}_{k}\right\|_{1}. \tag{3}\] The view candidate \(k^{*}\) with the highest utility \(g(k^{*})\) is selected as our NBV. High uncertainty indicates that the view candidate cannot be well-rendered by our network given the current image collection, due to under-sampling around the view, i.e. the closest reference images are far away, or the scene is generally complex when observed from the view. Therefore, a new measurement at the most uncertain view potentially yields the highest information value for scene representation. The newly-captured image at the NBV is then added to our image collection. 
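To make the uncertainty model above concrete, the following is a minimal sketch of the per-pixel loss in Eq. (2) and the test-time sampling step. It is not the authors' implementation: the log-variance parameterisation, tensor shapes, and function names are illustrative assumptions; the clamping bounds and the 100-sample count follow the text.

```python
# Sketch of the loss in Eq. (2) and of turning predicted (mu, sigma) in logit space
# into an RGB value and an uncertainty estimate via sampling.
import torch

def logit(x):
    return torch.log(x / (1.0 - x))

def photometric_nll(mu, log_var, y):
    """Eq. (2): per-channel negative log-likelihood of a logistic-normal.
    mu, log_var, y have shape (..., 3); y is the ground-truth RGB in [0, 1]."""
    y = y.clamp(0.001, 0.999)                      # numerical stability, as in the text
    var = log_var.exp()
    nll = 0.5 * log_var + torch.log(y * (1.0 - y)) + (logit(y) - mu) ** 2 / (2.0 * var)
    return nll.sum(dim=-1).mean()                  # sum over channels, mean over pixels

def rgb_and_uncertainty(mu, log_var, n_samples: int = 100):
    """Draw samples in logit space, squash through a sigmoid, and report the
    per-channel mean (RGB in [0, 1]) and variance (uncertainty in [0, 0.25])."""
    std = (0.5 * log_var).exp()
    eps = torch.randn((n_samples,) + mu.shape, device=mu.device)
    samples = torch.sigmoid(mu + eps * std)        # valid channel values in [0, 1]
    return samples.mean(dim=0), samples.var(dim=0)

# usage on a dummy 8x8 prediction
mu, log_var = torch.zeros(8, 8, 3), torch.zeros(8, 8, 3)
loss = photometric_nll(mu, log_var, y=torch.rand(8, 8, 3))
rgb, unc = rgb_and_uncertainty(mu, log_var)
```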
We iterate this planning procedure until a given measurement budget is exhausted. Note that our framework is agnostic to sampling strategies and can be easily adapted to other specific scenarios. ## IV Experimental Evaluation Our experimental results support our three claims: (i) we show that our uncertainty estimation in image-based neural rendering is informative to rendering quality and generalises to new scenes; (ii) we show that our uncertainty-guided NBV planning strategy collects informative images using a publicly available real-world dataset and in a simulated environment. To measure the quality of collected images, we evaluate their influence on image-based neural rendering performance at test views; and (iii) we show the benefit of using our online collected images to train NeRF models. Experimental results indicate that images collected using our planning framework lead to more accurate implicit representations in both cases when compared against baselines. Fig. 3: Our network architecture. Different colours indicate features from different reference images. Note that the encoder is not explicitly shown. We use an LSTM module to predict jumping distance \(\Delta t\) to the next sampling point given the aggregated feature from all reference images acquired at the current sampling point. After a fixed number of iterations, the aggregated feature at the final point is interpreted into colour and uncertainty information. Arrows with dashed lines show the forward pass happening only in the last iteration. ### _Training Procedure_ **Datasets.** We train our network separately on two datasets for the corresponding planning experiments. We first use real-world images with a resolution of \(400\times 300\) pixels from the DTU dataset [27]. We follow the data split proposed by PixelNeRF [17] with \(88\) training scenes and \(15\) test scenes, in which no shared or similar scenes exist. For each scene, \(49\) images are collected following a fixed path pattern on a section of a scene-centric hemisphere. We also record our own synthetic dataset, considering \(50\) ShapeNet [28] models from \(4\) representative categories: car, motorcycle, camera, and ship. For each model, we record \(100\) images with a resolution of \(200\times 200\) pixels from views uniformly distributed on the hemisphere covering the scene. **Training Setup.** We use the Adam optimiser with a learning rate of \(10^{-5}\) and exponential decay of \(0.999\). The LSTM iteration number during a forward pass is set to \(16\). The network is implemented in PyTorch and trained with a single NVIDIA RTX A5000 GPU for \(\sim 2\) days until convergence. Rendering a novel view with the same resolution as the two dataset images takes \(0.6\) s and \(0.3\) s, respectively, which is \(60\) times faster than PixelNeRF [17]. Our network design is agnostic to the number of input reference images. For both training processes, we randomly select \(3\), \(4\), or \(5\) reference images for novel view rendering in the scene to restrict memory consumption. ### _Evaluation of Uncertainty Estimation_ Our first experiment is designed to show that our uncertainty estimation strongly correlates with rendering error in image-based neural rendering in unknown scenes. To evaluate the quality of uncertainty prediction, we consider two metrics. We use Spearman's Rank Correlation Coefficient (SRCC) [29] to assess the monotonic relationship between averaged uncertainty estimate and rendering error over a test view.
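As a small illustration of this first metric, a per-scene SRCC can be computed from paired per-view averages of predicted uncertainty and MSE. The arrays below are synthetic placeholders, and the availability of `scipy` is an assumption.

```python
# Illustration of a per-scene SRCC computation from paired per-view averages.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
mean_uncertainty = rng.random(100)                             # placeholder per-view averages
mean_mse = mean_uncertainty * 0.5 + rng.normal(0, 0.05, 100)   # toy correlated errors

rho, _ = spearmanr(mean_uncertainty, mean_mse)
print(f"SRCC = {rho:.2f}")   # values above 0.8 indicate strong monotonicity
```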
As SRCC only captures the informativeness of averaged uncertainty prediction, the quality with respect to the structural similarity between the per-pixel uncertainty estimate and error is not considered. To evaluate the structural similarity, we report the Area Under the Sparsification Error (AUSE) curve [30], which reveals how strongly the uncertainty coincides with the rendering error pixel-wise. For each test scene in the DTU dataset, we create \(100\) test sets. Each test set consists of four images randomly selected from the scene, from which we use three as reference images and the remaining one as the test view. We average the predicted uncertainty and mean squared error (MSE) of each test view. We then calculate SRCC values with respect to the \(100\) pairs of averaged uncertainty and MSE. Empirically, SRCC values higher than \(0.8\) indicate strong monotonicity (high average uncertainty prediction is consistent with high average rendering error). We also report the average AUSE over \(100\) test views for each scene. AUSE of \(0\) means that the order of pixel-wise uncertainty magnitude perfectly aligns with the order of the MSE value (uncertain areas at the rendered test view overlap with erroneous predictions). We compare our approach against two alternative uncertainty estimation methods that can be incorporated into image-based neural rendering frameworks. Fig. 4: Examples of our input-dependent uncertainty estimation in image-based neural rendering. For rendering the same novel view, we select two sets of reference images. The comparison clearly shows how rendering quality depends on scene information provided by reference images. Reference images with low information value lead to blurry rendering and correspondingly high uncertainty prediction (yellow) from our network. The error map shows the mean squared error between ground truth and rendered RGB (white areas indicate higher errors). Our uncertainty prediction is strongly correlated with this error, thus serving as a good proxy for view planning. \begin{table} \begin{tabular}{c c c c c c c c c c c c c c c c} \hline \hline Scene No.
& & 8 & 21 & 30 & 31 & 34 & 38 & 40 & 41 & 45 & 55 & 63 & 82 & 103 & 110 & 114 \\ \hline \multirow{3}{*}{SRCC \(\uparrow\)} & Entropy & \(0.16\) & \(0.52\) & \(0.37\) & \(0.29\) & \(0.21\) & \(0.60\) & \(0.39\) & \(0.52\) & \(0.17\) & \(0.47\) & \(0.53\) & \(0.32\) & \(0.42\) & \(0.33\) & \(0.60\) \\ & Confidence & \(0.83\) & \(0.83\) & \(0.90\) & \(0.80\) & \(0.66\) & \(0.76\) & \(0.81\) & \(0.80\) & \(0.83\) & \(0.78\) & \(0.82\) & \(0.88\) & \(0.48\) & \(0.53\) & \(0.79\) \\ & Ours & \(\mathbf{0.84}\) & \(\mathbf{0.89}\) & \(\mathbf{0.93}\) & \(\mathbf{0.88}\) & \(\mathbf{0.86}\) & \(\mathbf{0.87}\) & \(\mathbf{0.83}\) & \(\mathbf{0.86}\) & \(\mathbf{0.89}\) & \(\mathbf{0.91}\) & \(\mathbf{0.91}\) & \(\mathbf{0.93}\) & \(\mathbf{0.73}\) & \(\mathbf{0.83}\) & \(\mathbf{0.89}\) \\ \hline \multirow{3}{*}{AUSE \(\downarrow\)} & Entropy & \(0.50\) & \(0.48\) & \(0.34\) & \(0.42\) & \(0.55\) & \(0.48\) & \(0.50\) & \(0.51\) & \(0.51\) & \(0.41\) & \(0.38\) & \(0.34\) & \(0.47\) & \(0.36\) & \(0.45\) \\ & Confidence & \(0.25\) & \(0.26\) & \(0.14\) & \(0.18\) & \(0.21\) & \(0.28\) & \(0.27\) & \(0.22\) & \(0.19\) & \(0.23\) & \(0.14\) & \(0.16\) & \(0.23\) & \(0.20\) & \(0.16\) \\ & Ours & \(\mathbf{0.17}\) & \(\mathbf{0.18}\) & \(\mathbf{0.05}\) & \(\mathbf{0.11}\) & \(\mathbf{0.12}\) & \(\mathbf{0.19}\) & \(\mathbf{0.14}\) & \(\mathbf{0.13}\) & \(\mathbf{0.11}\) & \(\mathbf{0.15}\) & \(\mathbf{0.08}\) & \(\mathbf{0.08}\) & \(\mathbf{0.18}\) & \(\mathbf{0.12}\) & \(\mathbf{0.11}\) \\ \hline \hline \end{tabular} \end{table} TABLE I: Evaluation of uncertainty estimation strategies on the 15 test scenes of the DTU dataset. Best results in bold. Lee et al. [15] propose calculating the entropy of the density distribution of the samples along each ray as uncertainty quantification in NeRF models. We re-implement this entropy calculation in PixelNeRF, which we denote _Entropy_ in the experiments. Rosu and Behnke [18] propose learning to predict RGB rendering confidence in image-based neural rendering by defining the loss as a linear combination of the predicted and the ground truth images. As their approach only handles a fixed number of reference images with small view changes, we adapt it by replacing our loss function Eq. (2) with their confidence loss and train the network under the same conditions. We denote this method _Confidence_. Table I summarises the results. Our uncertainty prediction is more informative with respect to rendering error compared to the other two methods. The poor performance of the _Entropy_ approach is likely due to the fact that the entropy of the density distribution mainly captures uncertainty over scene geometry. As neural rendering can recover colour information under inaccurate depth prediction [9], naively incorporating _Entropy_ as uncertainty estimation in image-based neural rendering fails to provide useful information about rendering quality. The superior performance of our approach compared to _Confidence_ indicates that our probabilistic interpretation of RGB prediction leads to more consistent uncertainty estimates. A qualitative illustration of our uncertainty prediction results is shown in Fig. 4. ### _Comparison of Next Best View Planning Strategies_ We show that our uncertainty-guided NBV planning collects the most informative images to better represent an unknown scene.
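For reference, the selection rule evaluated as _Ours_ in this section reduces to the utility in Eq. (3) followed by an argmax over candidates. In this sketch, `render_uncertainty` is a stand-in for the actual network and the candidate poses are placeholders.

```python
# Schematic of the uncertainty-guided selection rule ("Ours"): render a low-resolution
# uncertainty map for every sampled view candidate and pick the one with the highest
# mean uncertainty, i.e. the utility g(k) of Eq. (3).
import numpy as np

rng = np.random.default_rng(0)

def render_uncertainty(candidate_pose, reference_images, shape=(60, 60, 3)):
    """Stand-in for the per-pixel uncertainty rendering U_k of the network."""
    return rng.random(shape)

def select_nbv(candidate_poses, reference_images):
    utilities = [render_uncertainty(p, reference_images).mean()  # g(k), Eq. (3)
                 for p in candidate_poses]
    return candidate_poses[int(np.argmax(utilities))]

# usage: 50 sampled view candidates (placeholders), pick the most uncertain one
next_view = select_nbv(candidate_poses=list(range(50)), reference_images=None)
```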
For evaluating planning performance, we use collected images and our image-based neural rendering network to render test views. The rendering quality is measured by the peak signal-to-noise ratio (PSNR) and structural similarity index measure (SSIM) [10]. Note that, since the image-based neural rendering network is fixed for test view rendering in all experiments, performance differences arise purely as the consequence of different NBV planning strategies. We compare our uncertainty-guided approach against two heuristic baselines: * _Ours_: selects the most uncertain view candidate via our uncertainty prediction as illustrated in Fig. 1; * _Max. View Distance_: selects the view candidate that maximises the view distance with respect to previously collected images; * _Random_: selects a view candidate uniformly at random. We conduct experiments on the DTU dataset and in our simulator with corresponding pre-trained networks, respectively. For all planning experiments, we initialise the image collection with two randomly selected images and use different planning approaches to take the next images until a given maximum of measurements is reached. **DTU dataset.** We set the measurement budget to \(9\) images including the \(2\) images for initialisation. As the DTU dataset has limited views for each scene, we treat all unselected views as view candidates. We adopt the three planning strategies to select the next view from the view candidates and add it to our image collection. After each view selection step, we use the current image collection to render all unselected views. We calculate the average PSNR and SSIM with standard deviations. We repeat the experiment \(10\) times for all \(15\) test scenes and report the results in Fig. 5. As shown, NBV planning guided by our uncertainty estimation selects the most informative view candidate in each step reflected by better image-based neural rendering quality. **Simulator.** To demonstrate the advantages of our NBV planning framework in a more realistic robotic application scenario, we show the planning experiment in a simulation environment with a continuous action space. We import two different ShapeNet 3D models into the simulator. First, we consider a car model, which belongs to the training category but is not seen during training. Second, to show the generalisation ability of our approach, we test our planning framework on an indoor model consisting of a sofa and table. Note that the sofa and table are not in our training data categories. We configure our action space as a scene-centric hemisphere and set the measurement budget to \(20\) images including \(2\) initialisation images, as in the DTU experiment. At each planning step, we uniformly sample \(50\) view candidates within the interval of maximal \(60^{\circ}\) view angle change with respect to the current camera view. The three planners select the next view among the sampled view candidates. For our approach, we predict per-pixel uncertainty at \(60\times 60\) pixels resolution for each view candidate using a maximum of \(5\) closest reference images. One planning step takes \(1.5\) s in this setting. To evaluate the quality of collected images during online missions, we fix \(100\) random test views of the scene. After every \(2\) measurements, we use our network to render all test views given a maximum of \(5\) closest reference images from the current image collection and report average PSNR Fig. 5: Comparison of NBV planners on the DTU dataset. 
For each test scene, we use our image-based neural rendering network and collected images to render unselected views. To evaluate planning performance, we report the average PSNR and SSIM with standard deviations over all test scenes and runs. Note that the large standard deviations are due to varying rendering difficulty of each scene. Our uncertainty-guided approach finds informative images in the scene, improving scene representations via image-based neural rendering. and SSIM with standard deviations to evaluate implicit scene reconstruction quality. We repeat each planning experiment \(10\) times on the two models, respectively. Fig. 6 summarises the planning results. Our findings confirm that images collected using our uncertainty-guided approach lead to better image-based neural rendering quality in both scenes. Non-adaptive heuristic approaches cannot efficiently utilise the measurement budget, thus limiting their view planning performance. In contrast, our uncertainty-guided approach collects informative images in a targeted way, resulting in higher test view rendering quality. ### _Data Collection for Offline Modelling_ In this experiment, we further show that the images collected by our approach improve NeRF training using limited data. Note that different from uncertainty-guided NBV planning based on NeRFs [11, 13, 14, 15], our uncertainty estimation generalises to unknown scenes, thus the data collection process and NeRF training can be decoupled in our framework. This avoids computationally expensive network re-training during online missions. After online NBV planning experiments in our simulator, described in Sec. IV-C, we use Instant-NGP [24] to train NeRF models using images collected by the three planning approaches, respectively, under the same training conditions. To evaluate the training results, we render \(100\) test views using the trained NeRF models. We report the rendering metrics averaged over all experiment runs in Table II and show examples of rendering results at complex views from the scene in Fig. 7. Both quantitative and qualitative results verify that our planning strategy for collecting informative images boosts NeRF performance with limited training data. This indicates the benefits of using our approach to efficiently explore an unknown scene and collect informative images online. The 3D modelling of the scene can be done by training NeRFs offline, after a robotic mission, when computational resources are less constrained. ## V Conclusions and Future Work In this work, we propose a novel mapless NBV planning framework for online robotic applications. We integrate uncertainty estimation in image-based neural rendering and exploit the predicted uncertainty to guide our measurement acquisition. We show that our uncertainty estimation is informative to the rendering quality at novel views and generalises to new scenes. Our planning experiments prove that our uncertainty-guided NBV planning scheme effectively finds informative views in an unknown scene. Image collection using our approach leads to more accurate scene representations via online image-based neural rendering and offline implicit reconstruction using NeRFs. One limitation of our current framework is that rendering a high-resolution per-pixel uncertainty or RGB is inefficient for applications that require fast robot motion. To address this, future work will consider exploiting depth measurements to achieve more efficient sampling, thus speeding up the inference of neural rendering. 
To extend our framework to complex and cluttered environments, we plan to incorporate geometric uncertainty estimation for planning in unconstrained action spaces. Finally, we will investigate integrating semantic prediction with uncertainty estimation to enable exploring regions of interest in an unknown scene in applications where targeted inspection is necessary. \begin{table} \begin{tabular}{c c c c} \hline \hline & & Car & Indoor \\ \hline \multirow{3}{*}{PSNR \(\uparrow\)} & Max. View Distance & \(27.37\pm 0.65\) & \(30.02\pm 0.55\) \\ & Random & \(25.73\pm 0.83\) & \(28.46\pm 0.92\) \\ & Ours & \(\mathbf{28.35\pm 0.53}\) & \(\mathbf{30.46\pm 0.24}\) \\ \hline \multirow{3}{*}{SSIM \(\uparrow\)} & Max. View Distance & \(0.925\pm 0.004\) & \(0.937\pm 0.003\) \\ & Random & \(0.908\pm 0.012\) & \(0.920\pm 0.007\) \\ \cline{1-1} & Ours & \(\mathbf{0.934\pm 0.004}\) & \(\mathbf{0.941\pm 0.003}\) \\ \hline \hline \end{tabular} \end{table} TABLE II: NeRF training results using images collected from our planning experiments in the simulator. Best results in bold. Fig. 6: Comparison of NBV planners in a ShapeNet-based simulation environment. We conduct the experiments on (a) car and (b) indoor models, respectively. We fix \(100\) test views for evaluation purposes. For each test view, we query a maximum of 5 closest reference images from currently collected images and use our image-based neural rendering network to render the test view. We report the average PSNR and SSIM with standard deviations over all test views and experiment runs. Our uncertainty-guided NBV planning outperforms heuristic baselines in finding more informative images, resulting in higher rendering quality given a limited measurement budget. ## Acknowledgement We would like to thank Matteo Sodano for proofreading.
2304.05045
Scalable Real-Time Vehicle Deformation for Interactive Environments
This paper proposes a real-time physically-based method for simulating vehicle deformation. Our system synthesizes vehicle deformation characteristics by considering a low-dimensional coupled vehicle body technique. We simulate the motion and crumbling behavior of vehicles smashing into rigid objects. We explain and demonstrate the combination of a reduced complexity non-linear finite element system that is scalable and computationally efficient. We use an explicit position-based integration scheme to improve simulation speeds, while remaining stable and preserving modeling accuracy. We show our approach using a variety of vehicle deformation test cases which were simulated in real-time.
Ben Kenwright
2023-04-11T08:11:29Z
http://arxiv.org/abs/2304.05045v1
# Scalable Real-Time Vehicle Deformation for Interactive Environments ###### Abstract This paper proposes a real-time physically-based method for simulating vehicle deformation. Our system synthesizes vehicle deformation characteristics by considering a low-dimensional coupled vehicle body technique. We simulate the motion and crumbling behavior of vehicles smashing into rigid objects. We explain and demonstrate the combination of a reduced complexity non-linear finite element system that is scalable and computationally efficient. We use an explicit position-based integration scheme to improve simulation speeds, while remaining stable and preserving modeling accuracy. We show our approach using a variety of vehicle deformation test cases which were simulated in real-time. ## 1 Introduction **Vehicle Deformation & Damage.** Vehicle deformation is concerned with the object shape changing temporarily (elastic deformation) or permanently (plastic deformation or fracture) due to external forces, such as, impacts with the environment. The simulation of realistic vehicle deformation in real-time is challenging and important owing to the complexity of the problem and the realistic engagement it provides. When a vehicle impacts with an object, the vehicle does not bounce away like a rubber ball. While cars are designed to be rigid and rugged, they are ultimately deformable for safety reasons. The vehicle deforms and absorbs the impact energy during a crash so that most of the energy is dissipated across the body. Hence, an effective and scalable technique for creating aesthetically pleasing vehicle deformation in real-time would be significant. The area of damageable and destructible vehicles is a complex multi-discipline problem. Typically, a vehicle is composed of multiple materials (e.g., glass, rubber, plastic, and steel) which all deform or break in different manners. Due to the large number of materials a vehicle is made from, an impact would cause a range of effects, such as, bending, tearing, splintering, and smashing. However, for this paper, we focus on an uncomplicated deformation technique for real-time environments, such as, video games (e.g., cars bumping into objects, sliding along walls, and rolling, to produce emphasized dents and bends). **Aesthetically Pleasing.** The deformation focuses on _entertainment appearances_, and does not mechanically affect the vehicle dynamics. Adding dynamic interactive content to a simulation serves to enhance the enjoyability. This paper does not strive to create an "ultra accurate simulation". Car damage in vehicle simulations (e.g., driving games) can add additional novelty and engagement without spoiling the focus (i.e., playing). Vehicle deformation offers a dynamic challenge. While it is common for commercial games to have soft-body physics, the question is how to do it effectively and efficiently without sacrificing other features within the simulation. Ideally, we want everything in the scene to react to external forces in a believable manner. For instance, in a racing simulation, we want to avoid a car that goes off track and rolls over recovering without a scratch and continuing on its journey. Providing visual feedback to the player through vehicle deformation, collisions, and damage is aesthetically valuable.
Video games often have issues putting car damage into their game because the modelling is difficult or the manufacturers of those cars will not let them depict what will happen if the player crashes their car into a wall; yet it can be a real disappointment when the player can crash a vehicle into a barrier at 100+ mph and have nothing happen to it. This paper presents an appropriately realistic damage/deformation model without too much work and without potentially causing detriment to the rest of the simulation (e.g., computational time and taking development resources away from other areas). **Elastic and Plastic Deformation.** A lot of real-time research has been done into creating _elastic_ deformable bodies [1, 13, 14] (e.g., cloth and soft body meshes). This type of deformation is reversible. Once the forces are no longer applied, the object returns to its original shape. Less work has addressed _plastic_ deformation. This type of deformation is irreversible. However, an object in the plastic deformation range will first have undergone elastic deformation, which is reversible, so the object will return part way to its original shape (see Figure 1). The fracturing region is the final point after plastic deformation. All materials will eventually fracture and break with sufficient force. This type of deformation is also irreversible but we do not consider it in our approach. A break occurs after the material has reached the end of the elastic and plastic deformation ranges and would result in the mesh splitting and tearing. Typically, a car body is mostly made from steel. As we would expect, steel has a small elastic deformation range and a rather large plastic deformation range, while materials, such as, plastics and rubber, have a minimal plastic deformation range and a larger elastic range. Research that has investigated elastic and plastic structural analysis of vehicles has primarily been off-line. These off-line models are able to synthesize accurate deformations [1, 12] (e.g., elastic-plastic deformation) but are computationally expensive and not easily scalable or applicable to real-time environments. **Contribution.** This paper presents a lightweight vehicle deformation system for real-time environments. The simulation is fast enough for interactive systems, such as, vehicle-themed video games and driving simulators. The deformations generated in our system have the following features, and to our knowledge, no previous work has demonstrated them: (1) The simulation speed is fast enough for resource limited environments while maintaining an interactive frame-rate and dynamic model without sacrificing accuracy (e.g., the solution can be saved and shared over a network in real-time). (2) Our system presents a unified solution for modeling vehicle damage characteristics (e.g., body-mass distribution) to capture life-like damage properties. (3) All the computational and memory run-time costs for applying and solving the deformation can be done in a single time-step (i.e., a linearly scalable model). Figure 1: _Elastic-Plastic Deformation - Illustration of the stress-strain curve and the relationship between stress (force applied) and strain (deformation)._ ## 2 Related Work The creation and control of deformable vehicle meshes is an important component in multiple research areas, such as, engineering material analysis and safety testing. However, we focus on an interactive solution, such as, video games, since it allows the player to visually experience the effect of real-life car accidents.
We review a number of important papers that have presented solutions for deformation in different contexts (e.g., character animation and material analysis) that relate to our work. While accurate simulations of vehicle deformation have been published [14], a real-time solution that is scalable and flexible has not yet been presented. Similarly, in commercial circles (e.g., video games), we see vehicle deformation and damage but the method and techniques are propriety owned and not shared publically. One open-source vehicle damage system is available, known as 'Rigs of Rods (RoR)' [13], which uses full soft-body physics to emulate the structural breakdown of vehicles but can be computationally intensive. The framework models the vehicle characteristics by accurately decomposing the model chassis into the correct components with stress characteristics to produce a highly realistic solution with one goal (vehicle destruction). Our approach sees deformation and damage as a minor part of the system, since a real-world solution only provides a minute amount of the overall computational resources to the physics - needing to share with graphics, artificial intelligence, networking, and gameplay features. A method that does not encapsulate the physical properties of the model, but is able to produce uncomplicated vehicle deformation feedback uses texture-vertex mapping. The approach writes collisions to a texture that is used to deform the graphical mesh vertices (e.g., analogous to physical bump-mapping). Information written to the texture are applied to the vehicle mesh using the graphical processing unit (GPU) to achieve a real-time frame-rates. This approach can also be used to write non-deformable feedback (e.g., scratches and marks to the graphical textures). ## 3 Deformation Model This section discusses the representation of the deformation model (i.e., finite element decomposition approximation). The model must be able to handle large deformations and be stable under large time-steps while not hindering the systems performance. Deformation is a geometric measure of strain (e.g., stretching and shearing). Typically, strain models use accurate finite element methods that assume small deformations. These simple linear strain models may cause inflation/expansion issues when the strain cannot separate rotation information [12]. Of course, methods have been developed to attempt to remove the rigid rotation [11, 1]. This solution is popular in interactive applications but the deformation needs to be very small. We use a coarser non-linear finite-element model that is coupled to the high-detailed geometry mesh. This allows us to synthesize physically plausible deformation in real-time, while maintaining a reasonably correct physical model. Our coarser model extracts the key details from the high-detailed vehicle mesh, such as, structural inter-connectivity and mass distribution. Vehicle collision deformations are likely to be large, especially for high speed impacts (e.g., car-car collisions). We must take care with these large deformations, as large aesthetically pleasing deformation are difficult to create in some respects (i.e., materials should not look rubbery or jelly-like). In order to handle large deformations, we use a low-dimensional approximation. We express large deformations regardless of vehicle rigid body centre (i.e., the vehicle dynamics). As the vehicle dynamics are not by default hindered by the deformation. 
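The paper does not commit to a specific strain measure, but as one common rotation-invariant, non-linear choice, the Green strain of a single element can be computed from its rest and deformed nodal positions as follows. The use of a tetrahedral element and NumPy is an assumption for illustration only.

```python
# Green strain of one tetrahedral element from rest-state and deformed nodal positions.
import numpy as np

def green_strain(rest_nodes, deformed_nodes):
    """rest_nodes, deformed_nodes: (4, 3) arrays of tetrahedron vertex positions."""
    Dm = (rest_nodes[1:] - rest_nodes[0]).T          # rest edge matrix
    Ds = (deformed_nodes[1:] - deformed_nodes[0]).T  # deformed edge matrix
    F = Ds @ np.linalg.inv(Dm)                       # deformation gradient
    return 0.5 * (F.T @ F - np.eye(3))               # zero for purely rigid motion

rest = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]], dtype=float)
stretched = rest * np.array([1.2, 1.0, 1.0])         # 20% stretch along x
print(green_strain(rest, stretched))
```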
From the viewpoint of computational speed, using a lightweight strain approximation with a reduced number of mesh elements (i.e., a coarser mesh based on the vehicle's convex hull, Section 4) that is parallelizable, makes it possible to produce acceptable large deformations at real-time simulation speeds. The mesh node positions formulate a finite element decomposition. The strain for each element is computed from the nodal positions in the currently deformed state and the initially non-deformed state. For every element, we determine the influence of each neighbour. As nodes are disturbed from their rest location by external perturbations they influence their neighbouring elements in accordance with their spatial proximity and connectivity strength. Figure 4: **Control Points and Vertices** - The control points coordinate the deformation of the high-detail graphical mesh. So in Equation 1, we take the inverse distance, so that nearer the point the greater the influence. Taking the inverse power enables us to regionalize the influence, due to the exponential fall off and provide a stronger coupling, between the graphical mesh and the control points (see Figure 5. Figure 3: **Vehicle Convex Hull** - Generating a convex hull mesh for the vehicle model. (a) original mesh, (b) convex hull for collisions and control points, and (c) different levels of detail for the deformation control model by reducing the convex hull triangles and hence number of control points. Figure 2: **Concept** - A 2D illustration of the low-dimensional deformable body driven system. The course volumetric mesh encloses the detailed graphical surface. The deformable body is attached to the graphical mesh by nodes to produce the overall effect. In this illustration, \(c\) is the centroid and \(x_{0},..,x_{n}\) represents the coordinates for the coarse deformable mesh. Low-Resolution Control Mesh We explain our low-dimensional control model for vehicle deformation. A control mesh technique is used reduce the computational overhead and the mathematical complexity of the model so we can achieve real-time frame-rates. We explain the high-resolution vehicle surface interactions to handle detailed contacts between the vehicle and the virtual environment in an endeavour to realistically mimic the mechanical deformation properties. Model Reduction & Mesh EmbeddingThe two main techniques for reducing the complexity of a finite element system can be classified into two main types: _modal reduction_ and _mesh embedding_. Modal reduction is a popular method for reducing the complexity of a finite element system by using a linear subspace to span a small number of displacement basis vectors to represent the deformation in the body. The eigenmodes obtained from linear modal analysis would be the best basis vectors for small deformation. For large deformation, however, they are not sufficient to capture the non-linear deformation characteristics, so multiple techniques have been suggested to choose a good deformation basis set [1]. Model techniques have successfully been used for real-time solutions, such as, surgery simulators and hand-soft body interaction. Mesh embedding, which is also called _free-form_ deformation [10], uses a low-dimensional coarse volumetric mesh to enclose the entire deformable body in order to represent the behavior of the body. The location of every material point inside the deformable body is determined by interpolating the positions of the neighboring nodes in the mesh. Since the work by Faloutsos et al. 
[1], mesh embedding techniques have been widely used to simulate soft bodies in the graphics literature [14, 15, 16]. We chose mesh embedding to reduce complexity of the deformable body in our simulation system not only because the technique can reduce the model complexity without losing the fine geometry of the vehicle but also because the frame can be manipulated more easily and efficiently using the embedding mesh system compared to modal reduction. In our formulation, the control body elements are considered as an interconnected set of rigid elements that can be solved using iterative penalty-based constraints. The complete system consists of a set of deformable body elements and a rigid body core (i.e., vehicle centre-of-mass). The position of a material point in the deformable body is determined from the nodal positions of the coarse mesh through interpolation. The relationship between the vehicle mesh and the nodes is defined in Equation 1. \[\begin{split}\phi_{ij}&=\frac{1}{||c_{i}-v_{j}||^{ \alpha}}\\ v^{\prime}_{j}&=v_{j}-\left[\sum_{i}(c_{i0}\ \phi_{ij})-\sum_{i}(c_{i}\phi_{ij})\right]\end{split} \tag{1}\] where \(v_{j}\) is the \(j\)th vertex, \(c_{i}\) is the \(i\)th control point, \(v^{\prime}_{j}\) is the \(j\)th transformed output vertex, \(\phi_{ij}\) is the initial inverse distance between vertex \(j\) and control point \(i\) (is constant and calculated once at the beginning), and \(\sum_{i}\phi_{i}=1\) (typically, \(\alpha\approx 3-4\)). ScalabilityWe endeavour to automate the damage process rather than depending on artist intervention for modelling the underlying low-poly control mesh. This enables us to adapt the detail of the deformation model to scale to different platforms (i.e., reduce the model complexity to more coarser representations for environments with limited resources, such as, memory and processing power). ## 5 Dynamics In order to capture the structural failing of a vehicle body during impacts, the constraints are assumed to break when their deformation exceeds a certain threshold value. Under loading, the vehicle body will suffer from two forms of deformation, i.e., elastic and plastic. The body shell is characterized to bend and return to its original shape for small disturbances. When the elastic threshold is exceeded the body will deform and take on a new shape (i.e., a new rest shape). Linear elastic deformation is governed by Hooke's law, which states, \(\sigma=E\varepsilon\), where \(\varepsilon\) is the strain, \(\sigma\) is the applied stress, and \(E\) is a material constant called Young's modulus. Importantly, this relationship _only applies in the elastic range_ and indicates that the slope of the stress vs. strain curve can be used to find Young's modulus. In engineering, this calculation is used to determine the materials tensile strength [1]. The elastic range ends when the material reaches its yield strength. At this point plastic deformation begins. The deformable shell constraints are enforced using position-based dynamics. This provides control over the explicit integration and remove instability issues. We were able to manipulate the positions of the vertices and parts of deformable mesh directly during the simulation without complications. Our approach is formulated to handle general position-based constraints effortlessly and efficiently. Additionally, an explicit position-based constraint solver is easy to understand and implement. We define general constraints via a constraint function ([14, 15, 16]). 
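As a minimal sketch (assumed, not the authors' code) of how the pieces above could fit together: a Verlet step and position-based distance constraints with plastic and breaking thresholds drive the coarse control mesh, and the inverse-distance coupling of Equation 1 drags the high-detail vertices along. All thresholds, the constraint layout, and the function names are illustrative.

```python
# Sketch: position-based control-mesh update plus the Equation 1 vertex coupling.
import numpy as np

ALPHA = 4.0           # inverse-distance exponent (the paper suggests alpha ~ 3-4)
YIELD_STRETCH = 0.05  # relative stretch beyond which the rest length is updated (plastic)
BREAK_STRETCH = 0.30  # relative stretch beyond which the constraint is removed (break)

def verlet_step(pos, prev_pos, accel, dt):
    """Verlet integration: velocities are implicit in the current/previous positions."""
    return 2.0 * pos - prev_pos + accel * dt * dt, pos

def project_constraints(pos, constraints):
    """constraints: list of [i, j, rest_length, alive]; modified in place."""
    for c in constraints:
        i, j, rest, alive = c
        if not alive:
            continue
        delta = pos[j] - pos[i]
        dist = np.linalg.norm(delta) + 1e-9
        stretch = abs(dist - rest) / rest
        if stretch > BREAK_STRETCH:
            c[3] = False                   # structural failure: the constraint breaks
            continue
        if stretch > YIELD_STRETCH:
            c[2] = dist                    # plastic: adopt the new rest length
            continue
        corr = 0.5 * (dist - rest) / dist * delta
        pos[i] += corr                     # elastic: project both endpoints back
        pos[j] -= corr

def skin_vertices(vertices, ctrl_rest, ctrl_now):
    """Equation 1: weights from the initial (rest) distances, vertices moved by the
    phi-weighted control-point offsets; `vertices` are the rest-pose graphical vertices."""
    out = np.empty_like(vertices)
    for k, v in enumerate(vertices):
        w = 1.0 / (np.linalg.norm(ctrl_rest - v, axis=1) ** ALPHA + 1e-9)
        w /= w.sum()                       # enforce sum_i phi_ij = 1
        out[k] = v + w @ (ctrl_now - ctrl_rest)
    return out

# usage: two control points joined by one constraint, pulled apart and re-skinned
ctrl_rest = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
ctrl_now = np.array([[0.0, 0.0, 0.0], [1.4, 0.0, 0.0]])   # perturbed by a collision
constraints = [[0, 1, 1.0, True]]
project_constraints(ctrl_now, constraints)                 # yields, breaks, or projects
verts = skin_vertices(np.array([[0.5, 0.2, 0.0]]), ctrl_rest, ctrl_now)
```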
Instead of computing forces as the derivative of a constraint function energy, we directly solve for the equilibrium configuration and project positions. With our method we derive a bending term for the material which uses a point based approach similar to the one proposed by Grinspun et al. [17] and Bridson et al. [1]. Position-based dynamics have been used for a variety of systems. For example, Jakobsen [1] built his physics engine (called Fysix) on a position-based approach, with the central idea of using a Verlet integrator to manipulate positions directly. The velocities are implicitly stored by the current and the previous positions of the point-masses. The constraints are enforced by direct manipulation of the positions. Jakobsen demonstrated the efficient representation of distance constraints that could be used to form the underpinnings of a stable and iterative control mesh. In this paper, we use position-based constraints with a breaking threshold, which is used to create a semi-rigid shell for the vehicle. After breaking, the constraint rest conditions are recalculated to represent the new positions. An important note is that our model is composed of point masses and does not need to account for any angular calculations. Position-based methods have proven themselves an efficient and robust method in a variety of soft body systems, such as, cloth [17], character animation [1], and fluid dynamics [10]. For a detailed introduction to position-based dynamics, we refer the reader to the interesting work of Muller [17] and Jakobsen [1]. **Visual Constraints.** We set limits on the amount of deformation (i.e., deviation between the starting and current control point locations). The control points were only allowed to deviate by a specified amount from their starting locations. This was set as a global constant for the vehicle, however, it could be customized for different control points to produce a more aesthetically correct result if necessary. ## 6 Experimental Results We focused on three-dimensional vehicle deformation, however, we also applied the concept to test models to emphasize particular characteristics (e.g., a drinks can shown in Figure 4). Various kinds of simulations were tested in our system with the three-dimensional deformable vehicle bodies shown in Figure 7. The simulations with all the test models were implemented on a desktop machine with 3.2 GHz Intel i7 CPU and NVIDIA GeForce GTX 480 GPU. In order to create physically plausible vehicle motions (i.e., a car driven by its wheels), we used only a basic rigid body simulator, applying external forces at the locations of the wheels to accelerate the body and drive the overall vehicle motion. The secondary motion of the deformable mesh was obtained by coupling the control points to the rigid body chassis. The deformations cause the rigid body collision mesh to be updated each time the deformation mesh is modified, for instance, by external forces, such as, collisions and contacts, to produce the desired visual damage. **Bottlenecks** Typically, a vehicle deformation model can be difficult to simulate due to a number of challenges: * model complexity (e.g., number of triangles and sub-mesh objects, such as, doors, chairs, engine, and windows) * typically for interactive environments, we are not interested in the internal modelling of the vehicle (e.g., engine, suspension, and impact bars).
Hence, during collisions and deformations, we avoid showing the internal body since this information is not provided by the artist (i.e., only the aesthetic outer model of the car is available) * interconnected physical constraints (e.g., formulating a complex matrix model to realistically connect constraints) * crumpling and compression factors (i.e., certain materials deform in specialist ways, such as, folding and crumpling) ## 7 Discussion This paper focuses on an aesthetically pleasing vehicle deformation model for real-time interactive environments. Realistic vehicles are designed with specialist deformation zones to help protect the passengers in the event of collisions. Typically, simulation developers are limited by numerous issues, both in terms of resources and legally. For example, video game publishers/developers who use real-world vehicle models need permission from car manufacturers for what damage they show within their virtual environment. Of course, deformation simulates interactive effects and provides a more engaging virtual world for the user. We presented a vehicle deformation framework for the vehicle and not the environment; for instance, collisions with barriers would also need to deform the barrier for additional realism. We created a scalable and effective method for creating deformation in real-time, which can be expanded upon to incorporate additional damage features, such as, fracturing and tearing of the vehicle body. The model is flexible enough to be expanded to include greater complexity, such as, different material plasticity and elasticity constants (e.g., bumper and bonnet). While we have used an uncomplicated point-mass decomposition with linear coupling between the low and high resolution mesh, additional methods from skinning [23, 24], such as, dual-quaternion interpolation, may offer smoother surfaces with bulging characteristics. ## 8 Conclusion A robust and efficient method for adding deformations to vehicle simulation environments produces a more engaging and entertaining solution. The method we have presented allows the solution to be customized to the system's needs (e.g., scalable, efficient, and autonomous or through artistic customisation). The flexibility of the approach within this paper allows developers and artists to design more attractive vehicle simulations that are more active and engaging without sacrificing resources. Overall, we focused on a low-dimensional solution which moves away from pre-canned solutions (i.e., stored animation files) and supports a diverse set of characteristics to create an effective real-time effect. ## Acknowledgements A special thanks to the reviewers for taking the time to review this article and provide insightful comments and suggestions to help improve its quality.
2310.07915
Tag Your Fish in the Broken Net: A Responsible Web Framework for Protecting Online Privacy and Copyright
The World Wide Web, a ubiquitous source of information, serves as a primary resource for countless individuals, amassing a vast amount of data from global internet users. However, this online data, when scraped, indexed, and utilized for activities like web crawling, search engine indexing, and, notably, AI model training, often diverges from the original intent of its contributors. The ascent of Generative AI has accentuated concerns surrounding data privacy and copyright infringement. Regrettably, the web's current framework falls short in facilitating pivotal actions like consent withdrawal or data copyright claims. While some companies offer voluntary measures, such as crawler access restrictions, these often remain inaccessible to individual users. To empower online users to exercise their rights and enable companies to adhere to regulations, this paper introduces a user-controlled consent tagging framework for online data. It leverages the extensibility of HTTP and HTML in conjunction with the decentralized nature of distributed ledger technology. With this framework, users have the ability to tag their online data at the time of transmission, and subsequently, they can track and request the withdrawal of consent for their data from the data holders. A proof-of-concept system is implemented, demonstrating the feasibility of the framework. This work holds significant potential for contributing to the reinforcement of user consent, privacy, and copyright on the modern internet and lays the groundwork for future insights into creating a more responsible and user-centric web ecosystem.
Dawen Zhang, Boming Xia, Yue Liu, Xiwei Xu, Thong Hoang, Zhenchang Xing, Mark Staples, Qinghua Lu, Liming Zhu
2023-10-11T21:56:16Z
http://arxiv.org/abs/2310.07915v2
Tag Your Fish in the Broken Net: A Responsible Web Framework for Protecting Online Privacy and Copyright ###### Abstract The World Wide Web, a ubiquitous source of information, serves as a primary resource for countless individuals, amassing a vast amount of data from global internet users. However, this online data, when scraped, indexed, and utilized for activities like web crawling, search engine indexing, and, notably, AI model training, often diverges from the original intent of its contributors. The ascent of Generative AI has accentuated concerns surrounding data privacy and copyright infringement. Regrettably, the web's current framework falls short in facilitating pivotal actions like consent withdrawal or data copyright claims. While some companies offer voluntary measures, such as crawler access restrictions, these often remain inaccessible to individual users. To empower online users to exercise their rights and enable companies to adhere to regulations, this paper introduces a user-controlled consent tagging framework for online data. It leverages the extensibility of HTTP and HTML in conjunction with the decentralized nature of distributed ledger technology. With this framework, users have the ability to tag their online data at the time of transmission, and subsequently, they can track and request the withdrawal of consent for their data from the data holders. A proof-of-concept system is implemented, demonstrating the feasibility of the framework. This work holds significant potential for contributing to the reinforcement of user consent, privacy, and copyright on the modern internet and lays the groundwork for future insights into creating a more responsible and user-centric web ecosystem. Keywords: user consent, privacy, copyright, web crawler, transparency, responsible web ## 1 Introduction The World Wide Web (WWW) has revolutionized the way people communicate, learn, and share knowledge. From its inception as a tool for academic collaboration [1] to its evolution into a global platform intertwining with our daily lives, the WWW has become an indispensable part of modern society. Yet, as it has grown in scale and complexity, so too have the challenges associated with managing and safeguarding the vast amounts of data it contains. While the WWW has democratized information access, it has also given rise to concerns about how this information, which is often contributed by users from all corners of the globe, is repurposed and utilized. A prominent example of this repurposing is the training of Generative AI (GenAI) models [2], such as Large Language Models (LLMs). These models, while transformative in their capabilities to generate content based on patterns in existing data, bring to the fore pressing concerns about data privacy, copyright infringements, and the broader ethics of data usage [3]. Central to these concerns is the EU's General Data Protection Regulation (GDPR) [4], a pivotal regulatory framework safeguarding data protection and sovereignty. Established in 2018, the GDPR outlines the rights of EU citizens concerning their personal data and sets rigorous standards for entities engaged in data collection or processing. However, the dynamic nature of GenAI models, especially in their data sourcing and utilization, poses unique challenges in ensuring privacy compliance. Such concerns have led to various investigations or class actions1. Concurrently, as the digital domain burgeons, copyright concerns have also gained prominence. 
The unauthorized harnessing of copyrighted content, especially when curating training datasets for GenAI models like LLMs, has precipitated notable legal confrontations. Cases involving GitHub Co-pilot accused of redistributing copyrighted code without proper attribution2, Stable Diffusion allegedly copied over 12 million images from Getty Images without permission3, and ChatGPT sued by U.S. authors for misusing their writings in training4. Regrettably, the current infrastructure of the WWW is ill-equipped to effectively tackle these multifaceted challenges (see detailed discussion in Section 2.2). Footnote 1: [https://www.forbes.com/sites/emmawoollacott/2023/09/01/openai-hit-with-new-lawsuit-over-chatgpt-training-data/](https://www.forbes.com/sites/emmawoollacott/2023/09/01/openai-hit-with-new-lawsuit-over-chatgpt-training-data/) Footnote 2: [https://www.theregister.com/2023/05/12/github_microsoft_openai_copilot/](https://www.theregister.com/2023/05/12/github_microsoft_openai_copilot/) Footnote 3: [https://www.theverge.com/2023/2/6/23587393/ai-art-copyright-lawsuit-getty-images-stable-diffusion](https://www.theverge.com/2023/2/6/23587393/ai-art-copyright-lawsuit-getty-images-stable-diffusion) Footnote 4: [https://www.reuters.com/technology/more-writers-sue-openai-copyright-infringement-over-ai-trainin-g-2023-09-11/](https://www.reuters.com/technology/more-writers-sue-openai-copyright-infringement-over-ai-trainin-g-2023-09-11/) To address these pressing concerns amidst the burgeoning era of GenAI and LLMs on the WWW, this paper introduces a novel, user-centric consent tagging framework. This framework leverages the extensibility of HTTP and HTML in conjunction with the decentralized nature of distributed ledger technology like Blockchain. Our primary objective is to provide users with enhanced control over their online data, enabling them to tag, monitor, and, if necessary, request data withdrawal from data custodians. In weaving together these techniques, this research endeavors to harmonize the transformative potential of GenAI with the foundational principles of data privacy and user empowerment. Ultimately, this work contributes to the vision of a web ecosystem that prioritizes responsibility and user autonomy. The primary contributions of this paper are listed as follows: * We underscore the challenges and technical disparities between the present Web infrastructure and the legal mandates for privacy and copyrights, with a specific focus on the data practices of Generative AI models; * We introduced a novel architecture that augments the existing World Wide Web framework, facilitating responsible user-centric data practices including data collection, data scraping, and data distribution; * We demonstrate the feasibility of our framework, highlighting its compatibility, extensibility, and seamless integration with the current Web infrastructure. The remainder of this paper is organized as follows. Section 2 explains the motivating scenario of this study, while Section 3 presents the overall design of architecture and system mechanism, with the subsequent section demonstrating the implementation details. Section 5 evaluates the proposed approach, followed by Section 6 which discusses insights and sheds light on the study's limitations. A review of the existing literature related to this work is presented in Section 7, and Section 8 concludes the paper. 
## 2 Motivating Scenario The current infrastructure of WWW allows the utilization of online data with nearly no restrictions for companies but fails to provide transparency and control for users. This section demonstrates the motivating scenario by describing a typical journey of user data, followed by discussing the challenges of building a responsible web. ### Data Journey: From User Activities to Training AI models This section describes a typical journey of user data, tracing down its path from initial web-based activities to its utilization in training AI models, as illustrated in Figure 1. **User Activities.** Internet users interact with the web through various interfaces, termed as _clients_, including web browsers and mobile applications. These interactions are predominantly facilitated by _HTTP requests_. When users, for instance, upload a selfie via a web browser or a mobile app, the client initiates an HTTP request. This request generally contains the data in its body and is dispatched to the designated _server_. While HTTP encompasses methods for diverse types of requests, data transmission is commonly achieved through methods like _POST_, _PUT_, and _PATCH_. **Server Processing and Response.** Upon receiving the user-submitted data via HTTP request from a client, the web server processes the request. The server routes the requests to specific _handlers_ based on a _routing_ mechanism. The routing typically takes into account the HTTP method (e.g. GET, POST, PUT) and parses the URL to direct requests to the correct handlers. The handler further processes the request, which might extract various pieces of information from the request, including the client's user-agent, user-submitted data, and other relevant details from both the request header and body. Depending on the purpose and relevance, the extracted data might be validated and saved in the _databases or file systems_. When a resource, such as a web page or an _API_ endpoint containing pertinent data, is being accessed, data is fetched from storage or cache, embedded in an HTML page or payload (e.g., JSON), and subsequently sent by a server in an HTTP GET _response_ back to the client, which then renders or processes it accordingly. **Web Crawling.**_Web crawlers_, operated by entities such as search engines, researchers, and archiving services, automatically browse and collect web page information. They continuously crawl the web pages of websites by sending HTTP GET requests to fetch the content of URLs. The crawler downloads the web page content, which is in response to its request, and attributes certain information to the downloaded page, such as the URL and metadata [5]. To avoid overloading website servers, the crawlers often use an adaptive back-off algorithm to introduce a delay between consecutive requests to the same server [6]. Additionally, as documented in Robots Exclusion Protocol (extended by proposed RFC 93095), the web crawlers should respect _robots.txt_, which specifies which pages or directories should not be crawled. OpenAI also launched _GPTBot6_, and provides specific configurations and _IP ranges_ to allow website administrators to control its access to their sites. Google recently followed to announce _Google-Extended7_ to give web publishers certain control over their web crawling activities. Figure 1: Illustration of the User Data Journey, from User Activities to Downstream Distribution. 
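As a concrete example of the crawler-side courtesy described above, a crawler can consult a site's robots.txt before fetching a page. The snippet below uses Python's standard-library robot parser with the GPTBot user-agent token mentioned in the text; the site URL and the delay value are placeholders.

```python
# Check a site's robots.txt for the "GPTBot" user agent before fetching a page.
import time
import urllib.robotparser

rp = urllib.robotparser.RobotFileParser()
rp.set_url("https://example.com/robots.txt")
rp.read()

url = "https://example.com/blog/post-1.html"
if rp.can_fetch("GPTBot", url):
    time.sleep(1.0)   # stand-in for the adaptive back-off delay between requests
    # ... issue the HTTP GET request for `url` here ...
else:
    print("robots.txt disallows crawling this URL for GPTBot")
```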
**Data Cleaning and Preprocessing.** Once data is collected, especially from the vast expanse of the web, it often requires _cleaning and preprocessing_ before being deployed for analytics and AI model training. For example, data might be anonymized to safeguard the privacy of web users [7]. Furthermore, web-sourced data can be messy: it might have missing fields, duplicate records, or inconsistencies [8]. Cleaning ensures these issues are addressed. After cleaning, the data might still not be ready for analysis or AI training. Preprocessing steps, such as normalizing numerical values, encoding categorical variables, or deriving new insightful fields, transform the data into a more usable format [9]. **AI Model Training.** With the data cleaned and preprocessed, it can then be used to train AI models. Depending on the training paradigm, preparatory steps such as data labeling might be essential. The choice of model architecture and training techniques hinges on the specific domain and objective [10]. Currently, if data needs to be removed from a trained model, the process often employs retraining or machine unlearning methods [11, 12]. **Downstream Distributions.** After the model is trained and deployed, there are still potential _downstream distributions_ and uses of the knowledge derived from the original data [13]. While these downstream distributions typically do not involve sharing raw user data directly, they can inadvertently reveal information related to the original user data, especially due to issues such as _model memorization_ [14] and _membership inference attacks_ [15]. Examples of such downstream distributions include outputs of the model, sharing of pre-trained model weights, and incorporation of model outputs into datasets for training new models. ### Challenges of Protecting User Privacy and Copyrights Online content often includes personal data subject to regulations like the GDPR [16]. However, integrating this data into (Gen)AI models introduces issues due to their diverse and expansive data sources [17]. For example, while service providers might notify users of data collection, data harvested from the internet for GenAI training often bypasses this notification process, violating the _Right to be Informed (Art. 13, Art. 14)_. After the collection, data is integrated into GenAI models without providing information about the dataset, making it difficult to access the original data, which may violate the _Right of Access (Art. 15)_. In addition, despite users' _Right to (Withdraw) Consent (Art. 7)_ to data processing, the potential ambiguities surrounding data collection and usage by GenAI models might complicate the clear acquisition and withdrawal of user consent. Given the swiftly advancing nature of the technology, the original consent may no longer align with the model's evolving usage and the emerging capabilities of these models. Similarly, when considering the copyright of online content, particularly that shared by artists on websites, the existing internet infrastructure lacks a straightforward mechanism to identify the original content creators for the purposes of seeking consent or offering compensation. The current architecture of the WWW presents significant obstacles for companies in upholding these rights. Even if companies are genuinely committed to guaranteeing them, the practical implementation proves challenging [18]. For instance, when data is crawled from the web, firms cannot notify data subjects about this activity since they lack direct user contact. 
Additionally, verifying an individual's authenticity without requesting supplementary information becomes untenable, further complicating the process of consent withdrawal. Due to these challenges, as well as the increasing scrutiny from regulatory bodies and persistent advocacy from affected stakeholders, organizations have been prompted to adopt more transparent practices. OpenAI, for instance, published blog posts about its AI safety practices and released documentation about its GPTBot and methods for web administrators to disallow or customize its crawling activities. Though pushed by the public, such voluntary self-regulation demonstrates a commitment to the responsible web and the ethical development of AI. Google followed by announcing Google-Extended, which provides a similar approach for web administrators to control the access of crawlers. However, these approaches, which largely rely on the Robots Exclusion Protocol, are insufficient for building a truly responsible web that robustly safeguards the privacy and copyrights of online users. These methods offer only coarse-grained control over crawler access, typically restricting access to directories and files rather than specific content on a page. Additionally, the right to enforce these site-level policies is granted to website administrators rather than the users. This becomes especially problematic on social media or content-sharing platforms, where content creators' rights and autonomy are neglected. These limitations underscore the need for a more comprehensive and user-centric approach. ## 3 Architectural Design To address these challenges and provide a comprehensive and user-centric solution for a responsible web, we introduce a novel framework. We present the architectural design of the proposed framework in this section. The system architecture and its process interactions are described in Sections 3.1 and 3.2, respectively. ### System Architecture Figure 2: System Architecture. The framework features four layers, including the Client Side, Web Side, Machine Learning Side, and Distributed Ledger. Fig. 2 gives an overview of our proposed system architecture, which consists of four main layers: the client, web, machine learning, and distributed ledger sides. Please note that the proposed architecture also contains components found in conventional web application architectures, while in this paper we focus on the consent tagging framework for data privacy and user empowerment. **Client Side.** The client layer contains three main components: the functional modules of client applications, the Integrated Consent Tagging Extension, and a Consent Management Endpoint. The functional modules provide the functionalities of the client application, such as user interfaces, network communications, and computations. The Integrated Consent Tagging Extension serves as an extension inside the client application, in the form of a browser extension, application plugin, or software library. Whenever an HTTP request is sent, the Integrated Consent Tagging Extension injects a _tag_ as additional HTTP headers in the request. These headers carry two parts: a consent configuration specifying which crawlers are consented to, and a tag, which consists of the hash \(H_{d}\) of the user data and the digital signature \(S_{d}\) of the hash \(H_{d}\) produced with a key pair \(K\). An example of the headers is shown in Table 1. The metadata of the request, the hash \(H_{d}\), and the signature \(S_{d}\) are stored locally. 
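To make the tagging step concrete, the following is a minimal sketch of how an extension could derive \(H_{d}\) and \(S_{d}\) for an outgoing request body. Python is used purely for illustration (the prototype in Section 4 relies on web3.js inside a browser extension); Keccak-256 and ECDSA over NIST P-384 follow the implementation choices reported in Section 4.2, while the libraries (`pycryptodome`, `cryptography`) and helper names are our own, and the consent configuration string reuses the format of Table 1.

```python
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec
from Crypto.Hash import keccak  # pycryptodome


def make_consent_headers(body: bytes, private_key: ec.EllipticCurvePrivateKey) -> dict:
    """Build the consent headers (config, H_d, S_d) for an outgoing request body."""
    # H_d: Keccak-256 digest of the user data carried in the request body.
    h_d = keccak.new(digest_bits=256, data=body).hexdigest()
    # S_d: ECDSA signature over H_d using the user's P-384 key pair K.
    s_d = private_key.sign(h_d.encode(), ec.ECDSA(hashes.SHA384()))
    return {
        "X-Consent-Config": "GPTBot:0;Googlebot:1;default:0",
        "X-Consent-Tag-Hash": h_d,
        "X-Consent-Tag-Sig": s_d.hex(),
    }


# Example: generate the key pair K once, then tag a request body before it is sent.
key = ec.generate_private_key(ec.SECP384R1())
headers = make_consent_headers(b"I really love this post.", key)
```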
Optionally, the client-side program that initiated the POST request can specify, using a header, whether the request contains data that will be stored on the server; the Integrated Consent Tagging Extension checks this header when the request is sent to decide whether to tag and log the request. A user can track the data through the Consent Management Endpoint. The endpoint will query the distributed ledger to extract the journey of the consent tag. If the user wishes to withdraw consent, the endpoint can broadcast the consent withdrawal request on the distributed ledger. In order to prove ownership of the consent tag, the withdrawal request contains the hash \(H_{d}\), the digital signature \(S_{d}\), the public key \(K_{pub}\) of key pair \(K\), and a signature \(S_{c}\) signed by the private key \(K_{pri}\) of key pair \(K\) on a challenge from the random number generator on the distributed ledger. **Web Side.** After receiving the HTTP requests from the client, the web server will proceed with a series of operations to fulfill its functionalities. The consent tags should be linked with the corresponding data stored in the database. Once the data is provided through APIs or HTML pages, the Consent Tagging Processor will embed the tag in the data. The tag is either provided in a designated field in the document, such as JSON, or as an HTML DOM attribute. An example HTML DOM element is shown in Listing 1. The tag will be uploaded onto the distributed ledger so that users are notified of the data being crawled. Moreover, the Consent Tagging Processor, together with other modules, will mask the content for which the consent configuration specified by the user blocks certain crawlers. Where services such as a Content Delivery Network (CDN) are involved, additional components may be required, for instance Bot Management and Service Workers.
```
<div class="article-contents">
  <p class="post-body"
     consent-tag-hash="ab4a39e4fc8118cbb37c..."
     consent-tag-sig="64d3d1079b5ac19ea5b...">
    I really love this post.
  </p>
</div>
```
Listing 1: Example of consent information injected in HTML DOM elements as attributes. \begin{table} \begin{tabular}{|l|} \hline Request URL: [https://www.reddit.com/submit](https://www.reddit.com/submit) \\ Request Method: POST \\ \hline \hline HTTP Request Headers \\ \hline Content-Type: application/x-www-form-urlencoded \\ User-Agent: Mozilla/5.0 (Windows NT... \\ X-Consent-Config: GPTBot:0;Googlebot:1;default:0 \\ X-Consent-Tag-Hash: ab4a39e4fc8118cbb37c... \\ X-Consent-Tag-Sig: 646d3d1079b5ac19ea5b... \\... \\ \hline \end{tabular} \end{table} Table 1: Example HTTP request headers of user post. The consent configuration, the hash, and the signature of the data are included in the headers. Once the web page is scraped, the web crawler will keep the link between the tag and the data throughout the later processes of cleaning and preprocessing. Once the data is acquired and used by another entity, the Consent Resolver logs the tags of the data on a distributed ledger. If a user requests consent withdrawal on the distributed ledger, the Consent Resolver should remove the data corresponding to the requested tags from the original dataset and also from downstream datasets. **Machine Learning Side.** AI software companies acquire datasets from web crawling entities, or they may themselves perform web scraping activities. They then train machine learning models using these datasets. The company keeps the records of consent linked with the data in the Consent Records repository. 
If the user requests consent withdrawal, the company will remove the corresponding data from the dataset. However, merely removing data from datasets might not be enough, as the data has already been trained into the weights of models, which might still carry legal implications related to privacy or copyright. Therefore, the Retraining/Unlearning Trigger should be activated by consent withdrawal requests and invoke the retraining or unlearning process of the machine learning models. The retraining/unlearning process should be logged on the distributed ledger for transparency. **Distributed Ledger Side.** The distributed ledger maintains three components: i) the Consent Logger, which stores the tags of data that has been crawled, used for training, or requested for consent withdrawal; ii) the Agent Configurations, which store configurations of web crawlers such as their user-agent information and IP ranges; web servers can use this information to identify crawlers and enforce relevant policies against them; and iii) the Consent Request Handler, which accepts consent withdrawal requests from users and notifies relevant parties. The distributed ledger provides availability, consistency, transparency, and decentralization for consent management, and avoids the single point of failure and centralized power of a centralized platform. Figure 3: System Mechanism. The complete lifecycle contains four processes, including Consent Configuration and Tagging, Consent Control against Web Crawling, Data and Consent Tag Distribution, and Consent Withdrawal. ### System Mechanism The system mechanism is depicted in Figure 3, showcasing four processes: Consent Configuration and Tagging, Consent Control against Web Crawling, Data and Consent Tag Distribution, and Consent Withdrawal. **Consent Configuration and Tagging.** The user should generate at least one key pair for signing the consent tags. Optionally, the user may set up a consent configuration to specify which crawlers are allowed to scrape their online data, or skip this step to allow all crawlers by default. The key pair and consent configuration are stored in the Integrated Consent Tagging Extension of the client application. Users may generate additional key pairs or modify the consent configuration at any time. Please note that when the user is prompted by the client-side interface, such as an HTML input element, to send data, the client-side program may, when initiating the request, mark the request as _non-crawlable_ in the HTTP header; this marker is read by the consent tagging extension so that no consent configuration or tags are attached to the data. This can be used for data such as passwords and other non-public content. If the field is not marked as non-crawlable, the consent tagging extension will hash and sign the data using the private key, and attach the resulting hash and signature, together with the consent configuration, in the headers of the HTTP request. The data, consent configuration, hash, signature, signing key pair, and metadata such as URL and time are stored locally. The web server stores the consent configuration, hash, signature, and their link with the data in the database. **Consent Control against Web Crawling.** The Agent Configurations of crawlers can be read by the website owners, and the configurations can be regularly updated by the crawling entities. The security mechanism of the distributed ledger ensures the authenticity of these configurations [19]. 
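Before walking through the crawling flow, the sketch below shows how a server might parse the user's consent configuration string (in the format of Table 1) and decide whether a given crawler may receive a piece of content; the helper names are our own.

```python
def parse_consent_config(config: str) -> dict:
    """Parse 'GPTBot:0;Googlebot:1;default:0' into {'GPTBot': False, 'Googlebot': True, ...}."""
    policy = {}
    for entry in config.split(";"):
        if ":" in entry:
            agent, allowed = entry.split(":", 1)
            policy[agent.strip()] = allowed.strip() == "1"
    return policy


def crawler_allowed(config: str, crawler_name: str) -> bool:
    """Return True if the user's consent configuration permits this crawler."""
    policy = parse_consent_config(config)
    return policy.get(crawler_name, policy.get("default", True))


# GPTBot is blocked, Googlebot is allowed, and unknown crawlers fall back to the default.
assert crawler_allowed("GPTBot:0;Googlebot:1;default:0", "Googlebot") is True
assert crawler_allowed("GPTBot:0;Googlebot:1;default:0", "GPTBot") is False
assert crawler_allowed("GPTBot:0;Googlebot:1;default:0", "SomeOtherBot") is False
```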
When the server is being crawled by a web crawler, the crawler first goes to _robots.txt_8 to read which portions of the website it is allowed to visit. These instructions are set by the website owner, and the granularity is commonly at the level of directories. Then the crawler will request each URL it wishes to scrape. The request contains the client IP address and the "User-agent" header9, by which the server can identify the crawler. The crawler may also add a signature on the current timestamp to the HTTP headers in order to prove its authenticity. Anti-crawler strategies are enforced to protect the website against malicious crawlers and illegal visits. If certain trusted crawlers, e.g., _Googlebot_10, are allowed by the website, the server should retrieve the data from the database using techniques such as _Object-relational mapping (ORM)_. The corresponding consent configuration and consent tags should be extracted together, and the server should filter the data based on the consent configuration. If the requesting crawler is not allowed for certain data, such data should be excluded from the response by techniques such as masking the HTML elements. If the consent configuration does not reject the crawler, the server should inject the consent tags into the response by adding the relevant _consent tag_ attributes to HTML elements, similar to the example in Listing 1. The server should also log the activity on a distributed ledger to associate the consent tag with the crawler, and the user will receive notifications about activities related to their data. Footnote 8: [https://www.rfc-editor.org/rfc/rfc9309](https://www.rfc-editor.org/rfc/rfc9309) **Data and Consent Tag Distribution.** After being scraped by the crawler, the data will go through a number of steps before being included in datasets, and the consent tags should always be kept with the data throughout the process. Aggregated data should be kept with a collection of the consent tags of the original data. Every time the data is transferred from one party to another, or trained into a machine learning model, the transfer or training information should be logged to the entries of the data on the distributed ledger, and the user will be notified accordingly. Furthermore, large-volume downstream distribution should also be logged and tracked on the distributed ledger. This traceability can be further augmented using watermarking technology11. Footnote 11: [https://huggingface.co/blog/alicia-truepic/identify-ai-generated-content](https://huggingface.co/blog/alicia-truepic/identify-ai-generated-content) **Consent Withdrawal.** At any time, users are able to request withdrawal of their consent through the distributed ledger, without revealing their true identity. Users can view the journey of their data using the client-side Consent Management Endpoint, which connects to the distributed ledger to query the information based on local records. If users intend to withdraw consent to certain data, they can send the signing public key, the hash of the data, and a signature on a withdrawal challenge to the distributed ledger. Once all this information is automatically verified to be authentic by the Consent Request Handler on the distributed ledger, all parties holding the original or downstream data will receive a notification about this consent withdrawal, and once their removal of the data is completed, they should report the completion to the Consent Logger, which will subsequently notify the user. 
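As an illustration of the withdrawal step just described, the following sketch assembles the withdrawal proof: the stored hash \(H_{d}\) and signature \(S_{d}\), the public key \(K_{pub}\), and a fresh signature \(S_{c}\) over a ledger-issued challenge. Python and the `cryptography` package are used for illustration only (the prototype relies on web3.js), and the payload field names are our own.

```python
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import ec


def build_withdrawal_request(h_d: str, s_d: bytes, challenge: bytes,
                             private_key: ec.EllipticCurvePrivateKey) -> dict:
    """Assemble a consent-withdrawal request proving ownership of a consent tag."""
    # S_c: signature over the ledger-issued challenge, proving control of K_pri.
    s_c = private_key.sign(challenge, ec.ECDSA(hashes.SHA384()))
    # K_pub in a portable encoding, so the Consent Request Handler can verify both signatures.
    k_pub = private_key.public_key().public_bytes(
        encoding=serialization.Encoding.PEM,
        format=serialization.PublicFormat.SubjectPublicKeyInfo,
    )
    return {
        "tag_hash": h_d,             # H_d of the data whose consent is being withdrawn
        "tag_sig": s_d.hex(),        # S_d stored locally when the data was first tagged
        "public_key": k_pub.decode(),
        "challenge_sig": s_c.hex(),  # S_c over the random challenge from the ledger
    }
```

The Consent Request Handler can then verify \(S_{d}\) and \(S_{c}\) against \(K_{pub}\) without learning anything else about the requester.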
The authority and the public can audit the relevant parties based on the consent records throughout the process. ## 4 Implementation We implement a proof-of-concept system to validate the feasibility of the architectural design. In this section, we describe the main software libraries and the core operations of the modules within each layer. ### Web Side The base system of the web server is implemented as a streamlined social media website based on LoopBack 412. The website is deployed onto a Google Cloud Platform 4 vCPU 16GB memory virtual machine with a Debian 12 operating system. PostgreSQL13 v15.0 is used as the database, in combination with Redis14 v7.2 for REST-level caching. On top of the base system, the Consent Tagging Processor is implemented with an additional database table, coupled with injector mechanisms in the HTTP handlers of the base system. Algorithm 1 demonstrates the operations of the Consent Tagging Processor when a user request containing user data is received, effectively linking the consent information with the data. Algorithm 2 demonstrates the operations of masking data and injecting consent information into the response when a web page is crawled. Footnote 12: [https://loopback.io/](https://loopback.io/) Footnote 13: [https://www.postgresql.org/](https://www.postgresql.org/) Footnote 14: [https://redis.io/](https://redis.io/) We employ Scrapy15 v2.11.0 as the crawler, in combination with Splash16 v3.5.0 to enable JavaScript rendering capability. The crawler extracts elements from web pages and outputs data in gzip format. Footnote 15: [https://github.com/scrapy/scrapy](https://github.com/scrapy/scrapy) Footnote 16: [https://github.com/scrapinghub/splash](https://github.com/scrapinghub/splash)
```
crawlerInfo = CrawlerInfo.get(request.headers["user-agent"])
data = ORM.extract(dataIds)
consentInfo = consentStore.extract(dataIds)
for d in data do
  if not checkConsent(consentInfo[d.id], crawlerInfo) then
    d = mask(d)
  else
    d = addConsentInfo(d, consentInfo[d.id])
  end if
end for
response.send(data)
```
**Algorithm 2** Consent Tagging Processor on receiving crawler request ### Client Side The frontend is implemented using React17 v16.8.6. The Integrated Consent Tagging Extension is implemented as part of a Chrome browser extension18 with manifest v2. The Consent Management Endpoint is housed within the same extension, using web3.js19 v1.9.0. As shown in Algorithm 3, the extension signs the data on behalf of the user when a request is sent, and stores the relevant information for tracking and withdrawal of consent. The extension uses the stored information to query the distributed ledger for consent withdrawal. The hash algorithm used is Keccak-25620, and the digital signature scheme is the Elliptic Curve Digital Signature Algorithm (ECDSA)21 with the named curve NIST P-38422. 
Footnote 17: [https://react.dev/](https://react.dev/) Footnote 18: [https://developer.chrome.com/docs/extensions/](https://developer.chrome.com/docs/extensions/) Footnote 19: [https://github.com/web3/web3.js](https://github.com/web3/web3.js) Footnote 20: [https://docs.web3js.org/api/web3-utils/function/sha3/](https://docs.web3js.org/api/web3-utils/function/sha3/) Footnote 21: [https://web3js.readthedocs.io/en/v1.9.0/web3-eth.html#sign](https://web3js.readthedocs.io/en/v1.9.0/web3-eth.html#sign) Footnote 22: [https://csrc.nist.gov/pubs/fips/186-4/final](https://csrc.nist.gov/pubs/fips/186-4/final)
```
function webRequest.onBeforeSendHeaders.listener
  if request.method == 'POST' and "non-crawlable" not in request.headers then
    h = hash(request.data)
    s = sign(privKey, h)
    request.headers.add(h, s, consentConfig)
    localStorage.add(h, s, pubKey, consentConfig, metadata)
  end if
end function
```
**Algorithm 3** Integrated Consent Tagging Extension on sending request ### Machine Learning Side As the focus of the architecture is on the tracking and withdrawal of user content rather than the training of machine learning models, the functional modules of the Machine Learning side are not implemented as a real system. We implemented a retraining trigger using bash, and a Consent Records module bridged with the distributed ledger through an endpoint implemented using web3.js. The endpoint receives emitted consent withdrawal messages from the distributed ledger and delivers them to the Trigger. ### Distributed Ledger We selected the Ethereum blockchain as the distributed ledger platform and implemented the modules as smart contracts using Solidity 0.8.20. The Consent Logger smart contract keeps a mapping of all consent tags with their hashes, signatures, custodians, and associated requests. The Consent Request Handler serves as an interface for users to engage with the Logger for consent management. Additionally, the Agent Configurations smart contract maintains the configurations for crawlers. Web servers can query this contract to obtain crawler configurations and their future updates, subsequently enforcing relevant server-side policies. ## 5 Evaluation The implemented proof-of-concept system runs smoothly throughout the lifecycle of user data. The consent information is correctly embedded into request headers, subsequently received by the web server, and scraped by the crawler. This information is then incorporated into the AI model's training data. Users are promptly notified about their data usage via the distributed ledger, enabling them to request data withdrawal using cryptographic mechanisms without the necessity of authenticating their identity. The framework is compatible with the existing internet infrastructure. In our assessments, it effectively accommodates requests of types POST, PUT, and PATCH. According to our evaluation results, the proof-of-concept implementation does not inherently cater to WebSocket requests, but this capability can be realized by intercepting WebSocket connections prior to their establishment. This also holds for custom methods. Because it builds on the extensibility of HTTP and HTML, our framework itself possesses considerable extensibility. The injected consent headers support expanded fields, evolved conventions, and different cryptographic data. Likewise, the injected consent attributes of DOM elements are extensible. The framework seamlessly integrates with the current internet infrastructure. 
Components, such as the client-side Consent Tagging Extension and the web-side Consent Tagging Processor, all function correctly with websites or clients that have not adopted the framework. Furthermore, to assess the impact of the framework on existing applications, we ran a series of micro-benchmarks on the implemented system. The client-side overhead is shown in Figure 4. Each data point depicted in the figure represents the mean value derived from 20 individual runs to ensure statistical reliability. The framework causes additional overhead for sending requests from the client side. However, the overhead for a payload size of 1 MB is only around 5 ms, which we believe is negligible, particularly compared with the unavoidable overhead of around 20 ms from the request invocation in the frontend code. Figure 4: Time cost for Request Invocation, Request Processing, and Request Processing with Consent, across varying payload sizes. The time associated with Request Processing with Consent indicates the additional client-side overhead introduced by the framework. For backend overhead, the results are shown in Figure 5. As modern web services adopt a range of technologies, such as caching and indexing, to optimize the performance of queries, we initially disabled these features to compare the difference in time cost. Though the framework added time cost when querying a relatively large amount of data, the increment was merely around 20 ms, which could be deemed negligible. Moreover, when the optimization features are enabled, the time costs are nearly identical, demonstrating that our framework is unlikely to introduce additional burdens on typical web services. In addition, our evaluation of the framework extends to the transaction capacity on the selected distributed ledger, Ethereum. Our findings indicate that a single transaction, adhering to a typical gas limit of 30M, can encapsulate over 47,000 consent information entries. This scenario operates under the assumption that web servers aggregate and upload consent information in bulk within a single transaction executed at specified intervals. In our testing on the Goerli testnet, transactions were confirmed on the network within an average of 23 seconds. In an optimal scenario utilizing consortium blockchains, which have considerably higher transactions per second (TPS) rates, the bottleneck observed on Ethereum will become less constraining for this framework. ## 6 Discussion This work introduces a framework dedicated to strengthening the privacy and copyright protections of online users, overcoming the limitations inherent in existing approaches predicated on corporate choices. Rather than endeavoring to create an entirely new internet architecture, the objective of this framework is to augment the existing internet infrastructure. This is achieved by harnessing its inherent extensibility to enable enhanced privacy and copyright capabilities. Moreover, the extensions to HTTP and HTML are engineered to be perceptible only to client, server, and crawler applications, while remaining invisible to users, which facilitates a seamless and non-disruptive upgrade. The decentralized architecture guarantees the availability of the framework, ensuring users retain continual access to consent management features, independent of the availability of specific service providers or authorities. 
Additionally, this framework introduces consent tags that are based on cryptographic methods, ensuring that users are not required to provide additional personal information, thereby avoiding a compromise of privacy while exercising their rights. Figure 5: Time cost for Query Processing, across varying numbers of data rows. Time associated with Query Processing with Consent indicates the server-side overhead when using the framework. Nonetheless, this framework is designed with good actors in mind. Similar to voluntary measures like providing options for website administrators to restrict crawlers, or adhering to the Robots Exclusion Protocol, the efficacy of this framework hinges on the willingness of companies, or on legislation, to adopt it. Moreover, executing data deletion after withdrawal requests might necessitate auditing by authoritative third parties. We acknowledge this as a limitation of our framework. Yet, given the recent surge in public concerns regarding privacy and copyright in the context of GenAI, coupled with the demonstrated willingness for self-regulation by various companies, we posit that this framework constitutes a significant stride towards building a responsible web. ## 7 Related Work The challenges of user data control in the digital realm have been addressed from various angles in the literature. Considering the diversity of web applications and the "data silos" they have created, SOLID [20] aims to address data ownership and privacy concerns by offering data stores under user control. The decentralized _Pods_ allow users to manage data access by applications or individuals, with the option to revoke permissions. Although SOLID lays the groundwork for user-centric data management, its adoption is still in the early stages, with challenges in broader web integration and application support, indicating a need for more integrative solutions within the existing web ecosystem. Chen et al. [21] utilized the XACML policy language to craft user-customized access control policies. This methodology offers a robust mechanism for data sharing in a secure environment. However, current Web-based applications usually do not support such schemes to prevent data crawling; hence, we present a multi-layered solution to address this challenge. The decentralized nature of blockchain technology offers promising avenues for secure data sharing. Several researchers have delved into harnessing blockchain for ensuring data privacy and integrity and fostering user control [22, 23, 24]. Specifically, blockchain has been leveraged for copyright management [25, 26], which can be generalized to stakeholders' consent management over their data. ## 8 Conclusion The World Wide Web (WWW), serving as a vital source of information for individuals, has posed challenges in safeguarding user privacy and copyrights, especially amidst the recent rise of Generative AI. The prevailing WWW falls short of providing adequate mechanisms for consent withdrawal or data copyright claims, leaving both users and data holders at a disadvantage. This paper presents a user-controlled consent tagging framework, built on top of the existing internet infrastructure, leveraging the extensibility of HTTP and HTML, alongside distributed ledger technology, to address these concerns. Users can tag their online data at the point of transmission, monitor its usage, and request consent withdrawal from data holders. Through the evaluation of a proof-of-concept system, the effectiveness of the framework is substantiated. 
We believe that this work paves the way for a more responsible, user-centric web ecosystem.
2305.04072
Keyword-Based Diverse Image Retrieval by Semantics-aware Contrastive Learning and Transformer
In addition to relevance, diversity is an important yet less studied performance metric of cross-modal image retrieval systems, which is critical to user experience. Existing solutions for diversity-aware image retrieval either explicitly post-process the raw retrieval results from standard retrieval systems or try to learn multi-vector representations of images to represent their diverse semantics. However, neither of them is good enough to balance relevance and diversity. On the one hand, standard retrieval systems are usually biased to common semantics and seldom exploit diversity-aware regularization in training, which makes it difficult to promote diversity by post-processing. On the other hand, multi-vector representation methods are not guaranteed to learn robust multiple projections. As a result, irrelevant images and images of rare or unique semantics may be projected inappropriately, which degrades the relevance and diversity of the results generated by some typical algorithms like top-k. To cope with these problems, this paper presents a new method called CoLT that tries to generate much more representative and robust representations for accurately classifying images. Specifically, CoLT first extracts semantics-aware image features by enhancing the preliminary representations of an existing one-to-one cross-modal system with semantics-aware contrastive learning. Then, a transformer-based token classifier is developed to subsume all the features into their corresponding categories. Finally, a post-processing algorithm is designed to retrieve images from each category to form the final retrieval result. Extensive experiments on two real-world datasets Div400 and Div150Cred show that CoLT can effectively boost diversity, and outperforms the existing methods as a whole (with a higher F1 score).
Minyi Zhao, Jinpeng Wang, Dongliang Liao, Yiru Wang, Huanzhong Duan, Shuigeng Zhou
2023-05-06T15:26:05Z
http://arxiv.org/abs/2305.04072v1
# Keyword-Based Diverse Image Retrieval by ###### Abstract. In addition to relevance, diversity is an important yet less studied performance metric of cross-modal image retrieval systems, which is critical to user experience. Existing solutions for diversity-aware image retrieval either explicitly post-process the raw retrieval results from standard retrieval systems or try to learn multi-vector representations of images to represent their diverse semantics. However, neither of them is good enough to balance relevance and diversity. On the one hand, standard retrieval systems are usually biased to common semantics and seldom exploit diversity-aware regularization in training, which makes it difficult to promote diversity by post-processing. On the other hand, multi-vector representation methods are not guaranteed to learn robust multiple projections. As a result, irrelevant images and images of rare or unique semantics may be projected inappropriately, which degrades the relevance and diversity of the results generated by some typical algorithms like top-\(k\). To cope with these problems, this paper presents a new method called CoLT that tries to generate much more representative and robust representations for accurately classifying images. Specifically, CoLT first extracts semantics-aware image features by enhancing the preliminary representations of an existing one-to-one cross-modal system with semantics-aware contrastive learning. Then, a transformer-based token classifier is developed to subsume all the features into their corresponding categories. Finally, a post-processing algorithm is designed to retrieve images from each category to form the final retrieval result. Extensive experiments on two real-world datasets Div400 and Div150Cred show that CoLT can effectively boost diversity, and outperforms the existing methods as a whole (with a higher \(F1\) score). Cross-modal retrieval, Keyword-based image retrieval, Diversification retrieval, Transformer + Footnote †: dagger}\)Mojar part of this work was done while the author was an intern at Tencent. + Footnote †: dagger}\)Corresponding author. + Footnote †: dagger}\)Mojar part of this work was done while the author was an intern at Tencent. + Footnote †: dagger}\)Corresponding author. Obviously, keyword-based queries are prone to match various retrieval results, but a list of images with similar semantics cannot meet the diverse requirements of different users, thus deteriorating their retrieval experience [43, 64]. To address the aforementioned drawback, the task of _keyword-based diverse image retrieval_[15, 17, 18, 20], is proposed, which takes a short keyword-based text as input to search a list of images with high relevance and rich semantic diversity. Recent approaches can be roughly divided into two groups. The first group is post-processing based approaches [38, 39, 40, 10, 25, 33, 41, 61]. These methods usually apply existing cross-modal encoders to extracting features. Then, various algorithms (_e.g._ re-ranking [61] and clustering [33]) are adopted to promote the diversity. However, these methods often cannot obtain a good retrieval list with balanced relevance and diversity, due to the limitations of _one-to-one projection_. For instance, as shown in Fig. 
1(a), on the one hand, in typical one-to-one projection, (**W1**) the query feature (_the red star_) is likely to be surrounded by images of common semantics (_the brown points_) due to the long-tail distribution of the training data, which will make the top-\(k\) result set full of images of similar semantics. On the other hand, (**W2**) image features with different semantics are less distinguishable because of the ignorance of modeling diversity [64], which will hurt the performance of some algorithms like clustering. The second group is a set of learning-based approaches [1, 42, 52, 64, 70] that try to use various techniques (_e.g._ graph [43], metric learning [5, 46] and multiple instance learning [55, 70]) to model the diversity. Compared with the one-to-one projection that projects each image to a vector in the latent space, these methods [42, 52, 64] embed each image (or text query) into multiple vectors around the relevant features to obtain their diverse representations for top-\(k\) search, namely _multi-vector projection_. Unfortunately, such a projection is not robust enough and unable to handle images of rare or unique semantics. As shown in Fig. 1(b), (**W3**) some irrelevant outliers (_the grey points_) will be mistakenly projected to represent diversity. Besides, (**W4**) some images of rare or unique semantics (_the blue points_), will very possibly be projected into some remote regions where the top-\(k\) algorithm cannot reach. To overcome the weaknesses (_i.e._, **W1**-**W4**) of the existing methods, in this paper we propose a novel approach called CoLT (the abbreviation of Semantics-aware **C**ontrastive **L**earning and **T**ransformer) for keyword-based image retrieval. In particular, to overcome **W1**, **W2** and **W3**, CoLT extracts stable, representative and distinguishable image features with the help of a new _semantics-aware contrastive learning_ (SCL) loss. As shown in Fig. 1(c), the core idea of SCL is to project images of similar semantics (_e.g._ dogs of the same breed) to vectors around their matched semantic prototype that keeps a proper distance from the other prototypes (_e.g._ dogs of different breeds and irrelevant images) and the query feature to better model the diversity. As for coping with images of rare semantics (**W4**), instead of utilizing top-\(k\) algorithm as in existing works, CoLT employs a powerful _transformer-based token classifier_ (TTC) to generate the final retrieval results. Specifically, in TTC the image and query features are concatenated as an input token sequence. Subsequently, TTC classifies each token into a relevant semantic category to distinguish the images of various semantics. Finally, a flexible post-processing algorithm is designed to select images from various semantic categories (both common and rare semantics), to form the final retrieval results. Such a design offers our method four-fold advantages: (i) _High semantic relevance_. CoLT improves the robust one-to-one projection of pre-trained cross-modal encoders, which is much more stable than recent multi-vector projection-based methods. (ii) _High semantic diversity_. CoLT not only makes the image features much more distinguishable via semantics-aware contrastive learning but also uses a transformer-based token classifier to mine rare semantics. (iii) _General and easy-to-use_. 
CoLT can be directly stacked at the end of various existing cross-modal encoders, without modifying their structures and parameters, and boost the performance in a plug-and-play manner. (iv) _Easy-to-control_. We can modify the post-processing algorithm in CoLT to flexibly balance semantic relevance and semantic diversity without re-implementing the model. Contributions of this paper are summarized as follows: (1) We pinpoint the limitations of existing methods and present a novel approach called CoLT for keyword-based diverse image retrieval. CoLT first extracts high-quality and distinguishable semantics-aware features and then classifies the features to generate the final retrieval list. (2) We develop a semantics-aware contrastive loss in CoLT to extract more robust and representative features. (3) To better mine semantic diversity, we design a transformer-based token classifier to generate the retrieval results. (4) We conduct extensive experiments on two real-world datasets Div400 and Div150Cred, which show that our method can effectively boost the diversity, and outperforms the existing methods as a whole with a higher \(F1\) score. Figure 1. Illustrations of (a) typical cross-modal image retrieval systems; (b) learning-based multi-vector retrieval systems; (c) our CoLT method. Red star represents the query. Points of different colors denote images of different semantics. Gray points represent irrelevant images, and triangles represent the prototypes of the corresponding semantics. Dotted circles denote the projection regions. ## 2. Related Work ### Cross-Modal Image Retrieval Typical cross-modal image retrieval methods (Wang et al., 2019) can be roughly divided into two categories: cross-modal similarity measurement based methods (Wang et al., 2019; Wang et al., 2020; Wang et al., 2020; Wang et al., 2020) that directly calculate the cross-modal distance and common space learning-based methods (Wang et al., 2019; Wang et al., 2020; Wang et al., 2020; Wang et al., 2020) that map the query and images into a shared space via various techniques like attention mechanism and generative adversarial network etc. Nowadays, thanks to the transformer structure and pre-training techniques, large-scale pre-trained encoders (_e.g._ CLIP (Wang et al., 2020), ALIGN (Liu et al., 2020), GroupViT (Wang et al., 2020), and U-BERT (Wang et al., 2020)) have shown their superiority in relevance-based retrieval tasks. Although these methods have significantly improved the retrieval relevance, their ignorance of modeling semantic diversity hurts the semantic diversity of their retrieval lists. ### Diverse Retrieval Existing diverse retrieval approaches roughly fall into two groups. The first group is post-processing based methods (Wang et al., 2019; Wang et al., 2020; Wang et al., 2020; Wang et al., 2020; Wang et al., 2020; Wang et al., 2020; Wang et al., 2020; Wang et al., 2020), which usually use existing feature encoders (Wang et al., 2019; Wang et al., 2020; Wang et al., 2020; Wang et al., 2020) to generate features, then mine the diversity with a post-processing algorithm. Among them, (Wang et al., 2020) first filters irrelevant images, then clusters the rest via DBSCAN (Bong et al., 2019) to promote diversity. MMR (Wang et al., 2020) is proposed to re-rank the retrieval list to balance diversity and relevance. Bo and Gao (2019) extract keywords to control the diversity of the results. 
The second group includes recently proposed learning-based methods, which aim to represent the semantic diversity in the latent space (Bong et al., 2019; Wang et al., 2020; Wang et al., 2020; Wang et al., 2020; Wang et al., 2020). In particular, Su et al. (Su et al., 2020) propose a dynamic intent graph (GRAPH4DIV) to balance content and intent in a document. Song and Soleymani (Song and Soleymani, 2019) utilize multiple transformers to extract visual features. Wu and Ngo (Wu and Ngo, 2020) design an inactive word loss to expand the semantic concepts to represent various video contents. VMIG (Wang et al., 2020) embeds each image and text query into multiple vectors via multiple instance learning. Although these methods succeed in boosting the semantic diversity of the retrieval results, they perform unsatisfactorily in guaranteeing semantic relevance and mining images of rare semantics. ### Differences between Our Method and Existing Works To expound the differences between CoLT and typical existing methods, in Tab. 1 we present a qualitative comparison from three dimensions: how are images and queries projected? how are the final retrieval results generated? and how is the performance in terms of both relevance and diversity? As presented in Tab. 1, recent pre-trained cross-modal encoders (_e.g._ CLIP (Wang et al., 2020)) cannot model semantic diversity well due to the limitations of the one-to-one projection. Two typical post-processing based methods MMR and UMONS are poor at either modeling diversity (Wang et al., 2020) due to the lack of an accurate diversity measurement mechanism or guaranteeing relevance due to clustering irrelevant features together. The recently proposed VMIG suffers from the robustness issue due to the uncertainty of multi-vector projection and the rare semantics handling issue caused by the top-\(k\) search algorithm, which leads to undesirable performance. Our method CoLT is the only retrieval method that achieves both high semantic relevance and rich semantic diversity thanks to the proposed _semantics-aware contrastive learning_ (SCL) and powerful _transformer-based token classifier_ (TTC). Experiments and visualization studies demonstrate the advantages of our method. ## 3. Methodology ### Overview Given a text query \(Q\) and an image dataset \(\mathcal{D}\), our aim is to generate a retrieval list \(\mathcal{R}\) that consists of \(K\) images of high semantic relevance and diversity. Fig. 2 shows the architecture of our method CoLT, which is composed of six components: a _fixed feature encoder_\(f\) that takes \(Q\) and \(\mathcal{D}\) as input to generate initial query feature \(h_{q}\) and visual features \(\{h^{i}_{b}\}\), _i.e._, \(\{h_{q},\{h^{i}_{o}\}\}=f(Q,\mathcal{D})\), a _visual feature re-encoder_\(g\) that re-encodes the visual features with the help of the _semantics-aware contrastive learning_ (SCL) module, and the _transformer-based token classifier_ (TTC) \(\phi\) that takes the query feature \(h_{q}\) and the re-encoded image features \(\hat{h}^{i}_{b}\) as an input token sequence to subsume each token into a suitable semantic category according to their representations. The TTC module consists of two sub-modules: the token classification transformer that is composed of \(L\) transformer encoder layers, and a fully-connected layer as the classifier. Finally, a _post-processing_ module is adopted to select typical images from these categories as the final results. 
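To make the pipeline concrete, the following is a schematic PyTorch sketch of the inference path just described: the fixed encoder's features are re-encoded by the visual feature re-encoder \(g\) (following Eq. (1) in the next subsection), concatenated with the query token, fused by \(L\) transformer encoder layers, and classified per token. The layer widths, head count, and the omission of the post-processing step \(t\) are our simplifications rather than the authors' exact configuration.

```python
import torch
import torch.nn as nn


class CoLTSketch(nn.Module):
    """Schematic CoLT pipeline: re-encode image features, fuse tokens, classify per token."""

    def __init__(self, dim=512, num_categories=629, num_layers=8, beta=0.02):
        super().__init__()
        self.beta = beta
        # Visual feature re-encoder g (an MLP in the paper; the width here is our assumption).
        self.re_encoder = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))
        # Transformer-based token classifier (TTC): L encoder layers plus a linear classifier.
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=8, batch_first=True)
        self.fusion = nn.TransformerEncoder(layer, num_layers=num_layers)
        self.classifier = nn.Linear(dim, num_categories)

    def forward(self, h_q, h_o):
        # Residual re-encoding of the N image features.
        h_o_hat = h_o + self.beta * self.re_encoder(h_o)
        # Token sequence I = [h_q, h_o_hat_1, ..., h_o_hat_N], then fusion and per-token logits.
        tokens = torch.cat([h_q.unsqueeze(1), h_o_hat], dim=1)
        return self.classifier(self.fusion(tokens))  # (batch, N + 1, num_categories)


# Example: one query feature and N = 200 candidate image features of dimension 512 per query.
model = CoLTSketch()
logits = model(torch.randn(2, 512), torch.randn(2, 200, 512))
```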
During training, a _token-wise data augmentation_ module is used to make full use of the training data, which is employed between the SCL module and the TTC module. ### Semantics-aware Contrastive Learning In CoLT, we first use a fixed pre-trained feature encoder \(f\) to extract preliminary high-quality and robust visual features and the query feature. Nevertheless, as mentioned above, these one-to-one projected features are not distinguishable enough to support effective diverse retrieval. Ergo, a visual feature re-encoder \(g\), which is implemented by a multi-layer perceptron and powered by a novel semantics-aware contrastive learning loss, is used to refine the image features to promote semantic diversity. In particular, for each visual feature \(h^{i}_{o}\), we re-encode its representation as follows: \[\hat{h}^{i}_{o}=h^{i}_{o}+\beta g(h^{i}_{o}), \tag{1}\] where \(\beta\) is a hyper-parameter used to control the learned re-encoded feature \(g(h^{i}_{o})\). In what follows, we introduce the proposed semantics-aware contrastive loss in detail. \begin{table} \begin{tabular}{c|c c c} \hline \hline Method & Projection & Generation & Performance \\ \hline CLIP (Wang et al., 2020) & One-to-one & top-\(k\) & Low diversity \\ MMR (Wang et al., 2020) & One-to-one & Re-ranking & Low diversity \\ UMONS (Wang et al., 2020) & One-to-one & Clustering & Low relevance \\ VMIG (Wang et al., 2020) & Multi-vector & top-\(k\) & Medium relevance \& diversity \\ CoLT (ours) & SCL & TTC & High relevance \& diversity \\ \hline \hline \end{tabular} \end{table} Table 1. A qualitative comparison between CoLT and major existing methods from three dimensions: feature projection, retrieval result generation and performance. As shown in Fig. 3, the goal of semantics-aware contrastive learning (SCL) is: (1) Enlarging the distance between the query feature and irrelevant features; (2) Enlarging the distance between relevant image features and irrelevant image features; (3) Enlarging the distance among image features of different semantics, which makes these features more distinguishable while benefiting diversity; (4) Shrinking the distance between the query feature and relevant image features, which can improve accuracy, as do (1) and (2); (5) Shrinking the distance among image features of similar semantics. In SCL, we use semantic category prototypes stored in a bank \(\mathcal{B}\) to efficiently compute (3) and (5), which avoids requiring a large batch size. As a result, each image will be projected to a position at a suitable distance from the query feature, its matched semantic prototype, the unmatched semantic prototypes, and the irrelevant image features. Here we discuss the implementation of the proposed semantics-aware contrastive learning. In SCL, the positive pairs include (1) relevant image-query feature pairs and (2) relevant image-category prototype pairs, while the negative pairs are (3) irrelevant image-query feature pairs and (4) irrelevant image-category prototype pairs. Consider a query \(h_{q}\) with a set of relevant image features \(\{\hat{h}_{o}^{r,i}\}\) and a set of irrelevant image features \(\{\hat{h}_{o}^{ir,i}\}\). 
Let \(\mathcal{B}(i)\) denote the \(i\)-th semantic category prototype stored in the bank and \(G(\cdot)\) be a function that maps the image features to the corresponding indices of the matched semantic category prototypes; the loss of SCL can then be formulated as follows: \[\mathcal{L}_{scl}=-log\frac{\overbrace{\Sigma_{i}exp(h_{q}\cdot\hat{h}_{o}^{r,i}/\tau)}^{(1)}+\overbrace{\Sigma_{i}exp(\mathcal{B}(G(\hat{h}_{o}^{r,i}))\cdot\hat{h}_{o}^{r,i}/\tau)}^{(2)}}{\underbrace{\Sigma_{i}exp(h_{q}\cdot\hat{h}_{o}^{ir,i}/\tau)}_{(3)}+\underbrace{\Sigma_{i,j}exp(\mathcal{B}(j)\cdot\hat{h}_{o}^{ir,i}/\tau)}_{(4)}+(1)+(2)}, \tag{2}\] where \(\tau\) is a hyper-parameter used to control the temperature, terms (1) and (2) correspond to the positive pairs over the relevant features, and terms (3) and (4) correspond to the negative pairs over the irrelevant features. The category prototypes stored in the bank \(\mathcal{B}\) play an important role in the proposed SCL. Therefore, they need to be initialized and updated during training to obtain accurate and up-to-date representations. Ergo, we use the fine-grained textual description features extracted by the fixed feature encoder to initialize the bank. As for the update, exponential moving average (EMA) (Krizhevsky et al., 2017; Krizhevsky et al., 2017; Krizhevsky et al., 2017) is utilized to update the category prototypes: \[\mathcal{B}(G(\hat{h}_{o}^{r,i}))=\alpha\mathcal{B}(G(\hat{h}_{o}^{r,i}))+(1-\alpha)\hat{h}_{o}^{r,i}, \tag{3}\] where \(\alpha\) is the momentum coefficient used to update the bank. ### Transformer-based Token Classification After obtaining a set of representative features, the next problem is how to generate a final result of high relevance and diversity. To this end, a powerful _transformer-based token classifier_ (TTC) is developed to do feature fusion and token classification. Specifically, we treat each feature as a token, and concatenate the query feature \(h_{q}\) and \(N\) image features \(\{\hat{h}_{o}^{i}\}_{i=1}^{N}\) to form the input token sequence, _i.e._, \(\mathcal{I}=[h_{q},\{\hat{h}_{o}^{i}\}_{i=1}^{N}]\). Here, to avoid irrelevant tokens, only the \(N\) image features semantically most similar to the query feature are used. We sort the image features with respect to their cosine similarity with the query feature, and generate the corresponding ground truth \(\mathcal{J}=\{y_{i}\}_{i=1}^{N+1}\) in terms of their fine-grained semantic annotations. It is worth mentioning that in TTC, all the irrelevant image features and the query feature are labeled with special indexes to further distinguish them. As shown in Fig. 2, \(L\) transformer encoder layers (Krizhevsky et al., 2017) powered by multi-head self-attention and feed forward layers are used to fuse these tokens. Subsequently, a fully connected layer is stacked as a classifier to do prediction. Formally, the predictions of TTC can be written as follows: \[\{p_{i}\}_{i=1}^{N+1}=\phi(\mathcal{I}), \tag{4}\] where \(p_{i}\) is the predicted distribution probability of the \(i\)-th token. Figure 3. Illustration of semantics-aware contrastive learning. Different colors indicate various categories. Figure 2. The architecture of CoLT. Blue lines are valid only during training. Cross-entropy loss serves as the classification loss to train TTC: \[\mathcal{L}_{cls}=-\Sigma_{i}y_{i}log(p_{i}). \tag{5}\] After classifying each image token into an appropriate category, a post-processing algorithm \(t\) is applied to generate the final retrieval list. That is, selecting \(X\) images with the highest similarity to the query feature from each semantic category: \[\mathcal{R}=t(\{p_{i}\}_{i=1}^{N+1},X). 
\tag{6}\] Finally, after selecting images from \(\lfloor k/X\rfloor\) semantic categories, a retrieval list \(\mathcal{R}\) of length \(k\) is obtained. ### Token-wise Data Augmentation Due to the lack of specific fine-grained annotations, directly training our model on \(\mathcal{D}\) is prone to over-fitting. Therefore, to better exploit the potential of the transformer-based token classification module, we employ token-wise data augmentation to enrich the input tokens \(\mathcal{I}\). In particular, four different kernel operations are introduced: **Query perturbation:** We perturb the query feature \(h_{q}\) as follows: MIXUP (Wang et al., 2017) the query feature with a relevant image feature \(\hat{h}_{o}^{r,i}\) with a probability of \(p_{q}\). Formally, let \(\lambda\sim Beta(1.0,1.0)\), we generate the perturbed query feature as follows: \[h_{q}=max(\lambda,1.0-\lambda)h_{q}+min(\lambda,1.0-\lambda)\hat{h}_{o}^{r,i}. \tag{7}\] **Image perturbation:** We perturb the image feature \(\hat{h}_{o}^{r,i}\) as follows: MIXUP the image feature with a relevant query feature \(h_{q}\) with a probability of \(p_{\text{e}}\). By sampling \(\lambda\) from \(Beta(1.0,1.0)\), we have \[\hat{h}_{o}^{r,i}=max(\lambda,1.0-\lambda)\hat{h}_{o}^{r,i}+min(\lambda,1.0- \lambda)\hat{h}_{q}. \tag{8}\] **Deletion:** We delete an image feature with a probability of \(p_{d}\). **Copy:** We copy an image feature with a probability of \(p_{c}\). Among the 4 operations, query perturbation and image perturbation directly augment the features without modifying the semantics-aware representations, which is beneficial to the robustness of the model while the operations of deletion and copy can enhance the model's ability of distinguishing rare and similar tokens, respectively. Following the experience in (Wang et al., 2017), we perform data augmentation to the input tokens \(\mathcal{I}\) in such a manner: sampling each data augmentation operation in the following order: (1) query perturbation; (2) deletion; (3) copy; (4) image perturbation, then individually performing the selected operation on each token. ### Training and Evaluation Algorithms The training procedure of CoLT is presented in Alg. 1, which can be divided into three steps. First, the initial query feature and image features are extracted by \(f\) (L2). Then, we train the visual feature re-encoder by the proposed semantics-aware contrastive learning (L3-L9) to re-encode the preliminary features to semantics-aware ones. Finally, we take the query feature and the re-encoded image features as input to train the transformer-based token classifier with the help of token-wise data augmentation (L10-L16). The evaluation procedure is given in Alg. 2, which is like this: we first generate the initial features (L2), then re-encode the image features (L3). Subsequently, take these features as tokens to generate the predicted distribution probabilities (L4-L5). Finally, using the post-processing algorithm \(t\) to generate the final retrieval list \(\mathcal{R}\). ``` 1:Input: Fixed feature encoder \(f\), visual feature re-encoder \(g\), transformer-based token classifier \(\phi\), query \(\mathcal{Q}\), and image dataset \(\mathcal{D}\) 2:\(h_{q},\{h_{o}^{i}\}=f(\mathcal{Q},\mathcal{D})\) 3:initialize \(\mathcal{B}\) with fine-grained description 4:while\(g\) is not convergenced do 5:\(\hat{h}_{o}^{i}=h_{o}^{i}+\beta g(h_{o}^{i})\) 6:\(\hat{h}_{o}^{r,i},\hat{h}_{o}^{ir,i}\sim g(h_{o}^{i})\) 7: Compute \(\mathcal{L}_{scl}\) via Eq. 
(2) 8: Optimize \(g\) according to \(\mathcal{L}_{scl}\) 9: Update \(\mathcal{B}\) via Eq. (3) 10:while\(\phi\) is not convergenced do 11:\(\mathcal{I}=[h_{q},\{\hat{h}_{o}^{i}\}_{i=1}^{N}]\) 12: Perform data augmentation to \(\mathcal{I}\) according to Sec. 3.4 13: Obtain the final \(\mathcal{I}\) according to Sec. 3.3 14:\(\{p_{i}\}_{i=1}^{N+1}=\phi(\mathcal{I})\) 15: Compute \(\mathcal{L}_{cls}\) via Eq. (5) 16: Optimize \(\phi\) according to \(\mathcal{L}_{cls}\) 17:return\(g\) and \(\phi\) ``` **Algorithm 1** The training of CoLT. ## 4. Performance Evaluation ### Research Questions In this section, we evaluate the proposed method by conducting extensive experiments to answer the following research questions: 1. How does CoLT perform in comparison with the state-of-the-art cross-modal image retrieval models in terms of both relevance and diversity? 2. Can the proposed semantics-aware contrastive learning and the transformer-based token classifier effectively boost relevance and diversity? 3. How do different components/parameters contribute to the effectiveness of CoLT? ### Datasets and Metrics Here we briefly summarize the datasets and metrics used in our paper. More details can be referred to (Kang et al., 2017). Two datasets are used in our paper: **Div400:** Div4001 is collected by the MediaEval Workshop (Huang et al., 2017). It contains 396 queries with \(43,418\) images. All queries are mainly related to tourist locations and the average length of queries is 3.7 words. On average, the ground truth of a query covers \(11.8\) semantic categories of images in the dataset. Each image has a coarse-grained textual description (_e.g._ "Big Ben") and a fine-grained one (_e.g._ "partial view"). **Div150Cred:** Div150Cred2 is derived from the competition dataset for diverse social image retrieval in 2014 (Huang et al., 2017). It has a total of 153 queries with \(45,375\) images. The ground truth of a query averagely covers \(22.6\) semantic categories of images in the dataset. Footnote 2: [http://campus.pub.ro/lab/biionesc/Div150Cred.html](http://campus.pub.ro/lab/biionesc/Div150Cred.html) Three metrics are used to evaluate the performance, including _precision_ (\(P\)) for measuring semantic relevance, _cluster recall_ (\(CR\)) for measuring semantic diversity, and the \(F1\)_score_ of \(P\) and \(CR\) to measure the overall balanced performance. Specifically, we calculate the evaluation metrics of the top-\(k\) results, where \(k\) is set to \(10\) and \(20\) by following (Zhou et al., 2017). In the rest of this paper, we use P@k, CR@k, and F1@k to denote the \(P\), \(CR\), and \(F1\) value of the top-\(k\) results, respectively. Higher P@k indicates better relevance, and higher CR@k means richer semantic diversity. ### Implementation Details CoLT is implemented in PyTorch-1.10. All experiments are conducted on 4 NVIDIA 3090 GPUs with 24GB memory. The model is trained using the Adam (Kingmare et al., 2014) optimizer with a learning rate of \(10^{-5}\) for the visual feature re-encoder \(g\) and \(10^{-4}\) for the transformer-base token classifier \(\phi\). The batch size is set to 32. \(\tau\), \(\alpha\), \(\beta\), and \(\epsilon\) are set to small values: \(\tau=0.2\), \(\alpha=0.01\), \(\beta=0.02\), and \(\epsilon\)=0.01 by following (He et al., 2017; He et al., 2018; He et al., 2019; Zhang et al., 2019). \(X\), \(N\), and \(L\) are set to \(1,200\), and \(8\) through ablation study. The probabilities used for data augmentation are set by following (Zhou et al., 2017). 
In particular, we have \(p_{q}=0.5\), \(p_{p}=0.2\), \(p_{d}=0.2\) and \(p_{c}=0.2\). All different semantic categories (or simply semantics) in each dataset are stored as prototypes. As a result, we store 629 prototypes for Div400 dataset while 725 for Div150Cred dataset. ### Comparing with SOTA Methods (RQ1) To demonstrate the effectiveness of our method CoLT, we compare it with several state-of-the-art approaches, including three typical cross-modal image retrieval methods: IMRAM (Dong et al., 2016), FCA-Net (Huang et al., 2017) and CLIP (Liu et al., 2017), two post-processing-based diverse retrieval methods: MMR (Liu et al., 2017) and UMONS (Zhou et al., 2017), and three learning-based diverse retrieval approaches: DESA (Liu et al., 2017), GRAPH4DIV (Liu et al., 2017) and VMIG (Zhou et al., 2017). Since MMR, UMONS and VMIG require a feature encoder, for fairness, we use the same feature encoder CLIP (Liu et al., 2017) to implement them and our method CoLT. For MMR and UMONS, we use grid search to obtain their best results. Generally, our results are higher than those in the original papers thanks to the strong feature encoder. For example, the P@20, CR@20, and F1@20 values of VMIG on the DIV400 dataset are lifted from 78.27%, 59.01% and 67.29% to 83.01%, 59.46% and 69.28%. Experimental results on Div400 and Div150Cred are given in Tab. 2 and Tab. 3, respectively. Here, the best values are bolded while the second-best results are underlined. From Tab. 2 and Tab. 3, we can see that 1) typical cross-modal image retrieval methods including large-scale pre-trained encoder CLIP perform well in relevance-based retrieval but cannot do diverse retrieval well. For example, although CLIP achieves the best relevance performance, it is inferior to the others in diversity score. 2) Post-processing-based approaches can only moderately trade-off accuracy and diversity. For example, as can be seen in Tab. 2, the diversity improvement achieved by MMR is very limited (CR@10 increases from 35.68% to 36.96% on Div400). As for UMONS, its accuracy score is greatly degraded (P@10 decreases from 90.17% to 79.28% on Div400) though it obtains a relatively large diversity improvement. As a result, their \(F1\) scores are not satisfactory. 3) Recently proposed learning-based methods achieve balanced relevance and diversity scores. For instance, VMIG outperforms most existing methods in CR@10 and CR@20, and performs better than UMONS in relevance score. However, its relevance and diversity are both limited due to the weaknesses of the multi-vector projection. 4) Our method CoLT obtains the best diversity score, high precision, and obviously the highest overall \(F1\) score on both Div400 and Div150Cred. In particular, CoLT outperforms CLIP and VMIG by significant margins, i.e., 7.70% and 4.02% of F1@20 on Div400, respectively. This indicates that CoLT is able to get retrieval results of both high relevance and rich semantic diversity. Besides, we present a variant of CoLT that outperforms CLIP on both relevance and diversity, we will discuss the details in Sec. 4.6. From Fig. 4(a), we can see that the preliminary OOP representations extracted by CLIP can distinguish some irrelevant images. However, its weaknesses are also evident: (1) The query is closer to some image features of common semantics (the blue and brown points in the 1st case); (2) Images of various semantics are mixed. As a result, such representations are not suitable for mining diversity. Then, let us pay attention to multi-vector projection (MVP). 
As can be seen from Fig. 4(b), each image and query are projected Figure 4. Visualization comparison of different representations. (a) One-to-one projection (OOP) representations generated by the cross-modal encoder \(f\). (b) Multi-vector projection (MVP) representations. (c) Semantics-aware one-to-one projection (SA-OOP) representations re-encoded by \(g\). Retrieved images are marked by black square. (d) The final retrieval results generated by different methods. Figure 5. The visualization of CoLT results. (a) One-to-one projection (OOP) representations generated by a cross-modal encoder \(f\). (b) Semantics-aware one-to-one projection (SA-OOP) representations generated by the re-encoder \(g\). (c) Classification results of TTC \(\phi\). To make the figures clear, irrelevant images are not marked. The numbers around the boxes are the category ID predicted by TTC. (d) The final retrieval results obtained by our post-processing algorithm. into multiple points to enrich diversity. However, on the one hand, some outliers are mistakenly projected into the neighborhood of the query feature to represent diversity (a grey point in the 1st case while two in the 2nd case). On the other hand, some image features of rare semantics are projected into remote regions (the green points in the 1st case) where the top-\(k\) algorithm cannot reach. Thus, as shown in Fig. 4(d), some irrelevant images are selected while some images of rare semantics are not retrieved. Finally, we check the representations of our SCL and the images retrieved by TTC. From Fig. 4(c) we can see that (1) the representations of images of the same semantics are clustered and much more distinguishable compared with the typical OOP representations in Fig. 4(a); (2) Some irrelevant images are also pushed away. For instance, in the 2nd case, some grey points are pushed to the left-bottom corner. This demonstrates the advantages and effectiveness of our SCL. Then, TTC and a post-processing are employed to classify and select images from each category, including rare semantic categories like green points in the 1st case, to form the final results. We also visualize the classification results of TTC to further demonstrate its effect. The visualization is shown in Fig. 5, from which we can see that (1) TTC is able to distinguish different semantics and irrelevant images. Taking the 1st case for example, the yellow points are classified into the 1st category, the majority of the green points are subsumed into the 4th category, and blue points to the 2nd category. Irrelevant images are also correctly classified. This demonstrates the effectiveness of the proposed TTC. (2) The classification performance of TTC can be further improved. For example, as can be seen in the 2nd case, TTC mistakenly classifies one of the green points into the 1st category. In summary, the power of TTC is demonstrated well via visualization. ### Ablation Study (RQ3) Here we conduct ablation study on Div400 to demonstrate the contributions of different modules and the effect of some parameters in our method. The metrics P@20, CR@20 and F1@20 are used. Results are presented in from Tab. 4 to Tab. 10. **Overall performance improvement.** As shown in the 1st row and 2nd row in Tab. 4, our method significantly boosts the diversity score and \(F1\) score from 52.97% to 64.16% and 65.60% and 73.30%, respectively, with only a slight decrease in relevance score. This supports the superiority of our method. **Effect of SCL.** Here we check the effect of the proposed SCL. 
Specifically, we first design a variant that removes SCL and directly applies TTC. Obviously, as can be seen in the 2nd row and the 3rd row of Tab. 4, all metrics including relevance, diversity, and \(F1\) score are degraded. Besides, we also design a variant that combines SCL with the idea of UMONS to generate the final retrieval results via DBSCAN. Comparing the results of the 4th row and the 5th row, the performance with SCL is better than that of the original UMONS. The reason lies in that SCL is able to make the image features more distinguishable, and such representations are more suitable for existing post-processing schemes. Then, we check the effect of the constructed pairs in SCL. As mentioned in Sec. 3.2, SCL uses 4 kinds of pairs. Among these pairs, (1) and (3) are common in contrastive learning (Shen et al., 2017) to align images and queries while (2) and (4) play important roles in distinguishing images of various semantics. Ergo, we remove (2) and (4) separately to examine their influence. As can be seen in Tab. 5, without pair (2) and pair (4), diversity score and F1 score are degraded. On the contrary, their influence on relevance score is minor. This justifies the effectiveness of SCL -- making the representations much more distinguishable for promoting diversity. **Effect of TTC.** To check the effect of the proposed transformer-based token classifier, we design two variants that replace TTC by DSCAN (the 5th row of Tab. 4) or top-\(k\) (the 6th row of Tab. 4) to generate the retrieval results. Obviously, such variants are inferior to our method (the 2nd row of Tab. 4). This demonstrates the advantage of TTC. **Effect of token-wise data augmentation.** Here we implement a variant that removes the token-wise data augmentation module. Results of this variant are given in the 7th row of Tab. 4. Evidently, the resulting performance is inferior to ours (the 2nd row of Tab. 4). **Why fix the cross-modal feature encoder?** In CoLT, we fix the cross-modal feature encoder \(f\) to better maintain the pre-trained knowledge. To support this design, we implement a variant that finetunes the feature encoder \(f\). Experimental results are given in the 8th row of Tab. 4. Obviously, all metrics including relevance, diversity and \(F1\) are significantly degraded, comparing with ours (the 2nd row of Tab. 4). Possibly, finetuning too many parameters is prone to over-fitting. **Can CoLT support various feature encoders?** As mentioned above, CoLT is general, i.e., it can work with various cross-modal encoders to do diverse image retrieval. To verify this point, we try three different encoder configurations, including ViT (Chen et al., 2017) and BERT (Chen et al., 2017) developed by (Shen et al., 2017), R50 (Krizhevsky et al., 2017) and BERT (Chen et al., 2017) implemented \begin{table} \begin{tabular}{c|c c c} \hline Variant & P@20 & CR@20 & F1@20 \\ \hline ViT-BERT & 87.75\% & 52.39\% & 65.60\% \\ +CoLT & 85.48\% & 64.16\% & 73.30\% \\ \hline R50-BERT & 87.01\% & 52.10\% & 65.17\% \\ +CoLT & 85.62\% & 60.83\% & 72.94\% \\ \hline GroupVIT & 83.68\% & 52.02\% & 64.21\% \\ +CoLT & 82.62\% & 70.91\% & 70.91\% \\ \hline \end{tabular} \end{table} Table 6. Performance when using different feature encoders. 
\begin{table} \begin{tabular}{c|c|c c c} \hline ID & Variant & P@20 & CR@20 & F1@20 \\ \hline 1 & without SCL+TTC & 87.92\% & 52.97\% & 65.60\% \\ 2 & SCL + TTC & 85.48\% & 64.16\% & 73.30\% \\ \hline 3 & without SCL & 84.26\% & 62.94\% & 72.06\% \\ 4 & UMONS & 73.37\% & 63.24\% & 67.93\% \\ 5 & SCL + DSSCAN & 75.94\% & 63.09\% & 69.40\% \\ 6 & SCL + top-\(k\) & 89.06\% & 89.93\% & 67.97\% \\ \hline 7 & without DA & 86.34\% & 62.19\% & 72.30\% \\ \hline 8 & unfixed \(f\) & 76.52\% & 51.25\% & 61.38\% \\ \hline \end{tabular} \end{table} Table 4. Ablation study of CoLT on Div400. \begin{table} \begin{tabular}{c|c c c} \hline Variant & P@20 & CR@20 & F1@20 \\ \hline All 4 pairs & 85.48\% & **64.16\%** & **73.30\%** \\ Without pair (2) & 85.04\% & 63.78\% & 72.80\% \\ Without pair (4) & **85.72\%** & 62.17\% & 72.07\% \\ \hline \end{tabular} \end{table} Table 5. Effect of pair construction in SCL. by (Wang et al., 2017), and the encoders proposed in GroupViT (Wang et al., 2018). The experimental results are given in Tab. 6, from which we can see that (1) all these pre-trained cross-modal encoders are good at relevance-based retrieval but perform poorly in terms of CR@20; (2) After applying our method CoLT, the diversity score is significantly boosted, with only slight decrease in precision. As a result, superior F1 score is achieved. This validates that CoLT can work well with various feature encoders to boost performance. **Can CoLT flexibly balance accuracy and diversity?** As mentioned above, we can flexibly trade-off the relevance and diversity of the retrieval results without modifying network parameters. This is achieved by controlling the hyper-parameter \(X\). As described in Sec. 3.3, the post-processing algorithm will select \(X\) images from each semantic category to form a retrieval list \(\mathcal{R}\) of length \(k\). Thus, a smaller \(X\) will select fewer images from each category but can include more different categories (estimated by \(\lfloor k/X\rfloor\)), which will benefit the diversity of the retrieval list \(\mathcal{R}\) but may hurt the relevance since classification accuracy on rare semantic categories is poor. On the contrary, a larger \(X\), i.e., selecting more images from each category of common semantics will benefit the accuracy but limit the semantic diversity since fewer categories are exploited. We present the experimental results of how \(X\) impacts performance in Tab. 7. We can see that the best diversity is achieved when \(X=1\) while the best accuracy is obtained when \(X\)=3. This indicates that CoLT can meet various retrieval settings, which demonstrates the flexibility of our approach. In this paper, we set \(X\)=1 by default to obtain the best diversity and \(F1\) score. **Time cost.** We first compare the time cost of our method with that of various SOTA methods. The experimental results are given in Tab. 8. On the one hand, our method CoLT is 2.87\(\times\) faster than the state-of-the-art learning-based method VMIG. On the other hand, our method consumes moderately more time than the post-processing-based methods. For example, CoLT takes 6.23ms more than MMR. This justifies the efficiency of our method. Then, we further check the time cost of each major module in CoLT: the fixed feature encoder \(f\), the visual feature re-encoder \(g\), and TTC \(\phi\). The experimental results are given in Tab. 9. We can see that \(g\) and \(\phi\) incur much less time than the feature encoder \(f\). 
The reason lies in that \(g\) is a simple multi-layer perceptron while \(\phi\) consists of multiple transformer encoder layers that can run in parallel. It is worthy of mentioning that the image features generated by \(f\) and \(g\) can be cached offline in application. Hence, the main cost is from TTC \(\phi\), which is very limited (11.80ms according to Tab. 9). This also verifies the efficiency of our method. **Effect of the parameter \(L\).** Here we study the effect of the number of transformer layers \(L\). On the one hand, a larger \(L\) may result in over-fitting at a higher probability due to the limited training data. On the other hand, a smaller \(L\) cannot fully exploit the potential of TTC. Therefore, we conduct a grid search to determine the value of \(L\). As can be seen in Tab. 10, the best performance is achieved when \(L\)=8. **Effect of the parameter \(N\).** Here we check how the number of images \(N\) fed to the transformer-based token classifier \(\phi\) impacts the performance. Intuitively, a large \(N\) will include images with more semantics. On the other hand, a large \(N\) will introduce more irrelevant images that may make token classification more difficult. On the contrary, a small \(N\) includes less irrelevant images but also fewer semantics. Therefore, both small \(N\) and large \(N\) are not appropriate for TTC. We conduct grid search to determine \(N\) on two datasets. Based on our results, we set \(N\) to 200 for the DIV400 dataset because the F1@20 scores of \(N=150\) and \(N=250\) are 72.10% and 72.13%, which is inferior to that of \(N=200\) where F1@20 is 73.30%. While on the DIV150Cred dataset, the best performance is achieved when \(N=200\) (an \(F1\) of 55.49%) and 250 (an \(F1\) of 55.32%). Ergo, we set this hyper-parameter to 200. ## 5. Conclusion In this paper, we address keyword-based diverse image retrieval and propose a new method called Semantics-aware Classification Transformer (CoLT) to do this task. Different from existing works, CoLT first extracts highly representative images and query features via semantics-aware contrastive learning, then a transformer-based token classifier is employed to fuse these features and subsume them into their appropriate categories. Finally, a post-processing algorithm is applied to flexibly selecting images from each category to form the retrieval results. The advantages of CoLT are four-fold: _high semantic relevance, high semantic diversity, general_ and _easy-to-use_, and _easy-to-control_. Extensive experiments on two datasets Div400 and Div150Cred demonstrate the superiority of our method. ## Acknowledgments Minyi Zhao was supported in part by the 2022 Tencent Rhino-Bird Research Elite Training Program. Shuigeng Zhou was supported by National Key R&D Program of China under grant No. 2021YFC3340302. \begin{table} \begin{tabular}{c|c c c c} \hline \(L\) & P@20 & CR@20 & F1@20 \\ \hline 6 & 85.82\% & 61.34\% & 71.54\% & 8.93 \\ 8 & 85.48\% & 64.16\% & 73.30\% & 11.80 \\ 10 & 85.29\% & 61.67\% & 71.58\% & 18.56 \\ \hline \end{tabular} \end{table} Table 10. The effect of parameter \(L\) in TTC. \begin{table} \begin{tabular}{c|c c c c} \hline \(X\) & P@20 & CR@20 & F1@20 \\ \hline 1 & 85.48\% & 64.16\% & 73.30\% \\ 2 & 88.09\% & 58.54\% & 70.34\% \\ 3 & 88.45\% & 57.28\% & 69.54\% \\ \hline \end{tabular} \end{table} Table 7. Performance vs. the number of images selected from each category. 
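To make the role of \(X\) concrete, below is a minimal Python sketch of the post-processing algorithm \(t\) of Sec. 3.3: starting from the category predicted by TTC for each image token, it keeps \(X\) images per semantic category (covering roughly \(\lfloor k/X\rfloor\) categories) to form the retrieval list \(\mathcal{R}\). The per-image scores, the order in which categories are visited, and the use of index 0 for the "irrelevant" class are illustrative assumptions, not the exact procedure of the paper.

```
from collections import defaultdict

def build_retrieval_list(image_ids, pred_categories, pred_scores, k=20, X=1):
    # Sketch of the post-processing t: keep X images per predicted semantic
    # category (category 0 is treated as "irrelevant") until k images are chosen.
    buckets = defaultdict(list)
    for img_id, cat, score in zip(image_ids, pred_categories, pred_scores):
        if cat != 0:  # discard images predicted as irrelevant
            buckets[cat].append((score, img_id))
    # Assumption: categories are visited in order of their best-scoring image;
    # the paper does not specify how slots are filled if categories run out.
    ordered = sorted(buckets.values(), key=lambda b: -max(s for s, _ in b))
    retrieval = []
    for bucket in ordered:
        bucket.sort(reverse=True)  # best-scoring images of this category first
        retrieval.extend(img_id for _, img_id in bucket[:X])
        if len(retrieval) >= k:
            break
    return retrieval[:k]

# toy usage: four images, two semantic categories, one irrelevant image
print(build_retrieval_list(["a", "b", "c", "d"], [1, 1, 2, 0], [0.9, 0.8, 0.7, 0.99], k=3, X=1))
```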
\begin{table} \begin{tabular}{c|c c c c c} \hline & CLIP & MMR & UMONS & VMIG & CoLT (Ours) \\ \hline Time (ms) & 18.06 & 24.10 & 22.65 & 86.77 & 30.23 \\ \hline \end{tabular} \end{table} Table 8. Time cost comparison with the SOTA methods. We report the result of one time retrieval on a 3090 GPU. \begin{table} \begin{tabular}{c|c c c} \hline & \(f\) & \(g\) & \(\phi\) \\ \hline Time (ms) & 18.06 & 0.37 & 11.80 \\ \hline \end{tabular} \end{table} Table 9. Time cost comparison among major components. We report the result of one time retrieval on a 3090 GPU. \(f\) and \(g\) are tested in a parallel manner.
2307.12069
Some remarks on compositeness of $T^+_{cc}$
Recently the LHCb collaboration found an exotic state $T^+_{cc}$ in the process $pp \to D^0D^0\pi^+ + X$. A key question is whether it is just a molecule or may also contain a confined tetraquark ingredient. To investigate this, different methods are employed, including a two-channel ($D^{*+}D^0$ and $D^{*0}D^+$) K-matrix unitarization and a single-channel Flatt\'e-like parametrization, analysed by the pole counting rule and the spectral density function sum rule. The analysis demonstrates that $T^+_{cc}$ is a molecular state, though a rough analysis of its production rate cannot exclude the possibility that an elementary ingredient exists.
Chang Chen, Ce Meng, Zhiguang Xiao, Han-Qing Zheng
2023-07-22T12:36:56Z
http://arxiv.org/abs/2307.12069v3
# Some remarks on compositeness of \(T_{cc}^{+}\) ###### Abstract Recently LHCb experimental group find an exotic state \(T_{cc}^{+}\) from the process \(p\bar{p}\to D^{0}D^{0}\pi^{+}+X\). A key question is if it is just a molecule or may have confined tetraquark ingredient. To investigate this, different methods are taken, including two channel (\(D^{*+}D^{0}\) and \(D^{*0}D^{+}\)) \(K\)-matrix unitarization and single channel Flatte-like parametrization method analysed by pole counting rule and spectral density function sum rule. It demonstrates that \(T_{cc}^{+}\) is a molecular state, though the possibility that there may exist elementary ingredient can not be excluded, by rough analysis on its production rate. Chang Chen\({}^{\dagger}\), Ce Meng\({}^{\dagger}\), Zhi-Guang Xiao\({}^{\heartsuit,\dagger}\), Han-Qing Zheng\({}^{\heartsuit}\) \({}^{\dagger}\) _Department of Physics, Peking University, Beijing 100871, P. R. China \({}^{\heartsuit}\) College of Physics, Sichuan University, Chengdu, Sichuan 610065, P. R. China_ ## 1 Introduction The LHCb collaboration has found a very narrow peak structure named \(T_{cc}^{+}\) in the \(D^{0}D^{0}\pi^{+}\) invariant mass spectrum, in \(pp\to X+D^{0}D^{0}\pi^{+}\) process [1]. The mass parameters obtained from a generic constant-width Breit-Wigner fit are obtained as \[\delta m_{BW}=-273\pm 61\pm 5^{+11}_{-14}\ {\rm keV}\,\ \Gamma_{BW}=410\pm 165 \pm 43^{+18}_{-38}\ {\rm keV}\] where \(\delta m_{BW}\) defines the mass shift with respect to the \(D^{*+}D^{0}\) threshold. Later it is suggested that \(T_{cc}^{+}\) is more possibly an isoscalar state with spin-parity quantum numbers \(J^{P}=1^{+}\)[2]. The constituent of \(T_{cc}^{+}\) is \(cc\bar{u}\bar{d}\) and there is no annihilated quark pair, similar as \(X_{1}(2900)\) (\(ud\bar{s}\bar{c}\)) [3, 4]. The experimental observation has stimulated a lot of theoretical discussions. First of all, there are some dynamical lattice QCD simulations about double charmed tetraquarks, though not providing a definite conclusion on the existence of the \(T_{cc}^{+}\) state [5, 6]. Recently, based on (2 + 1)-flavor lattice QCD simulations, \(D^{*}D\) system is studied more carefully. It is verified that there is a loosely bound state near threshold (-10 keV) [7]. Many phenomenological studies have also been made. A theoretical prediction is that there may exist a \(cc\bar{u}\bar{d}\) tetraquark with \(J^{P}=1^{+}\) near \(D^{*+}D^{0}\) threshold [8]. Besides, the heavy meson chiral effective field theory (HMChEFT) is used, which considers contact and one pion exchange (OPE) interaction. The analysis prefers that \(T_{cc}^{+}\) state is a molecule of \(D^{*+}D^{0}\) and \(D^{*0}D^{+}\)[9, 10]. The effect of triangle diagram singularity is also evaluated with \(D^{*}D\pi\) interactions. It is found that the contribution is too weak than that of the tree diagram, which suggests that \(T_{cc}^{+}\) is not from triangle singularity [11]. The pole parameters of \(T_{cc}^{+}\) extracted from a simple \(K\)-matrix amplitude are also studied and it is found that \(T_{cc}^{+}\) may originate from a \(D^{*+}D^{0}\) virtual state [12]. The extended chiral lagrangian with \(K\)-matrix unitarity approach is also applied, and it is suggested that vector meson exchanges play a crucial role in forming \(T_{cc}^{+}\) bound state of \(D^{*}D\)[13]. 
In this work, to estimate that whether \(T_{cc}^{+}\) is just a loosely-bound s-wave molecule of \(D^{*}D\) or it contains \(cc\bar{u}\bar{d}\) ingredient, different approaches are used. Firstly we take the approach similar to that of Ref. [13], using two channel (\(D^{*+}D^{0}\) and \(D^{*0}D^{+}\)) \(K\)-matrix unitarization with hidden gauge chiral lagrangian. In reference [13], the authors only consider the vector exchanging diagram contributions and there is no experimental data fitting. In this paper, more complete interactions including pseudoscalar, vector exchanges and \(D^{*}D\) contact terms are introduced and a combined fit of \(DD\pi\), \(D\pi\) and \(DD\) channels is made. It indicates that the vector meson \(\rho\) coupling exchanges really make non-negligible contributions in generating \(T_{cc}^{+}\) resonance comparing with other two interactions. In this scheme, there exists a bound state near the \(D^{*+}D^{0}\) threshold which suggests that \(T_{cc}^{+}\) may be a \(D^{*}D\) molecule. Furthermore, the Flatte-like parametrization is also used. Through a combined fit on three-body and two-body invariant mass spectrum, we find that the result is same based on PCR and spectral density function sum rule calculation [14, 15, 16, 17, 18, 19]: there is only one pole near \(D^{*}D\) threshold and the corresponding \({\cal Z}\simeq 1\). We also try to judge the compositeness of \(T_{cc}^{+}\) by comparing its productivity (\(pp\to T_{cc}^{+}+X\)) with different theoretical estimations. The \(D^{*}D\) molecule productivity should be significantly less than a confined tetraquark productivity [20]. However, according to a rough comparison with \(\Xi_{cc}\) data, the order of magnitude of \(T_{cc}^{+}\) production rate is in between, hence it is also hard to make a judgement. This paper is as follows: Sec. I is the introduction, in Sec. II, the traditional K-matrix unitarization approach with hidden gauge chiral lagrangian in \(s\)-wave approximation is introduced and its numerical fit is shown. In Sec. III, other frameworks are employed to analyse the compositeness of \(T_{cc}^{+}\). Finally, in Sec. IV, a brief conclusion on the structure of \(T_{cc}^{+}\) is made. ## 2 K-matrix unitarity approach A chiral lagrangian with hidden gauge symmetry is often used to describe vector and pseudoscalar meson interactions [21, 22, 23, 24]. Here we list relevant coupling terms \[\begin{split}{\cal L}={\cal L}_{0}-ig{\rm Tr}([P,\partial_{\mu} P]V^{\mu})+ig{\rm Tr}([V^{\nu},\partial_{\mu}V_{\nu}]V^{\mu})\\ -\frac{g^{2}}{2}{\rm Tr}([P,V_{\mu}]^{2})+\frac{g^{2}}{4}{\rm Tr }([V_{\mu},V_{\nu}]^{2})\,\end{split} \tag{1}\] where \({\cal L}_{0}\) is the free lagrangian for pseudoscalar and vector mesons. P and V denote, respectively, properly normalized pseudoscalar and vector meson matrices \[P=\begin{pmatrix}\frac{\eta}{\sqrt{3}}+\frac{\eta^{\prime}}{\sqrt{6}}+\frac{ \pi^{0}}{\sqrt{2}}&\pi^{+}&K^{+}&\bar{D}^{0}\\ \pi^{-}&\frac{\eta}{\sqrt{3}}+\frac{\eta^{\prime}}{\sqrt{6}}-\frac{\pi^{0}}{ \sqrt{2}}&K^{0}&D^{-}\\ K^{-}&\bar{K}^{0}&-\frac{\eta}{\sqrt{3}}+\sqrt{\frac{2}{3}}\eta^{\prime}&D^{-} _{s}\\ D^{0}&D^{+}&D^{+}_{s}&\eta_{c}\end{pmatrix}\, \tag{2}\] \[V=\begin{pmatrix}\frac{\omega}{\sqrt{2}}+\frac{\rho^{0}}{\sqrt{2}}&\rho^{+}&K ^{*+}&\bar{D}^{*0}\\ \rho^{-}&\frac{\omega}{\sqrt{2}}-\frac{\rho^{0}}{\sqrt{2}}&K^{*0}&D^{*-}\\ K^{*-}&\bar{K}^{*0}&\phi&D^{*-}_{s}\\ D^{*0}&D^{*+}&D^{*+}_{s}&J/\psi\end{pmatrix}. \tag{3}\] It needs to point out that, only \(SU(2)\) symmetry really holds. 
That is all couplings constants, appearing in vertices of PPV, VVV and PPVV types, are invariable in isospin space. Different vertices (including \(s\) and \(c\)) should be considered case by case. We also adopt previous theoretical work to estimate these coupling constants. From Eq. (1), we can get the contact, \(t\) and \(u\) channel diagrams about \(D^{*+}D^{0}\to D^{*0}D^{+}\) process. We list their amplitudes successively. First is contact diagrams, \[iM^{c}_{ij}=i\ g^{2}_{D^{*}DD^{*}D}, \tag{4}\] Figure 1: Contact diagrams. where \(i,j=1,2\) refer to \(D^{*+}D^{0}\) and \(D^{*0}D^{+}\) channel, respectively. The coupling \(g_{4D}\) has been estimated when studying \(X(3872)\)[17], that \(g(=g_{D^{*}DD^{*}D})\simeq 16\). The \(t\) channel diagrams include vector meson (\(J/\psi\) or \(\rho\), \(\omega\)) exchanges [13].2 We also neglect the momentum dependence in the denominator of the propagator near the threshold. Here an estimation about coupling constants from the PPV and VVV vertices are given. That is when \(i,j=1,1\) or \(2,2\), the coupling constant \(g(=g_{J/\psi D^{(*)}D^{(*)}})\simeq 7.7\), and when \(i,j=1,2\) or \(2,1\), \(g(=g_{\rho D^{(*)}D^{(*)}})\simeq 3.9\), Which is obtained from the vector meson dominance (VMD) assumption [22]. The \(t\) channel amplitudes are hence written as follows (Fig. 2), Footnote 2: The \(\omega\) and \(\rho^{0}\) exchange diagram have the same coupling constant with opposite signs. So they almost cancel each other. \[iM^{t}_{ij}=iD_{ij}(p_{1}+p_{3})\cdot(p_{2}+p_{4})\ \epsilon(p_{1})\cdot \epsilon^{*}(p_{3}), \tag{5}\] where \[D_{ij}=\left(\begin{array}{cc}\frac{g_{J/\psi D^{(*)}D^{(*)}}^{2}}{M_{J/\psi }^{2}}&\frac{g_{\rho(^{*})D^{(*)}}^{2}}{m_{\rho}^{2}}\\ \frac{g_{\rho D^{(*)}D^{(*)}}^{2}}{m_{\rho}^{2}}&\frac{g_{J/\psi D^{(*)}D^{(*)} }^{2}}{M_{J/\psi}^{2}}\end{array}\right)_{ij}. \tag{6}\] The third type is \(u\) channel diagrams with \(\pi\) exchanges as in Fig. 3, and the amplitudes are \[iM^{u}_{ij}=iE_{ij}g_{\pi DD^{*}}^{2}\frac{\epsilon(p_{1})\cdot(p_{1}-2p_{4}) \ \epsilon^{*}(p_{3})\cdot(p_{3}-2p_{2})}{(p_{1}-p_{4})^{2}-m_{\pi}^{2}}, \tag{7}\] where \[E_{ij}=\left(\begin{array}{cc}-1&\frac{1}{2}\\ \frac{1}{2}&-1\end{array}\right)_{ij}. \tag{8}\] The coupling strength \(g_{\pi DD^{*}}\) can be restricted by the decay process \(D^{*+}\to D^{0}\pi^{+}\), and we take the value \(g(=g_{\pi DD^{*}})\simeq 8.4\)[25]. There also exist \(u\) channel diagrams with pseudo-scalar meson \(\eta_{c}\) exchanges. They are not important at this energy range as tested numerically, so we neglect these diagrams. As for the amplitudes corresponding to Fig. 3, the \(u\) channel exchanging \(\pi\) is somewhat special because it is possible to exchange one on-shell \(\pi\) meson. After partial wave projection, there exists, in tree level amplitudes, an additional cut in the energy region above \(D^{*+}D^{0}\) threshold. Here this singularity will disturb unitarity. To remedy this, we keep the Figure 3: The \(u\) channel diagrams. Figure 2: The \(t\) channel diagrams. relation \(m_{D^{*}}=m_{D}+m_{\pi}\) to keep away from the singularity, similar to Refs. [9, 10]. At last, we get the total couple channel amplitudes \[M_{ij}=M_{ij}^{c}+M_{ij}^{t}+M_{ij}^{u}. \tag{9}\] Furthermore, it can be unified by the relation \[\mathbf{T}^{-1}=\mathbf{K}^{-1}-\mathbf{g}(s), \tag{10}\] where \(\mathbf{T}\) is the unitarized scattering \(T\) matrix, \(\mathbf{K}\) is a two channel scattering amplitude matrix in s-wave approximation [26]. 
And \(\mathbf{g}(s)\equiv\mathrm{diag}\{g_{i}(s)\}\). In our normalization \[g_{i}(s;M_{i},m_{i})=-16\pi i\int\frac{d^{4}q}{(2\pi)^{4}}\frac{1}{(q^{2}-M_{ i}^{2}+i\epsilon)((P-q)^{2}-m_{i}^{2}+i\epsilon)},\ (s=P^{2}) \tag{11}\] where \(M_{i}\) is the vector meson mass and \(m_{i}\) the pseudoscalar meson mass in the \(i\)-th channel. The expression of \(g_{i}(s)\) in Eq. (11) is renormalized using standard \(\overline{\mathrm{MS}}\) scheme, which introduces an explicit renormalization scale (\(\mu\)) dependence. In our fit we choose the same \(\mu\) parameter in two channels. To get a finite width for the \(T_{cc}^{+}\) state below \(D^{*}D\) threshold we need to consider the finite width of the \(D^{*}\) state. This is accomplished by performing a convolution of the \(g_{i}(s)\) functions with the mass distribution of the \(D^{*}\) states [27]: \[S\left(s_{V};M_{V},\Gamma_{V}\right)\equiv-\frac{1}{\pi}\operatorname{Im} \left\{\frac{1}{s_{V}-M_{V}^{2}+iM_{V}\Gamma_{V}}\right\} \tag{12}\] such that \[\tilde{g}_{i}(s;M_{i},m_{i})=\mathcal{C}\int_{s_{Vmin}}^{s_{Vmax}}ds_{V}g_{i}( s;\sqrt{s_{V}},m_{i})S\left(s_{V};M_{i},\Gamma_{i}\right)\, \tag{13}\] where \(\mathcal{C}\) is a normalization factor. The main contribution to this integration is from the region around the unstable mass \(s_{V}\sim M_{V}^{2}\), so we can introduce a cutoff \(s_{Vmin}\) and \(s_{Vmax}\). For example, for \(\tilde{g}_{1}\) it is integrated from \((m_{D_{0}}+m_{\pi^{+}})^{2}\) to \((m_{D^{*+}}+2\Gamma_{D^{*+}})^{2}\) and for \(\tilde{g}_{2}\) it is integrated from \((m_{D_{0}}+m_{\pi^{0}})^{2}\) to \((m_{D^{*0}}+2\Gamma_{D^{*0}})^{2}\). Here we take the decay widths as constants because we only focus on the region near the \(D^{*+}D^{0}\) threshold and it makes little difference neglecting the \(s\) dependence in numerical calculations. The constant decay widths suggested by PDG [28] read \[\Gamma_{D^{*+}}=83.4\ \mathrm{keV},\ \ \Gamma_{D^{*0}}=55.3\ \mathrm{keV}. \tag{14}\] In order to fit the final state three-body invariant mass spectrum of \(D^{0}D^{0}\pi^{+}\), the final-state interaction (FSI) [19] between \(D^{*+}D^{0}\) and/or \(D^{*0}D^{+}\) needs to be considered, before considering the \(D^{*+}\to D^{0}+\pi^{+}\) decay. The FSI amplitude reads \[\mathcal{F}_{D^{*+}D^{0}}(s)=\alpha_{1}\ T_{11}+\alpha_{2}\ T_{21}\, \tag{15}\] where \(\alpha_{1}\), \(\alpha_{2}\) are smooth real polynomials, and since the energy region of interest is very small, we treat them as constant parameters near the thresholds of \(D^{*+}D^{0}\) and \(D^{*0}D^{+}\). Finally, the decay of \(T_{cc}^{+}\to D^{0}D^{0}\pi^{+}\) can therefore be expressed as in Fig. 4. And the final scattering amplitude is written as3 Footnote 3: Analogous equation is used in [13] earlier. The difference is that here the propagator of \(D^{*}\) is written in unitary gauge rather than feynman gauge. Tough these two gauges make little numerical difference near and below \(D^{*+}D^{0}\) threshold region. \[t= \mathcal{F}_{D^{*+}D^{0}}\left[\frac{\epsilon\cdot[(p_{1}-p_{2})+ \frac{m_{\pi^{+}}^{2}-m_{D^{0}}^{2}}{m_{D^{*+}}^{2}}(p_{1}+p_{2})]}{M_{12}^{2} -m_{D^{*+}}^{2}+iM_{12}\Gamma_{D^{*+}}(M_{12})}\right. \tag{16}\] \[\left.+\frac{\epsilon\cdot[(p_{3}-p_{2})+\frac{m_{\pi^{+}}^{2}-m_ {D^{*0}}^{2}}{m_{D^{*+}}^{2}}(p_{3}+p_{2})]}{M_{23}^{2}-m_{D^{*+}}^{2}+iM_{23} \Gamma_{D^{*+}}(M_{23})}\right].\] where \(M_{12}\) and \(M_{23}\) are Dalitz kinematic variables of the final three-body state. 
The corresponding definition is \(M_{ij}^{2}=(p_{i}+p_{j})^{2}\), and \(\epsilon=\epsilon(P)\) corresponds to the polarization vector of \(T_{cc}^{+}\), \(P=(p_{1}+p_{2}+p_{3})\), \(P^{2}=s\). These invariants have the relation that \(M_{12}^{2}+M_{13}^{2}+M_{23}^{2}=P^{2}+p_{1}^{2}+p_{2}^{2}+p_{3}^{2}\). Finally the decay width of \(T_{cc}^{+}\) is given by \[d\Gamma(\sqrt{s})=\frac{\mathcal{N}}{2}\frac{32}{\pi}\frac{1}{s^{3/2}}|t|^{2} ds_{12}ds_{23}. \tag{17}\] The factor \(\frac{1}{2}\) in the above equation comes from averaging the two integrals of \(D^{0}\) in the final state. In order to fit the experimental data, the normalization factor \(\mathcal{N}\) should be introduced. As for the two FSI parameters, \(\alpha_{1}\) can be absorbed in the coefficients \(\mathcal{N}\). So \(\alpha_{1}=1\) is fixed and there remains one free parameter \(\alpha_{2}\). Besides, when we get the yields for the \(D^{0}D^{0}\pi^{+}\) invariant mass spectrum, the resolution function is convoluted \[\text{Yields}(l)=\int_{l-2\sigma}^{l+2\sigma}dl^{\prime}\frac{1}{\sqrt{2\pi \sigma}}\Gamma\left(l^{\prime}\right)\text{e}^{-\frac{(l^{\prime}-l)^{2}}{2 \sigma^{2}}}, \tag{18}\] where \(\sigma=1.05\times 263\)keV [1]. At last, invariant mass distributions for the selected two body (particles 2 and 3 for example) can also be derived as the the following function \[\frac{dBr}{dm_{23}}=\mathcal{N^{\prime}}\int_{m_{D^{0}D^{0}\pi^{+}}}^{m_{max} ^{2}}ds\int ds_{12}|t(s,s_{12},s_{23})|^{2} \tag{19}\] where \(\mathcal{N^{\prime}}\) is another normalization constant, \(m_{23}\) is the invariant mass of particles 2 and 3. The \(T_{cc}^{+}\) energy is integrated from the initial energy \(m_{D^{0}D^{0}\pi^{+}}\) to a cutoff \(m_{max}\).4 Footnote 4: Since \(T_{cc}^{+}\) lies just below the threshold of \(D^{*}D\) with a sharp peak, we can make a rough cutoff about one or two its Breit-Wigner widths above the threshold. The subsequent results are not sensitive to this uncertainty. Data obtained from LHCb collaboration about three body final states \(D^{0}D^{0}\pi^{+}\)[1] and two body invariant mass distributions \(D^{0}\pi^{+}\), \(D^{0}D^{0}\) and \(D^{+}D^{0}\)[2] are used to make a combined fit. The normalization \(\mathcal{N}\), \(\mathcal{N^{\prime}}\), FSI parameter \(\alpha_{2}\) and renormalization scale \(\mu\) are regarded as fit parameters and all coupling constants found in the literature are regarded as fixed parameters (Scheme I). See Fig. 5 for the fit (Scheme I). It is shown that the fit result is very sensitive to the parameter \(\mu\). That is because the peak (\(T_{cc}^{+}\) state) is too narrow, considering that unit of \(\mu\) is GeV but the signal range is in MeV. The discussion above seems to suggest that the fit result prefers a particular choice of parameter \(\mu\). This looks annoying but is not actually a physical problem. That is because in the fit we have fixed all coupling strength constants previously determined. To avoid the problem, we can for example take scheme II that all coupling parameters are regarded as fit parameter, and \(\mu\) is removed by replacing \(g_{i}(s;M_{i},m_{i})\) as \(i\rho_{i}(s;M_{i},m_{i})\) (i.e., on shell approximation). The expression \(\rho_{i}(s;M_{i},m_{i})\) is the two-body phase space factor \[\rho_{i}(s;M_{i},m_{i})=\frac{\sqrt{s-(M_{i}+m_{i})^{2}}\sqrt{s-(M_{i}-m_{i})^{ 2}}}{s}. \tag{20}\] The result is that it can still fit well, just that the coupling \(g_{\rho DD}\) becomes larger, see Table 1. The pole location on the \(s\)-plane is also studied. 
If \(D^{*}\) is taken as a stable particle, Then \(T_{cc}^{+}\) appear as a bound state pole located at \(\sqrt{s}=3.8746\), i.e., about 500keV below \(D^{*+}D^{0}\) threshold (\(\sqrt{s}=3.8751\)). Since there is no nearby accompanying virtual pole, we conclude that, according to the pole counting rule (PCR), \(T_{cc}^{+}\) is a pure molecl instability of \(D^{*}\), the \(D^{*}D\) channel opens at the energy a little bit smaller than \(m_{D^{*}}+m_{D}\) and the decay of \(T_{cc}^{+}\) take place [13]. Furthermore, invariant mass distributions for any two of three final state particles are also taken into consideration. As for \(D^{0}\pi^{+}\) state, which comes from \(D^{*+}D^{0}\), we take \(m_{max}=3.8751\)GeV and it implies only \(T_{cc}^{+}\) below \(D^{*+}D^{0}\) threshold make sense to decay into \(D^{0}\pi^{+}\) (normalization constant \({\cal N}^{\prime}={\cal N}_{\cal D\pi}\)). As for \(D^{0}D^{0}\) and \(D^{+}D^{0}\) two-body final states, we take the same \(m_{max}=3.8751\)GeV (\({\cal N}^{\prime}={\cal N}_{\cal D\cal D}\) here). On the other side, the \(D^{+}D^{0}\) final state, which comes from the \(D^{+}D^{0}\pi^{0}\) final state, is different. Since \(D^{+}D^{0}\) state may come from two channels, \(D^{*+}D^{0}\) and \(D^{*0}D^{+}\), they need to be considered altogether aided by isospin symmetry. Since the threshold of second channel is higher, we take \(m_{max}=3.8766\)GeV, and on account of a symmetry factor \(\frac{1}{2}\) in channel including \(D^{0}D^{0}\), the normalization constant here is doubled (\({\cal N}^{\prime}=2{\cal N}_{\cal D\cal D}\)). The fitting results are plotted in Fig. 7. Both invariant mass spectrums (\(D^{0}D^{0}\)and \(D^{+}D^{0}\)) are with an incoherent background component, parameterised as a product of two-body phase space function \(\Phi_{DD}\) and a linear function. For \(D^{+}D^{0}\) from channel \(T_{cc}^{+}\to D^{*0}D^{+}\to D^{+}D^{0}\pi^{0}/D^{+}D^{0}\gamma\), because the decay channel \(D^{*0}\to D^{0}\gamma\) accounts for 35% of total \(D^{*0}\) decay width, this incoherent background contribution is non-negligible and needs to be counted specially. Here we take this estimation from Ref. [2] directly. Figure 5: \(D^{0}D^{0}\pi^{+}\) final state invariant mass spectrum. The vertical purple dash line indicates the \(D^{*+}D^{0}\) threshold and the green one corresponds to the \(D^{*0}D^{+}\) threshold. Data come from Ref. [1]. Figure 6: The \(D^{0}\pi^{+}\) invariant mass spectrum from three body final state \(D^{0}D^{0}+X\) (Data from Ref. [2]). Finally, the fit parameters are listed in Table 1. ## 3 Other insights on \(T_{cc}^{+}\) In this section, the production of \(T_{cc}^{+}\) in some other methods are also analysed to figure out its compositeness. First of all, a single channel Flatte-like parametrization is used. Like the previous calculation in Sec. II, this process is regarded as a cascade decay. The propagator of \(T_{cc}^{+}\) is approximated by Flatte form. The later propagator of \(D^{*}\) here can only take an simple Breit-Wigner amplitude form, because the energy of this process is near the threshold of \(D^{*}D\) and its range is small enough. Besides, the momentum dependent polynomial in the numerator is also normalized by a constant factor \(\mathcal{N}\) for convenience. Numerical calculations indicate that these approximation can make little difference. 
The total s-wave approximation amplitude about process \(T_{cc}^{+}\to D^{0}D^{0}\pi^{+}\) can be written as \[\begin{split} t=&\frac{1}{s-M^{2}+iM\left(g\rho(s) \right)}\times\\ &\left(\frac{1}{M_{12}^{2}-m_{D^{*+}}^{2}+iM_{12}\Gamma_{D^{*+}}} +\frac{1}{M_{23}^{2}-m_{D^{*+}}^{2}+iM_{23}\Gamma_{D^{*+}}}\right),\end{split} \tag{21}\] \begin{table} \begin{tabular}{l c c} \hline Scheme & I & II \\ \(\chi^{2}/d.o.f\) & 1.16 & 1.06 \\ \hline \(\alpha_{2}\) & \(-0.43\pm 0.10\) & \(-0.48\pm 0.13\) \\ \(\mu\)/GeV & \(1.122\pm 0.001\) & – \\ \(g_{D^{*}DD^{*}D}\) & fixed=16 [17] & \(16.0\pm 7.36\) \\ \(g_{\pi DD^{*}}\) & fixed=8.4 [25] & \(8.4\pm 6.51\) \\ \(g_{\rho D^{(*)}D^{(*)}}\) & fixed=3.9 [22] & \(7.78\pm 1.46\) \\ \(g_{J/\psi D^{(*)}D^{(*)}}\) & fixed=7.7 [22] & \(13.8\pm 9.16\) \\ \hline \end{tabular} \end{table} Table 1: Parameters. Figure 8: Cascade decay Figure 7: \(D^{0}D^{0}\) (\(D^{+}D^{0}\)) invariant mass spectrum from three body final state \(D^{0}D^{0}+X\) (\(D^{+}D^{0}+X\)) [2], where the vertical line of dashes show \(D^{0}D^{0}\) (\(D^{+}D^{0}\)) threshold. where \(g\) presents the coupling strength with only channel \(D^{*}D\). The doubling of the kinetic variables of the \(D^{*}\) propagator is due to the indistinguishability of \(D\pi\) in three-body final state, and the symmetry factor \(\frac{1}{2}\) is absorbed by the total normalization \(\mathcal{N}\). By this parametrization, we make the energy resolution convolution as before and fit the three-body decay width and two-body invariant mass spectrum at the same time using previous Eqs. (17, 19). It is worth pointing out that under normal conditions it will form a divergent peak because of the zero partial decay width. But if we regard \(D^{*}\) as an unstable particle, in other words the amplitude can emerge imaginary part when the energy does not reach the \(D^{*}D\) threshold yet, the peak is not divergent anymore. We can use the same trick as Eq. (13) to treat \(\rho\) in Eq. (21), or more simply take the value \(m_{D^{*}}\) in \(\rho\) with a imaginary part \(\Gamma_{D^{*}}\). The choice of them does not effect the result except for the goodness of fit. Here we take the latter scenario. The united fits result is following. We list the corresponding parameters in Table 3, and the pole structure of the Flatte amplitude is drawn in Fig. 3. Furthermore, according to the Flatte-like parametrization, it is nature to calculate the probability of finding an 'elementary' state in the continuous spectrum by the spectral density function [29] \[\omega(E)=\frac{1}{2\pi}\frac{\tilde{g}\sqrt{2\tilde{M}E}\theta(E)+\tilde{ \Gamma}_{0}}{\left|E-E_{f}+\frac{\mathrm{i}}{2}\tilde{g}\sqrt{2\tilde{M}E}+ \frac{\mathrm{i}}{2}\tilde{\Gamma}_{0}\right|^{2}}, \tag{22}\] \begin{table} \begin{tabular}{c c c} \hline \(\chi^{2}/d.o.f\) & \(g\) & \(M\) \\ \hline 0.81 keV & \(0.075\pm 0.015\) & \(3874.1\pm 0.2\)MeV \\ \hline \end{tabular} \end{table} Table 2: Fit parameter Figure 10: The pole structure of flatte amplitude Figure 9: The three-body and two-body invarint mass spectrum where \(E=\sqrt{s}-m_{D^{*}D}\), \(E_{f}=M-m_{D^{*}D}\), \(\tilde{M}\) is the reduced mass of \(D^{*}D\), \(\theta\) is the step function at threshold and \(\tilde{\Gamma}_{0}\) is the constant partial width for the remaining couplings. By integrating it with a cut off (usually comparable to the total decay width\(\sim\Gamma\)), the possibility of finding an 'elementary' state in the final state is \[\mathcal{Z}=\int_{E_{\rm min}}^{E_{\rm max}}\omega(E){\rm d}E. 
\tag{23}\] Considering that there is no other channels coupling with \(T_{cc}^{+}\) under the \(D^{*}D\) threshold, the \(\tilde{\Gamma}_{0}\) here should be set zero. In this case, the integrating results in different sections are as follows. The result suggests that in a simple single channel Flatte-like parametrization framework, \(T_{cc}^{+}\) is a pure molecular state. This is in agreement with the result of Ref. [30], obtained using effective range expansion approximation. Our result is much more definite than that obtained in Ref. [2]. Furthermore, there are also discussions on the compositeness from the production rate of a particle. There is a cross section relation between confined state \(\Xi_{cc}(ccu/ccd)\)[31] and \(T_{cc}^{+}\). These two experimental data are all collected after 2016, and they are from the same experimental condition, like transverse momentum truncation \(p_{T}\) and luminance \(9fb^{-1}\). After taking the detection efficiency and branching fractions difference [32], there is a rough relation that \[\frac{\sigma(pp\to T_{cc}^{+})}{\sigma(pp\to\Xi_{cc})}\sim\frac{1}{3}\times \frac{1}{10}. \tag{24}\] If we agree that there exists a universal relation between \((Q/QQ)q\) and \((Q/QQ)qq\) productivity in high energy collision [33], where \(Q\) represents heavy quark and \(q\) is light quark, we can get a factor \(1/3\), which means catching two light quarks are always more difficult, i.e., \(\sigma(pp\to\Xi_{cc})\simeq 3\sigma(pp\to(cc\bar{u}\bar{d}))\). So the ratio between observed \(T_{cc}^{+}\) and hypothetical tetraquark cross section is derived \[\frac{\sigma(pp\to T_{cc}^{+})}{\sigma(pp\to(cc\bar{u}\bar{d}))}\sim\frac{1}{ 10}. \tag{25}\] On the other hand, it is natural to estimate the different theoretical cross section orders between 'elementary' and'molecular' picture of \(T_{cc}^{+}\). Thanks to that \(X(3872)\) resonance has analogous characteristics [34, 35](e.g., binding energy and double c quark), there have been some comparisons about these orders of magnitude on \(X(3872)\)[20, 36]. One can borrow the discussions here and it can be estimated that roughly for \(T_{cc}^{+}\) \[\frac{\sigma(pp\to(c\bar{u})(c\bar{d}))}{\sigma(pp\to(cc\bar{u}\bar{d}))} \sim\mathcal{O}(10^{-2})-\mathcal{O}(10^{-3}). \tag{26}\] By comparing Eq. (25) and Eq. (26), the productivity of \(T_{cc}^{+}\) just falls in between two different cases. So using the productivity argument does not provide a clear conclusion on the nature of \(T_{cc}^{+}\). On the contrary, the analysis provided in this paper, e.g., Table 3 clearly indicates the molecular nature of \(T_{cc}^{+}\). ## 4 Summary In this work, we study the nature of \(T_{cc}^{+}\) by different methods. We use the extended hidden local chiral lagrangian with two channel (\(D^{*+}D^{0}\) and \(D^{*0}D^{+}\)) \(K\)-matrix approach to describe the process \(T_{cc}^{+}\to D^{0}D^{0}\pi^{+}\). The three-body and two-body invariant mass spectrum can be fitted well at the same time. Also the numerical fit results reveal that the vector meson exchanges is more important comparing with \(\pi\) exchanges and contact interactions. We also use Flatte \begin{table} \begin{tabular}{c c c} \hline \([M-\Gamma,M+\Gamma]\) & \([M-2\Gamma,M+2\Gamma]\) & \([M-3\Gamma,M+3\Gamma]\) \\ \hline \(\sim\) 0 & \(\sim\) 0 & 0.01 \\ \hline \end{tabular} \end{table} Table 3: Spectral density function integrating \(\mathcal{Z}\) formula to study the problem. 
We conclude that \(T_{cc}^{+}\) is definitely a pure molecular state composed of \(D^{*}D\), in agreement with many of the results found in the literature, but on a much more confident level. _Acknowledgements_ : We would like to thank Hao Chen for a careful reading of the manuscript and very helpful discussions. At last, This work is supported in part by National Nature Science Foundations of China under Contract Numbers 11975028.
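As a numerical illustration of Eqs. (22) and (23), the following short Python sketch integrates the spectral density over windows centred at the pole. The coupling \(\tilde{g}\) is a hypothetical placeholder (the fitted \(g\) of Eq. (21) multiplies a differently normalised phase-space factor), and the pole position and width are only rough values suggested by Table 2, so the sketch reproduces the qualitative outcome \(\mathcal{Z}\simeq 0\) rather than Table 3 exactly.

```
import numpy as np

def omega(E, E_f, g_tilde, mu, Gamma0=0.0):
    # Spectral density of Eq. (22); E is measured from the D*D threshold (GeV).
    k = np.sqrt(2.0 * mu * abs(E))
    open_channel = g_tilde * k if E > 0 else 0.0  # theta(E) switches on the D*D channel
    num = open_channel + Gamma0
    den = abs(E - E_f + 0.5j * open_channel + 0.5j * Gamma0) ** 2
    return num / (2.0 * np.pi * den)

mu = (2.01026 * 1.86484) / (2.01026 + 1.86484)  # reduced D*+ D0 mass (GeV)
E_f = 3.8741 - 3.8751                           # pole below threshold, ~ -1 MeV (cf. Table 2)
g_tilde, Gamma = 0.1, 0.41e-3                   # placeholder coupling; ~ Breit-Wigner width
for n in (1, 2, 3):
    E = np.linspace(E_f - n * Gamma, E_f + n * Gamma, 20001)
    w = np.array([omega(e, E_f, g_tilde, mu) for e in E])
    Z = float(np.sum(0.5 * (w[1:] + w[:-1])) * (E[1] - E[0]))  # trapezoidal Eq. (23)
    print(f"Z over [E_f - {n}Gamma, E_f + {n}Gamma] = {Z:.3f}")
```

With \(\tilde{\Gamma}_{0}=0\) the density vanishes below threshold, so windows lying entirely below the \(D^{*}D\) threshold give \(\mathcal{Z}=0\); only the widest window reaches slightly above threshold and picks up a small nonzero value, in line with Table 3.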
2308.03939
Deterministic Neural Illumination Mapping for Efficient Auto-White Balance Correction
Auto-white balance (AWB) correction is a critical operation in image signal processors for accurate and consistent color correction across various illumination scenarios. This paper presents a novel and efficient AWB correction method that achieves at least 35 times faster processing with equivalent or superior performance on high-resolution images for the current state-of-the-art methods. Inspired by deterministic color style transfer, our approach introduces deterministic illumination color mapping, leveraging learnable projection matrices for both canonical illumination form and AWB-corrected output. It involves feeding high-resolution images and corresponding latent representations into a mapping module to derive a canonical form, followed by another mapping module that maps the pixel values to those for the corrected version. This strategy is designed as resolution-agnostic and also enables seamless integration of any pre-trained AWB network as the backbone. Experimental results confirm the effectiveness of our approach, revealing significant performance improvements and reduced time complexity compared to state-of-the-art methods. Our method provides an efficient deep learning-based AWB correction solution, promising real-time, high-quality color correction for digital imaging applications. Source code is available at https://github.com/birdortyedi/DeNIM/
Furkan Kınlı, Doğa Yılmaz, Barış Özcan, Furkan Kıraç
2023-08-07T22:44:26Z
http://arxiv.org/abs/2308.03939v1
# Deterministic Neural Illumination Mapping for Efficient Auto-White Balance Correction ###### Abstract Auto-white balance (AWB) correction is a critical operation in image signal processors for accurate and consistent color correction across various illumination scenarios. This paper presents a novel and efficient AWB correction method that achieves at least 35 times faster processing with equivalent or superior performance on high-resolution images for the current state-of-the-art methods. Inspired by deterministic color style transfer, our approach introduces deterministic illumination color mapping, leveraging learnable projection matrices for both canonical illumination form and AWB-corrected output. It involves feeding high-resolution images and corresponding latent representations into a mapping module to derive a canonical form, followed by another mapping module that maps the pixel values to those for the corrected version. This strategy is designed as resolution-agnostic and also enables seamless integration of any pre-trained AWB network as the backbone. Experimental results confirm the effectiveness of our approach, revealing significant performance improvements and reduced time complexity compared to state-of-the-art methods. Our method provides an efficient deep learning-based AWB correction solution, promising real-time, high-quality color correction for digital imaging applications. Source code is available at [https://github.com/birdortyedi/DeNIM/](https://github.com/birdortyedi/DeNIM/) ## 1 Introduction In the realm of digital imaging, auto-white balance (AWB) correction is one of the most critical operations in image signal processors (ISPs). The colors presented in the final sRGB image should be somehow aligned with the colors perceived by the human eye. This operation mainly aims to ensure accurate and consistent color correction across a variety of illumination scenarios. Due to the effect of differing light sources in real-world scenarios, which possess continuous range of color temperatures, AWB correction task still remains challenging. Recent studies on AWB correction generally introduce a method to model leading illumination settings and undesired color casts in the scene, and then subsequently adjust the color balance. A number of AWB correction methods have been introduced, which employ various strategies (_e.g.,_ low-level statistical methods, gamut-based methods, and learning-based methods). Earlier studies [12, 11, 18, 42, 24, 14, 29, 39, 38] benefit from low-level statistics of images or patches to infer the illumination, and employ a simple diagonal-based correction matrix [23] of predicted illumination to rectify the color casts in the scene. In addition to low-level statistical methods, gamut-based methods [19, 16, 17, 22] mainly introduce models that aims to learn mappings from the images captured under unknown lighting conditions to the reference colors captured under known lighting conditions. Learning-based methods [9, 20, 10, 21, 26] have become more popular when compared to their ancestors, due to their better capability of representing the illumination in real-world scenarios. With the advancements in computational photography, deep learning-based methods [36, 40, 8, 27, 43, 34, 1, 4, 37, 32, 33] have demonstrated an outstanding performance edge over all previous AWB correction strategies. However, the high computational requirements and significant power demands of these approaches restrict their direct integration within a camera pipeline. 
Especially, the recent approaches suffer from the computational complexity mostly leading to better performance without considering the time efficiency and their practical usage. Addressing this issue, we propose a novel, deep learning-based AWB correction method, which makes the current state-of-the-art methods at least 35 times faster, while delivering equivalent or better performance on high-resolution images. The main contributions of this study can be summarized as follows: * We propose a novel and efficient strategy for AWB correction, which learns deterministic color mappings for both canonical illumination and AWB-corrected forms with the help of learnable projection matrices. * Our design allows the input to be resolution-agnostic and any pre-trained AWB network can be integrated into this design as the backbone network. * We demonstrate that employing deterministic illumination color mapping for AWB correction yields a substantial improvement in the performance of existing state-of-the-art methods, while significantly reducing the time complexity, achieving a speedup of at least 35 times faster. ## 2 Methodology Given a set of high-resolution images with different white balance (WB) settings \(I\), our proposed strategy learns to achieve a deterministic illumination color mapping for efficient AWB correction. Prior works [4, 32] focus on learning the weighting maps for all different WB settings in low-resolution space. Then, they render the AWB-corrected version in high-resolution space by linearly combining images with different WB settings and their corresponding weighting maps. Although this approach can produce quite well outputs, it essentially requires multi-scale inference and smoothing after resizing the weighting maps to the original resolution to significantly improve the results. However, these post-processing steps make this approach challenging to use in practical scenarios. Inspired by deterministic color style transfer [30], we developed an idea of deterministic illumination color mapping for AWB correction. The overall design of our proposed illumination mapping strategy is shown in Figure 2. First, we reduce the resolution of the input images \(I\) to make them compatible with the architectures of prior works (_i.e._, \(256\)px). By using only the encoder part of one of these ar Figure 2: Overall design of our proposed illumination mapping strategy. We first reduce the resolution of the input images to a compatible size with the AWB correction backbone (_i.e._, Mixed WB [4], Style WB [32]). Then, high-resolution input images and the latent representations of low-resolution versions, extracted by AWB correction backbone, are fed into a deterministic color mapping module (DNCM) [30] to obtain a canonical form. Another DNCM module (without fusion capability) takes the canonical form as input and learns to map the pixel values to the ones for AWB corrected version. This strategy ensures that the AWB correction model is resolution-agnostic. chitectures, we feed low-resolution images \(\hat{I}\) into the encoder to extract rich information from different WB settings. To obtain the latent representations, we use \(1\times 1\) convolutional layer followed by GeLU activation [25] to vectorize the feature maps. This provides an image-adaptive color mapping matrix \(d\) for DNCM module [30] to generate a canonical form. 
\[d^{(k\times k)}=V(E(\mathbf{\hat{I}})) \tag{1}\] where \(E\) refers to the AWB encoder (_i.e._, [4] or [32]), \(V\) stands for the vectorization operation by \(1\times 1\) convolutional layer and activation. Note that we use pre-trained weights for \(E\) and freeze its weights during our training. For _DNCM to canonical_ module, the first step involves unfolding high-resolution image \(I\) into a 2D matrix of dimensions (\(HW\times 3N\)), where \(N\) refers to the number of WB settings, \(H\) and \(W\) represent height and width, respectively. Each pixel in \(I\) is then transformed into a \(k\)-dimensional vector using a projection matrix \(P\) (\(3N\times k\)). \(k\) can be any number depending on the computational power, but we set it to \(32\) in our design. The extracted image-adaptive color mapping matrix \(d\) is multiplied with \(k\)-dimensional vector to inject the rich information into the projected space. \(Q\) (\(k\times k\)) and \(R\) (\(k\times 3\)) are the following learnable projection matrices to form the canonical form in this module. We can formulate this module, namely _DNCMc_, as follows \[\small DNCMc(\mathbf{I},d)=\mathbf{I}^{(HW\times 3)}\cdot\mathbf{P}^{(3\times k )}\cdot d^{(k\times k)}\cdot\mathbf{Q}^{(k\times k)}\cdot\mathbf{R}^{(k\times 3)} \tag{2}\] where \(\cdot\) denotes the matrix multiplication. Next, we feed the canonical form into _DNCM to AWB correction_ module (_DNCMa_). It does not have any fusion capability but learns to directly map the pixel values in the canonical form to the correct ones for the AWB version. Each pixel in the canonical form \(I_{c}\) is transformed into a \(k\)-dimensional vector by a projection matrix \(P\) (\(3\times k\)). By using a similar design to _DNCMc_, \(Q\) (\(k\times k\)) and \(R\) (\(k\times 3\)) are responsible for converting the embedded \(k\)-dimensional vector back to the RGB color space, which finally forms the output \(I_{AWB}\). The formal definition of _DNCMa_ can be seen in Equation 3. \[\small DNCMa(\mathbf{I_{c}})=\mathbf{I_{c}}^{(HW\times 3)}\cdot\mathbf{P}^{(3 \times k)}\cdot\mathbf{Q}^{(k\times k)}\cdot\mathbf{R}^{(k\times 3)} \tag{3}\] Apart from the self-supervised learning mechanism for DNCM, followed in [30], the learning objective is to minimize the reconstruction error between the ground truth and the AWB corrected output, as shown in Equation 4. \[\mathcal{L}=||\mathbf{I_{GT}}-\mathbf{I_{AWB}}||_{F}^{2} \tag{4}\] where \(I_{GT}\) and \(I_{AWB}\) denote the ground truth image and the output. To keep the training process simple and tractable, we did not include the smoothing loss [4] or perceptual loss [28] in our final objective function. Our design removes the decoder part that generates the weighting maps in the prior works, and instead, it directly computes the illumination color mapping with two distinct DNCM modules for the canonical form and AWB-corrected version. This design mitigates the need for further post-processing of the weighting maps, which leads to reducing the time complexity without compromising the performance. Moreover, due to the one-by-one pixel value mapping characteristic delivered by matrix multiplications, it gives AWB correction model the ability to be resolution-agnostic. Lastly, any AWB correction method can be easily plugged into this design for extracting rich information in low-resolution space from different WB settings, which makes our design also model-agnostic. 
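As a concrete illustration of Eqs. (1)-(3), here is a minimal PyTorch sketch of the two DNCM modules. The encoder \(E\) and vectorizer \(V\) are replaced by a random placeholder tensor \(d\), and the parameter initialization, names, and toy shapes are assumptions for exposition rather than the released implementation.

```
import torch
import torch.nn as nn

class DNCM(nn.Module):
    # Deterministic color mapping: unfold pixels, project with P, optionally apply
    # the image-adaptive matrix d, then map back to RGB through Q and R (Eqs. 2, 3).
    def __init__(self, in_ch, k=32, use_d=True):
        super().__init__()
        self.use_d = use_d
        self.P = nn.Parameter(torch.randn(in_ch, k) * 0.01)  # (in_ch x k)
        self.Q = nn.Parameter(torch.randn(k, k) * 0.01)      # (k x k)
        self.R = nn.Parameter(torch.randn(k, 3) * 0.01)      # (k x 3)

    def forward(self, img, d=None):
        b, c, h, w = img.shape
        x = img.permute(0, 2, 3, 1).reshape(b, h * w, c)      # unfold to (B, HW, C)
        x = x @ self.P                                        # (B, HW, k)
        if self.use_d and d is not None:
            x = torch.bmm(x, d)                               # image-adaptive (k x k) mapping
        x = x @ self.Q @ self.R                               # back to RGB: (B, HW, 3)
        return x.reshape(b, h, w, 3).permute(0, 3, 1, 2)

# DNCMc fuses N = 5 WB settings (3N = 15 channels) using d predicted from the low-res
# branch; DNCMa maps the canonical 3-channel image to the AWB output. Shapes are toy values.
dncm_c, dncm_a = DNCM(in_ch=15), DNCM(in_ch=3, use_d=False)
imgs_hr = torch.rand(2, 15, 512, 512)   # stacked WB renderings at full resolution
d = torch.rand(2, 32, 32)               # stands in for V(E(low-resolution input)) of Eq. (1)
awb = dncm_a(dncm_c(imgs_hr, d))        # (2, 3, 512, 512); per-pixel, so resolution-agnostic
```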
## 3 Experiments

### Experimental Details

For our training, we have employed the RenderedWB dataset [4], which contains 65,000 sRGB images with pre-defined WB settings and corresponding white-balanced versions, captured by different cameras. Following the experimental setup in the prior works, we have two sets of pre-defined WB settings, which are {t,f,d,c,s} and {t,d,s}. The color temperatures used for the pre-defined WB settings are as follows: Tungsten (t, 2850K), Fluorescent (f, 3800K), Daylight (d, 5500K), Cloudy (c, 6500K), and Shade (s, 7500K). We did not apply any data augmentation technique to the images during our training. For all experiments, we freeze the weights of the AWB backbone and only train the DNCM modules in our proposed strategy from scratch. We set the size of the low-resolution space to \(256\). We used the AdamW optimizer [35] (\(\beta_{1}=0.9\), \(\beta_{2}=0.999\)) with a batch size of 16. The learning rate is set to \(10^{-4}\) and we did not employ any scheduling strategy. We did not apply any post-processing operations after obtaining the output.

### Evaluation

Following the prior works [2, 4, 32], we evaluate the AWB correction quality in terms of the mean-squared error (MSE), mean angular error (MAE) and color difference (\(\Delta\)E 2000). We report the mean, first (Q1), second (Q2), and third (Q3) quantile averages for all metrics. For the qualitative and quantitative evaluation scenarios, we have used three different evaluation sets: Cube+ [7] and MIT-Adobe FiveK [13], along with the night photography rendering set [41]. The Cube+ dataset consists of 1,707 single-illumination color-calibrated images, captured with a Canon EOS 550D camera during various seasons. The MIT-Adobe FiveK dataset comprises 5,000 images captured by different DSLR cameras, with each image manually retouched by multiple experts to correct the white balance.

## 4 Results and Discussion

This section presents a detailed review of the notable findings in our experiments. We primarily focus on three aspects while analyzing the results obtained in our experiments: visual quality, numeric evaluation, and efficiency. Qualitative analysis is conducted by comparing the results obtained by Mixed WB [4], Style WB [32] and our strategy built on top of both methods on the MIT-Adobe FiveK dataset and the night photography rendering set. Following the literature, the evaluation of performance using quantitative metrics, the analysis of model complexity, and the comparison of efficiencies are all conducted using the Cube+ dataset.

**Qualitative analysis:** To use the images in the MIT-Adobe FiveK dataset for our experiments, we first render the linear raw DNG images with different WB settings (_e.g.,_ Daylight, Tungsten, Shade) by using the method presented in [6]. Figure 3 demonstrates the qualitative comparison of our AWB correction results and the prior works' on selected samples from the dataset. The indices of the selected samples in the dataset are as follows: \(323\), \(606\), \(2431\), \(2808\), and \(2838\).

Figure 3: Comparison of the qualitative results of our efficient AWB correction method, namely _DeNIM_, with the prior works on the selected samples from the MIT-Adobe FiveK dataset [13]. We compare our results with Mixed WB [4] and Style WB [32]. Image indices from top to bottom: \(323\), \(606\), \(2431\), \(2808\), \(2838\).

These results indicate that our proposed strategy performs comparably well to the prior works on a per-pixel basis for AWB correction in the sRGB space.
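As a reference for the evaluation protocol above, the three metrics can be computed along the lines of the following sketch. This is our own illustrative implementation (the function names and the use of scikit-image's CIEDE2000 routine are assumptions), not the evaluation code used in [2, 4, 32].

```python
import numpy as np
from skimage.color import rgb2lab, deltaE_ciede2000

def mse(pred: np.ndarray, gt: np.ndarray) -> float:
    """Mean-squared error over all pixels and channels (8-bit sRGB images)."""
    return float(np.mean((pred.astype(np.float64) - gt.astype(np.float64)) ** 2))

def mean_angular_error(pred: np.ndarray, gt: np.ndarray, eps: float = 1e-9) -> float:
    """Mean angle (in degrees) between predicted and ground-truth RGB vectors per pixel."""
    p = pred.reshape(-1, 3).astype(np.float64)
    g = gt.reshape(-1, 3).astype(np.float64)
    cos = np.sum(p * g, axis=1) / (np.linalg.norm(p, axis=1) * np.linalg.norm(g, axis=1) + eps)
    return float(np.degrees(np.arccos(np.clip(cos, -1.0, 1.0))).mean())

def delta_e_2000(pred: np.ndarray, gt: np.ndarray) -> float:
    """Mean CIEDE2000 color difference, computed in CIELAB space."""
    return float(deltaE_ciede2000(rgb2lab(pred / 255.0), rgb2lab(gt / 255.0)).mean())
```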
Utilizing per-pixel color mapping seems to result in color casts that are closer to human perception by more accurately representing the lighting conditions within the scene. Night photography rendering [15, 41] is an emerging topic in digital imaging. Night image capturing poses significant challenges due to its inherent nature, characterized by low light conditions, diverse illuminant sources, and hardware limitations. In night image capturing, AWB correction plays a pivotal role in preserving the realistic perspective of the output, ensuring that it aligns with human perception and avoids distortions. As practiced in [32], we integrate our AWB correction strategy into the camera ISP for processing night images given in the evaluation part of Night Photography Challenge 23' [41]. In our pipeline, we incorporate the same operations, including gamma correction, tone mapping, auto-contrast, and denoising [44], in the same order for all methods, but the only modification made is to the white-balancing strategies. Figure 4 illustrates the rendering results of various camera pipeline variants that encompass the prior works and our proposed strategy as the AWB correction method. The rendering results demonstrate that our strategy effectively produces more natural night images by mitigating undesired color casts commonly encountered in real-world scenarios. **Quantitative evaluation:** The benchmark on single-illuminant Cube+ dataset [13] is presented in Table 1. Following the same experimental setup in the prior works [4, 32], we have used two different patch sizes for the back \begin{table} \begin{tabular}{|l|c|c|c|c|c|c|c|c|c|c|c|c|} \hline \multirow{2}{*}{**Method**} & \multicolumn{4}{c|}{**MSE**} & \multicolumn{4}{c|}{**MAE**} & \multicolumn{4}{c|}{\(\Delta\)**E 2000**} & \multirow{2}{*}{**Size**} \\ \cline{2-2} \cline{4-13} & **Mean** & **Q1** & **Q2** & **Q3** & **Mean** & **Q1** & **Q2** & **Q3** & **Mean** & **Q1** & **Q2** & **Q3** \\ \hline FC4 [27] & 371.90 & 79.15 & 213.41 & 467.33 & 6.49\({}^{\circ}\) & 3.34\({}^{\circ}\) & 5.59\({}^{\circ}\) & 8.59\({}^{\circ}\) & 10.38 & 6.60 & 9.76 & 13.26 & 5.89 MB \\ \hline Quasi-U CC [8] & 292.18 & 15.57 & 55.41 & 261.58 & 6.12\({}^{\circ}\) & 1.95\({}^{\circ}\) & 3.88\({}^{\circ}\) & 8.83\({}^{\circ}\) & 7.25 & 2.89 & 5.21 & 10.37 & 622 MB \\ \hline KNN WB [5] & 194.98 & 27.43 & 57.08 & 118.21 & 4.12\({}^{\circ}\) & 1.96\({}^{\circ}\) & 3.17\({}^{\circ}\) & 5.04\({}^{\circ}\) & 5.68 & 3.22 & 4.61 & 6.70 & 21.8 MB \\ \hline Interactive WB [3] & 159.88 & 21.94 & 54.76 & 125.02 & 4.64\({}^{\circ}\) & 2.12\({}^{\circ}\) & 3.64\({}^{\circ}\) & 5.98\({}^{\circ}\) & 6.20 & 3.28 & 5.17 & 7.45 & **38 KB** \\ \hline Deep WB [2] & 80.46 & 15.43 & 33.88 & 74.42 & 3.45\({}^{\circ}\) & 1.87\({}^{\circ}\) & 2.82\({}^{\circ}\) & 4.26\({}^{\circ}\) & 4.59 & 2.68 & 3.81 & 5.53 & 16.7 MB \\ \hline MIMT [33] & - & - & - & - & - & 2.52\({}^{\circ}\) & 0.98\({}^{\circ}\) & 1.38\({}^{\circ}\) & 2.96\({}^{\circ}\) & 2.88 & 1.94 & 2.42 & 2.87 & - \\ \hline \multicolumn{13}{|c|}{**Vlow WB [4]**} \\ \hline \hline \(p=64\), WB={t,f,d,s} & 168.38 & 8.97 & 19.87 & 105.22 & 4.20\({}^{\circ}\) & 1.39\({}^{\circ}\) & 2.18\({}^{\circ}\) & 5.54\({}^{\circ}\) & 5.03 & 2.07 & 3.12 & 7.19 & 5.09 MB \\ \hline \(p=64\), WB={t,f,d,c,s} & 161.80 & 9.01 & 19.33 & 90.81 & 4.05\({}^{\circ}\) & 1.40\({}^{\circ}\) & 2.12\({}^{\circ}\) & 4.88\({}^{\circ}\) & 4.89 & 2.16 & 3.10 & 6.78 & 5.10 MB \\ \hline \(p=128\), WB={t,f,d,c,s} & 176.38 & 16.96 & 35.91 & 115.50 & 4.71\({}^{\circ}\) & 
2.10\({}^{\circ}\) & 3.09\({}^{\circ}\) & 5.92\({}^{\circ}\) & 5.77 & 3.01 & 4.27 & 7.71 & 5.10 MB \\ \hline \multicolumn{13}{|c|}{**Style WB [32]**} \\ \hline \(p=64\), WB={t,f,d,s} & 92.65 & **6.52** & **14.23** & 35.01 & 2.47\({}^{\circ}\) & 0.82\({}^{\circ}\) & 1.44\({}^{\circ}\) & 2.49\({}^{\circ}\) & 2.99 & **1.36** & 2.04 & 3.32 & 61.0 MB \\ \hline \(p=64\), WB={t,f,d,c,s} & 151.38 & 29.49 & 56.35 & 125.33 & 4.18\({}^{\circ}\) & 2.13\({}^{\circ}\) & 3.03\({}^{\circ}\) & 4.81\({}^{\circ}\) & 5.42 & 3.11 & 4.42 & 6.76 & 61.1 MB \\ \hline \(p=128\), WB={t,f,d,c,s} & 88.03 & 7.92 & 17.73 & 45.01 & 2.61\({}^{\circ}\) & 0.93\({}^{\circ}\) & 1.58\({}^{\circ}\) & 2.85\({}^{\circ}\) & 3.24 & 1.50 & 2.30 & 3.95 & 61.2 MB \\ \hline \(p=128\), WB={t,f,d,c,s} & 100.24 & 10.77 & 37.74 & 70.18 & 3.09\({}^{\circ}\) & 1.15\({}^{\circ}\) & 2.61\({}^{\circ}\) & 3.87\({}^{\circ}\) & 3.96 & 1.59 & 3.55 & 5.51 & 61.3 MB \\ \hline \multicolumn{13}{|c|}{**DeNIM + Mixed WB [4] (ours)**} \\ \hline \(p=64\), WB={t,f,d,s} & 120.14 & 36.39 & 77.40 & 152.96 & 2.57\({}^{\circ}\) & 1.53\({}^{\circ}\) & 2.17\({}^{\circ}\) & 3.19\({}^{\circ}\) & 5.26 & 3.38 & 4.71 & 6.64 & 28.7 MB \\ \hline \(p=64\), WB={t,f,d,c,s} & 129.01 & 14.39 & 27.69 & 57.90 & 2.67\({}^{\circ}\) & 0.99\({}^{\circ}\) & 1.45\({}^{\circ}\) & 2.29\({}^{\circ}\) & 3.96 & 2.10 & 2.85 & 4.24 & 28.7 MB \\ \hline \(p=128\), WB={t,f,d,s} & 158.58 & 60.14 & 115.66 & 198.59 & 4.20\({}^{\circ}\) & 2.38\({}^{\circ}\) & 3.77\({}^{\circ}\) & 5.63\({}^{\circ}\) & 5.69 & 3.91 & 5.41 & 7.10 & 28.8 MB \\ \hline \(p=128\), WB={t,f,d,c,s} & 99.70 & 13.89 & 24.71 & 43.88 & 2.49\({}^{\circ}\) & 1.07\({}^{\circ}\) & 1.62\({}^{\circ}\) & 2.41\({}^{\circ}\) & 3.44 & 1.95 & 2.74 & 3.78 & 28.8 MB \\ \hline \multicolumn{13}{|c|}{**DeNIM + Style WB [32] (ours)**} \\ \hline \hline \multicolumn{13}{|c|}{**65.80**} \\ \hline \(p=64\), WB={t,f,d,c,s bone network (_i.e._, \(64\) and \(128\)), and we designed the input image with two sets of WB settings where the default choices include Tungsten, Daylight, and Shade, while we further incorporate Fluorescent and Cloudy color temperatures to enhance the versatility of the method. The quantitative results indicate that our strategy achieves not only increasing efficiency but also improving performance across all different patch sizes and WB settings, as evidenced by all evaluation metrics. The main observations extracted from these results are as follows: (1) In contrast to the results obtained with Style WB, the best-performing variant appears to be when using a patch size of 64 and incorporating all possible WB settings. This configuration leads to superior performance when compared to other settings. (2) The notable increase in performance, specifically observed on the third quantiles of all evaluation metrics, deserves highlighting. This observation suggests that our strategy can produce more robust results, particularly when dealing with challenging samples. (3) Confirming the findings in [4, 32], we observe that smaller patch sizes tend to lead to better modeling of the illuminant, and in our case, also learning color mappings. (4) We encountered difficulties in identifying a consistent pattern for the mean-squared error (MSE) metric when compared to the other two metrics, and this may suggest that MSE may not adequately capture the quality of color correction achieved by the different methods. 
We believe that this particular metric might not be suitable for accurately measuring the performance of AWB correction.

Figure 4: Comparison of the night photography rendering results of our AWB correction strategy with Mixed WB [4] and Style WB [32] on selected samples from the Night Photography Rendering Challenge 23’ evaluation set [41]. Image indices from top to bottom: \(8678\), \(8210\), \(8817\), \(8894\), \(8941\).

**Efficiency:** The results presented in Table 2 demonstrate the efficiency of our proposed strategy when compared to the prior works across different criteria. Specifically, we evaluated the efficiency based on the following criteria: the processing time (Time (s)), the model complexity in terms of parameter count (Param (M)), and the computational load measured in floating point operations (FLOPS (G)). In terms of processing time, our strategy significantly reduces the time required to process the images for AWB correction. The reduction in processing time is accomplished by designing a model that allows discarding the post-processing operations (_i.e._, multi-scale inference and edge-aware smoothing) and adopting simple learnable projection matrices in place of the decoder. DeNIM shows a remarkable speed advantage, being at least 35 times faster than previous models (and up to 1700 times faster when post-processing is included). Next, the model complexity is an essential factor to consider. DeNIM leads to a slight increase in the number of parameters compared to the prior works, even though it discards the decoder of the baseline models. The reason behind the increased number of parameters lies in the decision to use fully-connected layers as projection matrices, as opposed to the convolutional layers in the decoder. Fully-connected layers require more parameters, due to their dense connections between all input and output neurons. This design choice may have led to a slightly higher model complexity; however, it is important to note that this decision does not significantly impact the processing time. Lastly, we measure the computational load of all methods in terms of FLOPS. Lower FLOPS values imply fewer computational resources required, hence better efficiency. When DeNIM is trained with the Mixed WB backbone, it achieves a remarkable reduction in FLOPS of approximately 97%. Similarly, when trained with the Style WB backbone, the FLOPS are reduced by approximately 65%. This substantial decrease in computational load highlights the remarkable efficiency of our strategy compared to the prior works.

**Limitations:** Although deep-learning-driven AWB methods generally demonstrate significant resilience across various scenarios, there are occasional examples where they yield unsatisfactory results. As shown in Figure 5, AWB correction operations may fail to address unrealistic color casts and produce poor results which do not align with human visual perception. In such cases, our strategy may also be unable to handle the challenges effectively, primarily because it relies on the feature extraction part of the prior models. It may struggle to address certain complex and uncommon scenarios, which leads to sub-optimal results. Moreover, to further investigate the performance in handling more challenging cases, our strategy can be tested on multi-illuminant datasets [4, 31].
By subjecting this strategy to such datasets, we could gain valuable insights into its capabilities and limitations in handling diverse and complex lighting scenarios; we leave this as future work.

## 5 Conclusion

In this paper, we have introduced a novel and efficient deep learning-based AWB correction strategy built on top of the current state-of-the-art methods. This strategy incorporates the idea of deterministic color mapping by leveraging the encoder of existing AWB models and learnable projection matrices. Through extensive experiments, we showed the effectiveness of our strategy, achieving at least 35 times faster processing while surpassing the performance of state-of-the-art methods on high-resolution images. Our research provides a promising solution for real-time, high-quality color correction in practical scenarios, even in digital camera chipsets, addressing the challenges posed by ever-increasing model complexity in the pursuit of better performance.
2308.05390
Product Review Image Ranking for Fashion E-commerce
In a fashion e-commerce platform where customers can't physically examine the products on their own, being able to see other customers' text and image reviews of the product is critical while making purchase decisions. Given the high reliance on these reviews, over the years we have observed customers proactively sharing their reviews. With an increase in the coverage of User Generated Content (UGC), there has been a corresponding increase in the number of customer images. It is thus imperative to display the most relevant images on top as it may influence users' online shopping choices and behavior. In this paper, we propose a simple yet effective training procedure for ranking customer images. We created a dataset consisting of Myntra (A Major Indian Fashion e-commerce company) studio posts and highly engaged (upvotes/downvotes) UGC images as our starting point and used selected distortion techniques on the images of the above dataset to bring their quality at par with those of bad UGC images. We train our network to rank bad-quality images lower than high-quality ones. Our proposed method outperforms the baseline models on two metrics, namely correlation coefficient, and accuracy, by substantial margins.
Sangeet Jaiswal, Dhruv Patel, Sreekanth Vempati, Konduru Saiswaroop
2023-08-10T07:09:13Z
http://arxiv.org/abs/2308.05390v1
# Product Review Image Ranking for Fashion E-commerce

###### Abstract.

In a fashion e-commerce platform where customers can't physically examine the products on their own, being able to see other customers' text and image reviews of the product is critical while making purchase decisions. Given the high reliance on these reviews, over the years we have observed customers proactively sharing their reviews. With an increase in the coverage of User Generated Content (UGC), there has been a corresponding increase in the number of customer images. It is thus imperative to display the most relevant images on top as it may influence users' online shopping choices and behavior. In this paper, we propose a simple yet effective training procedure for ranking customer images. We created a dataset consisting of Myntra (A Major Indian Fashion e-commerce company) studio posts and highly engaged (upvotes/downvotes) UGC images as our starting point and used selected distortion techniques on the images of the above dataset to bring their quality to par with those of bad UGC images. We train our network to rank bad-quality images lower than high-quality ones. Our proposed method outperforms the baseline models on two metrics, namely correlation coefficient, and accuracy, by substantial margins.

Image Aesthetics, Image ranking, Deep Learning, Neural Networks, Pre-trained Models
Our model is trained on image pairs generated in such a way that one image will almost always be superior to the other in quality. We train a multi-layer perceptron to score good images higher than bad ones using a pairwise hinge loss. To generate such a dataset, we make certain assumptions. One such assumption is that a professionally taken image will be better than a user-generated image. Another assumption is that for highly engaged reviews, users also consider the quality of the associated image while deciding whether that review is helpful or not. Our key contributions are:

1. We propose an effective learning scheme by leveraging pretrained models to extract features for image aesthetic assessment in fashion e-commerce without manual annotation.
2. To the best of our knowledge, this is the first attempt at ranking fashion UGC images.

The rest of the paper is organized in the following way. Section 2 gives an overview of the related work. Section 3 explains how we generate a synthetic ranked dataset and our approach to learning the ranking. In Section 4, we explain the experimental setup. Results are given in Section 5. Section 6 concludes the work.

## 2. Related Literature

The recent trends in approaching the Image Aesthetics Assessment (IAA) problem have been based on either regression or classification. Most of these models use the AVA[(19)] or AADB[(14)] datasets to benchmark their performance. Teqing et al.[(30)] consider IAA as a binary classification task where they segregate images based on their Mean Opinion Score (MOS). Images with MOS less than 5 are treated as bad images, and those whose MOS is greater than or equal to 5 are considered good images. They finetune Convolutional Neural Network (CNN) models pre-trained on ImageNet, such as AlexNet and VggNet, to report their accuracies. There are other approaches that deal with the problem of fixed-size image constraints of CNNs [(11; 17; 18; 29)], but eventually they also solve IAA as binary classification. Neural Image Assessment (NIMA) [(28)] introduced a simple strategy. While most of the then-existing approaches were based on predicting the MOS, NIMA predicts the aesthetic rating distribution using a CNN trained with an Earth Mover's Distance (EMD) loss on the human-sourced rating distributions of the AVA dataset. Despite its simple architecture, it achieves results comparable to the state of the art.
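Since the EMD-trained NIMA models serve as our feature extractors below, the following is a minimal PyTorch sketch of the Earth Mover's Distance between discrete rating distributions over ordered score bins; it is a generic illustration under our own naming, not NIMA's released implementation.

```python
import torch

def emd_loss(p: torch.Tensor, q: torch.Tensor, r: int = 2) -> torch.Tensor:
    """EMD between batches of rating distributions of shape (batch, num_bins).

    The bins are assumed to be ordered (e.g. scores 1-10); r = 2 corresponds to
    the squared EMD commonly used to train NIMA-style networks.
    """
    cdf_p = torch.cumsum(p, dim=-1)
    cdf_q = torch.cumsum(q, dim=-1)
    return ((cdf_p - cdf_q).abs() ** r).mean(dim=-1).pow(1.0 / r).mean()
```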
We have adopted NIMA for our experiments. We have used MobileNet[(12)] architecture-based CNNs as our backbone networks to generate image features; these are trained on the AVA and TID2013[(22)] datasets. A task related to IAA is No-Reference Image Quality Assessment (NR-IQA), which assesses the technical quality of an image. Many recent approaches[(2; 13; 28)] make use of labeled data such as TID2013, LIVE[(24)] and CSIQ[(3)] to predict the quality score. Another set of approaches treats this task as a ranking problem and tries to minimize the ranking loss using ground truth labels[(23; 4)]. One of the drawbacks of using deep learning-based NR-IQA methods is the need for a large labeled dataset, which is not available for NR-IQA tasks. The annotation process for IQA image datasets requires multiple human annotations for every image. This process of collecting annotations is very time-consuming and expensive, due to which all the above approaches train shallow networks directly on the dataset. To address this problem, Liu et al., in the RankIQA[(16)] paper, took large numbers of unlabeled high-quality images and applied image distortions to generate a ranking image dataset. For example, given an image, upon the addition of Gaussian blurs of increasing intensities we end up with a set of images which can be ranked easily, as Gaussian blur decreases the image quality. In such datasets, we don't have the absolute aesthetic score of an image, but we certainly know, for a pair of images, which one is of higher quality. This synthetic data generation allowed them to better train a deep network. Subsequently, they trained a Siamese network[(6)] using an efficient ranking loss and further fine-tuned the network on a labeled dataset to achieve better performance on the NR-IQA task. Our approach is inspired by RankIQA; whereas RankIQA distorts the technical aspects of high-quality images by adding Gaussian noise, Gaussian blur, etc. to generate a ranking dataset, we use image manipulation techniques that degrade not just the technical aspects but also other aspects of image quality which we generally encounter in our "bad" UGC images. We have also created a pair-sampling strategy suited to our use case of ranking images; this strategy narrows the scope of learning and provides more consistent training to our network.

## 3. Methodology

Recent IAA methods rely on training a CNN that receives an image as input and generates a score that is higher for an aesthetically superior image. These networks are generally pre-trained on the ImageNet dataset and further trained end-to-end on AVA or TID datasets for image aesthetics or technical assessment. However, such a trained network performs suboptimally in ranking domain-specific images directly, because of the high diversity in the image content of these datasets.

### Data Collection

In the RankIQA[(16)] paper, high-quality images were subjected to different kinds of image manipulation techniques with different control parameters. Applying such techniques to high-quality images ensures that introducing any type of distortion will certainly degrade the aesthetics. Using this approach as a reference point for creating a synthetic dataset, prior to introducing the pertinent distortions, we started off by compiling 19k highly aesthetic images from Myntra Studio, and, to prevent the network from overfitting to a specific type of image creation style, we sampled 16k highly upvoted UGC images drawn from customer reviews.
After introducing distortions to the compiled images, we added another 3.5k highly downvoted UGC images to the resulting dataset.

**Myntra Studio**: Myntra Studio is a platform where fashion influencers post their images/videos wearing products that one can buy from Myntra.

**UGC Images**: On Myntra, a verified buyer can write reviews and can upload images to support their opinion. Any customer can upvote or downvote a review. A sample of representative images from the training set is shown in Figure 1.

### Image Manipulation Techniques

As described in the papers (Kang et al., 2017; Liu et al., 2018), different image manipulation techniques have diverse effects on the manipulated image. For example, in grayscale image conversion, it is difficult to compare the input with the output in terms of aesthetics. But in our case we want to rank such images lower than their colored counterparts, because it will be hard for the customer to make sense of the color of the product in a grayscale image. Likewise, we have identified certain image manipulation techniques that are guaranteed to render a degraded effect on the image quality, as listed in Table 1. We have adopted a variety of techniques to generate synthetic training instances which emulate the low-quality images that we get in our product reviews, including: (1) Vertical and horizontal crop - the location of the subject and object in an image plays an important role in defining its aesthetics, and a partially visible subject affects the aesthetics of an image; for instance, shirt buyers often exclude the portion below the abdominal area while uploading pictures of themselves wearing the shirts they purchased. To mimic this, good-quality images were subjected to cropping (vertical and horizontal). (2) Addition of color jittering by changing the brightness, contrast, and hue based on a scaling factor, to mimic poor lighting conditions. (3) Gaussian blur and Gaussian noise, to add fuzziness and graininess to an image. (4) Grayscale, to mimic one simple basic filter which customers apply. Customers sometimes do use advanced filters, but we are not taking them into account for now. (5) Random rotation and rotation with mixup, applied to achieve a camera-shake effect.

### Ranked Images Generation and Sampling Strategy

In addition to what has already been described in Section 3.1, it is worth mentioning that UGC images were further divided into two classes: one in which the customer is actually wearing a product, and the other being flat shot images. We achieved this using the YOLOv5 (Kang et al., 2017) model to classify images as with/without humans. We segregated them because, in the case of apparel, images showing the product being worn take precedence over standalone images of the purchased products. With all the images, we make the following pairs - \[\{(x,y):x\in D_{+},y\in D_{-}\}\] where \(D_{+}\) and \(D_{-}\) come from Table 2. We sample the pairs defined in the table uniformly. For pair 1, we considered studio images as positive samples and applied the image distortion techniques described in Section 3.2, in random order, to the positive samples to generate corresponding negative samples. An analogous description holds for pair 2. In pairs 3 and 4, we considered studio images as positive samples and UGC images as negative samples. In pair 5, we considered "good" UGC images of users wearing the product as positive samples and "bad" UGC images of users wearing the product as negative samples.
In pair 6 we ranked "good" flat shot images higher than "bad" flat shot images. ### Neural Network Architecture In our experiments, we have used NIMA(Wang et al., 2017) as our reference to extract image features. As described in the NIMA paper, they have trained deep CNNs on two datasets. One network tries to capture the style, content, composition etc. and another network tries to capture the technical quality of an image. We aggregate the features generated from the penultimate layer, i.e. the average pooling layer, which is 1024 dimensions in our case. We have also aggregated the probability distribution of predicted rating as a feature, as described in Figure 3. **NIMA - Aesthetics**: This CNN model is based on MobileNet architecture whose weights are initialized by training on ImageNet dataset and then end-to-end training is performed on AVA1 dataset. The AVA dataset contains 2,55,000 images, rated based on image aesthetics such as style, content, composition etc. by photographers. Each image is roughly rated by 200 people in response to a photography context on a scale of 1-10. This model tries to predict the normalized distribution of ratings for an image. Footnote 1: AVA images are obtained from www.dpchallenge.com, which is an online community for amateur photographers. **NIMA - Technical**: This CNN model is based on MobileNet architecture. It is trained on the TID2013(Wang et al., 2017) dataset. This contains 3000 images which are generated from 25 reference images, and 24 types of distortion with 5 levels of each distortion. Ratings are collected by showing a pair of distorted images for each reference image, and the observer has to select the better one in the pair. Unlike the AVA dataset, TID2013 provides just the mean opinion score and standard deviation. NIMA paper requires training on score probability Figure 1. Sample training set images. The first row represents Studio images, second and third row represents good UGC and bad UGC images, respectively. distribution. The score distribution is approximated through maximum entropy optimization(Beng et al., 2019). We have also taken the height, the width, and the aspect ratio of an image as features. The reason to incorporate them as features is that in NIMA we have to rescale all the images to a fixed size regardless of their original image aspect ratios. The lack of information about the original image size during the training of CNN in NIMA can affect its prediction, as the human rater may not give the same rating to the resized version of the image. Given a pair of images \(I_{1}\) and \(I_{2}\) as input to the NIMA feature extractor, the output feature representation is denoted by \(x_{1}\) and \(x_{2}\) respectively. Now these features will be given as an input to our Siamese network shown in Figure 2. The output is represented by \(f(x;\Theta)\) which is obtained by capturing the output of the last layer. Here \(\Theta\) are the network parameters, here we will use \(y\) to denote the ground truth value for the image. The network output of the final layer is a single scalar. The network is supposed to output higher scores for high-quality images and smaller scores for low-quality images. 
For a pair of images, the ranking loss is defined as - \[L_{rank}=\sum_{i,j}max(0,m-\delta(y_{i}\geq y_{j})(f(x_{i};\Theta)-f(x_{j}; \Theta)))\] where \[\delta(y_{i}\geq y_{j})=\begin{cases}1,\text{ if }y_{i}\geq y_{j}\\ -1,\text{ if }y_{i}<y_{j}.\end{cases}\] where m is the minimal margin denoting the desired difference between the scores generated by the ranking network for a pair of images. ## 4. Experiments ### Implementation Details We train our Siamese network with the image pairs that we generated as described in section 3.3. We have set the hyperparameter m, which describes the minimal margin between the positive and negative image pair, to 1. We have collected 19K images from Myntra studio, 16K highly upvoted UGC images and 3.5K highly downvoted UGC images for generating training image pairs. \begin{table} \begin{tabular}{|l|c|c|} \hline Operations & Parameters & Rationale \\ \hline Random Crop, Vertical Crop, & [0.4,0.6] & Partial Subject \\ Horizontal Crop & & \\ \hline Color Jitter - Brightness, & [0.3,0.6] \(\cup\) [1.2,1.4] & Poor Lighting \\ Contrast and Hue & & \\ \hline Gaussian Blur & [0.8,1.2] & Soft Images \\ \hline Gaussian Noise & [0.2,0.8] & Grainy Images \\ \hline Grayscale & - & Image Filter \\ \hline Random Rotation, & [5,10,15,20] & \\ Random Rotation + Mixup & alpha - [0.2,0.4] & Camera Shake \\ \hline \end{tabular} \end{table} Table 1. Image Manipulation Techniques used in our approach \begin{table} \begin{tabular}{c|c|c} \hline S. No & \(D_{+}\) & \(D_{-}\) \\ \hline 1 & \(D_{studio}\) & \(D_{studio\_distorted}\) \\ 2 & \(D_{age\_good}\) & \(D_{age\_good\_distorted}\) \\ 3 & \(D_{studio}\) & \(D_{age\_good}\) \\ 4 & \(D_{studio}\) & \(D_{age\_bad}\) \\ 5 & \(D_{age\_good\_human}\) & \(D_{age\_bad\_human}\) \\ 6 & \(D_{age\_good\_non\_human}\) & \(D_{age\_bad\_non\_human}\) \\ \hline \end{tabular} \end{table} Table 2. Image Pair Sampling Strategy Figure 3. NIMA Aesthetic and Technical Model Figure 2. Network Architecture For validation, we have kept 1000 images each from the studio, "good" UGC images and "bad" UGC images for 1000 different styles2. Therefore, we have 3 images for each style. We have used accuracy (which is defined in section 5) as the metric on the validation set to select the best model. Footnote 2: At Myrta we use the term style to mean a specific product. We have used pre-trained NIMA feature extractor(Keras et al., 2015) which is implemented in Keras(Keras et al., 2015) and we have converted them into ONNX(Keras et al., 2016), as we do not train these networks. We run this ONNX model using ONNX Runtime(Keras et al., 2016). Our MLP network contains 3 hidden layers and an output layer. The first hidden layer transforms the output from the NIMA feature extractor to 512 dimensions. Subsequent layers transform it to 256 and 128 dimensions, respectively. The network output of the final layer is a scalar value representing the score. During training, input images are rescaled to 224 x 224. We train the network using ADAM optimizer. We have used the default learning rate(\(10^{-3}\)) for fully connected layers and default weight decay regularization of \(5*10^{-4}\). We have also used a learning rate scheduler, which will halve the learning rate if validation accuracy doesn't improve in five consecutive epochs. We have experimented with 16 as a batch size. All our implementation is done in PyTorch(Keras et al., 2016). ## 5. Results To compare our model, we use NIMA models as our baselines. 
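For reference, the scoring head and the pairwise training step described in Sections 3.4 and 4.1 can be sketched as follows. The layer sizes, margin, and optimizer settings follow the text, but the input dimensionality and the replacement of the NIMA feature extractor by placeholder tensors are illustrative assumptions rather than the exact implementation.

```python
import torch
import torch.nn as nn

class ScoringHead(nn.Module):
    """MLP that maps concatenated NIMA features to a scalar quality score."""
    def __init__(self, in_dim: int = 2048 + 20 + 3):
        # 2 x 1024-d penultimate features, 2 x 10-bin rating distributions,
        # plus height, width and aspect ratio (exact dimension is illustrative).
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, 512), nn.ReLU(),
            nn.Linear(512, 256), nn.ReLU(),
            nn.Linear(256, 128), nn.ReLU(),
            nn.Linear(128, 1),
        )

    def forward(self, x):
        return self.net(x).squeeze(-1)

model = ScoringHead()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3, weight_decay=5e-4)
criterion = nn.MarginRankingLoss(margin=1.0)    # max(0, m - (s_pos - s_neg)) for target +1

def train_step(feat_pos: torch.Tensor, feat_neg: torch.Tensor) -> float:
    """One pairwise update on the features of a (higher-quality, lower-quality) image pair."""
    s_pos, s_neg = model(feat_pos), model(feat_neg)
    loss = criterion(s_pos, s_neg, torch.ones_like(s_pos))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

Because both images of a pair are scored by the same `ScoringHead`, the two branches share weights, which is what makes the setup Siamese.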
Both NIMA-Aesthetics and NIMA-Technical predict the probability distribution of the score in the range 1-10, inclusive. We take the expected value of NIMA-X's output on an image as the score of that image by model X. That is,

\[f^{X}(I)=\sum_{i=1}^{10}i\,Pr_{X}(i;I)\]

where X is either A (for Aesthetics) or T (for Technical), and \(Pr_{X}(i;I)\) denotes the probability of score i predicted by NIMA-X. Images with higher predicted scores are ranked higher. Since we do not have ground truth rankings, we take users' ratings as a proxy for the quality of an image. That is, if a particular image \(I\) has \(u\) upvotes and \(d\) downvotes, we assume that the ground truth quality score for that image is \(\frac{u}{u+d}\). To create such a test set, we gathered around 850 images associated with highly engaged reviews from 20 popular styles. None of the images from these 20 styles (highly engaged or not) were kept in our training set or validation set. As mentioned in the introduction, the ratings are given not to the images but to the reviews. However, since these reviews are highly engaged, we assume that raters would have considered the accompanying images while rating the review. To validate this hypothesis, we analyzed our reviews for their engagement and found that reviews with associated images had on average 6.5x the engagement, in terms of upvotes/downvotes, of reviews without images. We also performed a paired t-test and found that this difference was statistically significant. A subset of the images with their computed scores is presented in Figure 4.

We use two metrics to quantify our results. The first one is the Pearson correlation coefficient, which is a common metric to compare the performance of image ranking models on labeled datasets. It is computed as

\[\rho(f,S)=\frac{\sum_{I\in S}(f(I)-\bar{f}(S))(g(I)-\bar{g}(S))}{\sqrt{\sum_{I\in S}(f(I)-\bar{f}(S))^{2}\sum_{I\in S}(g(I)-\bar{g}(S))^{2}}}\]

where \(f\) is either our baseline or our model, \(g\) is the ground truth score function, \(S\) is the set of images belonging to a particular style (i.e., all images in \(S\) belong to the same product), and \(\bar{h}(S)\) is the mean of the scores of the images in \(S\) computed by \(h\). A correlation of 0 implies that the model gives scores that are unrelated to the ground truth scores (i.e., likes), while a positive correlation implies that the model scores highly liked images higher than highly disliked images.

Another metric we use is accuracy. To compute accuracy for a particular style, we randomly (without replacement) pick 50 pairs of images for that style. We compute scores for these pairs using our model or the baselines. If we pick a pair \((I_{1},I_{2})\) and \(g(I_{1})>g(I_{2})\), then the models should produce \(f(I_{1})>f(I_{2})\); otherwise that pair is considered misclassified.

Table 3 compares our approach with our baselines. We report the average of the metrics for all 20 styles. As can be seen in the first two rows of the table, the NIMA models without finetuning, even though trained for aesthetics, output results that are completely uncorrelated with the proxy ground truth.

Figure 4. Sample test set images for a particular style. For each image, the number in the bottom center is the ground truth score for that image.

Note that we did not train our model to predict the proxy scores. We use a pairwise loss function to differentiate between positive and negative images; still, our model has a positive correlation with the proxy ground truth.
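The scoring and evaluation described above can be summarised in the following sketch; it is an illustrative NumPy implementation under our own naming (the pair sampling, in particular, is simplified), not the exact evaluation script.

```python
import numpy as np

def expected_nima_score(prob: np.ndarray) -> float:
    """Expected value of a 10-bin rating distribution over scores 1..10 (f^X(I))."""
    return float(np.dot(np.arange(1, 11), prob))

def proxy_ground_truth(upvotes: int, downvotes: int) -> float:
    """Proxy quality score g(I) = u / (u + d) from review engagement."""
    return upvotes / (upvotes + downvotes)

def pearson_correlation(pred: np.ndarray, gt: np.ndarray) -> float:
    """Pearson correlation between model scores and proxy scores for one style."""
    return float(np.corrcoef(pred, gt)[0, 1])

def pairwise_accuracy(pred: np.ndarray, gt: np.ndarray, n_pairs: int = 50, seed: int = 0) -> float:
    """Fraction of randomly drawn image pairs ranked in the same order as the proxy scores."""
    rng = np.random.default_rng(seed)
    correct = 0
    for _ in range(n_pairs):
        i, j = rng.choice(len(pred), size=2, replace=False)
        correct += int((pred[i] > pred[j]) == (gt[i] > gt[j]))
    return correct / n_pairs
```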
The accuracies of the NIMA models are no better than the accuracy one would obtain by guessing the binary prediction randomly.

## 6. Conclusion and Future Work

This paper presents an effective scheme that leverages existing deep learning models for Image Aesthetic Assessment as feature extractors to fine-tune a Siamese network on synthetic data generated to rank UGC images. We created the dataset by systematically degrading the quality of the studio and "good" UGC images. This was done to emulate the kind of low-quality imagery that we encounter on a routine basis. We have seen that this approach helps in improving the ranking as compared to the baseline NIMA-Aesthetics and NIMA-Technical models. Our technique is not limited to the NIMA feature extractor: it can be replaced with any other feature extractor, e.g. (Krizhevsky et al., 2014; Krizhevsky et al., 2014), trained on image aesthetics assessment datasets (Krizhevsky et al., 2014; Krizhevsky et al., 2014). Extending our existing approach to pre-train a CNN and fine-tune it on our labeled ranked dataset for fashion will be an interesting experiment to perform. We can also leverage this model for thumbnail generation and for ranking catalog and studio images, as well as for prompting customers with feedback about the quality of the images they have taken before they submit them.
2306.11649
Symplectic lattice gauge theories on Grid: approaching the conformal window
Symplectic gauge theories coupled to matter fields lead to symmetry enhancement phenomena that have potential applications in such diverse contexts as composite Higgs, top partial compositeness, strongly interacting dark matter, and dilaton-Higgs models. These theories are also interesting on theoretical grounds, for example in reference to the approach to the large-N limit. A particularly compelling research aim is the determination of the extent of the conformal window in gauge theories with symplectic groups coupled to matter, for different groups and for field content consisting of fermions transforming in different representations. Such determination would have far-reaching implications, but requires overcoming huge technical challenges. Numerical studies based on lattice field theory can provide the quantitative information necessary to this endeavour. We developed new software to implement symplectic groups in the Monte Carlo algorithms within the Grid framework. In this paper, we focus most of our attention on the Sp(4) lattice gauge theory coupled to four (Wilson-Dirac) fermions transforming in the 2-index antisymmetric representation, as a case study. We discuss an extensive catalogue of technical tests of the algorithms and present preliminary measurements to set the stage for future large-scale numerical investigations. We also include the scan of parameter space of all asymptotically free Sp(4) lattice gauge theories coupled to varying number of fermions transforming in the antisymmetric representation.
Ed Bennett, Peter A. Boyle, Luigi Del Debbio, Niccolò Forzano, Deog Ki Hong, Jong-Wan Lee, Julian Lenz, C. -J. David Lin, Biagio Lucini, Alessandro Lupo, Maurizio Piai, Davide Vadacchino
2023-06-20T16:17:33Z
http://arxiv.org/abs/2306.11649v3
# Symplectic lattice gauge theories on Grid: approaching the conformal window ###### Abstract Symplectic gauge theories coupled to matter fields lead to symmetry enhancement phenomena that have potential applications in such diverse contexts as composite Higgs, top partial compositeness, strongly interacting dark matter, and dilaton-Higgs models. These theories are also interesting on theoretical grounds, for example in reference to the approach to the large-\(N\) limit. A particularly compelling research aim is the determination of the extent of the conformal window in gauge theories with symplectic groups coupled to matter, for different groups and for field content consisting of fermions transforming in different representations. Such determination would have far-reaching implications, but requires overcoming huge technical challenges. Numerical studies based on lattice field theory can provide the quantitative information necessary to this endeavour. We developed new software to implement symplectic groups in the Monte Carlo algorithms within the Grid framework. In this paper, we focus most of our attention on the \(Sp(4)\) lattice gauge theory coupled to four (Wilson-Dirac) fermions transforming in the 2-index antisymmetric representation, as a case study. We discuss an extensive catalogue of technical tests of the algorithms and present preliminary measurements to set the stage for future large-scale numerical investigations. We also include the scan of parameter space of all asymptotically free \(Sp(4)\) lattice gauge theories coupled to varying number of fermions transforming in the antisymmetric representation. + Footnote †: preprint: CTP-PTC-23-26 + Footnote †: preprint: CTP-PTC-23-26 + Footnote †: preprint: CTP-PTC-23-26 + Footnote †: preprint: CTP-PTC-23-26 + Footnote †: preprint: CTP-PTC-23-26 + Footnote †: preprint: CTP-PTC-23-26 + Footnote †: preprint: CTP-PTC-23-26 + Footnote †: preprint: CTP-PTC-23-26 + Footnote †: preprint: CTP-PTC-PTC-23-26 + Footnote †: preprint: CTP-PTC-PTC-23-26 + Footnote †: preprint: CTP-PTC-PTC-23-26 + Footnote †: preprint: CTP-PTC-PTC-23-26 + Footnote †: preprint: CTP-PTC-PTC-23-26 + Footnote †: preprint: CTP-PTC-PTC-23-26 + Footnote †: preprint: CTP-PTC-PTC-23-26 + Footnote †: preprint: CTP-PTC-23-26 + Footnote †: preprint: CTP-PTC-PTC-23-26 + Footnote †: preprint: CTP-PTC-PT-C-23-26 + Footnote †: preprint: CTP-PTC-PTC-23-26 + Footnote †: preprint: CTP-PTC-PT-C-23-26 + Footnote †: preprint: CTP-PTC-PT-C-23-26 + Footnote †: preprint: CTP-PTC-PTC-23-26 + Footnote †: preprint: CTP-PTC-PT-C-23-26 + Footnote †: preprint: CTP-PTC-PT-C-23-26 + Footnote †: preprint: CTP-PTC-PT-C-23-26 + Footnote †: preprint: CTP-PTC-PT-C-23-26 + Footnote †: preprint: CTP-PTC-PT-C-23-26 + Footnote †: preprint: CTP-PTC-PT-23-26 + Footnote †: preprint: CTP-PTC-PT-C-23-26 + Footnote †: preprint: CTP-PTC-PT-C-23-26 + Footnote †: preprint: CTP-PTC-PT-C-23-26 + Footnote †: preprint: CTP-PTC-PT-C-23-26 + Footnote †: preprint: CTP-PTC-PT-C-23-26 + Footnote †: preprint: CTP-PTC-PT-C-23-26 + Footnote †: preprint: CTP-PTC-PT-C-23-26 + Footnote †: preprint: CTP-PTC-PT-C-23-26 + Footnote †: preprint: CTP-PTC-PT-C-23-26 + Footnote †: preprint: CTP-PTC-PT-C-23-26 + Footnote †: preprint: CTP-PTC-PT-C-23-26 + Footnote †: preprint: CTP-PTC-PT-C-23-26 + Footnote †: preprint: CTP-PTC-PT-C-23-26 + Footnote †: preprint: preprint: CTP-PTC-PT-C-23-26 + Footnote †: preprint: preprint: CTP-PTC-C-23-26 + Footnote †: preprint: preprint: CTP-PTC-PT-C-23-26 + Footnote †: preprint: CTP-PTC-PT-C-23-26 + 
###### Contents

* I Introduction
* II Gauge theories with symplectic group
* II.1 The conformal window
* II.2 The lattice theory
* III Numerical Implementation: Grid
* III.1 Software development
* III.2 Basic tests of the algorithm
* III.3 More about the Molecular Dynamics
* III.4 Comparing HMC and RHMC
* IV The \(N=2\) lattice Yang-Mills theory
* V The \(N=2\) theories coupled to fermions: bulk phase structure
* V.1 Varying \(N_{\rm as}\)
* VI Scale setting and topology
* VII Summary and outlook
* A Group-theoretical definitions
* B Generators of the algebra in Grid

## I Introduction

Gauge theories with symplectic group, \(Sp(2N)\), in four space-time dimensions have been proposed as the microscopic origin of several new physics models that stand out in the literature for their simplicity and elegance. We list some compelling examples later in this introduction.
Accordingly, lattice field theory methods have been deployed to obtain numerically a first quantitative characterisation of the strongly coupled dynamics of such gauge theories [1; 2; 3; 4; 5; 6; 7; 8; 9; 10; 11; 12; 13; 14; 15; 16; 17; 18; 19]. Different regions of lattice parameter space have been explored; by varying the rank of the group, \(N\), the number, \(N_{\rm f,as}\), and mass, \(m^{t,{\rm as}}\), of (Dirac) fermions transforming in the fundamental (f) and 2-index antisymmetric (as) representation, one can tabulate the properties of these theories. And, after taking infinite volume and continuum limits, the results can be used by model builders, phenomenologists, and field theorists working on potential applications. A prominent role in the recent literature is played by the theory with \(N=2\), \(N_{\rm f}=2\), and \(N_{\rm as}=3\). It gives rise, at low energies, to the effective field theory (EFT) entering the minimal Composite Higgs model (CHM) that is amenable to lattice studies [20],1 and also realises top (partial) compositeness [85] (see also Refs. [86; 87]). It hence provides an economical way of explaining the microscopic origin of the two heaviest particles in the standard model, the Higgs boson and the top quark, singling them out as portals to new physics. Footnote 1: The literature on CHMs in which the Higgs fields emerge as pseudo-Nambu-Goldstone bosons (PNGBs) from the spontaneous breaking of the approximate global symmetries of a new, strongly coupled theory [21; 22; 23], is vast. See, e.g., the reviews in Refs. [24; 25; 26], the summary tables in Refs. [27; 28; 29], and the selection of papers in Refs. [30; 31; 32; 33; 34; 35; 36; 37; 38; 39; 40; 41; 42; 43; 44; 45; 46; 47; 48; 49; 50; 51; 52; 53; 54; 55; 56; 57; 58; 59; 60; 61; 62; 63; 64; 65; 66; 67; 68] and Refs. [69; 70; 71; 72; 73; 74; 75; 76; 77; 78; 79; 80; 81; 82; 83; 84; 85; 86; 87; 88; 89; 90; 91; 92; 93; 94; 95; 96]. The \(Sp(2N)\) gauge theories with \(N_{\rm f}=2\) and \(N_{\rm as}=0\) find application also in the simplest realisations of the strongly interacting massive particle (SIMP) scenario for dark matter [88; 89; 90; 91; 92; 93; 94; 95; 96]. They can address observational puzzles such as the _'core vs. cusp'_[97] and _'too big to fail'_[98] problems. In addition, they might have profound implications in the physics of the early universe and be testable in present and future gravitational wave experiments [99; 100; 101; 102; 103; 104; 105; 106; 107; 108; 109; 110; 111; 112; 113; 114; 115; 116]. This is because they can give rise to a relic stochastic background of gravitational waves [117; 118; 119; 120; 121; 122], that are the current subject of active study [123; 124; 125]. On a more abstract, theoretical side, in \(Sp(2N)\) Yang-Mills theories one can compute numerically the spectra of glueballs and strings [126; 127; 128; 129; 130; 131; 132; 133; 134; 135], as well as the topological charge and susceptibility [136; 137; 138; 139; 140; 141; 142; 143; 144; 145; 146; 147; 148; 149; 150; 151; 152]. This allows for a comparison with other gauge groups (\(SU(N_{c})\) in particular), by means of which to test non-perturbative ideas about field theories and their approach to the large-\(N_{c}\) limit--see, e.g., Refs. [153; 154; 155; 5, 10]. Indeed, even the pioneering lattice study of symplectic theories in Ref. [156] was performed to the purpose of better characterising on general grounds the deconfinement phase transition. 
A special open problem is that of the highly non-trivial determination of the extent of the conformal window in strongly coupled gauge theories with matter field content. It has both theoretical and phenomenological implications, of general interest to model-builders, phenomenologists, and field theorists alike. Particular attention has been so far paid to \(SU(N_{c})\) theories, more than \(Sp(2N)\) (with \(N>1\)) ones. Let us pause and explain what the problem is, on general grounds. Robust perturbation-theory arguments show that if the number of matter fields is large enough--but not so much as to spoil asymptotic freedom--gauge theories can be realised in a conformal phase. This is the case when long distance physics is governed by a fixed point of the renormalisation group (RG) evolution [157; 158], and the fixed point is described by a conformal field theory (CFT). It is reasonable to believe that such fixed points may exist also outside the regime of validity of perturbation theory, when the number of matter fields is smaller. What is the smallest number of fermions for which the theory still admits a fixed point, rather than confining in the infrared (IR), is an open question. While gaining some control over non-perturbative physics is possible in supersymmetric theories (see Ref. [159] and references therein), the non-supersymmetric ones are the subject of a rich and fascinating literature [160; 161; 162; 163; 164; 165; 166; 167; 168], part of which uses perturbative instruments and high-loop expansions [169; 170; 171; 172; 173; 174; 175; 176; 177; 178], but there is no firm agreement on the results--we include a brief overview of work in this direction, in the body of the paper. Knowledge of the extent of the conformal window also has relevant phenomenological implications. Various arguments suggest that at the lower edge of the conformal window, the anomalous dimensions of the CFT operators might be so large as to invalidate naive dimensional analysis (NDA) expectations for the scaling of observable quantities [181; 161]. And it has been speculated that this might affect even confining theories that live outside the conformal window, with applications to technicolor, CHMs, top (partial) compositeness, SIMP dark matter (e.g., see Refs. [182; 183; 184; 185; 186; 187; 188; 189; 190; 191; 192; 193; 194; 195; 196; 197; 20; 20; 20; 201; 202; 203; 204; 205; 206; 207; 208; 209; 210; 211; 212; 213; 214; 215; 216; 217; 218; 219; 223; 224; 225; 226; 227; 228; 229; 230; 231; 232; 233; 234; 235; 236; 237; 238; 239; 240; 241; 242; 243; 244; 245; 246; 247; 248; 249; 250; 251; 252; 253; 254; 255; 256; 257; 258; 259; 260; 261; 262; 263; 264; 265; 266; 267; 268] and references therein). Lattice studies of the extent of the conformal window have mostly focused on \(SU(N_{c})\) groups, with fermion matter in various representations of the gauge group.2 Closely related to these studies is the emergence, in \(SU(3)\) gauge theories with eight (Dirac) fermions transforming in the fundamental representation [241; 242; 243; 244; 245; 246; 247; 248; 249], or (Dirac) fermions transforming in the 2-index symmetric representation [250; 251; 252; 253; 254; 255], of numerical evidence pointing to the existence of a light isosinglet scalar state, that is tempting to identify with the dilaton, the PNGB associated with dilatations. Footnote 2: See for instance the review in Ref. [188], and references therein, in particular Refs. 
[189; 190; 191; 192; 193; 194; 195; 196; 197; 198; 199; 199; 199; 199; 200; 201; 202; 203; 204; 205; 206; 207; 208; 209; 211; 222; 223; 224; 245; 256; 257; 266; 277; 28; 290; 229; 226; 268; 269; 270; 291; 292; 293; 294; 295; 296; 297; 298; 299; 300; 301; 302; 303; 304; 305; 306; 307; 308; 309; 310; 311; 312; 313; 314; 315; 316; 317; 318; 319; 320; 321; 322; 324; 325; 326; 327; 328; 329; 333; 340; 35; 351; 352; 353; 354; 355; 356; 357; 358; 359; 360; 371; 361; 362; 363; 364; 365; 366; 366; 367; 368] and Refs. [69; 90; 919; 191; 1920; 193; 1941; 195; 196; 197; 198; 199; 199; 199; 198; 199; 199; 199; 200; 201; 202; 203; 204; 205; 206; 207; 208; 209; 209; 210; 204; 206; 207; 209; 221; 207; 208; 205; 209; 223; 209; 211; 224; 241; 242; 243; 244; 245; 246; 247; 249; 250; 251; 252; 254; 256; 257; 258; 259; 261; 259; 262; 263; 264; 265; 266; 267; 268]. It has been predicted long ago that a light dilaton should exist in strongly coupled, confining theories living in proximity of the lower end of the conformal window [256; 257; 258], and the EFT description of such state has a remote historical origin [259; 260]. It might have huge consequences in extensions of the standard model [261]. A plethora of phenomenological studies exists on the dilaton (see, for example, Refs. [262; 263; 264; 265; 266; 267; 268; 269; 270; 271; 272; 273] and references therein). The \(SU(3)\) lattice evidence for the existence of this state has triggered renewed interest in the dilaton effective field theory (dEFT), which combines the chiral Lagrangian description of the PNGBs associated with the internal global symmetries of the system, with the additional, light scalar, interpreted as a dilaton [274; 275; 276; 277; 278; 279]. The aforementioned lattice studies of symplectic theories, motivated by CHMs and SIMPs, can be carried out with comparatively modest resources, and using lattices of modest sizes, because they require exploring the intermediate mass range for the mesons in the theory. By contrast, the study of the deep-IR properties of \(Sp(2N)\) gauge theories requires investigating the low mass regime of the fermions, for which one needs lattices and ensembles big enough to overcome potentially large finite size effects and long autocorrelation times. The supercomputing demands (both on hardware and software) of these calculations are such that a new dedicated set of instruments, and a long-term research strategy, is needed to make these investigations feasible. With this paper, we make the first, propaedeutic, technical steps on the path towards determining on the lattice the extent of the conformal window in theories with \(Sp(2N)\) group, for \(N>1\). To this end, we elected to build, test, and make publicly available new software [290], that supplements previous releases of the Grid library [291; 292; 293; 294], by adding to it new functionality specifically designed to handle \(Sp(2N)\) theories with matter fields in multiple representations. The resulting software takes advantage of all the features offered by the modularity and flexibility of Grid, in particular its ability to work both on CPU- as well as GPU-based architectures. We present two types of preliminary results relevant to this broader endeavour: technical tests of the algorithm and of the physics outcomes are supplemented by preliminary analyses, conducted on coarse lattices, of the parameter space of the lattice theory. 
The latter set the stage for future large-scale numerical studies, by identifying the regions of parameter space connected to continuum physics. The former are intended to validate the software, and test its performance for symplectic theories on machines with GPU architecture. Unless otherwise specified, we use the \(Sp(4)\) theory, coupled to \(N_{\rm as}=4\) Wilson-Dirac fermions transforming in the 2-index antisymmetric representation, as a case study. The lessons we learn from the results we report have general validity and applicability. This paper is organised as follows. We start by defining the \(Sp(2N)\) gauge theories of interest in Sect. II, both in the continuum and on the lattice. We also summarise briefly the current understanding of the extent of the conformal window in these theories. Section III discusses the software implementation of \(Sp(2N)\) on Grid, and the basic tests we performed on the algorithm. In Sect. IV we concentrate on lattice theories in which the fermions do not contribute to the dynamics, focusing both on the Yang-Mills theory and the quenched approximation. New results about the bulk structure of all the \(Sp(4)\) theories coupled to (Wilson-Dirac) fermions transforming in the 2-index antisymmetric representation can be found in Sect. V, while Sect. VI discusses scale setting (Wilson flow) and topology. A brief summary and outlook concludes the paper, in Sect. VII. Additional technical details are relegated to the appendix.

## II Gauge theories with symplectic group

The \(Sp(2N)\) continuum field theories of interest (with \(N>1\)), written in Minkowski space with signature mostly '\(-\)', have the following Lagrangian density (we borrow notation and conventions from Ref. [4]):
\[\mathcal{L} = -\frac{1}{2}{\rm Tr}\;G_{\mu\nu}G^{\mu\nu}\,+\,\frac{1}{2}\sum_{i}^{N_{\rm f}}\left(i\overline{Q^{i}}_{\phantom{a}a}\gamma^{\mu}\left(D_{\mu}Q^{i}\right)^{a}\,-\,i\overline{D_{\mu}Q^{i}}_{\phantom{a}a}\gamma^{\mu}Q^{i\,a}\right)\,-\,m^{\rm f}\sum_{i}^{N_{\rm f}}\overline{Q^{i}}_{\phantom{a}a}Q^{i\,a}\,+ \tag{1}\]
\[+\,\frac{1}{2}\sum_{k}^{N_{\rm as}}\left(i\overline{\Psi^{k}}_{\phantom{k}ab}\gamma^{\mu}\left(D_{\mu}\Psi^{k}\right)^{ab}\,-\,i\overline{D_{\mu}\Psi^{k}}_{\phantom{k}ab}\gamma^{\mu}\Psi^{k\,ab}\right)\,-\,m^{\rm as}\sum_{k}^{N_{\rm as}}\overline{\Psi^{k}}_{\phantom{k}ab}\Psi^{k\,ab}\,.\]
The fields \(Q^{i\,a}\), with \(i=1,\,\cdots,\,N_{\rm f}\), are Dirac fermions that transform in the fundamental representation of \(Sp(2N)\), as indicated by the index \(a=1,\,\cdots,\,2N\), while the \(\Psi^{k\,ab}\) ones, with \(k=1,\,\cdots,\,N_{\rm as}\), transform in the 2-index antisymmetric representation of the gauge group. The covariant derivatives are defined by making use of the transformation properties under the action of an element \(U\) of the \(Sp(2N)\) gauge group, according to which
\[Q\to UQ\,,\quad{\rm and}\quad\Psi\to U\Psi U^{\rm T}\,. \tag{2}\]
They can be written in terms of the gauge field \(A_{\mu}\equiv A_{\mu}^{a}t^{a}\), where \(t^{a}\) are the generators of \(Sp(2N)\), normalised so that \(\text{Tr}\ t^{a}t^{b}=\frac{1}{2}\delta^{ab}\), to read as follows:
\[D_{\mu}Q^{i} = \partial_{\mu}Q^{i}\,+\,igA_{\mu}Q^{i}\,, \tag{3}\]
\[D_{\mu}\Psi^{j} = \partial_{\mu}\Psi^{j}\,+\,igA_{\mu}\Psi^{j}\,+\,ig\Psi^{j}A_{\mu}^{\rm T}\,, \tag{4}\]
where \(g\) is the gauge coupling.
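To make the defining conditions concrete, the short sketch below is an illustration only (in Python with NumPy/SciPy; the block form of \(\Omega\) is an assumption made here for definiteness, and this is not the Grid implementation described in Sect. III). It generates an element of the compact group \(Sp(2N)\) by projecting a random matrix onto the algebra and exponentiating, then verifies unitarity, the condition \(U^{\rm T}\Omega U=\Omega\), and the fact that the transformation \(\Psi\to U\Psi U^{\rm T}\) of Eq. (2) preserves antisymmetry.

```python
import numpy as np
from scipy.linalg import expm

def omega(N):
    # Symplectic form in (N x N)-block form: Omega = [[0, 1], [-1, 0]].
    # This block convention is an assumption made for this illustration.
    I = np.eye(N)
    Z = np.zeros((N, N))
    return np.block([[Z, I], [-I, Z]])

def random_sp2n_element(N, rng):
    """Random U in the compact group Sp(2N): take a random complex matrix,
    project onto the algebra (anti-hermitian, X = -Omega X* Omega), exponentiate."""
    Om = omega(N)
    M = rng.normal(size=(2 * N, 2 * N)) + 1j * rng.normal(size=(2 * N, 2 * N))
    A = 0.5 * (M - M.conj().T)            # anti-hermitian part
    X = 0.5 * (A - Om @ A.conj() @ Om)    # symplectic-algebra projection
    return expm(X), Om

rng = np.random.default_rng(0)
N = 2                                      # Sp(4)
U, Om = random_sp2n_element(N, rng)
print(np.allclose(U.conj().T @ U, np.eye(2 * N)))   # U is unitary
print(np.allclose(U.T @ Om @ U, Om))                # U^T Omega U = Omega
Psi = rng.normal(size=(2 * N, 2 * N))
Psi = Psi - Psi.T                                   # antisymmetric matter matrix
UPsiUT = U @ Psi @ U.T
print(np.allclose(UPsiUT.T, -UPsiUT))               # Eq. (2) preserves antisymmetry
```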
The field-strength tensor is given by \[G_{\mu\nu}\ \equiv\ \partial_{\mu}A_{\nu}-\partial_{\nu}A_{\mu}+ig\left[A_{ \mu}\,,\,A_{\nu}\right]\,, \tag{5}\] where \([\cdot\,,\cdot]\) is the commutator. The form of Eq. (1) makes it easy to show that the \(SU(N_{\rm f})_{L}\times SU(N_{\rm f})_{R}\) and \(SU(N_{\rm as})_{L}\times SU(N_{\rm as})_{R}\) global symmetries acting on the flavor indexes of \(Q^{i}\) and \(\Psi^{k}\), respectively, are enhanced to \(SU(2N_{\rm f})\) and \(SU(2N_{\rm as})\)--following the rewriting of Eq. (1) in terms of 2-component fermions, see Refs. [295; 4] for details. The mass terms break the symmetries to the maximal \(Sp(2N_{\rm f})\) and \(SO(2N_{\rm as})\) subgroups. Bilinear fermion condensates arise non-perturbatively, breaking the symmetries according to the same pattern, and hence one expects the presence of \(N_{\rm f}(2N_{\rm f}-1)-1\) PNGBs in the (f) sector (for \(N_{f}>1\)), and \(N_{\rm as}(2N_{\rm as}+1)-1\) in the (as) sector. The main parameters governing the system are hence \(N\), \(N_{\rm f}\), and \(N_{\rm as}\), and in most of the paper we refer to the theory with \(N=2\), \(N_{\rm f}=0\), and \(N_{\rm as}=4\) as a case study. The running coupling, \(g\), obeys a renormalisation group equation (RGE) in which the beta function at the 1-loop order is scheme-independent, \[\beta\ =\ -\frac{g^{3}}{(4\pi)^{2}}b_{1}, \tag{6}\] and is governed by the coefficient \(b_{1}\), which for a non-Abelian theory coupled to Dirac fermions can be written as \[b_{1}=\frac{11}{3}C_{2}(G)-\frac{4}{3}N_{\rm f}\frac{d_{\rm f}}{d_{G}}C_{2}({ \rm f})-\frac{4}{3}N_{\rm as}\frac{d_{\rm as}}{d_{G}}C_{2}({\rm as}) \tag{7}\] and, specifically for \(Sp(2N)\) groups, becomes \[b_{1}=\frac{11}{3}(N+1)-\frac{2}{3}N_{\rm f}-\frac{4}{3}N_{\rm as}\frac{N(2N- 1)-1}{N(2N+1)}N\,. \tag{8}\] The coefficients \(C_{2}(G)\), \(C_{2}({\rm f})\), \(C_{2}({\rm as})\) are quadratic Casimir operators in the adjoint, fundamental and antisymmetric representations, while \(d_{G}\), \(d_{\rm f}\), \(d_{\rm as}\) are the dimensions of these representations, respectively. We restrict attention to asymptotically free theories, for which \(b_{1}\) is positive. For \(Sp(2N)\) theories with \(N_{\rm f}=0\), this requirement sets the upper bound \(N_{\rm as}<\frac{11(N+1)}{4(N-1)}\), which for \(N=2\) yields \(N_{\rm as}<33/4\)--perturbatively, as-type fermions make double the contribution of f-type ones, in \(Sp(4)\). The spectrum of mesons depends on the mass, \(m^{\rm f,as}\), of the fermions, by varying which we can test which of the following three possible classes the theory falls into. 1. The theory confines, similarly to Yang-Mills theories. One expects to find a gapped spectrum, and a set of PNGBs that become parametrically light in respect to other states, when \(m^{\rm f,as}\to 0\). The small mass and momentum regime is described by chiral perturbation theory (\(\chi\)PT) [296; 297; 298; 299]. 2. The theory is IR conformal. In this case, a gap arises only because of the presence of the mass terms, and would disappear into a continuum for \(m^{\rm f,as}\to 0\). The spectrum and spectral density exhibit scaling, in the form described for example in Refs. [300; 301; 302; 303; 194; 304; 305]--see also Ref. [306]. 3. The theory is confining, but has near-conformal dynamics. As in the confining case, when \(m^{\rm f,as}\to 0\) one finds massless PNGBs. 
An additional isosinglet scalar state, the dilaton, is also light, compared to the other mesons, and long distance physics is described by dEFT [274; 275; 276; 277; 278; 279]--see also the discussions in Refs. [307; 308; 309], and references therein. ### The conformal window The three possible classes of gauge theories described above are determined by whether the theory is, respectively, far outside, inside or just outside the boundary of the conformal window. The determination of the conformal window is tantamount to showing the existence of the IR fixed point at non-zero coupling so that the theory is interacting and IR conformal. We provide here some more detail and information about this challenging endeavour and what is known to date, starting from perturbative arguments. The coefficient of the (scheme-independent) 2-loop RG beta function, \(b_{2}\), which is found to be, for generic non-abelian gauge theories, \[b_{2}=\frac{34}{3}C_{2}(G)^{2}-\frac{4}{3}\left(5C_{2}(G)+3C_{2}(\mathrm{f}) \right)\frac{d_{\mathrm{f}}}{d_{G}}C_{2}(\mathrm{f})\mathrm{N_{f}}-\frac{4}{3 }\left(5C_{2}(\mathrm{G})+3C_{2}(\mathrm{as})\right)\frac{\mathrm{d_{\mathrm{as }}}}{\mathrm{d_{G}}}C_{2}(\mathrm{as})\mathrm{N_{as}}\,, \tag{9}\] and for \(Sp(2N)\) groups reduces to \[b_{2}=\frac{34}{3}(N+1)^{2}-\frac{2}{3}N_{\mathrm{f}}\left[5(N+1)+\frac{3}{4} \left(2N+1\right)\right]-\frac{4}{3}N_{\mathrm{as}}\left[3N+5\left(N+1\right) \right]\frac{N(2N-1)-1}{2N+1}\,, \tag{10}\] When \(b_{2}\) is negative, one finds that for a positive and sufficiently small value of \(b_{1}\), a perturbative IR fixed point at coupling \(\alpha_{\mathrm{IR}}\simeq\alpha_{\mathrm{BZ}}=-4\pi b_{1}/b_{2}\ll 1\) arises. This is referred to as a Banks-Zaks (BZ) fixed point [157; 158]. The upper bound of the conformal window therefore coincides with that of asymptotically free theories, given by \(b_{1}=0\). The determination of the lower bound of the conformal window is hindered by the vicinity of the strong coupling regime. To see this, one can fix the value of \(N\) and decrease the number of flavors \(N_{\mathrm{f,as}}\). The coefficient \(b_{2}\) then becomes less negative and eventually approaches zero, while \(b_{1}\) remains finite and positive. Accordingly, the coupling at the (perturbative) BZ fixed point, \(\alpha_{\mathrm{BZ}}\), becomes larger and larger and the perturbative analysis of the \(\beta\) function is no longer reliable. Despite such inherent limitations, several (approximate) analytical methods have been proposed to estimate the critical value \(N_{\mathrm{f,as}}^{\mathrm{cr}}\) corresponding to the lower edge of the conformal window. We now briefly summarise known results, for the theories of interests, that can be used to guide dedicated studies using non-perturbative numerical techniques, such as those based on lattice field theory. Let us start by setting \(N_{\mathrm{f}}=0\) and varying \(N_{\mathrm{as}}\). A naive estimate can be derived by taking the perturbative 2-loop beta function to hold beyond perturbation theory, using it to compute \(N_{\mathrm{as}}^{\mathrm{BZ,\,cr}}\), and assuming that the fixed point disappears when \(\alpha_{\mathrm{BZ}}\to\infty\), or equivalently by looking for solutions of the condition \(b_{2}\to 0\). Doing so yields \(N_{\mathrm{as}}^{\mathrm{BZ,\,cr}}\simeq 3.7\) for \(Sp(4)\). This approach can be systematically improved by including higher-order loops, up to \(\ell_{\mathrm{max}}>2\), in the expansion of the beta function \(\beta(\alpha)\). 
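As a simple cross-check of the numbers quoted in this discussion, the coefficients in Eqs. (8) and (10) can be tabulated directly. The sketch below is a standalone Python illustration, not part of the simulation code: it evaluates \(b_1\), \(b_2\), and the corresponding Banks-Zaks coupling for \(Sp(4)\) with \(N_{\rm f}=0\), recovering the asymptotic-freedom bound \(N_{\rm as}<33/4\) and the sign change of \(b_2\) between \(N_{\rm as}=3\) and \(N_{\rm as}=4\).

```python
import numpy as np

def b1(N, Nf, Nas):
    # Eq. (8): 1-loop coefficient for Sp(2N) with Nf fundamental and
    # Nas 2-index antisymmetric Dirac flavours.
    return (11 / 3 * (N + 1) - 2 / 3 * Nf
            - 4 / 3 * Nas * (N * (2 * N - 1) - 1) / (N * (2 * N + 1)) * N)

def b2(N, Nf, Nas):
    # Eq. (10): 2-loop coefficient.
    return (34 / 3 * (N + 1) ** 2
            - 2 / 3 * Nf * (5 * (N + 1) + 3 / 4 * (2 * N + 1))
            - 4 / 3 * Nas * (3 * N + 5 * (N + 1)) * (N * (2 * N - 1) - 1) / (2 * N + 1))

N, Nf = 2, 0                                                        # Sp(4), no fundamental flavours
print("asymptotic-freedom bound:", 11 * (N + 1) / (4 * (N - 1)))    # 33/4
for Nas in range(1, 9):
    B1, B2 = b1(N, Nf, Nas), b2(N, Nf, Nas)
    alpha_BZ = -4 * np.pi * B1 / B2 if B2 < 0 else np.inf
    print(f"Nas={Nas}: b1={B1:.3f}, b2={B2:.3f}, alpha_BZ={alpha_BZ:.3f}")
# b2 > 0 up to Nas=3 and b2 < 0 from Nas=4 onwards: the naive BZ estimate of the
# lower edge quoted in the text lies between these two values of Nas.
```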
One then seeks values of \(N_{\mathrm{as}}\) for which \(\alpha_{\mathrm{IR}}\to\infty\), with \(\alpha_{\mathrm{IR}}\) determined by solving \(\beta(\alpha)\equiv-2\alpha\sum_{\ell=1}^{\ell_{\mathrm{max}}}b_{\ell}\left( \frac{\alpha}{4\pi}\right)^{\ell}=0\). In particular, one finds \(N_{\mathrm{as}}^{\mathrm{4-loop,\,cr}}\simeq 4.1\) from the perturbative beta function at four loops in the \(\overline{\mathrm{MS}}\)-scheme [310]. It should be noted, however, that the results are affected by uncontrolled systematics, since the coefficients, \(b_{\ell}\), of the beta function, \(\beta(\alpha)\), depend on the renormalisation scheme at three or higher loops, when \(\ell\geq 3\). An alternative approach makes use of the Schwinger-Dyson (SD) equation in the ladder approximation, in which case conformality is assumed to be lost when \(\alpha_{\mathrm{IR}}\equiv\alpha^{\mathrm{cr}}\), with \(\alpha^{\mathrm{cr}}=\pi/3C_{2}(R)\), which yields \(N_{\mathrm{as}}^{\mathrm{SD}}\simeq 6\) for \(Sp(4)\). Going beyond the perturbative coupling expansion, a conjectured all-orders beta function \(\beta^{\mathrm{all-orders}}(\alpha)\)[164], which involves the first two universal coefficients of \(\beta(\alpha)\) and the anomalous dimension of fermion bilinear operator, \(\gamma_{\bar{\psi}\psi}(\alpha)\), has been proposed.3 In this case, the conformal window is determined by solving the condition \(\beta^{\mathrm{all-orders}}=0\) with the physical input for the value of \(\gamma_{\bar{\psi}\psi}\) at the IR fixed point. For \(\gamma_{\bar{\psi}\psi}=1\), one finds \(N_{\mathrm{as}}^{\mathrm{all-orders,\,cr}}\simeq 5.5\) for \(Sp(4)\).4 Footnote 3: A modified version of the all-orders beta function can also be found in Ref. [165]. Footnote 4: This choice for \(\gamma_{\bar{\psi}\psi}\) has been argued to be the critical condition associated with the chiral phase transition through the IR and UV fixed point merger [181]. A less common choice is to set \(\gamma_{\bar{\psi}\psi}=2\), as suggested by unitarity considerations [311]. More recently, the scheme-independent BZ expansion in the small parameter \(\Delta_{N_{\mathrm{as}}}=N_{\mathrm{as}}^{\mathrm{AF}}-N_{\mathrm{as}}^{ \mathrm{IR}}\) has been extensively applied to the determination of physical quantities such as the anomalous dimension, \(\gamma_{\bar{\psi}\psi}\), at the IR fixed point--see Ref. [176] and refs. therein. In Ref. [167], the authors determined the lower edge of the conformal window by imposing the critical condition of \(\gamma_{\bar{\psi}\psi}(2-\gamma_{\bar{\psi}\psi})=1\). This condition is identical to \(\gamma_{\bar{\psi}\psi}=1\) at infinite order, but displays better convergence at finite order in the \(\Delta_{N_{\rm as}}\) expansion. The 4th order calculation yields \(N_{\rm as}^{\gamma_{\rm cc},\,{\rm cr}}\simeq 5.5\) for \(Sp(4)\)[168]. These analytical approaches can be extended to determine the conformal window for the theory containing fermions in the multiple representations, \(\{R_{1},\,R_{2},\,\cdots,\,R_{k}\}\), in which case the upper and lower bounds of the conformal window are described by \((k-1)\)-dimensional hyper-surfaces. For the \(Sp(4)\) theories of interest with \(N_{\rm f}\) Dirac fermions transforming in the fundamental and \(N_{\rm as}\) in the 2-index antisymmetric representation, the results are summarised in Fig. 1.5 The upper bound is determined by the condition \(b_{1}(N_{\rm f},\,N_{\rm as})=0\). The various alternative determinations of the lower bound are estimated as follows. 
The dashed line is obtained by setting \(b_{2}(N_{\rm f},\,N_{\rm as})=0\). The dot-dashed line corresponds to the result of the all-order beta function with the input of \(\gamma_{\bar{\Psi}\Psi}=\gamma_{\bar{Q}Q}=1\). The dotted and solid lines are the results of the SD analysis and the BZ expansion of \(\gamma_{\bar{\Psi}\Psi}\) at the 3rd order in \(\Delta_{N_{\rm f}(n_{\rm av})}\)[179] with the critical conditions applied to the antisymmetric fermions, \(\alpha_{\rm BZ}=\alpha_{\rm as}^{\rm cr}=\pi/3C_{2}({\rm AS})\) and \(\gamma_{\bar{\Psi}\Psi}(2-\gamma_{\bar{\Psi}\Psi})=1\), respectively, as fermions in the higher representation are expected to condense first, resulting in the larger values of \(\alpha^{\rm cr}\) and \(\gamma_{\rm IR}\)[312]. Footnote 5: The figure is basically the same as the analogous one found in Ref. [167], except that the input for the all-orders beta function analysis has been changed to \(\gamma_{\bar{\Psi}\Psi}=\gamma_{\bar{Q}Q}=1\). The parameter space has also been extended and the notation adapted to the conventions of this paper. For the purpose of phenomenological applications, the most interesting physical quantities one would like to determine within the conformal window are the anomalous dimensions of fermion bilinear operators (mesons) and chimera baryon operators. Perturbative calculations of the former are available in the literature, up to the 4th order of the coupling expansion [313] and at the 3rd order of the BZ expansion [179], while that of the latter is only available at the leading order in \(\alpha\)[62]. All of these considerations, summarised in Fig. 1, offer some intuitive guidance for what can be expected, but non-perturbative instruments are needed to test these predictions and put Fig 1 on firmer grounds. Figure 1: Estimates of the extent of the conformal window in \(Sp(4)\) theories coupled to \(N_{\rm f}\) Dirac fermions transforming in the fundamental and \(N_{\rm as}\) in the 2-index antisymmetric representation. The black solid line denotes the upper bound of the conformal window, while different colored and shaped lines denote alternative analytical estimates of the lower bound, obtained with different approximations. The dashed line is obtained by imposing the constraint \(b_{2}(N_{\rm f},\,N_{\rm as})=0\). The dot-dashed line is the result of the all-order beta function with the assumption that the anomalous dimensions of the fermion bilinears are \(\gamma_{\bar{\Psi}\Psi}=\gamma_{\bar{Q}Q}=1\). The dotted line is the result of the SD analysis. The BZ expansion leads to the lower (blue) solid line. Details about these approximations can be found in the main text and in the reference list. ### The lattice theory In presenting the lattice theory, we borrow again notation and conventions from Ref. [9]. The theory is defined on a Euclidean, hypercubic, four-dimensional lattice with spacing \(a\), with \(L/a\) sites in the space directions and \(T/a\) in the time direction. The generic lattice site is denoted as \(x\), and the link in direction \(\mu\) as \((x,\,\mu)\). The total number of sites is thus \(\tilde{V}/a^{4}=T\times L^{3}/a^{4}\). Unless stated otherwise, in the following we set \(L=T\). The action is the sum of two terms \[S\equiv S_{g}+S_{f}\,, \tag{11}\] where \(S_{g}\) and \(S_{f}\) are the gauge and fermion action, respectively. 
The former is the Wilson action, defined as \[S_{g}\equiv\beta\sum_{x}\sum_{\mu<\nu}\left(1-\frac{1}{2N}\text{Re}\,\text{Tr }\,\mathcal{P}_{\mu\nu}(x)\right), \tag{12}\] where \(\mathcal{P}_{\mu\nu}(x)\equiv U_{\mu}(x)U_{\nu}(x+\hat{\mu})U_{\mu}^{\dagger}( x+\hat{\nu})U_{\nu}^{\dagger}(x)\) is known as the _elementary plaquette_ operator, \(U_{\mu}(x)\in Sp(2N)\) is the _link variable_ defined on link \((x,\mu)\), and \(\beta\equiv 4N/g_{0}^{2}\), where \(g_{0}\) is the bare gauge coupling. For the fermions, we adopt the massive Wilson-Dirac action, \[S_{f} \equiv a^{4}\sum_{j=1}^{N_{f}}\sum_{x}\overline{Q}^{j}(x)D_{m}^{( \mathrm{f})}Q^{j}(x)+a^{4}\sum_{j=1}^{N_{\mathrm{av}}}\sum_{x}\overline{\Psi}^ {j}(x)D_{m}^{(\mathrm{as})}\Psi^{j}(x)\,, \tag{13}\] where \(Q^{j}\) and \(\Psi^{j}\) are the fermion fields transforming, respectively, in the fundamental and 2-index antisymmetric representation and \(j\) is a flavor index, while color and spinor indices are omitted for simplicity. The massive Wilson-Dirac operators in Eq. (13) are defined as \[D_{m}^{(\mathrm{f})}Q^{j}(x) \equiv (4/a+m_{0}^{\mathrm{f}})Q^{j}(x)\] \[-\frac{1}{2a}\sum_{\mu}\left\{(1-\gamma_{\mu})U_{\mu}^{(\mathrm{ f})}(x)Q^{j}(x+\hat{\mu})+(1+\gamma_{\mu})U_{\mu}^{(\mathrm{f}),\,\dagger}(x- \hat{\mu})Q^{j}(x-\hat{\mu})\,\right\}\,,\] and \[D_{m}^{(\mathrm{as})}\Psi^{j}(x) \equiv (4/a+m_{0}^{\mathrm{as}})\Psi^{j}(x)\] \[-\frac{1}{2a}\sum_{\mu}\left\{(1-\gamma_{\mu})U_{\mu}^{(\mathrm{ as})}(x)\Psi^{j}(x+\hat{\mu})+(1+\gamma_{\mu})U_{\mu}^{(\mathrm{as}),\,\dagger}(x- \hat{\mu})\Psi^{j}(x-\hat{\mu})\,\right\}\,,\] where \(m_{0}^{\mathrm{f}}\) and \(m_{0}^{\mathrm{as}}\) are the bare fermion masses in the fundamental and 2-index antisymmetric representation, and \(U_{\mu}^{(\mathrm{f})}(x)=U_{\mu}(x)\). The link variables \(U_{\mu}^{(\mathrm{as})}(x)\) are defined as in Ref. [9], as follows: \[U_{\mu,\,(ab)(cd)}^{(\mathrm{as})}=\text{Tr}\left(e^{(ab)\,T}U_{\mu}^{( \mathrm{f})}e^{(cd)}U_{\mu}^{(\mathrm{f})\,T}\right)\,, \tag{16}\] where \(e^{(ab)}\) are the elements of an orthonormal basis in the \((N(2N-1)-1)\)-dimensional space of \(2N\times 2N\) antisymmetric and \(\Omega\)-traceless matrices, and the multi-indices \((ab)\) run over the values \(1\leq a<b\leq 2N\). The entry \(ij\) of each element of the basis is defined as follows. For \(b\neq N+a\), \[e_{ij}^{(ab)}\equiv\frac{1}{\sqrt{2}}\left(\delta_{aj}\delta_{bi}-\delta_{ai} \delta_{bj}\right)\,, \tag{17}\] while for \(b=N+a\) and \(2\leq a\leq N\), \[e_{i,i+N}^{(ab)}=-e_{i+N,i}^{(ab)}\equiv\begin{cases}\frac{1}{\sqrt{2a(a-1)}} \;,&\text{for}\;\;i<a\,,\\ \frac{1-a}{\sqrt{2a(a-1)}}\;,&\text{for}\;\;i=a\,.\end{cases} \tag{18}\] It is easy to verify that each element of this basis satisfies the \(\Omega\)-traceless condition \(\text{Tr}(e^{(ab)}\Omega)=0\), where the symplectic matrix \(\Omega\) is defined in Eq. (14). Finally, we impose periodic boundary conditions on the lattice for the link variables, while for the fermions we impose periodic boundary conditions along the space-like directions, and anti-periodic boundary conditions along the time-like direction. ## III Numerical implementation: Grid Our numerical studies are performed using Grid: a high level, architecture-independent, C++ software library for lattice gauge theories. The portability of its single source-code across the many architectures that characterise the exascale platform landscape makes it an ideal tool for a long-term computational strategy. See Refs. 
[29; 21; 22; 23], for technical specifications and a description of its features. Grid has already been used to study theories based on \(SU(N_{c})\) gauge groups with \(N_{c}\geq 3\), and fermions in multiple representations, see Refs. [314; 315], for instance. In this section, we describe the changes that have been implemented in Grid in order to enable the sampling of \(Sp(2N)\) gauge field configurations. With the aim of including dynamical fermions in future explorations of \(Sp(2N)\) gauge theories, we focused our efforts6 on the Hybrid Monte Carlo (HMC) algorithm and on its variation, the Rational HMC (RHMC), used whenever the number of fermion species is odd. See Ref. [325] and Sect. IIIB of Ref. [190] for useful technical details. Footnote 6: An implementation of the Cabibbo-Marinari method [316] for pure gauge theories would be useful to explore general \(Sp(2N)\) theories and extrapolate to the large-\(N_{c}\) limit. We postpone this task to future work. The (R)HMC algorithms generate a Markov chain of gauge configurations distributed as required by the lattice action described in Sect. II.2. The ideas underpinning these two algorithms can be summarized as follows. Firstly, bosonic degrees of freedom \(\phi\) and \(\phi^{\dagger}\), known as pseudofermions, are introduced replacing a generic number \(n_{f}\) of fermions. Powers of the determinant of the hermitian Dirac operator, \(Q_{m}^{R}=\gamma_{5}D_{m}^{R}\), in representation \(R\) can then be expressed as \[(\det D_{m}^{R})^{n_{f}}=(\det Q_{m}^{R})^{n_{f}}=\int\mathcal{D}\phi \mathcal{D}\phi^{\dagger}e^{-a^{4}\sum_{x}\phi^{\dagger}(x)(Q_{m}^{2})^{-n_{f }/2}\phi(x)}\, \tag{19}\] where flavor and color indices of \(\phi\) and \(\phi^{\dagger}\) have been suppressed for simplicity. For odd values of \(n_{f}\), the rational approximation is used to compute odd powers of the determinant above, resulting in the RHMC. Second, a fictitious classical system is defined, with canonical coordinates given by the elementary links and Lie-algebra-valued conjugate momenta \(\pi(x,\,\mu)=\pi^{a}(x,\,\mu)\,t^{a}\), where \(t^{a}\) are the generators of the \(\mathfrak{sp}(2N)\) algebra in the fundamental representation. The fictitious hamiltonian is \[H=\frac{1}{2}\sum_{x,\mu,a}\pi^{a}(x,\,\mu)\pi^{a}(x,\,\mu)+H_{g}+H_{f}\,, \tag{20}\] where \(H_{g}=S_{g}\) and \(H_{f}=S_{f}\). The molecular dynamics (MD) evolution in fictitious time \(\tau\) is dictated by \[\frac{\mathrm{d}U_{\mu}(x)}{\mathrm{d}\tau}=\pi(x,\,\mu)U_{\mu}(x)\,\quad\frac{\mathrm{d}\pi(x,\,\mu)}{\mathrm{d}\tau}=F(x,\mu)\,, \tag{21}\] where \(F(x,\mu)\), known as the HMC force, is defined on the Lie algebra \(\mathfrak{sp}(2N)\), and can be expressed as \(F(x,\,\mu)=F_{g}(x,\,\mu)+F_{f}(x,\,\mu)\). The detailed form for \(F_{g}(x,\,\mu)\) and \(F_{f}(x,\,\mu)\), the gauge and fermion force, respectively, can be found in Section IIIA of Ref. [190]. Numerical integration of the MD equations thus leads to a new configuration of the gauge field, which is then accepted or rejected according to a Metropolis test. The update process can hence be described as follows: * pseudofermions distributed according to the integrand in Eq. (19) are generated with the Heat Bath algorithm, * starting with Gaussian random conjugate momenta, the MD equations in Eqs. (21) are integrated numerically, * the resulting gauge configuration is accepted or rejected by a Metropolis test. 
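To fix ideas, the three steps listed above can be illustrated on a trivial single-variable system. The following toy sketch (in Python, with a plain leapfrog integrator rather than the Omelyan integrator and the gauge and pseudofermion forces used in the actual runs) performs the momentum refresh, the MD integration, and the Metropolis test, and checks the Creutz equality \(\langle e^{-\Delta H}\rangle=1\) that we test for the full theory in Sect. III.2.

```python
import numpy as np

rng = np.random.default_rng(7)

def S(phi):            # toy "action": a single Gaussian mode
    return 0.5 * phi ** 2

def dSdphi(phi):       # its derivative, playing the role of the HMC force
    return phi

def hmc_trajectory(phi, n_steps=10, tau=1.0):
    """One HMC update: momentum heat bath, leapfrog MD, Metropolis accept/reject."""
    dt = tau / n_steps
    pi = rng.normal()                        # Gaussian momentum refresh
    H_old = 0.5 * pi ** 2 + S(phi)
    phi_new = phi
    pi -= 0.5 * dt * dSdphi(phi_new)         # opening leapfrog half-step
    for _ in range(n_steps - 1):
        phi_new += dt * pi
        pi -= dt * dSdphi(phi_new)
    phi_new += dt * pi
    pi -= 0.5 * dt * dSdphi(phi_new)         # closing half-step
    dH = 0.5 * pi ** 2 + S(phi_new) - H_old
    accept = rng.random() < np.exp(-dH)      # Metropolis test
    return (phi_new if accept else phi), dH

phi, dHs = 0.0, []
for _ in range(20000):
    phi, dH = hmc_trajectory(phi)
    dHs.append(dH)
print("<exp(-dH)> =", np.mean(np.exp(-np.array(dHs))))   # Creutz equality: ~1
```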
In this section we provide details on the implementation of the operations listed above, focusing on the alterations made to the pre-existing structure of the code designed for \(SU(N_{c})\) gauge theories. We then describe and carry out three types of technical checks, following Ref. [190]. We test the behaviour of the HMC and RHMC algorithms. We produce illustrative examples of the behaviour of the molecular dynamics (MD). Finally, we carry out a comparison between HMC and RHMC algorithms. The purpose of these tests is to verify that the dynamics is implemented correctly. ### Software development As in the case for the pre-existing routines handling the theories with gauge group \(SU(N_{c})\), our implementation of \(Sp(2N)\) allows for a generic number of colors. The starting point of the MD is the generation of random Lie-algebra-valued conjugate momenta. The generators of the \(\mathfrak{sp}(2N)\) Lie Algebra in the fundamental representation, as they appear in Grid, are provided by the relations described in Appendix B, where conventions for their normalisation are also established. Generators in higher representations of the gauge group can be derived from the fundamental ones (see Refs. [190; 314], for details). In particular, the generators of the algebra of \(Sp(2N)\) in the antisymmetric representation can be obtained from the definition in Eq. (16), by Taylor expanding to first order around the unit transformation, \[(t_{\text{as}}^{a})_{(ab)(cd)}=\text{Tr}\left(e^{(ab)\,T}t_{\text{f}}^{a}e^{( cd)}+e^{(ab)\,T}e^{(cd)}t_{\text{f}}^{a\,T}\right)\;. \tag{22}\] In the numerical integration of Eq. (21), it is required to project the HMC force on the Lie algebra of the gauge group. In Grid, the embedding of the force-projection within the integrator requires the forces to be anti-hermitian. Hence, a projection operation to the matrices of the algebra \(\mathfrak{sp}(2N)\) must be defined. This can be done in analogy with the projection to \(\mathfrak{su}(N_{c})\), defined for a generic matrix \(M\) as \[P_{\text{tr}}P_{\text{aH}}M\;, \tag{23}\] where \(P_{\text{tr}}M\equiv M-\mathbb{1}_{N_{c}}\text{Tr}(M)/N_{c}\) and \(P_{\text{aH}}M\equiv(M-M^{\dagger})/2\) are the projectors to its traceless and to its anti-hermitian parts, respectively. For \(\mathfrak{sp}(2N)\), the projection is instead defined as, \[P_{\text{aH}}P_{\text{Sp}}^{-}P_{\text{tr}}\,M\;, \tag{24}\] where \[P_{\text{Sp}}^{\pm}M\equiv\frac{M\pm\Omega M^{*}\Omega}{2}\;. \tag{25}\] Notice that \(P^{-}_{\rm Sp}\) returns an anti-hermitian matrix, while \(P^{+}_{\rm Sp}\) projects on a space of hermitian matrices. The resympleticisation of gauge links to the \(Sp(2N)\) group manifold has also been implemented in Grid. The algorithm, described in Ref. [1], is a modification of the Gram-Schmidt process designed to take into account the condition in Eq. (55). After normalising the first column of the matrix \(U\), the \((N+1)\)-th column is set to \[{\rm col}(U)_{j+N}=-\Omega\,{\rm col}(U)^{*}_{j}\;. \tag{26}\] The second column is then obtained by orthonormalisation with respect to both the first and the \(N+1\)-th column. An iteration of this process leads to a \(Sp(2N)\) matrix. This procedure, performed after every update, prevents the gauge fields from drifting away from the group manifold due to the finite precision of the simulation. Figure 4: Dependence of \(\langle\Delta H\rangle\) on the time-step, \(\Delta\tau\), used for the MD integration, for \(N=2\), \(N_{\rm f}=0\), and \(N_{\rm as}=4\). 
The expectation value \(\langle\Delta H\rangle\) is proportional to \((\Delta\tau)^{4}\), consistently with the use of a second-order integrator. The plot is shown in log-log scale. The relevant parameters of this study are the trajectory length \(\tau=1\), number of steps \(n_{steps}=14,16,18,22,26\) (\(\Delta\tau=\tau/n_{\rm steps}\)), for an ensemble with lattice volume \(\tilde{V}/a^{4}=8^{4}\), \(\beta=6.8\), and \(am_{0}=-0.6\). Figure 3: Test of independence of the plaquette on the time–step \(\Delta\tau\) used for the MD integration, for \(N=2\), \(N_{\rm f}=0\), and \(N_{\rm as}=4\). The relevant parameters of this study are the trajectory length \(\tau=1\), number of steps \(n_{\rm steps}=14,16,18,22,26\), \(\Delta\tau=\tau/n_{\rm steps}\), for an ensemble with lattice volume \(\tilde{V}/a^{4}=8^{4}\), \(\beta=6.8\), and \(am_{0}^{\rm as}=-0.6\). The horizontal line corresponds to the plaquette value obtained averaging over trajectories having different a number of step values, \(n_{\rm steps}\). ### Basic tests of the algorithm In this subsection, we follow closely Sects. III and IV of Ref. [190]. As in Ref. [190], the MD evolution is implemented using a second-order Omelyan integrator [318]. However, in this work, the inversion of the fermion matrix is treated without preconditioning [319]--see also Ref. [9]. We now restrict attention to the theory with \(N=2\), \(N_{\rm f}=0\), and \(N_{\rm as}=4\), and perform a set of preliminary checks on the algorithms we use. We present the results in Figs. 2, 3, 4, 5, and 6, obtained, for convenience, setting the lattice parameters to \(\beta=6.8\), and \(am_{0}=-0.6\), on an isotropic lattice with volume \(\tilde{V}=(8a)^{4}\). The first test pertains to Creutz equality [320]: by measuring the difference in Hamiltonian, \(\Delta H\), evaluated before and after the MD evolution, one should find that \[\left\langle\exp\left(-\ \Delta H\right)\right\rangle\ =\ 1\,. \tag{27}\] This is supported by our numerical results: Fig. 2 shows the value of \(\left\langle\exp\left(-\ \Delta H\right)\right\rangle\) for five different choices of the time-step used in the MD integration, with \(\Delta\tau=\tau/n_{steps}\), and the choice \(\tau=1\). The numerical results are obtained by considering a thermalised ensemble consisting of 3400 trajectories, that we find has integrated auto-correlation time \(\tau_{c}=6.1(2)\), measured using the Madras-Sokal windowing process [321]. A closely related test is shown in Fig. 3: the value of the ensemble average of the plaquette is independent of \(\Delta\tau\). A third test pertains to the dependence of \(\left\langle\Delta H\right\rangle\) on \(\Delta\tau\), which for a second-order integrator is supposed to scale as \(\left\langle\Delta H\right\rangle\propto(\Delta\tau)^{4}\)--see the discussion in Ref. [322]. In Fig. 4 we show our measurements, together with the result of a best-fit to the curve \(\log\langle\Delta H\rangle={\cal K}_{1}\,\log(\Delta\tau)+{\cal K}_{2}\), with \({\cal K}_{1}=3.6(4)\) determined by minimising a simple \(\chi^{2}\). We find good agreement, as quantified by the value of the reduced \(\chi^{2}/N_{\rm d.o.f.}=0.6\), and \({\cal K}_{1}\) is compatible to 4. A closely related test is displayed in Fig. 5, confirming the prediction that the acceptance probability of the algorithm, \(P_{\rm acc}\), obeys the relation [323]: \[P_{\rm acc}\ =\ \mbox{erfc}\left(\frac{1}{2}\sqrt{\left\langle\Delta\ H \right\rangle}\right)\,. \tag{28}\] The final test of this subsection is displayed in Fig. 
6. We refer the reader to Refs. [190; 324] for discussions, rather than reproduce them here. The quantity \(|\delta H|\) is the average difference of the Hamiltonian evaluated by evolving the MD forward and backward and flipping the momenta at \(\tau=1\). Since the Hamiltonian in these tests is of order \(\sim 10^{6}\) and the typical \(\delta H\sim 10^{-11}\), the results show that the violation of reversibility is consistent with having \(|\delta H|/H\) of the order of the numerical accuracy. This is the expected relative precision for double-precision floating-point numbers. Moreover, the violation \(|\delta H|\) is independent of \(\Delta\tau\). Figure 7: Field contribution to the MD force for the theory with \(N=2\), \(N_{\rm f}=0\), and \(N_{\rm as}=4\), on isotropic lattice with \(\tilde{V}=(8a)^{4}\), and lattice coupling \(\beta=6.8\). The two blocks are respectively indicating the gauge (light shading, left) and the fermion (dark shading, right) contribution, the latter computed with the HMC algorithm. Fermion contributions are summed over flavor. The six panels correspond to different choices of bare mass: \(am_{0}^{\rm as}=-0.9\), \(-0.1\), \(+0.6\), \(+1.8\), \(+15\), \(+50\) (left to right, top to bottom). ### More about the Molecular Dynamics For illustration purposes, we find it useful to monitor the contribution to the MD of the fields, and how this changes as we dial the lattice parameters. We focus on the theory with \(N=2\), \(N_{\rm f}=0\), and \(N_{\rm as}=4\), and consider a few ensembles with isotropic lattice with \(\tilde{V}=(8a)^{4}\), and lattice coupling \(\beta=6.8\), but vary the mass \(am_{0}^{\rm as}\). We show in Fig. 7 the force, \(F\), as defined in Eq. (20) of Ref. [190]--see also Ref. [9]--split in its contribution from the gauge and fermion dynamics, the latter computed using the HMC for all fermions. The results are normalised so that the gauge contribution is held constant. As can be clearly appreciated, for large and positive values of \(am_{0}^{\rm as}\) the fermions can be neglected, as for these choices of the mass, one expects to be in the quenched regime. When decreasing the mass, the fermion contribution increases. For large, negative values of the Wilson bare mass (close to the chiral limit), the fermion contribution is even larger than the contribution of the gauge part of the action. Figure 8: Compatibility between plaquette averages \(\langle P\rangle\) obtained with HMC and RHMC algorithms for the theory with \(N=2\), \(N_{\rm f}=0\), and \(N_{\rm as}=4\). \(\langle P\rangle_{\rm HMC}\) is obtained running two couples of fermions with HMC. For \(\langle P\rangle_{\rm RHMC}\) (top panel), RHMC was applied individually to each of the fermions. \(\langle P\rangle_{\rm 2HMC+2RHMC}\) (bottom panel) is obtained running two fermions with HMC, while the other two were run with RHMC. The lattice coupling is \(\beta=6.8\), with the bare mass in the range \(-1.4\leq am_{0}^{\rm as}\leq 0.0\). The lattice is isotropic and has volume \(\tilde{V}=(8a)^{4}\). ### Comparing HMC and RHMC While in this paper we are mostly interested in the theory with \(N=2\), \(N_{\rm f}=0\), and \(N_{\rm as}=4\), and hence we can use the HMC algorithm, for the general purpose of identifying the extent of the conformal window in this class of lattice gauge theories it may be necessary to consider also odd numbers of fermions, for which we resort to the RHMC algorithm. 
The latter relies on a rational approximation in the computation of the fermion force, but the presence of a Metropolis accept-reject step ensures that the algorithm is exact. Thus, a preliminary test must be made to check the consistency of the implementation--as was done for \(SU(3)\) theories, see for instance Ref. [325].7 To gauge whether the numerical implementation is working at the desired level of accuracy and precision, we performed the exercise leading to Fig. 8. We computed the average plaquette, \(\langle P\rangle\), where \(P\) is defined as Footnote 7: We note that to check the correctness of the Remez implementation, one could in principle use any function of an arbitrary matrix \(M\). In particular, choosing diagonal matrices would make the comparison straightforward. Grid makes use of this methodology in its test suite. \[P\equiv\frac{a^{4}}{6\tilde{V}}\sum_{x}\sum_{\mu<\nu}\left[\frac{1}{2N}{\rm Re \,Tr}\,{\cal P}_{\mu\nu}(x)\right] \tag{29}\] for ensembles having lattice volume \(\tilde{V}=(8a)^{4}\) and coupling \(\beta=6.8\), for a few representative choices of the bare mass \(-1.4\leq am_{0}^{\rm as}\leq 0.0\). We repeated this exercise three times: at first, we treated all fermions with the HMC, then we treated them all with the RHMC, and finally we used a mixed strategy, treating two fermions with the HMC, and Figure 9: Study of finite-size effects on the lattice, for the \(Sp(4)\) Yang-Mills theory. The histograms depict the distribution of (real) Polyakov loops for ensembles with \(\beta=9.0\) and four choices of space-time volume: \(\tilde{V}=(2a)^{4}\), \((4a)^{4}\), \((12a)^{4}\), \((20a)^{4}\). The histograms’ areas are normalised to 1. two with the RHMC. We display, in the two plots in the figure, the differences of the second and third approaches to the first one, respectively. We detect no visible discrepancies, the differences being compatible with zero within the statistical uncertainties. ## IV The \(N=2\) lattice Yang-Mills theory In this section, we start to analyse the physics of the \(Sp(4)\) theory of interest. We begin from the pure Yang-Mills dynamics, with \(N_{\rm f}=0=N_{\rm as}\). We verify that centre symmetry, \((\mathbb{Z}_{2})^{4}\), is broken at small volumes, but restored at large volumes, by looking at the (real) Polyakov loop, in a way that is reminiscent of Ref. [217]. Following Ref. [314], we then consider the spectrum of the Dirac operator in the quenched approximation, both for fundamental and 2-index antisymmetric fermions, to verify the symmetry-breaking pattern expected from random matrix theory. The results for the first of these tests are shown in Fig. 9. At a coupling \(\beta=9.0\), we generate four ensembles in the pure \(Sp(4)\) theory, at different values of the space-time volume, \(\tilde{V}=(2a)^{4},\,(4a)^{4},\,(12a)^{4},\,(20a)^{4}\). For each configuration, we compute the spatial averaged (real) Polyakov loop, defined as \[\Phi\;\equiv\;\frac{1}{N_{c}N_{s}^{3}}\sum_{\vec{x}}{\rm Tr}\left(\prod_{t=0} ^{t=N_{t}-1}U_{0}(t,\vec{x})\right)\,, \tag{30}\] where \(U_{0}(t,\vec{x})\) is the time-like link variable. For our current purposes, we choose the lattice to be isotropic in all four directions, \(N_{t}=N_{s}=L/a\). For each ensemble, we display the frequency histogram of the values of \(\Phi\). The expectation is that the zero-temperature \(Sp(4)\) lattice theory should preserve the \(\left(\mathbb{Z}_{2}\right)^{4}\) symmetry of the centre of the group in four Euclidean space-time dimensions. 
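For reference, the observable in Eq. (30) can be evaluated directly from the time-like links. The sketch below is an illustration only (the array layout for the links is an assumption made here, and this is not the Grid measurement code); whether the resulting histogram of \(\Phi\) is singly or multiply peaked then diagnoses the fate of the centre symmetry.

```python
import numpy as np

def polyakov_loop(U0, Nc):
    """Spatially averaged (real) Polyakov loop of Eq. (30).

    U0: complex array of time-like links with assumed shape
        (Nt, Ns, Ns, Ns, Nc, Nc); this layout is illustrative, not Grid's.
    Nc: number of colours (Nc = 2N = 4 for Sp(4)).
    """
    Nt = U0.shape[0]
    loops = U0[0]
    for t in range(1, Nt):                           # ordered product along the time direction
        loops = np.einsum('xyzab,xyzbc->xyzac', loops, U0[t])
    traces = np.trace(loops, axis1=-2, axis2=-1)     # colour trace at each spatial site
    return np.real(traces).mean() / Nc               # average over the spatial volume

# usage on trivial (unit) links, just to show the call signature
Nt, Ns, Nc = 4, 4, 4
U0 = np.broadcast_to(np.eye(Nc, dtype=complex), (Nt, Ns, Ns, Ns, Nc, Nc)).copy()
print(polyakov_loop(U0, Nc))                         # 1.0 for unit links
```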
This is indeed the case for sufficiently large volumes, as shown by the bottom-right panel of Fig. 9, for which \(N_{t}=N_{s}=20\), that exhibits a Gaussian distribution centred at the origin. But for small enough lattice volumes, this expectation is violated. This is visible in the other three panels in Fig. 9, in which the distribution is non-Gaussian, and two other peaks emerge. In the extreme case of \(N_{s}=N_{t}=2\), the two peaks at finite value of \(\Phi\) dominate the distribution, which is otherwise symmetrical around zero. Ensembles of gauge configurations without dynamical fermions can also be used to verify that our implementation of the Dirac operators is correct. To this purpose, following Ref. [314] (and Ref. [9]), we consider the theory with quenched fermions in either the fundamental or 2-index antisymmetric representation, and Figure 10: Distribution of the folded density of spacing between subsequent eigenvalues of the hermitian Dirac-Wilson operator \(Q_{m}=\gamma_{5}D_{m}\), and comparison with predictions from chRMT, computed in the quenched approximation, with ensembles having \(\beta=8.0\), \(am_{0}=-0.2\), and lattice volume \(\tilde{V}=(4a)^{4}\), in the \(Sp(4)\) theory. The left panel shows the case of fermions transforming in the fundamental representation, and the right is for fermions in the 2-index antisymmetric one. eigenvalues of the hermitian Wilson-Dirac operator \(Q_{m}=\gamma_{5}D_{m}\). The numbers of configurations are \(N_{\rm conf,f}=88\) and \(N_{\rm conf,as}=47\), while the number of eigenvalues in each configuration used is \(3696\) for fundamental fermions and \(5120\) for antisymmetric fermions. We compute the distribution of the folded density of spacing, \(P(s)\), following the procedure discussed in Ref. [314]. Finally, we compare the results to the exact predictions of chiral Random Matrix Theory (chRMT) [326] (see also Ref. [327] for a review on the subject). Because the spectrum captures the properties of the theory, in particular the pattern of chiral symmetry breaking [328], the distribution \(P(s)\) differs, depending on the symmetry-breaking pattern predicted. The folded density of spacing is \[P(s)=N_{\tilde{\beta}}s^{\tilde{\beta}}\exp\left(-c_{\tilde{\beta}}s^{2} \right)\,,\quad\text{where}\quad N_{\tilde{\beta}}=2\frac{\Gamma^{\tilde{ \beta}+1}\left(\frac{\tilde{\beta}}{2}+1\right)}{\Gamma^{\tilde{\beta}+2} \left(\frac{\tilde{\beta}+1}{2}\right)}\,,\,c_{\tilde{\beta}}=\frac{\Gamma^{2 }\left(\frac{\tilde{\beta}}{2}+1\right)}{\Gamma^{2}\left(\frac{\tilde{\beta}+ 1}{2}\right)}\,, \tag{31}\] where \(\tilde{\beta}\) is the Dyson index. This index can take three different values: \(\tilde{\beta}=1\) to \(SU(2N_{f})\to Sp(2N_{f})\), \(\tilde{\beta}=2\) corresponds to the symmetry breaking pattern \(SU(N_{f})\times SU(N_{f})\to SU(N_{f})\), and \(\tilde{\beta}=4\) to \(SU(2N_{f})\to SO(2N_{f})\). The latter two are the cases we are interested in, corresponding to fundamental and 2-index antisymmetric fermions for the symplectic theory. In order to make a comparison with the chRMT prediction in Eq. (31), we compute the eigenvalues of \(Q_{m}\) for \(N_{\rm conf}\) configurations. This process yields a set of eigenvalues \(\lambda_{i}^{(c)}\) with \(c=1,\cdots,N_{\rm conf}\). The eigenvalues are arranged in one long list, in which \(\lambda_{i}^{(c)}\) are ordered in ascending order. Any degeneracy that is present in the 2-antisymmetric case, as explained in Ref. [314], is discarded. 
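The surmise in Eq. (31) is straightforward to evaluate. As a sanity check of the expression (a standalone sketch, separate from the measurement code), the snippet below verifies numerically that \(P(s)\) is normalised and has unit mean spacing for the Dyson indices of interest.

```python
import numpy as np
from math import gamma
from scipy.integrate import quad

def P(s, beta):
    # Eq. (31): spacing distribution for Dyson index beta-tilde = 1, 2 or 4
    c = gamma(beta / 2 + 1) ** 2 / gamma((beta + 1) / 2) ** 2
    norm = 2 * gamma(beta / 2 + 1) ** (beta + 1) / gamma((beta + 1) / 2) ** (beta + 2)
    return norm * s ** beta * np.exp(-c * s ** 2)

for beta in (1, 2, 4):
    total = quad(lambda s: P(s, beta), 0, np.inf)[0]        # should be 1
    mean = quad(lambda s: s * P(s, beta), 0, np.inf)[0]     # should be 1
    print(f"beta-tilde = {beta}: norm = {total:.6f}, <s> = {mean:.6f}")
```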
Then, for each \(c=1,\cdots,N_{\rm conf}\), a new list of values is produced, that contains \(n_{i}^{(c)}\), the positive integer position of the eigenvalue \(\lambda_{i}^{(c)}\) in the long list ordered in ascending order, instead of \(\lambda_{i}^{(c)}\). The density of spacing, \(s\), is replaced by the expression \[s=\frac{n_{i+1}^{(c)}-n_{i}^{(c)}}{\mathcal{N}}\,. \tag{32}\] The constant \(\mathcal{N}\) is defined so that the density of spacing has unit average over the whole ensemble, \(\langle s\rangle=1\). Finally, the (discretised) unfolded density of spacings, \(P(s)\), is obtained by binning numerical results for \(s\) and normalising it. In Fig. 10, we show an example of the folded distribution of eigenvalues of the Wilson-Dirac operator, computed numerically. As it can be seen, in the case of fermions in the fundamental representation, one Figure 11: Parameter scan of the \(Sp(4)\) theory with \(N_{\rm as}=4\) fermions transforming in the 2-index antisymmetric representation, with ensembles generated from a cold start, using the HMC. We show the value of the average plaquette, \(\langle P\rangle\), as a function of the bare mass, for a few representative values of the coupling. The lattice size is \(\tilde{V}=(8a)^{4}\), and each point is obtained by varying the lattice coupling \(\beta=7.0,6.8,6.6,6.5,6.4,6.3,6.2,6.0,5.8,5.6\) and the bare mass \(-1.4\leq am_{0}^{\rm as}\leq 0.0\). is compatible with the symmetry breaking pattern leading to the coset \(SU(2N_{\rm f})/Sp(2N_{\rm f})\). Conversely, for fermions in the 2-index antisymmetric representation, our numerical results reproduce the prediction associated with the coset \(SU(2N_{\rm as})/SO(2N_{\rm as})\). The spectacular agreement with chRMT confirms that there are no inconsistencies in our way of treating fermions. The size of the lattices we have considered has been chosen in order to make finite-size effects negligible. These effects can become evident in smaller lattices and they lead to discrepancies due to some abnormally large spacings for the smallest and largest eigenvalues. We observe that, as in previous studies, the antisymmetric representation already matches the predictions in a \(4^{4}\) volume, while for the fundamental to reproduce the predictions chRMT, we had to remove the 200 lowest and highest eigenvalues (reducing the number of eigenvalues from 4096 to 3696). In this fashion, the differences with chRMT are no longer visible to the naked eye even for lattices with modest volume, \(\tilde{V}=(4a)^{4}\). ## V The \(N=2\) theories coupled to fermions: bulk phase structure In this section, we present our main results for the theory with \(N=2\), \(N_{\rm f}=0\), and varying number of fermions transforming in the antisymmetric representation, starting from \(N_{\rm as}=4\)--for which we apply the HMC algorithm. We performed a coarse scan of the lattice parameter space, to identify phase transitions in the \((\beta,m_{0})\) plane, by studying the average plaquette, \(\langle P\rangle\), its hysteresis, and its susceptibility. We provide an approximate estimate of the upper bound coupling for the bulk phase, \(\beta_{*}\), above which there is no bulk phase transition, and hence one can safely perform lattice numerical calculations at finite lattice spacing, yet confident that the results can be extrapolated to the appropriate continuum limit. Figure 11 displays the average plaquette, \(\langle P\rangle\), obtained in ensembles generated using a cold start. 
The lattice size is \(\tilde{V}=(8a)^{4}\), and each point is obtained by varying the lattice coupling \(\beta=7.0,6.8,6.6,6.5,6.4,6.3,6.2,6.0,5.8,5.6\) and the bare mass \(-1.4\leq am_{0}^{\rm as}\leq 0.0\). The figure shows that, for small values of \(\beta\) and large, negative values of the bare mass, the average plaquette displays an abrupt change at a particular value \(am_{0}^{\rm as\,*}\), while being a smooth, continuous function elsewhere. This is a first indication of the existence of a first-order bulk phase transition.

Figure 12: Hysteresis between hot (red) and cold (other colors) starts for the \(Sp(4)\) theory with \(N_{\rm as}=4\) fermions in the 2-index antisymmetric representation. The lattice coupling is \(\beta=6.4,6.3,6.2,6.0,5.8,5.6\) (left to right, and top to bottom). The lattice size is \(\tilde{V}=(8a)^{4}\), and each point is obtained by varying the bare mass \(-1.4\leq am_{0}^{\rm as}\leq 0.0\).

To better understand whether a first-order phase transition is taking place, we study the effect of adopting two different strategies in the generation of the ensembles, repeating the scan using thermalised (hot) starts and redoing the measurements. Figure 12 shows the comparison of the average plaquette, \(\langle P\rangle\), computed for several fixed choices of the coupling \(\beta\), while varying the bare mass \(-1.4\leq am_{0}^{\rm as}\leq 0.0\). The two curves in the plots represent the behaviour measured in ensembles obtained from a cold and hot start configuration. The effects of hysteresis are clearly visible for \(\beta<6.4\) and are an indication of the presence of a first-order phase transition taking place at a critical value of the bare mass \(am_{0}^{\rm as\,*}\). The final test of the nature of the phase transition is shown in Fig. 13. For illustration purposes, we choose two values of the coupling for which we have evidence of a phase transition (\(\beta=6.2\)), or of smooth behaviour of \(\langle P\rangle\) for all values of \(am_{0}^{\rm as}\) (\(\beta=6.5\)), respectively. We compute the plaquette susceptibility, defined as \[\chi_{P}\ \equiv\ \frac{\tilde{V}}{a^{4}}\left(\langle P^{2}\rangle-\left(\langle P\rangle\right)^{2}\right)\,, \tag{33}\] and compare the numerical results obtained with ensembles having two different volumes, \(\tilde{V}=(8a)^{4}\) and \(\tilde{V}=(16a)^{4}\). The results indicate that the peak height scales as the 4-volume when \(\beta\) is small, in which case the position of the peak also moves to a different value of \(am_{0}^{\rm as}\). These are indeed the expected signatures of a first-order phase transition. For large \(\beta\), the curves obtained for different volumes are compatible with one another, a clear indication of a smooth crossover. We hence conclude that, in the theory with \(N=2\), \(N_{\rm f}=0\), and \(N_{\rm as}=4\), there is numerical evidence of a line of first-order phase transitions turning into a crossover at \(\beta>\beta_{*}=6.4\).

### Varying \(N_{\rm as}\)

We repeat the parameter scan for other choices of \(N_{\rm as}\), using the RHMC for all fermions when \(N_{\rm as}\) is odd, and the HMC algorithm otherwise. The purpose of the exercise is to study the dependence of the upper bound coupling for the bulk phase, \(\beta_{*}\), on the number of fermions, \(N_{\rm as}\).
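For reference, the susceptibility of Eq. (33) can be estimated directly from the stored per-configuration plaquette averages. The sketch below is our own illustration (the naive bootstrap error ignores autocorrelations and is not necessarily the procedure used for Fig. 13):

```python
import numpy as np

def plaquette_susceptibility(plaq, volume, n_boot=200, rng=None):
    """chi_P = (V/a^4) * (<P^2> - <P>^2), Eq. (33), with a naive bootstrap error.

    `plaq` is the list of per-configuration average plaquettes; `volume` is V/a^4.
    """
    rng = np.random.default_rng() if rng is None else rng
    plaq = np.asarray(plaq, dtype=float)
    chi = volume * np.var(plaq)  # np.var gives <P^2> - <P>^2
    boot = [volume * np.var(rng.choice(plaq, size=plaq.size, replace=True))
            for _ in range(n_boot)]
    return chi, float(np.std(boot))

# Example: an (8a)^4 lattice, so V/a^4 = 8**4; `plaquettes` is the measured history.
# chi_p, chi_p_err = plaquette_susceptibility(plaquettes, volume=8 ** 4)
```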
Indeed, for small \(N_{\rm as}\) the theory is expected to confine, while for larger values of \(N_{\rm as}\sim N_{\rm as}^{c}\) the theory should approach the lower end of the conformal window, and eventually lose asymptotic freedom--we recall that the latter requires the bound \(N_{\rm as}<33/4\) in \(Sp(4)\), while setting the stage for a first truly non-perturbative determination of the former is the main motivation for this study.

Figure 14: Parameter scan in the \(Sp(4)\) theory with \(N_{\rm as}=0,\,1,\,2,\,3,\,5,\,6,\,7,\,8\) (left to right and top to bottom panels) fermions in the 2-index antisymmetric representation, obtained with ensembles generated from a cold start. For \(N_{\rm as}>0\), we show the value of the average plaquette, \(\langle P\rangle\), as a function of the bare mass, for a few representative values of the coupling. For pure gauge, we just vary the value of \(\beta\). All the fermions are treated with the HMC/RHMC algorithms. The lattice size is \(\tilde{V}=(8a)^{4}\) and the bare mass is chosen in the range \(-1.4\leq am_{0}^{\rm as}\leq 0.0\) for \(N_{\rm as}\geq 2\), and \(-1.5\leq am_{0}^{\rm as}\leq 0.0\) for \(N_{\rm as}=1\). For the pure gauge theory, the coupling is chosen to be \(1.0\leq\beta\leq 16.0\). For \(N_{\rm as}=1\), we have chosen \(\beta=7.1,7.0,6.9,6.8,6.7,6.6\), while for \(N_{\rm as}=2\) we have \(\beta=6.8,6.7,6.6,6.5,6.4,6.2\). For \(N_{\rm as}=3\), the coupling is \(\beta=6.8,6.7,6.6,6.5,6.4,6.2,6.0,5.8\), while for \(N_{\rm as}=5\) we have chosen \(\beta=6.6,6.5,6.4,6.3,6.2,6.1,6.0,5.8\). For \(N_{\rm as}=6\), \(\beta=6.4,6.3,6.2,6.1,6.0,5.8\). For \(N_{\rm as}=7\), \(\beta=6.4,6.2,6.1,6.0,5.9,5.8\) and for \(N_{\rm as}=8\), \(\beta=6.3,6.1,6.0,5.9,5.8,5.7\).

The results of these studies are shown in Fig. 14, which displays our measurements of the average plaquette, \(\langle P\rangle\), as a function of the bare parameters of the theories. For the pure gauge \(Sp(4)\) theory, we get plaquette values that are in agreement with the ones shown in Ref. [156]. The corresponding upper bound value of the coupling is roughly estimated to be \(\beta_{*}\simeq 7.2\). For theories with dynamical fermions, we vary both the masses and the coupling of the theories. As can be seen from Fig. 14, for \(N_{\rm as}=1\) the upper bound is \(\beta_{*}\simeq 6.7\). For \(N_{\rm as}=2\) the upper bound is \(\beta_{*}\simeq 6.7\), and for \(N_{\rm as}=3\) it is \(\beta_{*}\simeq 6.5\), in agreement with the values found in Ref. [2]. At larger numbers of fermion species, we obtain progressively smaller values of the upper bound of the bulk phase, \(\beta_{*}\): for \(N_{\rm as}=5\), we get \(\beta_{*}\simeq 6.3\). For \(N_{\rm as}=6\), the upper bound coupling is \(\beta_{*}\simeq 6.2\). For \(N_{\rm as}=7\), we get \(\beta_{*}\simeq 6.1\div 6.2\) and for \(N_{\rm as}=8\), \(\beta_{*}\simeq 6.1\). Overall, we notice a trend according to which the more fermion flavours are present in the \(Sp(4)\) theory, the smaller the upper bound value of the coupling we find and the larger the corresponding critical bare mass \(am_{0}^{\rm as\,*}\).

## VI Scale setting and topology

We return now to the theory with \(N=2\), \(N_{\rm f}=0\), and \(N_{\rm as}=4\). We discuss a scale setting procedure that uses the Wilson flow. We also monitor the evolution of the topological charge, to show that topological freezing was avoided.
We focus the discussion on a few representative examples, although we checked that our conclusions have general validity for all choices of parameters relevant to this study. The gradient flow [329], and its discretised counterpart, the Wilson flow [330], are useful for two complementary purposes. On the one hand, the Wilson flow provides a universal, well defined way to set the scale in a lattice theory, one that is unambiguously defined irrespective of the properties of the theory and of model-dependent considerations. On the other hand, the process we will describe momentarily consists of taking gauge configurations and evolving them with a flow equation, which results in the smoothening of such configurations, and the softening of short-distance fluctuations. The former property is beneficial because it allows one to compare with one another different theories for which no experimental information is available (yet), and that might have different matter content. The latter characteristic allows, in practical terms, to reduce the short-distance numerical noise and the effects of discretisation in the lattice calculation of observables, such as the topological charge, \(Q\), which are sensitive to fluctuations at all scales.

Figure 15: Wilson Flow [329; 330] energy density \(\mathcal{E}(t)\) (left panel) and \(\mathcal{W}(t)\) (right), computed as in Refs. [1; 11], from the standard (pl) and the clover-leaf (cl) plaquette defined in Refs. [331; 332], for the \(Sp(4)\) theory with \(N_{\rm as}=4\) fermions transforming in the 2-index antisymmetric representation. The lattice size is \(\tilde{V}=(12a)^{4}\), and we display two representative choices of bare parameters, with \(\beta=6.8\) or \(6.9\) and common bare mass \(am_{0}^{\rm as}=-0.8\). The time step is \(0.01\), and \(t_{max}=4.5\) to reduce finite-size effects. Errors are computed by bootstrapping. We have chosen \(\mathcal{W}_{0}=\frac{1}{2}C_{2}(F)\) for the topological charge. The corresponding values of \(w_{0}\) from the plaquette and the clover-leaf are \(w_{0,pl.}=1.485(3)\) and \(w_{0,cl.}=1.495(2)\) for \(\beta=6.8\) and \(w_{0,pl.}=2.005(2)\) and \(w_{0,cl.}=2.026(2)\) for \(\beta=6.9\).

We have set \(a=1\), for notational convenience. We follow Refs. [1; 11] (and references therein). One introduces the flow time, \(t\), as an additional, fifth component of the space-time variables, and solves the defining differential equation \[\frac{\mathrm{d}B_{\mu}(x,\,t)}{\mathrm{d}t}\;=\;D_{\nu}G_{\nu\mu}(x,\,t)\,, \tag{34}\] subject to the boundary conditions \(B_{\mu}(x,\,0)=A_{\mu}(x)\). Here \(A_{\mu}(x)\) are the gauge fields, and the covariant derivatives are \(D_{\mu}\equiv\partial_{\mu}+[B_{\mu},\,\cdot\,]\), and \(G_{\mu\nu}(t)=[D_{\mu},\,D_{\nu}]\). As anticipated, the main action of the flow is to introduce a Gaussian smoothening of the configurations, with mean-square radius \(\sqrt{8t}\).
In order to use this object to introduce a scale, one defines the quantities \[\mathcal{E}(t) \equiv \frac{t^{2}}{2}\left\langle\mathrm{Tr}\,\left[G_{\mu\nu}(t)G_{\mu\nu}(t)\right]\right\rangle\,, \tag{35}\] \[\mathcal{W}(t) \equiv t\frac{d}{dt}\mathcal{E}(t)\,, \tag{36}\] and introduces a prescription that defines the scale on the basis of a reference value for either of the two. Two common choices in the literature are the scale, \(t_{0}\), defined by setting \[\mathcal{E}(t)|_{t=t_{0}}=\mathcal{E}_{0}\,, \tag{37}\] or the scale, \(w_{0}\), defined implicitly by the condition \[\left.\mathcal{W}(t)\right|_{t=w_{0}^{2}}=\mathcal{W}_{0}\,. \tag{38}\] Both \(\mathcal{E}_{0}\) and \(\mathcal{W}_{0}\) are set on the basis of theoretical considerations. For example, Ref. [11] advocates setting \(\mathcal{W}_{0}=c_{w}C_{2}(F)\), where \(C_{2}(F)=(1+2N)/4\) is the quadratic Casimir operator of the fundamental representation in \(Sp(2N)\) theories, and one sets \(c_{w}=0.5\), though other choices are possible.

Figure 16: Evolution with the ensemble trajectories of the topological charge \(Q_{L}(t=w_{0}^{2})\equiv\sum_{x}\frac{1}{32\pi^{2}}\varepsilon^{\mu\nu\rho\sigma}\mathrm{Tr}\left[\mathcal{C}_{\mu\nu}(x)\mathcal{C}_{\rho\sigma}(x)\right]\), computed (without rounding) at flow time \(t=w_{0}^{2}\) for the \(Sp(4)\) theory with \(N_{\mathrm{as}}=4\) fermions transforming in the 2-index antisymmetric representation. The lattice size is \(\tilde{V}=(12a)^{4}\). The lattice parameters characterising the ensembles are \(\beta=6.8\) (top panel) and \(\beta=6.9\) (bottom), with bare mass \(am_{0}^{\mathrm{as}}=-0.8\). The histograms of the measurements (right panels) are compatible with a normal distribution centered at zero, with reduced chi-square \(\chi^{2}/N_{\mathrm{d.o.f}}=\tilde{\chi}^{2}=1.1\) for both panels. The integrated autocorrelation time computed using the Madras-Sokal windowing algorithm as in Ref. [11] is \(\tau_{Q}=7.11(64)\) (top) and \(\tau_{Q}=59.58(92)\) (bottom).

On the discretised lattice, one replaces the gauge field, \(A_{\mu}(x)\), with the link variable, \(U_{\mu}(x)\), and the flow equation is rewritten by replacing \(B_{\mu}(x,\,t)\) with the new variable \(V_{\mu}(x,\,t)\) (with \(V_{\mu}(x,\,0)=U_{\mu}(x)\)). There are then at least two ways to replace \(G_{\mu\nu}\) with a discretised variable. We introduced the elementary plaquette \(\mathcal{P}_{\mu\nu}\) when defining the lattice action in Eq. (12). The clover-leaf plaquette operator, \(\mathcal{C}_{\mu\nu}\), provides an alternative to the elementary plaquette, and can be seen as a simple form of improvement. We borrow the definition from Refs. [331, 332], which for generic link variables \(U_{\mu}(x)\) reads: \[\mathcal{C}_{\mu\nu}(x) \equiv \frac{1}{8}\,\Big{\{}\,U_{\mu}(x)U_{\nu}(x+\hat{\mu})U_{\mu}^{\dagger}(x+\hat{\nu})U_{\nu}^{\dagger}(x)+\] \[+U_{\nu}(x)U_{\mu}^{\dagger}(x+\hat{\nu}-\hat{\mu})U_{\nu}^{\dagger}(x-\hat{\mu})U_{\mu}(x-\hat{\mu})+\] \[+U_{\mu}^{\dagger}(x-\hat{\mu})U_{\nu}^{\dagger}(x-\hat{\nu}-\hat{\mu})U_{\mu}(x-\hat{\nu}-\hat{\mu})U_{\nu}(x-\hat{\nu})+\] \[+U_{\nu}^{\dagger}(x-\hat{\nu})U_{\mu}(x-\hat{\nu})U_{\nu}(x-\hat{\nu}+\hat{\mu})U_{\mu}^{\dagger}(x)-\text{h.c.}\,\Big{\}}\ .\] One would like to set the scale in a way that does not depend crucially on microscopic details. To this purpose, in Fig.
15 we consider the \(Sp(4)\) theory with \(N_{\text{f}}=0\) and \(N_{\text{as}}=4\), for two representative choices of \(\beta\), and a representative choice of volume, \(\tilde{V}\), and bare mass, \(am_{0}^{\text{as}}\), and we show \(\mathcal{E}(t)\) and \(\mathcal{W}(t)\) as functions of the flow time, \(t\), by comparing explicitly the results obtained by adopting either the elementary or the clover-leaf plaquette as defining the lattice regularisation of the action. The plots illustrate the general trend evidenced elsewhere in the literature, according to which the function \(\mathcal{W}(t)\) displays a milder dependence on the short distance regulator. In the following, we set the scale \(w_{0}\) by conventionally setting \(\mathcal{W}_{0}=\frac{1}{2}C_{2}(F)\). The topological charge density is defined as \[q_{L}(x,t)\ \equiv\ \frac{1}{32\pi^{2}}\varepsilon^{\mu\nu\rho\sigma}\text{Tr}\ [\mathcal{C}_{\mu\nu}(x,t)\mathcal{C}_{\rho\sigma}(x,t)]\, \tag{40}\] and the topological charge is \(Q_{L}(t)\equiv\sum_{x}q_{L}(x,t)\), where, again, \(t\) is the flow time. In general, the topological charge on the lattice is not quantised, and in cases where it is the physical quantity of interest--for example because one is working towards a determination of the topological susceptibility, as in Ref. [11] and references therein--one needs to evolve to large \(t\), and introduce a rounding process. For the current purposes, we do not need a discretisation algorithm: what we want to verify is that there is no evidence of topological freezing, and to this purpose we perform three simple tests. In Fig. 16 we display the value of \(Q_{L}(t=w_{0}^{2})\) in the \(Sp(4)\) theory coupled to \(N_{\text{f}}=0\) and \(N_{\text{as}}=4\) fermion species, for two values of the coupling, \(\beta\), and a common value of the bare mass. We show how the topological charge evolves along the trajectories, and supplement it with a histogram displaying its distribution. Both visual tests confirm that there is no evidence of topological freezing. We can make these tests more quantitative by applying the standard Madras-Sokal windowing algorithm [321], and provide estimates of the integrated autocorrelation time \(\tau_{Q}\) of the topological charge, which in both examples, as shown in Fig. 16, turns out to be many orders of magnitude smaller than the number of trajectories. Furthermore, fits of the histograms are compatible with a Gaussian distribution centered at \(\langle Q_{L}(t=w_{0}^{2})\rangle=0\). The main message from this section is that the behaviour of the Wilson flow and of the topological charge, computed using the new software based on Grid, and tested on GPU architecture machines, to examine the properties of the lattice \(Sp(2N)\) gauge theory with \(N=2\), \(N_{\rm f}=0\), and \(N_{\rm as}=4\), provides results that are broadly comparable to those in the literature for related, though different, field theories. This suggests that the implementations of the simulation routines and of the observables are both free from unwanted effects.

## VII Summary and Outlook

A number of new physics models based upon \(Sp(2N)\) gauge theories have been proposed in the literature, in contexts as diverse as Composite Higgs Models, top partial compositeness, dilaton-Higgs models, and strongly interacting dark matter models, among others. It is essential to the development of all these new physics ideas to provide model builders and phenomenologists with non-trivial information about the non-perturbative dynamics.
The programme of systematic characterisation of \(Sp(2N)\) theories is still in its early stages, though. Prominently, the challenging question of identifying the lower end of the conformal window in these theories coupled to matter fields in various representations of the group requires the non-perturbative instruments of lattice field theory. As a necessary step in this direction, we developed and tested new software, embedded into the Grid environment to take full advantage of its flexibility. In this paper we reported the (positive) results of our tests of the algorithms, that set the stage for future large-scale dedicated studies. We focused particularly on the \(Sp(4)\) theory coupled to \(N_{\rm as}=4\) (Dirac) fermions transforming in the antisymmetric representation, that might be close to the onset of conformality. We performed a long list of non-trivial exercises. We both tested the effectiveness of the algorithm and software implementation, but also provided a first characterisation of lattice theories that had never been studied before--although for present purposes we used comparatively small and coarse lattices. We reported in this paper illustrative examples demonstrating that there are no obvious problems in the software implementation. We computed effectively such observables as the averages of the plaquette and (real) Polyakov loop, the plaquette susceptibility, the Wilson flow, and the topological charge. We catalogued the first measurements of the critical couplings in \(Sp(4)\) lattice theories with \(N_{\rm as}<33/4\)--below the bound imposed by asymptotic freedom--hence identifying the portion of lattice parameter space connected with the continuum theories of interest. This paper, and the software we developed for it, set the stage needed to explore and quantify the extent of the conformal window in these theories. The tools we developed can be used also in the context of the recent literature discussing the spectroscopy of \(Sp(2N)\) theories with various representations [1; 2; 3; 4; 5; 6; 7; 8; 9; 10; 11; 12; 13; 14; 15; 16; 17; 18; 19], in broad regions of their parameter space, considering both bosonic bound states as well as fermionic ones, relevant for example in \(Sp(2N)\) theories with mixed representations. This effort can be complemented and further extended by applying new techniques based on the spectral densities [333]--see also the applications in Refs. [334; 335; 336; 337; 338; 339; 340; 341; 342]. One can envision many more uses and applications of this powerful and flexible open-source software. ###### Acknowledgements. The work of EB, JL and BL has been funded by the UKRI Science and Technology Facilities Council (STFC) Research Software Engineering Fellowship EP/V052489/1, and by the ExaTEPP project EP/X017168/1. The work of NF has been supported by the STFC Consolidated Grant No. ST/X508834/1. The work of PB was supported in part by US DOE Contract DESC0012704(BNL), and in part by the Scientific Discovery through Advanced Computing (SciDAC) program LAB 22-2580. The work of DKH was supported by Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education (NRF-2017R1D1A1B06033701). The work of LDD and AL was supported by the ExaTEPP project EP/X01696X/1. The work of JWL was supported in part by the National Research Foundation of Korea (NRF) grant funded by the Korea government(MSIT) (NRF-2018R1C1B3001379) and by IBS under the project code, IBS-R018-D1. 
The work of DKH and JWL was further supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) (2021R1A4A5031460). The work of CJDL is supported by the Taiwanese NSTC grant 109-2112-M-009-006-MY3. The work of BL and MP has been supported in part by the STFC Consolidated Grants No. ST/P00055X/1 and No. ST/T000813/1. BL, MP, AL and LDD received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation program under Grant Agreement No. 813942. The work of BL is further supported in part by the EPSRC ExCALIBUR programme ExaTEPP (project EP/X017168/1), by the Royal Society Wolfson Research Merit Award WM170010 and by the Leverhulme Trust Research Fellowship No. RF-2020-4619. LDD is supported by the UK Science and Technology Facility Council (STFC) grant ST/P000630/1. Numerical simulations have been performed on the Swansea SUNBIRD cluster (part of the Supercomputing Wales project) and AccelerateAI A100 GPU system, and on the DiRAC Extreme Scaling service at the University of Edinburgh. Supercomputing Wales and AccelerateAI are part funded by the European Regional Development Fund (ERDF) via Welsh Government. The DiRAC Extreme Scaling service is operated by the Edinburgh Parallel Computing Centre on behalf of the STFC DiRAC HPC Facility (www.dirac.ac.uk). This equipment was funded by BEIS capital funding via STFC capital grant ST/R00238X/1 and STFC DiRAC Operations grant ST/R001006/1. DiRAC is part of the National e-Infrastructure. **Research Data Access Statement**--The data generated for this manuscript can be downloaded from Ref. [343] and the analysis code from Ref [344]. **Open Access Statement**--For the purpose of open access, the authors have applied a Creative Commons Attribution (CC BY) licence to any Author Accepted Manuscript version arising. ## Appendix A Group-theoretical definitions We denote as \(Sp(2N)\) the subgroup of \(SU(2N)\) preserving the norm induced by the antisymmetric matrix \(\Omega\), \[\Omega=\begin{pmatrix}0&\mathbb{1}_{N}\\ -\mathbb{1}_{N}&0\end{pmatrix}\;, \tag{101}\] where \(\mathbb{1}_{N}\) is the \(N\times N\) identity matrix. This definition can be converted into a constraint on the group element \(U\) \[U\Omega U^{T}=\Omega\;. \tag{102}\] Due to unitarity, the previous condition can be also written as \[U\Omega=\Omega U^{*}\;, \tag{103}\] which implies the following block structure \[U=\begin{pmatrix}A&B\\ -B^{*}&A^{*}\end{pmatrix}\;, \tag{104}\] where Eq. (102) implies, for \(A\) and \(B\), that \[AB^{T}=BA^{T}\;,\;\;\;\;\;AA^{\dagger}+BB^{\dagger}=\mathbb{1}_{N}\;. \tag{105}\] The algebra can be defined by expanding \(U\Omega=\Omega U^{*}\) in terms of the hermitian generators \(t^{a}\), i.e. \(U=\exp(i\omega^{a}t^{a})\) for real parameters \(\omega^{a}\). We arrive at the following condition on the generic element of the algebra \(T=\sum_{a}\omega^{a}t^{a}\) \[T\Omega=-\Omega T^{*}\;, \tag{106}\] which also implies that \[T=\begin{pmatrix}X&Y\\ Y^{*}&-X^{*}\end{pmatrix}\;. \tag{100}\] Hermiticity imposes the conditions \(X=X^{\dagger}\) and \(Y=Y^{T}\). The number of independent degrees of freedom is then \(2N(N+1)\), the dimension of the group. ## Appendix B Generators of the algebra in Grid Let \(t_{\rm f}^{a}\) be the generators of the Lie Algebra of \(Sp(2N)\) in the fundamental representation. They are implemented in Grid as hermitian, meaning that they follow the block structure of Eq. (100). 
Their normalisation is such that \[{\rm Tr}\left(t_{\rm f}^{a}t_{\rm f}^{b}\right)=\frac{\delta^{ab}}{2}\;. \tag{101}\] The generators \(t_{\rm f}^{a}\), with \(a=1,\ldots,\,2N^{2}+N\), are implemented in Grid according to the following scheme. The \(2N^{2}\) off-diagonal generators are identified by the following six relations among their matrix elements: \[t_{i,j}^{a}=t_{j,i}^{a}=-t_{i+N,j+N}^{a}=-t_{j+N,i+N}^{a}=\frac{1}{2\sqrt{2}} \;,\;\;\;\;\;i=1,\ldots N-1\;,\;\;\;\;\;i<j\leq N\;, \tag{102}\] with \(a=1\ldots N(N-1)/2\), \[t_{i,j}^{a}=-t_{j,i}^{a}=t_{i+N,j+N}^{a}=-t_{j+N,i+N}^{a}=\frac{i}{2\sqrt{2}} \;,\;\;\;\;\;i=1,\ldots,N-1\;,\;\;\;\;\;i<j\leq N\;, \tag{103}\] with \(a=N(N-1)/2+1\ldots N(N-1)\), \[t_{i,j+N}^{a}=t_{j,i+N}^{a}=t_{i+N,j}^{a}=t_{j+N,i}^{a}=\frac{1}{2\sqrt{2}}\;, \;\;\;\;\;i=1,\ldots,N-1,\;\;\;\;\;i<j\leq N-1\;, \tag{104}\] with \(a=N(N-1)+1\ldots 3N(N-1)/2\), \[t_{i,j+N}^{a}=t_{j,i+N}^{a}=-t_{i+N,j}^{a}=-t_{j+N,i}^{a}=\frac{i}{2\sqrt{2}} \;,\;\;\;\;\;i=1,\ldots,N-1\;,\;\;\;\;\;i<j\leq N-1\;, \tag{105}\] with \(a=3N(N-1)/2+1\ldots 2N(N-1)\) \[t_{i,i+N}^{a}=t_{i+N,i}^{a}=\frac{1}{2}\;,\;\;\;\;\;i=1,\ldots,N\;, \tag{106}\] with \(a=2N^{2}-2N+1\,\ldots,\,2N^{2}-N\), \[t_{i,i+N}^{a}=-t_{i+N,i}^{a}=\frac{i}{2}\;,\;\;\;\;\;i=1,\ldots,N\;, \tag{107}\] with \(a=2N^{2}-N+1\),..., \(2N^{2}\). The remaining \(N\) generators in the Cartan subalgebra are \[(t^{a})_{i,i}=-(t^{a})_{i+N,i+N}=\frac{1}{2}\;,\;\;\;\;\;i=1,\ldots N\;, \tag{108}\] with \(a=2N^{2}+1\dots 2N^{2}+N\), the dimension of the group. It is useful to provide an explicit representation for \(2N=4\): \[t_{\text{f}}^{1} =\frac{1}{2\sqrt{2}}\begin{pmatrix}0&1&0&0\\ 1&0&0&0\\ 0&0&0&-1\\ 0&0&-1&0\end{pmatrix}\qquad t_{\text{f}}^{6}=\frac{1}{2}\begin{pmatrix}0&0&0&0 \\ 0&0&0&1\\ 0&0&0&0\\ 0&1&0&0\end{pmatrix}\] \[t_{\text{f}}^{2} =\frac{1}{2\sqrt{2}}\begin{pmatrix}0&i&0&0\\ -i&0&0&0\\ 0&0&0&i\\ 0&0&-i&0\end{pmatrix}\qquad t_{\text{f}}^{7}=\frac{1}{2}\begin{pmatrix}0&0&i&0 \\ 0&0&0&0\\ -i&0&0&0\\ 0&0&0&0\end{pmatrix}\] \[t_{\text{f}}^{3} =\frac{1}{2\sqrt{2}}\begin{pmatrix}0&0&0&1\\ 0&0&1&0\\ 0&1&0&0\\ 1&0&0&0\end{pmatrix}\qquad t_{\text{f}}^{8}=\frac{1}{2}\begin{pmatrix}0&0&0&0 \\ 0&0&0&i\\ 0&0&0&0\\ 0&-i&0&0\end{pmatrix} \tag{103}\] \[t_{\text{f}}^{4} =\frac{1}{2\sqrt{2}}\begin{pmatrix}0&0&0&i\\ 0&0&i&0\\ 0&-i&0&0\\ -i&0&0&0\end{pmatrix}\qquad t_{\text{f}}^{9}=\frac{1}{2}\begin{pmatrix}1&0&0 &0\\ 0&0&0&0\\ 0&0&-1&0\\ 0&0&0&0\end{pmatrix}\] \[t_{\text{f}}^{5} =\frac{1}{2}\begin{pmatrix}0&0&1&0\\ 0&0&0&0\\ 1&0&0&0\\ 0&0&0&0\end{pmatrix}\qquad t_{\text{f}}^{10}=\frac{1}{2}\begin{pmatrix}0&0&0&0 \\ 0&1&0&0\\ 0&0&0&0\\ 0&0&0&-1\end{pmatrix}\,.\]
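As a simple numerical cross-check of the conventions collected in these appendices, one can verify that a candidate generator is hermitian, obeys the algebra condition \(T\Omega=-\Omega T^{*}\) of Eq. (106), and satisfies the normalisation of Eq. (101). The short sketch below (ours, independent of the actual Grid implementation) does this for two of the \(2N=4\) generators written above:

```python
import numpy as np

N = 2  # Sp(4)
omega = np.block([[np.zeros((N, N)), np.eye(N)],
                  [-np.eye(N), np.zeros((N, N))]])

# Two of the Sp(4) generators listed above: t_f^1 and t_f^5.
t1 = np.array([[0, 1, 0, 0], [1, 0, 0, 0],
               [0, 0, 0, -1], [0, 0, -1, 0]]) / (2 * np.sqrt(2))
t5 = np.array([[0, 0, 1, 0], [0, 0, 0, 0],
               [1, 0, 0, 0], [0, 0, 0, 0]]) / 2

for t in (t1, t5):
    assert np.allclose(t, t.conj().T)                 # hermiticity
    assert np.allclose(t @ omega, -omega @ t.conj())  # T Omega = -Omega T*
    assert np.isclose(np.trace(t @ t).real, 0.5)      # Tr(t^a t^a) = 1/2
```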
2304.07652
Learned Interpolation for Better Streaming Quantile Approximation with Worst-Case Guarantees
An $\varepsilon$-approximate quantile sketch over a stream of $n$ inputs approximates the rank of any query point $q$ - that is, the number of input points less than $q$ - up to an additive error of $\varepsilon n$, generally with some probability of at least $1 - 1/\mathrm{poly}(n)$, while consuming $o(n)$ space. While the celebrated KLL sketch of Karnin, Lang, and Liberty achieves a provably optimal quantile approximation algorithm over worst-case streams, the approximations it achieves in practice are often far from optimal. Indeed, the most commonly used technique in practice is Dunning's t-digest, which often achieves much better approximations than KLL on real-world data but is known to have arbitrarily large errors in the worst case. We apply interpolation techniques to the streaming quantiles problem to attempt to achieve better approximations on real-world data sets than KLL while maintaining similar guarantees in the worst case.
Nicholas Schiefer, Justin Y. Chen, Piotr Indyk, Shyam Narayanan, Sandeep Silwal, Tal Wagner
2023-04-15T22:42:35Z
http://arxiv.org/abs/2304.07652v1
# Learned Interpolation for Better Streaming Quantile Approximation with Worst-Case Guarantees ###### Abstract An \(\varepsilon\)-approximate quantile sketch over a stream of \(n\) inputs approximates the rank of any query point \(q\)--that is, the number of input points less than \(q\)--up to an additive error of \(\varepsilon n\), generally with some probability of at least \(1-1/\operatorname{poly}(n)\), while consuming \(o(n)\) space. While the celebrated KLL sketch of Karnin, Lang, and Liberty achieves a provably optimal quantile approximation algorithm over worst-case streams, the approximations it achieves in practice are often far from optimal. Indeed, the most commonly used technique in practice is Dunning's t-digest, which often achieves much better approximations than KLL on real-world data but is known to have arbitrarily large errors in the worst case. We apply interpolation techniques to the streaming quantiles problem to attempt to achieve better approximations on real-world data sets than KLL while maintaining similar guarantees in the worst case. ## 1 Introduction The quantile approximation problem is one of the most fundamental problems in the streaming computational model, and also one of the most important streaming problems in practice. Given a set of items \(x_{1},x_{2},\ldots,x_{n}\) and a query point \(q\), the _rank_ of \(q\), denoted \(R(q)\), is the number of items in \(\{x_{i}\}_{i=1}^{n}\) such that \(x_{i}\leq q\). An \(\varepsilon\)-approximate quantile sketch is a data structure that, given access to a single pass over the stream elements, can approximate the rank of all query points simultaneously with additive error at most \(\varepsilon n\). Given its central importance, the streaming quantiles problem has been studied extensively by both theoreticians and practitioners. Early work by Manku, Rajagopalan, and Lindsay [10] gave a randomized solution that used \(O((1/\varepsilon)\log^{2}(n\varepsilon))\) space; their technique can also be straightforwardly adapted to a deterministic solution that achieves the same bound [14]. Later, Greenwald and Khanna [4] developed a deterministic algorithm that requires only \(O((1/\varepsilon)\log(n\varepsilon))\) space. More recently, Karnin, Lang, and Liberty (KLL) [7] developed the randomized KLL sketch that succeeds at all points with probability \(1-\delta\) and uses \(O((1/\varepsilon)\log\log(1/\delta))\) space and gave a matching lower bound. Meanwhile, streaming quantile estimation is of significant interest to practitioners in databases, computer systems, and data science who have studied the problem as well. Most notably, Dunning [3] introduced the celebrated t-digest, a heuristic quantile estimation technique based on 1-dimensional \(k\)-means clustering that has seen adoption in numerous systems, including Influx, Apache Arrow, and Apache Spark. Although t-digest achieves remarkable accuracy on many real-world data sets, it is known to have arbitrarily bad error in the worst case [2]. To illustrate this core tradeoff, Figure 1 shows the rank function of the books dataset from the SOSD benchmark [8, 11], along with KLL and t-digest approximations that use the same amount of space when the data set is randomly shuffled, and when the same data set is streamed in an adversarial order that we found to induce especially bad performance in t-digest. 
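To make the problem statement concrete, the short sketch below (our own illustration, not code from the paper) computes the exact rank \(R(q)\) on a sorted array and spells out the additive \(\varepsilon n\) tolerance that an \(\varepsilon\)-approximate sketch is allowed:

```python
import numpy as np

def exact_rank(sorted_data, q):
    """R(q): the number of input items x_i <= q, found by binary search."""
    return int(np.searchsorted(sorted_data, q, side="right"))

rng = np.random.default_rng(0)
data = np.sort(rng.integers(0, 10 ** 6, size=100_000))
n, eps = data.size, 0.01

q = 500_000
r = exact_rank(data, q)
# An eps-approximate quantile sketch may answer with any r_hat satisfying
# |r_hat - r| <= eps * n, i.e. within +/- 1000 of the true rank here.
print(r, eps * n)
```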
Recent advances in machine learning have led to the development of _learning-augmented algorithms_ which seek to improve solutions to classical algorithms problems by exploiting empirical properties of the input distribution [12]. Typically, a learning-augmented algorithm retains worst-case guarantees similar to those of classical algorithms while performing better on nicely structured inputs that appear in practical applications. We might hope that a similar technique could be used for quantile estimation. In fact, one of the seminal results in the field studied the related problem of _learned index structures_. An index is a data structure that maps a query point to its rank. Several model families have been tried for this learning problem, including neural networks and the successful recursive model index (RMI) that define a piecewise-linear approximation [9]. Although learned indexes aim to answer rank queries, they do not solve the streaming quantiles estimation problem because they do not operate on the data in a stream. For example, training a neural network or fitting an RMI model requires \(O(n)\) of the elements in the stream to be present in memory simultaneously, or requires multiple passes over the stream.

### Our contributions.

We present an algorithm for the streaming quantiles problem that achieves much lower error on real-world data sets than the KLL sketch while retaining similar worst case guarantees. This algorithm, which we call the linear compactor sketch, uses _linear interpolation_ in place of parts of the KLL sketch. Intuitively, this linear interpolation provides a better approximation to the true cumulative density function when that function is relatively smooth, a common property of CDFs of many real world datasets. On the theoretical side, we prove that the linear compactor sketch achieves similar worst case error to the KLL sketch. That is, the linear compactor sketch computes an \(\varepsilon\)-approximation for the rank of a single item with probability \(1-\delta\) and space \(O((1/\varepsilon)\log^{2}\log(1/\delta))\). This is within a factor that is poly-log-logarithmic (in \(1/\delta\)) of the known lower bounds and the (rather complex) version of the KLL sketch that matches it [7]. Our proof is a relatively straightforward modification of the analysis of the original KLL sketch, due to the general similarity of the algorithms. In fact, we can view our algorithm as exploiting a place in the KLL sketch analysis that left some "slack" in the algorithm design. In our experiments, we demonstrate that the linear compactor sketch achieves significantly lower error than the KLL sketch on a variety of benchmark data sets from the SOSD benchmark library [8, 11] and for a wide variety of input orders that induce bad behaviour in other algorithms like t-digest. In many cases, the linear compactor sketch achieves a space-error tradeoff that is competitive with t-digest, while also retaining worst-case guarantees.

Figure 1: Ground-truth and approximate rank functions for the SOSD books data set, with approximation by both the KLL and t-digest sketches. For the approximations, the data were presented in both randomly shuffled (top) and adversarial (bottom) order. In the adversarial case that we discovered, t-digest does much worse than KLL, demonstrating the value of worst-case correctness.

## 2 Understanding the KLL sketch

The complete KLL sketch that achieves optimal space complexity is complex: it involves several different data structures, including a Greenwald-Khanna (GK) sketch that replaces the top \(O(\log\log(1/\delta))\) compactors. Here, we present a simpler version of the KLL sketch that uses \(O((1/\varepsilon)\log^{2}\log(1/\delta))\) space--just a factor of \(O(\log\log(1/\delta))\) away from optimal--and is commonly
Here, we present a simpler version of the KLL sketch that uses \(O((1/\varepsilon)\log^{2}\log(1/\delta))\) space--just a factor of \(O(\log\log(1/\delta))\) away from optimal--and is commonly Figure 1: Ground-truth and approximate rank functions for the SOSD books data set, with approximation by both the KLL and t-digest sketches. For the approximations, the data were presented in both randomly shuffled (top) and adversarial (bottom) order. In the adversarial case that we discovered, t-digest does much worse than KLL, demonstrating the value of worst-case correctness. implemented in practice [5], presented in Theorem 4 of [7]. In the remainder of this paper, we refer to this sketch as the _non-GK KLL sketch_. ### The non-GK KLL sketch The basic KLL sketch is composed of a hierarchy of _compactors_. Each of the \(H\) compactors has a capacity \(k\), which defines the number of items that it can store. Each item is also associated with a (possibly implicit) _weight_ which represents the number of points from the input stream that it represents in the sketch. All points in the same compactor have the same weight. When a compactor reaches its capacity, it is compacted. A compaction begins by sorting the items. Then, either the even or odd elements in the compactor are chosen, and the unchosen items are discarded. The choice to discard the even or odd items is made with equal probability. The chosen items are then placed into the next compactor in the hierarchy and the points are all assigned a weight twice what they began with. This general setup is common to many streaming quantiles sketches [10, 7]. To predict the rank of a query point \(q\), we return the sum of the weights of all points, in all compactors, that are at most \(q\). A key contribution of the KLL sketch is to use different capacities for different compactors. We say that the first compactor where points arrive from the stream has a height of \(0\), and each successive compactor has a height one higher than the compactor below it, so that the top compactor has height \(H-1\). In KLL, the compactor at height \(h\) has capacity \(k_{h}=\max(kc^{H-h},2)\), where \(k\) is a space parameter that defines the capacity of the highest compactor and \(c\) is a scale parameter that is generally set as \(c=2/3\). ### Analysis of the non-GK KLL sketch Here, we give a somewhat simplified--to focus on the essential details--version of the analysis of the non-GK KLL sketch. Consider the non-GK KLL sketch described above that terminates with \(H\) different compactors. The weight of the items at height \(h\) is \(w_{h}=2^{h}\). Let \(m_{h}\) be the number of compaction operations in the compactor at height \(h\). Consider a single compaction operation in the compactor at height \(h\) and a point \(x\) in that compactor at that time. If \(x\) was one of the even elements in the compactor, the total weight to the left of it, which defines its rank, is unchanged by the compaction. If \(x\) is one of the odd elements in the compactor, the total weight either increases by \(w_{h}\) (if the odd items are chosen) or decreases by \(w_{h}\) (if the even items are chosen). For the \(i\)th compaction operation at level \(h\), let \(X_{i,h}\) be \(1\) if the odd items were chosen and \(-1\) if the even items were chosen. Observe that \(\mathrm{E}[X_{i,h}]=0\) and \(|X_{i,h}|\leq 1\). Then the total error introduced by all compactors at level \(h\) is \(\sum_{i=1}^{m_{h}}w_{h}X_{i,h}\). Consider any point \(x\) in the stream. 
The error in \(R(x)\) introduced by compaction at all levels up to a fixed level \(H^{\prime}\) is therefore \(\sum_{h=0}^{H^{\prime}-1}\sum_{i=1}^{m_{h}}w_{h}X_{i,h}\). Applying a two-tailed Hoeffding bound to this error, we obtain that \[\Pr[\text{error is}>\varepsilon n]=\Pr\left[\left|\sum_{h=0}^{H^{\prime}-1}\sum_{i=1}^{m_{h}}w_{h}X_{i,h}\right|>\varepsilon n\right]\leq 2\exp\left(-\frac{\varepsilon^{2}n^{2}}{2\sum_{h=0}^{H^{\prime}-1}\sum_{i=1}^{m_{h}}w_{h}^{2}}\right).\] This addresses the error introduced by all layers up to \(H^{\prime}\). Notice that if we set \(H^{\prime}=H\), then the error bound is dominated by the weight terms from the highest compactors. To get around this, the non-GK KLL sketch sets the capacity of the final \(s=O(\log\log(1/\delta))\) compactors to a fixed value \(k\) and analyzes them separately: each such compactor is assumed to contribute its worst possible error of \(w_{h}\) for each compaction. This is the key lemma in the KLL analysis and the point of departure for the linear compactor sketch.

## 3 The linear compactor sketch

We propose a streaming quantile approximation algorithm that combines our empirical and theoretical observations about how KLL might be improved. We leave the basic architecture of the non-GK KLL sketch unchanged. Like the optimal KLL sketch, which replaces the top \(O(\log\log(1/\delta))\) compactors with a Greenwald-Khanna sketch, we replace some of these top compactors with another data structure. In our case, we replace the top \(t=O(1)\) compactors with a structure that we call a _linear compactor_.

**Linear compactors.** A _linear compactor_ is a sorted list of elements, each of which is a pair of an item from the stream and a weight. As in KLL, the weight represents the number of stream items that the item represents; unlike in KLL, this weight varies between elements in the list and may be an arbitrary floating point number, rather than a power of two. Like a KLL compactor, a linear compactor has a capacity which we fix to \(tk\), the total capacity of the (fixed-size) compactors it replaces. When that capacity is exceeded, it undergoes _compaction_ and only half of its elements are retained. A KLL compactor \(C_{h}\) at height \(h\) implicitly represents a piecewise-constant function \(f\): specifically, \[f(q)=\sum_{x\in C_{h}:x\leq q}w_{h}.\] This function is the contribution of this compactor to the approximated rank of a query point \(q\). A linear compactor implicitly represents a _piecewise-linear_ function which also contributes to the rank of \(q\). Given a linear compactor \(L=\{(y_{1},w_{1}),(y_{2},w_{2}),\ldots,(y_{k},w_{k})\}\) with \(y_{1}\leq y_{2}\leq\cdots\leq y_{k}\), the contribution of \(L\) to the rank of \(q\) is \[f_{L}(q)=\underbrace{\sum_{i=1}^{i^{*}-1}w_{i}}_{\text{KLL-style term}}\quad+\quad\underbrace{w_{i^{*}}\frac{q-y_{i^{*}-1}}{y_{i^{*}}-y_{i^{*}-1}}}_{\text{interpolation term}} \tag{1}\] where \(i^{*}\) is the smallest index such that \(y_{i^{*}}>q\). In effect, we spread the weight of \(y_{i^{*}}\) over the entire interval between \(y_{i^{*}-1}\) and \(y_{i^{*}}\), with uniform density, rather than treating it as a point mass at \(y_{i^{*}}\) exactly. The resulting contribution \(f_{L}(q)\) is a monotone, piecewise-linear function, as desired.

**Adding points to a linear compactor.** Our linear compactor receives points from the last of the KLL-style compactors, each with a fixed weight of \(w_{H-t-1}\).
These points and weights cannot be merged by merely concatenating the arrays. To see this, consider adding a single point \(b\) with unit weight to a compactor with two points \(a\) and \(c\) with \(a<b<c\), and where \(c\) has weight \(w\). The weight of \(c\) after the addition should not be \(w\), since the weight of \(c\) before the addition should be spread uniformly over the entire interval \([a,c]\). Instead, we add a set of new points \(y_{1}<y_{2}<\cdots<y_{m}\) to an existing set of points \(x_{1}<x_{2}<\cdots<x_{n}\) by merging the two lists of points into one list and sorting them into the list \(z_{1}<z_{2}<\cdots<z_{m+n}\). Next, we set \(w(z_{1})\) equal to the weight of \(z_{1}\) in the original list and compute the new weights recursively. Assuming that \(z_{i}=x_{i}\) without loss of generality, we set \[w(z_{i})=w(x_{i})\frac{x_{i}-z_{i-1}}{x_{i}-x_{i-1}}+w(y_{*})\frac{x_{i}-z_{i-1}}{y_{*}-y_{*-1}}\] where \(y_{*}\) is the first \(y_{i}\) such that \(y_{i}>z_{i}\). Equivalently, we convert each of the weight functions into a rank function using Equation 1, sum those, and then compute the finite differences to obtain the final weight function.

**Compacting a linear compactor.** Lastly, we describe the process for compacting a linear compactor. Given a parameter \(\alpha\in[0,1]\) and a linear compactor \(C\) containing \(n\) points, we wish to obtain a new linear compactor \(C^{\prime}\) with \(\alpha n\) points with the following properties:

* The points in \(C^{\prime}\) are a subset of the points in \(C\).
* The total weight of the points in both compactors is the same, so that \(\sum_{x\in C}w(x)=\sum_{x^{\prime}\in C^{\prime}}w(x^{\prime})\).
* For every point \(x\in C^{\prime}\), the rank \(f_{C}(x)=f_{C^{\prime}}(x)\).
* The "error" introduced by the compaction is as small as possible. That is, for some loss function \(L\), we would like \(\sum_{x\in C}L(f_{C^{\prime}}(x),f_{C}(x))\) to be as small as possible.

In this paper, we use \(\alpha=1/2\), although in principle other values could be used. It is important that this procedure can be completed efficiently. In our experiments, we primarily use supremum (\(\ell_{\infty}\)) loss \(L(x,x^{\prime})=\sup_{x}|x-x^{\prime}|\). This can be minimized using a dynamic programming technique introduced by [6].

## 4 Analysis

We give a worst-case analysis of our algorithm that matches the worst-case analysis for the version of the non-GK KLL sketch:

**Theorem 4.1**: _The linear compactor sketch described in Section 3 computes an \(\varepsilon\)-approximation for the rank of a single item with probability \(1-\delta\) with space complexity \(O((1/\varepsilon)\log^{2}\log(1/\delta))\)._

Our analysis bounds the error introduced by each compactor, using two techniques. To analyze the error of the KLL-style compactors of the linear compactor sketch, we prove that they introduce precisely the same error as they would in a non-GK KLL sketch run on the same stream. We then apply the two-part analysis of the non-GK KLL sketch, analyzing the first \(H-s\) compactors and the \((H-s)\)th through \((H-s+t)\)th compactors separately. To analyze the error of the linear compactor at the top, we analyze the error introduced per compaction. We then analyze the number of compactions of the linear compactor and therefore the total error introduced by the linear compactor. Consider a stream \(X=x_{1},x_{2},\ldots,x_{n}\).
Let \(S(X)\) be a non-GK KLL sketch computed on this stream that terminates with \(H\) compactors and let \(S_{b}(X)\) be the \(b\)th compactor of \(S(X)\). Similarly, let \(S^{\prime}(X)\) be a linear compactor sketch computed on this stream with \(H-t\) levels of KLL-style compactors and one linear compactor at level \(H-t+1\). Let \(S^{\prime}_{b}(X)\) be the \(b\)th compactor of \(S^{\prime}(X)\). Following [7], let \(R(S,x,h)\) be the rank of item \(x\) among all points in compactors in the sketch \(S\) at heights \(h^{\prime}\leq h\) at the end of the stream. For convenience, we set \(R(x,0)\) to be the true rank of \(x\) in the input stream. Let \(\operatorname{err}(S,x,h)=R(S,x,h)-R(S,x,h-1)\) be the total change in the approximate rank of \(x\) due to the compactor at level \(h\). The total error decomposes into this error per compactor as \(\sup_{x}|R(x,0)-S^{\prime}(x)|=\sum_{h=1}^{H}\operatorname{err}(S^{\prime},x,h)\).

#### Analyzing the KLL compactors

In both \(S\) and \(S^{\prime}\), stream elements only move from lower compactors to higher ones, and the compactor at level \(b\) at any point while processing the stream is defined entirely by the compactors at _lower_ levels up to that point. Therefore, for all \(b<H-t\), \(S^{\prime}_{b}(X)=S_{b}(X)\). In a KLL sketch, the lowest compactors all have a capacity of exactly 2. As the authors note, a sequence of \(H^{\prime\prime}\) compactors that all have capacity 2 is essentially a sampler: out of every \(2^{H^{\prime\prime}}\) elements they select one uniformly and output it with weight \(2^{H^{\prime\prime}}\). This means that these compactors--in both KLL and the linear compactor sketch--can be implemented in \(O(1)\) space. To handle the other KLL compactors, we use a theorem from [7] as a key lemma:

**Theorem 4.2**: (Theorem 3 in [7]) _Consider the non-GK KLL sketch \(S(X)\) with height \(H\), and where the compactor at level \(h\) has capacity \(k_{h}\geq kc^{H-h}\). Let \(H^{\prime\prime}\) be the height at which the compactors have size greater than 2 (i.e., where the compactors do not just perform sampling). For any \(H^{\prime}>H^{\prime\prime}\), we have_ \[\Pr\left[\sum_{h=1}^{H}\operatorname{err}(S,x,h)>2\varepsilon n\right]\leq 2\exp\left(-c\varepsilon^{2}k2^{H-H^{\prime\prime}}/32\right)+2\exp\left(-C\varepsilon^{2}k^{2}2^{2(H-H^{\prime})}\right).\]

#### Analyzing the linear compactor

As mentioned, we will analyze the error introduced by the linear compactor compaction-by-compaction. Specifically, we analyze the linear compactor sketch between the end of one compaction and the end of the following compaction. During this interval, a total of \(d\) items of weight \(2^{H-t}\) are added to the linear compactor, where either \(d=tk\) if the linear compactor has never compacted or \(d=tk/2\) if it has. Let \(f\) be the piecewise linear rank function of the full linear compactor right before the compaction with endpoint set \(Z\) comprising \(z_{1}<z_{2}<\dots<z_{tk}\) and weight function \(w\). Let \(f^{\prime}\) be the piecewise linear rank function of the linear compactor immediately after the compaction, with endpoint set \(Z^{\prime}\subset Z\), weight function \(w^{\prime}\), and \(|Z^{\prime}|=|Z|/2\). The linear compactor compaction procedure removes some of the items in the linear compactor. A _run_ is a sequence of removed elements that are adjacent in sorted order. We show that the error introduced by a linear compactor is bounded by the greatest run of displaced weight.
**Lemma 4.1**: _Organize \(Z\setminus Z^{\prime}\) into continuous runs of adjacent removed elements, and let \(F_{i}\) be the total weight of the \(i\)th run. Then \(\sup_{z\in Z}|f(z)-f^{\prime}(z)|\leq\max_{i}F_{i}\)._

Fix a run with endpoints \(z_{a}\) and \(z_{b}\) and let its total weight be \(F=\sum_{i=a+1}^{b-1}w(z_{i})\). Consider any point \(z_{j}\) in that run, so that \(a<j<b\). Its original rank was \(f(z_{j})=\sum_{i=1}^{j}w(z_{i})\) while its new rank is, by construction, \(f^{\prime}(z_{j})=\sum_{i=1}^{a}w(z_{i})+\frac{F+w(z_{b})}{z_{b}-z_{a}}(z_{j}-z_{a})\). Therefore, \[|f(z_{j})-f^{\prime}(z_{j})|=\left|\sum_{i=a+1}^{j}w(z_{i})-\frac{F+w(z_{b})}{z_{b}-z_{a}}(z_{j}-z_{a})\right|=\left|\sum_{i=a+1}^{j}w(z_{i})-\sum_{i=a+1}^{b}w(z_{i})\frac{z_{j}-z_{a}}{z_{b}-z_{a}}\right|\leq\sum_{i=a+1}^{b-1}w(z_{i})=F.\qquad\square\]

Next, we show that the greatest error introduced by a linear compaction step occurs _at one of the discarded endpoints_:

**Lemma 4.2**: _There is some \(z_{i}\in Z\setminus Z^{\prime}\) such that \(\sup_{x\in[z_{1},z_{tk}]}|f(x)-f^{\prime}(x)|=|f(z_{i})-f^{\prime}(z_{i})|\)._

Consider any point \(x\in[z_{1},z_{tk}]\). If \(x\) is one of the endpoints retained after compaction, \(z_{j}\in Z^{\prime}\), then by construction \(f(x)=\sum_{i\leq j}w(z_{i})=f^{\prime}(x)\). Our claim does not depend on the error if \(x\) is one of the endpoints in \(Z\setminus Z^{\prime}\). Suppose then that \(x\) is not in the original endpoint set \(Z\). Let \(z_{a}\) and \(z_{b}\) be the left and right neighbours of \(x\) in \(Z\). By the definition of the linear compactor, \[f(z_{a})=\sum_{i=1}^{a}w(z_{i}),\qquad f(x)=\sum_{i=1}^{a}w(z_{i})+\frac{w(z_{b})}{z_{b}-z_{a}}(x-z_{a}),\qquad f(z_{b})=\sum_{i=1}^{b}w(z_{i}).\] Let \(z_{a^{\prime}}\) and \(z_{b^{\prime}}\) be the left and right neighbours of \(x\) in \(Z^{\prime}\). By definition, the weight \(W:=w^{\prime}(z_{b^{\prime}})=\sum_{i=a^{\prime}+1}^{b^{\prime}}w(z_{i})\) and so we have \[f^{\prime}(x)=\sum_{i=1}^{a^{\prime}}w(z_{i})+\frac{W}{z_{b^{\prime}}-z_{a^{\prime}}}(x-z_{a^{\prime}}).\] Therefore, \[f(x)-f^{\prime}(x)=\sum_{i=a^{\prime}+1}^{a}w(z_{i})+\left(\frac{w(z_{b})}{z_{b}-z_{a}}-\frac{W}{z_{b^{\prime}}-z_{a^{\prime}}}\right)(x-z_{a^{\prime}}).\] Observe that this expression obtains its extremum on the interval \([z_{a},z_{b}]\subset[z_{a^{\prime}},z_{b^{\prime}}]\) at either \(z_{a}\) or \(z_{b}\), depending on the sign of \(D=\frac{w(z_{b})}{z_{b}-z_{a}}-\frac{W}{z_{b^{\prime}}-z_{a^{\prime}}}\). In either case, \(|f(x)-f^{\prime}(x)|\) achieves its maximum at one of the endpoints \(z_{a}\) or \(z_{b}\), completing the proof.

We use a simple counting argument to bound the size of the majority of the weights in a linear compactor:

**Lemma 4.3**: _Consider a linear compactor that has just completed its \(c\)th compaction. At least half of the endpoints \(Z\) in the linear compactor have weight at most \((2c+3)2^{H-t}\)._

Every point enters the linear compactor with weight \(2^{H-t}\). After \(c\) compactions, a total of \((2+c)tk/2\) such points have entered the compactor. A compaction operation conserves the total weight of points so the total weight of the compactor is \((2+c)2^{H-t}tk/2\). Suppose that more than half of the \(tk\) points currently in the compactor have weight at most \(T\).
These points have a total weight greater than \(Ttk/4\) while the remaining points each have weight at least \(2^{H-t}\) and so have total weight at least \(2^{H-t}tk/4\). The total weight is therefore \((T+2^{H-t})tk/4\). This weight must not exceed the total conserved weight \((2+c)2^{H-t}tk/2\), and so we have \[\frac{(T+2^{H-t})tk}{4}\leq\frac{(2+c)2^{H-t}tk}{2}.\] Rearranging, we obtain that our result holds for any \(T\leq(2c+3)2^{H-t}\).

Combining these lemmas, we obtain a bound on the error introduced during a single compaction step.

**Theorem 4.3**: _Suppose that the compaction being studied is the \((c+1)\)th compaction. The error introduced during this compaction step is \(\sup_{x}|f(x)-f^{\prime}(x)|\leq(c+2)2^{H-t+1}\)._

We construct a particular post-compaction distribution of weights as follows. Let \(f^{\prime\prime}\) be the rank function for that post-compaction state. During this interval, there were \(tk/2\) points with weight \(2^{H-t}\) that we added to the linear compactor for the first time. In addition, there were \(tk/2\) points remaining from a previous linear compaction. We sort the \(tk/2\) new points and keep every fourth point, discarding the rest and reallocating their weight to the next highest retained point (of either type). By Lemma 4.3, there exist at least \(tk/4\) of the existing points in the linear compactor with weight at most \(2^{H-t+c}\). We sort these points and discard every other point. In total, we discard the required \(tk/2\) points. Observe that the longest possible run in this compaction consists of one of the existing points and three (out of a sequence of four) of the new points that were discarded. By Lemma 4.1, the error introduced on any of the original endpoints by this compaction is bounded by the sum of the weights of the points in the run: in this case, that sum is \(2^{H-t}+3\cdot(2c+3)2^{H-t}\leq(c+2)2^{H-t+1}\). By Lemma 4.2, we find that the error introduced by \(f^{\prime\prime}\) is \(\sup_{x}|f(x)-f^{\prime\prime}(x)|\leq(c+2)2^{H-t+1}\). We have exhibited a particular feasible solution to the optimization problem in the linear compaction. Our actual algorithm finds, among all such feasible solutions, the one that minimizes this error function; it follows that \[\sup_{x}|f(x)-f^{\prime}(x)|\leq\sup_{x}|f(x)-f^{\prime\prime}(x)|\leq(c+2)2^{H-t+1}.\qquad\square\]

**Combining KLL and linear compactors.** Lastly, we combine our analysis of the KLL and linear compactor to obtain an overall error bound and prove Theorem 4.1. Our analysis closely follows the form of the proof of Theorem 4 in [7].

[Proof of Theorem 4.1] First, we analyze the compactors with height at most \(H-t\), including the sampling compactors. These are all KLL-style compactors; by Theorem 4.2 these compactors will contribute error at most \(\varepsilon n\) with probability \(1-\delta\) so long as \(\varepsilon k2^{s}\geq c^{\prime}\sqrt{\log(2/\delta)}\) for a sufficiently small \(c^{\prime}\). Second, we analyze the top \(s-t\) compactors. The error introduced by these compactors is bounded by the error of the equivalent non-GK KLL sketch where we have a full \(s\) equal-size compactors at the top.
This error is in turn bounded by \(\sum_{h=H-s+1}^{H}m_{h}w_{h}=\sum_{h=H-s+1}^{H}n/k=sn/k\), where \(m_{h}\) is the number of times that the KLL compactor at level \(h\) is compacted and \(w_{h}=2^{h}\) is the weight associated with that compactor; this is at most \(\varepsilon n\) so long as \(s\leq k\varepsilon\). Taking \(k=O((1/\varepsilon)\log\log(1/\delta))\) and \(s=O(\log\log(1/\delta))\) as in KLL, we satisfy both of these conditions. Lastly, we analyze the single linear compactor with size \(tk\) that replaces the top \(t<s\) KLL compactors. Let \(M\) be the number of compactions of the linear compactor. Observe that between each compaction of the linear compactor we add \(tk/2\) entries, each with weight \(2^{H-t}\), to the compactor, and so \(M\leq 2n/(tk2^{H-t})\). Applying Theorem 4.3, and summing the error introduced per compaction, the total error is \[\sum_{c=1}^{M}(c+2)2^{H-t+1}=M2^{H-t}+2^{H-t}M(M+1)\leq 2^{H-t+1}M^{2}\leq\frac{8n^{2}}{t^{2}k^{2}2^{H-t}}.\] Our compactors are sized at each level in the same way as a non-GK KLL-sketch. As in the KLL analysis, we have \(H\leq\log(n/ck)+2\) for a constant \(0<c<1\). Therefore, our error is bounded by \[\frac{8n^{2}}{t^{2}k^{2}2^{\log(n/ck)+2-t}}\leq\frac{8ckn^{2}}{t^{2}k^{2}n2^{2-t}}=\frac{cn2^{t+1}}{t^{2}k}.\] For constant \(t\) and any \(k=O((1/\varepsilon)\log\log(1/\delta))\) as in KLL, this is at most \(\varepsilon n\). Therefore, the total error of the sketch is \(O(\varepsilon n)\) as required. Each part of the sketch contributes some space. The KLL compactors increase geometrically in size, so the space used by the KLL portion of the sketch is dominated by the top \(s-t\) compactors and uses \(O(sk)=O((1/\varepsilon)\log^{2}\log(1/\delta))\) space. The linear compactor sketch uses twice as much space per element as a KLL compactor, for a total of \(O(tk)=O(k)\) space, so the total space usage is \(O((1/\varepsilon)\log^{2}\log(1/\delta))\).

## 5 Experiments

We wrote a performant implementation of our algorithm and evaluated its empirical error over a wide range of space parameters \(k\) and several linear compactor heights \(t\). Our experiments were conducted on the recent SOSD benchmarking suite [8, 11] for learned index structures. Each SOSD benchmark consists of a large number (generally 200 to 800 million) of 64-bit unsigned integer values. Of particular interest were the books, osm_cellid, and wiki_ts data sets, since the rank functions of these three data sets have distinctly different shapes, as shown in Figure 2.

**Parameterization.** The algorithm is parameterized by the KLL space parameter \(k\), which determines the size of the largest compactors and the linear compactor, and \(t\), the number of KLL compactors that are replaced by the linear compactor. Our worst-case bound holds for any constant \(t\) but this bound is exponential in \(t\). In practice, we experimented with a variety of small but non-zero values (\(t=1,2,3\)). We see \(t\) as a parameter that is tunable based on the desired empirical performance and desired worst-case guarantees and expect that it will be selected appropriately on an application-by-application basis.

**Implementation details.** We implemented our algorithms in C++ with Python bindings for experiment management and data analysis. Our implementation is reasonably performant: in informal experiments, it achieves a throughput that is only about three times less than that of highly-optimized, production-quality KLL implementations.
This performant implementation allowed us to work with the entirety of the SOSD data sets; in our preliminary work, we found that many promising algorithms would only show improvements over KLL on moderately-sized data sets of less than a million points. Our implementation supports any integer \(t\geq 0\): when \(t=0\), our implementation is identical to the commonly implemented variant of KLL without the Greenwald-Khanna sketch. **Baselines.** Our algorithm is most naturally compared to KLL since the KLL sketch can be seen as an instance of the linear compactor sketch with no linear compactor. We ran our experiments on our implementation of (non-GK) KLL (by setting \(t=0\)) and validated those results with an open-source implementation from Facebook's Folly library [1]. Figure 2: The rank functions for the three SOSD data sets used in our experiments. The three data sets have rank functions with distinctive shapes, allowing us to compare the algorithms in a variety of settings. Like most implemented versions of the KLL sketch, neither of these includes the final Greenwald-Khanna sketch that is required to achieve space-optimality. In addition to the non-GK KLL sketch, which offers worst-case guarantees, we ran experiments on the t-digest [3], which is commonly used in practice but is known to have arbitrarily bad worst-case performance [2]. We used the C++ implementation of t-digest in the digestible library [13]. **Stream order.** We found that many streaming quantile approximation algorithms without worst-case guarantees achieve very low error compared to the KLL sketch if they are given an input stream in a particular order but high error on other input orders. For example, Figure 1 shows that, even for a fixed set of inputs with a smooth rank function (books), there exists an adversarial order that makes the t-digest approximation have high error. This observation might be of independent interest. We evaluated the linear compactor sketch and the baselines on a variety of input orders for each data set: * Random: the data are shuffled with a fixed seed. * Sorted: the data are presented in a sorted order. * First half sorted, second half reverse-sorted: the first half of the stream has the first half of the sorted data, in that order. The second half of the stream has the second half of the sorted data presented in _reverse-sorted order_. * Flip flop: the stream has the smallest element, then the largest element, then the second-smallest element, then the second-largest element, and so on. This is the adversarial order from Figure 1. (A short code sketch of these four orderings is given below.) ### Experiment results and discussion. Our primary tool for insight into our experiments is the space-error tradeoff curve that shows how the total space needed for the sketch compares to the empirical error between the exact rank function and the approximation defined by the sketch. We obtain these curves for three different data sets from SOSD, four different sort orders, and four different algorithms; these curves are shown in Figure 4. We use average L1 error, defined for a data set \(X\) as \(\sum_{x\in X}|f(x)-f^{\prime}(x)|\). Qualitatively, the linear compactor sketch is never significantly worse than KLL, even on our adversarial input orders like flip flop, and is often competitive with--or even better than--t-digest. The differences are most pronounced on the books dataset, which has a smooth CDF that is extremely well-approximated by the linear compactor sketch's piecewise linear representation. 
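To make the evaluation protocol concrete, the following minimal Python sketch reproduces the four stream orderings listed above and the L1 rank-error metric used in our plots. The function names, the fixed seed, and the normalization of the error by the data set size are our own illustrative choices, not details taken from the benchmark harness.

```python
import random

def make_stream(data, order, seed=0):
    """Arrange a data set into one of the four stream orders used in the experiments."""
    xs = sorted(data)
    if order == "random":
        shuffled = list(data)
        random.Random(seed).shuffle(shuffled)   # shuffled with a fixed seed
        return shuffled
    if order == "sorted":
        return xs
    if order == "half_sorted_half_reversed":
        half = len(xs) // 2                     # first half sorted, second half reverse-sorted
        return xs[:half] + xs[half:][::-1]
    if order == "flip_flop":
        out, lo, hi = [], 0, len(xs) - 1        # smallest, largest, second-smallest, ...
        while lo <= hi:
            out.append(xs[lo]); lo += 1
            if lo <= hi:
                out.append(xs[hi]); hi -= 1
        return out
    raise ValueError(f"unknown order: {order}")

def average_l1_rank_error(data, approx_rank):
    """Average L1 distance between the exact rank function and an approximation.

    `approx_rank` is any callable mapping a key to its estimated rank in `data`.
    """
    xs = sorted(data)
    total = sum(abs(rank - approx_rank(x)) for rank, x in enumerate(xs))
    return total / len(xs)
```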
For a more quantitative understanding of the performance of the linear compactor sketch compared to KLL and t-digest, we produced diagrams of the "possible error ratio hulls", shown in Figure 5. To obtain such a hull, we first determined the upper and lower frontiers for the data points (in Figure 4) for each algorithm. These frontiers form an "envelope" or hull that encompasses all of the points for each dataset: an example of such a hull is shown in Figure 3. We then interpolate the envelope to obtain smooth curves (as in Figure 3) and compute the ratios with respect to another algorithm's envelope (between the upper/lower and lower/upper pairs), producing a hull that shows the range of behaviour between the "worst case" and "best case" performance of the two algorithms (a small code sketch of this ratio computation appears below, after the figure captions). We see that the linear compactor sketch achieves an error that is between \(3\times\) worse and \(10\times\) better than KLL and between \(10\times\) worse and \(20\times\) better than t-digest. ### Acknowledgements Justin Y. Chen was supported by a MathWorks Engineering Fellowship, a GIST-MIT Research Collaboration grant, and NSF award CCF-2006798. Justin Y. Chen, Shyam Narayanan, and Sandeep Silwal were supported by NSF Graduate Research Fellowships under Grant No. 1745302. Nicholas Schiefer, Justin Y. Chen, Piotr Indyk, Shyam Narayanan, and Sandeep Silwal were supported by a Simons Investigator Award. Piotr Indyk was supported by the NSF TRIPODS program (award DMS-2022448). We thank Sylvia Hurlimann, Jessica Balik, and the anonymous reviewers for their helpful suggestions. Figure 3: An example of the “envelope” or hull of the space-error data points for the linear compactor sketch with \(t=2\) (books dataset, random order). It shows the range of errors that a user can expect from the linear compactor sketch given a certain amount of space. Figure 4: Space-error tradeoff curves for the baselines and linear compactor sketch on three different data sets from SOSD and four different sort orders, described above. Markers indicate individual sketches, while curves indicate the lower frontier of possibilities observed (that is, the lower envelope described above) to highlight the general capabilities of each algorithm. A better algorithm has a curve that is further down and to the left, indicating lower error at a given amount of space. Figure 5: Hulls representing the possible ratios of the error achieved by a reference algorithm (either KLL or t-digest) to the error achieved by the linear compactor sketch with \(t=2\). The filled-in area represents the ratios consistent with the experiments in Figure 4. We see that the linear compactor sketch always achieves error no worse than \(3\times\) that of KLL, while often achieving an error that is competitive with—and sometimes much lower than—that achieved by the t-digest.
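For completeness, the ratio-hull construction described above can be sketched as follows. This is a minimal numpy illustration under the assumption that each algorithm's lower and upper error envelopes have already been extracted as arrays sampled at increasing space values; the grid resolution and the use of linear interpolation are our own choices rather than the exact procedure used for Figure 5.

```python
import numpy as np

def ratio_hull(space_ref, lo_ref, up_ref, space_ours, lo_ours, up_ours, n_grid=200):
    """Best/worst-case ratios of a reference algorithm's error to ours across space budgets.

    Each algorithm is described by its lower and upper error envelopes (lo_*, up_*),
    sampled at increasing space values (space_*). The envelopes are interpolated onto
    a common grid, and the hull is formed from the upper/lower and lower/upper pairs.
    """
    grid = np.linspace(max(space_ref[0], space_ours[0]),
                       min(space_ref[-1], space_ours[-1]), n_grid)
    lo_r, up_r = np.interp(grid, space_ref, lo_ref), np.interp(grid, space_ref, up_ref)
    lo_o, up_o = np.interp(grid, space_ours, lo_ours), np.interp(grid, space_ours, up_ours)
    best_case = up_r / lo_o     # reference at its worst vs. our sketch at its best
    worst_case = lo_r / up_o    # reference at its best vs. our sketch at its worst
    return grid, worst_case, best_case
```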
2305.08648
A generalization of Kummer theory to Hopf-Galois extensions
We introduce a condition for Hopf-Galois extensions that generalizes the notion of Kummer Galois extension. Namely, an $H$-Galois extension $L/K$ is $H$-Kummer if $L$ can be generated by adjoining to $K$ a finite set $S$ of eigenvectors for the action of the Hopf algebra $H$ on $L$. This extends the classical Kummer condition for the classical Galois structure. With this new perspective, we shall characterize a class of $H$-Kummer extensions $L/K$ as radical extensions that are linearly disjoint with the $n$-th cyclotomic extension of $K$. This result generalizes the description of Kummer Galois extensions as radical extensions of a field containing the $n$-th roots of the unity. The main tool is the construction of a product Hopf-Galois structure on the compositum of almost classically Galois extensions $L_1/K$, $L_2/K$ such that $L_1\cap M_2=L_2\cap M_1=K$, where $M_i$ is a field such that $L_iM_i=\widetilde{L}_i$, the normal closure of $L_i/K$. When $L/K$ is an extension of number or $p$-adic fields, we shall derive criteria on the freeness of the ring of integers $\mathcal{O}_L$ over its associated order in an almost classically Galois structure on $L/K$.
Daniel Gil-Muñoz
2023-05-15T13:48:33Z
http://arxiv.org/abs/2305.08648v2
# A generalization of Kummer theory to Hopf-Galois extensions ###### Abstract We introduce a Kummer condition for Hopf-Galois extensions that generalizes the notion of Kummer Galois extension, depending on the elements that are eigenvectors for all elements in a Hopf-Galois structure. We use this new perspective to characterize radical extensions \(L=K(\sqrt[n]{a_{1}},\ldots,\sqrt[n]{a_{k}})\) of a field \(K\) that are \(K\)-linearly disjoint with \(K(\zeta_{n})\), where \(\zeta_{n}\) is a primitive \(n\)-th root of unity. The main tool is the construction of a product Hopf-Galois structure on the compositum of almost classically Galois extensions \(L_{1}/K\), \(L_{2}/K\) under fairly general restrictions from Hopf-Galois structures on those. When \(L/K\) is an extension of number or \(p\)-adic fields, we shall derive criteria on the freeness of \(\mathcal{O}_{L}\) as a module over its associated order in an almost classically Galois structure on \(L/K\). _MSC--_ 12F10, 11Z05, 16T05, 11R04, 11R18, 11S15 _Keywords--_ Kummer extension, Hopf-Galois structure, \(H\)-eigenvector ## 1 Introduction Let \(n\) be a positive integer and let \(K\) be a field whose characteristic is coprime to \(n\). Adjoining to \(K\) elements \(\alpha_{1},\ldots,\alpha_{k}\in\overline{K}\) such that \(\alpha_{i}^{n}\in K\) defines what is usually called a radical extension \(L=K(\alpha_{1},\ldots,\alpha_{k})\) of \(K\) (if \(n\) is minimal for this property, we will say that \(L/K\) is \(n\)-radical). If \(K\) contains the \(n\)-th roots of unity, \(L/K\) is a Galois field extension whose Galois group \(G\) is abelian of exponent dividing \(n\). Kummer proved that actually any such extension arises from the adjunction of \(n\)-th roots of elements in the ground field. These extensions were therefore called Kummer extensions, and the elements \(\alpha_{i}\) are usually referred to as Kummer generators. Among the radical extensions, the ones obtained by adjoining to \(K\) a single \(n\)-th root of an element in \(K\) will be called simple radical. As Kummer extensions, they are just the cyclic extensions of \(K\). A typical limitation to classical Kummer theory is the requirement that the ground field contains a primitive \(n\)-th root of unity. For instance, this implies that the only Kummer extensions of \(\mathbb{Q}\) are quadratic. Some authors have considered generalizations of this theory to further classes of extensions [35, 22, 28, 32]. In this paper we will develop a Kummer theory for the extensions \(L=K(\alpha_{1},\ldots,\alpha_{k})\) with \(\alpha_{i}^{n}\in K\) such that \(L\cap K(\zeta_{n})=K\), where \(\zeta_{n}\) is a primitive \(n\)-th root of unity. Under this assumption, the normal closure \(\widetilde{L}\) of \(L/K\) is a Kummer extension of \(K(\zeta_{n})\) that shares many properties with the extension \(L/K\), even though the latter need not be Galois. A suitable setting to visualize \(L/K\) as a generalized Kummer extension is the one provided by the theory of Hopf-Galois extensions. The starting point of Hopf-Galois theory is the notion of Hopf-Galois structure on a finite extension \(L/K\). This is a pair formed by a \(K\)-Hopf algebra \(H\) and a \(K\)-linear action \(H\otimes_{K}L\longrightarrow L\) such that \(L\) is an \(H\)-module algebra and the canonical map \(j\colon L\otimes_{K}H\longrightarrow\operatorname{End}_{K}(L)\) is a \(K\)-linear isomorphism. A Hopf-Galois extension is a finite extension \(L/K\) that admits some Hopf-Galois structure \((H,\cdot)\). We also say that \(L/K\) is \(H\)-Galois. 
Under this definition, every Galois extension is Hopf-Galois but the converse does not hold in general. The notion of Hopf-Galois structure was introduced for the first time by Chase and Sweedler [8], and it has proved to be a useful tool for studying problems from classical Galois theory in a more general setting. The main research lines and outcomes of this theory so far have been summarized in the books [9, 10]. Radical extensions as described above are particular instances of almost classically Galois extensions, a class of Hopf-Galois extensions that are naturally associated to Galois extensions. Concretely, a separable extension of fields \(L/K\) is said to be almost classically Galois if for the normal closure \(\widetilde{L}\) of \(L/K\), the group \(G^{\prime}\coloneqq\operatorname{Gal}(\widetilde{L}/L)\) has some normal complement \(J\) in \(G\coloneqq\operatorname{Gal}(\widetilde{L}/K)\). The fixed subfield \(M=\widetilde{L}^{J}\) will be referred to as the complement of \(L/K\). Associated to \(M\) we can construct a Hopf-Galois structure \(H\) on \(L/K\), which will be referred to as the almost classically Galois structure corresponding to \(M\). In particular, an almost classically Galois extension is Hopf-Galois. Note that the extension \(\widetilde{L}/M\) is Galois with group \(J\), and it has the same degree as \(L/K\). We will say that \(L/K\) is almost abelian (resp. almost cyclic, resp. almost Kummer) if \(\widetilde{L}/M\) is abelian (resp. cyclic, resp. Kummer). In order to generalize the concept of Kummer extension, we introduce the notion of Galois eigenvector: this is an element \(\alpha\) of a Galois extension with group \(G\) with the property that for each \(g\in G\) there is some \(\lambda\in K\) such that \(g(\alpha)=\lambda\alpha\), i.e., \(\alpha\) is an eigenvector of all the automorphisms in \(G\). It turns out that Kummer extensions admit a finite generating set of Galois eigenvectors, namely their Kummer generators. What is more, Kummer extensions are characterized by this property, in the sense that the finite Galois extensions of \(K\) with a finite generating set of Galois eigenvectors are just the Kummer extensions of \(K\). Here by a generating set for an extension \(L/K\) we mean a set \(S\subseteq L\) such that \(L=K(S)\). Now, if \(L/K\) is an \(H\)-Galois extension, we can translate these definitions easily. Namely, an element \(\alpha\in L\) is said to be an \(H\)-eigenvector if for each \(h\in H\) there is some \(\lambda\in K\) such that \(h\cdot\alpha=\lambda\alpha\), and an \(H\)-Kummer extension is an \(H\)-Galois extension with some finite generating system of \(H\)-eigenvectors. An \(H\)-Galois extension with a generating system reduced to a single \(H\)-eigenvector will be referred to as \(H\)-cyclic. The analogy with the Galois case arises naturally when we consider almost classically Galois extensions. We will be able to translate the correspondence between Kummer extensions and radical extensions from the classical case to this more general situation. We will need the following notion: two almost classically Galois extensions \(L_{1}\) and \(L_{2}\) of a field \(K\) with complements \(M_{1}\) and \(M_{2}\) are said to be strongly disjoint if \(L_{1}\cap M_{2}=L_{2}\cap M_{1}=K\). An extension that can be written as a compositum of pairwise strongly disjoint almost classically Galois extensions will be called strongly decomposable. 
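Before stating the main result, the notion of Galois eigenvector can be illustrated in the simplest classical case. The following minimal Python sketch (our own illustration, not part of the original argument) works in the biquadratic Kummer extension \(L=\mathbb{Q}(\sqrt{2},\sqrt{3})\) of \(\mathbb{Q}\), where \(\zeta_{2}=-1\) already lies in the ground field: the Kummer generators \(\sqrt{2},\sqrt{3},\sqrt{6}\) are eigenvectors of every element of the Galois group, while a generic element such as \(1+\sqrt{2}\) is not. Since the action of \(K[G]\) on \(L\) is the \(K\)-linear extension of the Galois action, the same elements are also \(H\)-eigenvectors for the classical Galois structure \(H=\mathbb{Q}[G]\).

```python
from fractions import Fraction
from itertools import product

# Elements of L = Q(sqrt2, sqrt3) are coordinate vectors over the Q-basis (1, sqrt2, sqrt3, sqrt6).
def automorphism(s2, s3):
    """The element of Gal(L/Q) sending sqrt2 -> s2*sqrt2 and sqrt3 -> s3*sqrt3."""
    signs = (1, s2, s3, s2 * s3)          # diagonal action on the chosen basis
    return lambda v: tuple(Fraction(sig) * c for sig, c in zip(signs, v))

GALOIS_GROUP = [automorphism(s2, s3) for s2, s3 in product((1, -1), repeat=2)]

def is_galois_eigenvector(v):
    """Check that sigma(v) is a rational multiple of v for every sigma in Gal(L/Q)."""
    for sigma in GALOIS_GROUP:
        w = sigma(v)
        lam = next(wc / vc for vc, wc in zip(v, w) if vc != 0)   # candidate eigenvalue
        if any(wc != lam * vc for vc, wc in zip(v, w)):
            return False
    return True

sqrt2 = (Fraction(0), Fraction(1), Fraction(0), Fraction(0))
sqrt3 = (Fraction(0), Fraction(0), Fraction(1), Fraction(0))
sqrt6 = (Fraction(0), Fraction(0), Fraction(0), Fraction(1))
one_plus_sqrt2 = (Fraction(1), Fraction(1), Fraction(0), Fraction(0))

assert all(is_galois_eigenvector(v) for v in (sqrt2, sqrt3, sqrt6))   # Kummer generators
assert not is_galois_eigenvector(one_plus_sqrt2)                      # a generic element is not
```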
**Theorem 1.1**.: _Let \(n\in\mathbb{Z}_{>0}\), let \(K\) be a field with characteristic coprime to \(n\) and let \(M=K(\zeta_{n})\). Let \(L/K\) be a strongly decomposable extension and let \(\alpha_{1},\ldots,\alpha_{k}\in L\) be such that \(L=K(\alpha_{1},\ldots,\alpha_{k})\) and \(K(\alpha_{i})\), \(K(\alpha_{j})\) are strongly disjoint almost classically Galois extensions whenever \(i\neq j\). The following statements are equivalent:_ 1. \(L\cap M=K\)_,_ \(\alpha_{i}^{n}\in K\) _for every_ \(1\leq i\leq k\) _and_ \(n\) _is minimal for this property._ 2. \(L/K\) _is an almost Kummer extension of exponent_ \(n\) _with complement_ \(M\) _and_ \(\alpha_{1},\ldots,\alpha_{k}\) _are_ \(H\)_-eigenvectors of_ \(L\)_, where_ \(H\) _is the almost classically Galois structure on_ \(L/K\) _corresponding to_ \(M\)_._ _In particular, within the strongly decomposable extensions of \(K\), the \(n\)-radical extensions that are linearly disjoint with \(M\) are the almost Kummer extensions of exponent \(n\) with complement \(M\) that are \(H\)-Kummer._ In order to prove Theorem 1.1, we shall show that the tensor product of Hopf-Galois structures on two strongly disjoint almost classically Galois extensions \(L_{1}/K\) and \(L_{2}/K\) is a Hopf-Galois structure on their compositum \(L/K\), which we call the product Hopf-Galois structure of those. Namely, if \(H_{i}\) is a Hopf-Galois structure on \(L_{i}/K\) for \(i\in\{1,2\}\), one can construct a Hopf-Galois structure \(H\) on \(L_{1}L_{2}/K\), in such a way that \(H\cong H_{1}\otimes_{K}H_{2}\) as \(K\) algebras. This notion can be seen as an analogue for almost classically Galois extensions of the induced Hopf-Galois structures introduced by Crespo, Rio and Vela [13]. However, it is not a generalization nor a particular case of those: they apply to different pairs of extensions, except in the case of Galois extensions, for which both notions coincide. What the last sentence of Theorem 1.1 means is that there is a bijective correspondence between radical extensions \(L/K\) with \(L\cap K(\zeta_{n})=K\) and almost Kummer extensions with complement \(K(\zeta_{n})\) that are \(H\)-Kummer. Among these, simple radical extensions \(L/K\) with \(L\cap K(\zeta_{n})=K\) correspond to almost cyclic extensions with complement \(K(\zeta_{n})\) that are \(H\)-cyclic. This can be seen as a direct generalization of the well known correspondence in classical Kummer theory. We will investigate the module structure of \(H\)-Kummer extensions with the point of view of Hopf-Galois theory. For an \(H\)-Galois field extension \(L/K\), we assume that \(K\) is the field of fractions of some Dedekind domain and we write \(\mathcal{O}_{L}\) for the integral closure of \(\mathcal{O}_{K}\) in \(L\) (for instance, this holds when \(L/K\) is an extension of number or \(p\)-adic fields). In short, we will say that \(L/K\) is an extension _with associated rings of integers_. Under this situation, the associated order in \(H\) is defined as the set \(\mathfrak{A}_{H}\) of elements of \(H\) whose action on \(L\) leave the ring of integers \(\mathcal{O}_{L}\) invariant. The associated order is an \(\mathcal{O}_{K}\)-order in \(H\) and \(\mathcal{O}_{L}\) is naturally endowed with \(\mathfrak{A}_{H}\)-module structure. Under the assumption that \(\mathcal{O}_{L}\) is \(\mathcal{O}_{K}\)-free, we have that if \(\mathcal{O}_{L}\) is \(\mathfrak{A}_{H}\)-free, then it has a single generator. 
If \(\mathfrak{A}\) is an \(\mathcal{O}_{K}\)-order in \(H\) such that \(\mathcal{O}_{L}\) is \(\mathfrak{A}\)-free, then \(\mathfrak{A}=\mathfrak{A}_{H}\). However, in general \(\mathcal{O}_{L}\) is not \(\mathfrak{A}_{H}\)-free, and it is interesting to find criteria in order to determine the \(\mathfrak{A}_{H}\)-freeness of such an extension. The study of these questions is often called Hopf-Galois module theory. This is a natural generalization of the situation of a Galois tamely ramified extension \(L/K\) with group \(G\), for which, in the \(p\)-adic case, having a normal integral basis is equivalent to \(\mathcal{O}_{L}\) being \(\mathcal{O}_{K}[G]\)-free, and for wildly ramified extensions we consider instead the associated order in \(K[G]\). This line has been explored almost exclusively for tamely ramified extensions, both in the Galois case [1, 25, 16, 15] and in the Hopf-Galois one [37]. Rio and the author [21] introduced a method to study this kind of questions for \(H\)-Galois extensions \(L/K\) from the knowledge of the action of \(H\) on an \(\mathcal{O}_{K}\)-basis of \(\mathcal{O}_{L}\), and it can be applied to wildly ramified extensions as well. In our situation, we will obtain the following: **Theorem 1.2**.: _Let \(L/K\) be an \(H\)-Kummer extension with associated rings of integers and assume that \(L/K\) admits some basis of \(H\)-eigenvectors \(B=\{\gamma_{j}\}_{j=1}^{n}\) which is also an \(\mathcal{O}_{K}\)-basis of \(\mathcal{O}_{L}\). Let \(W=\{w_{i}\}_{i=1}^{n}\) be a \(K\)-basis of \(H\). Write \(w_{i}\cdot\gamma_{j}=\lambda_{ij}\gamma_{j}\) with \(\lambda_{ij}\in K\) and let \(\Omega=(\omega_{ij})_{i,j=1}^{n}\) be the inverse of the matrix \(\Lambda=(\lambda_{ij})_{i,j=1}^{n}\). Then:_ 1. _An_ \(\mathcal{O}_{K}\)_-basis of_ \(\mathfrak{A}_{H}\) _is given by the elements_ \(v_{i}=\sum_{l=1}^{n}\omega_{li}w_{l}\)_,_ \(1\leq i\leq n\)_. Moreover, they form a system of primitive pairwise orthogonal idempotents._ 2. \(\mathcal{O}_{L}\) _is_ \(\mathfrak{A}_{H}\)_-free and a generator is any element_ \(\beta=\sum_{j=1}^{n}\beta_{j}\gamma_{j}\) _such that_ \(\beta_{j}\in\mathcal{O}_{K}^{*}\) _for every_ \(1\leq j\leq n\)_._ It is possible to obtain some criteria for the freeness for radical extensions of number or \(p\)-adic fields. We will follow this strategy: From Theorem 1.1 we know how to construct radical extensions \(L/K\) that are \(H\)-Kummer, which have some \(K\)-basis of \(H\)-eigenvectors. Then we will add sufficient conditions so that the existence of an integral \(K\)-basis of \(H\)-eigenvectors is assured, so that we can apply Theorem 1.2. These extra conditions will depend on the nature of the fields involved. If we are working with extensions of number fields, the existence of integral generators that are \(n\)-th roots of elements in \(K\) is enough. This is related with the monogeneity of \(L/K\), which has been widely studied in literature. In the case of \(p\)-adic fields, we will restrict to simple radical extensions, and in that case we will need some uniformizer of \(L\) that is an \(n\)-th root of some element in \(K\). Using these considerations, we will prove that a Hopf-Galois prime degree extension with maximally ramified normal closure accomplishes the freeness property over the associated order. This paper is organized as follows. In Section 3 we will investigate the compositums of almost classically Galois extensions. 
We will introduce the notion of strong disjointness and define the product Hopf-Galois structure on a compositum of strongly disjoint almost classically Galois extensions. Section 4 will be devoted to a review of the basic results concerning Kummer Galois extensions and their cyclic subextensions, as well as a proof of the characterization of the Kummer condition in terms of the Galois action. In Section 5 we will introduce the notion of \(H\)-eigenvector and \(H\)-Kummer extension. The aim of Section 6 will essentially be to prove Theorem 1.1, and we will extract some consequences. Finally, in Section 7 we will consider the problem of the module structure of \(\mathcal{O}_{L}\) described above, and we will prove Theorem 1.2. ## 2 Preliminaries ### Hopf-Galois structures and Greither-Pareigis theory Let \(L/K\) be a finite extension of fields. A Hopf-Galois structure on \(L/K\) is a pair \((H,\cdot)\) where \(H\) is a finite-dimensional cocommutative \(K\)-Hopf algebra and \(\cdot\) is a \(K\)-linear action of \(H\) on \(L\) such that: * The action \(\cdot\) endows \(L\) with \(H\)-module algebra structure, that is, the following conditions are satisfied for every \(h\in H\): \[h\cdot 1=\epsilon_{H}(h),\] \[h\cdot(xx^{\prime})=\sum_{(h)}(h_{(1)}\cdot x)(h_{(2)}\cdot x^{\prime}),\quad x,x^ {\prime}\in L,\] where \(\epsilon_{H}\colon H\longrightarrow K\) is the counity of \(H\) and the comultiplication \(\Delta_{H}\colon H\longrightarrow H\otimes_{K}H\) of \(H\) satisfies \(\Delta_{H}(h)=\sum_{(h)}h_{(1)}\otimes h_{(2)}\). * The canonical map \(j\colon L\otimes_{K}H\longrightarrow\operatorname{End}_{K}(L)\) defined by \(j(x\otimes h)(y)=x(h\cdot y)\) for every \(y\in L\) is an isomorphism of \(K\)-vector spaces. A Hopf-Galois extension is a finite extension \(L/K\) that admits some Hopf-Galois structure. For the sake of simplicity, we will denote a Hopf-Galois structure simply by \(H\) (so that the action \(\cdot\) is implicit in the context). We will usually say that \(L/K\) is \(H\)-Galois. If \(L/K\) is a Galois extension with group \(G\), then \(K[G]\) together with its Galois action on \(L\) (extended by \(K\)-linearity) is a Hopf-Galois structure on \(L/K\), commonly referred to as the classical Galois structure. There are many Hopf-Galois extensions that are not Galois, for instance \(\mathbb{Q}\left(\sqrt[3]{2}\right)/\mathbb{Q}\). A single Hopf-Galois extension may admit different Hopf-Galois structures. In the case that the extension \(L/K\) is separable, Greither and Pareigis [23, Theorem 2.1] found a characterization of the Hopf-Galois structures in terms of group theory. Concretely, let \(\widetilde{L}\) be the normal closure of \(L/K\), \(G=\operatorname{Gal}(\widetilde{L}/K)\), \(G^{\prime}=\operatorname{Gal}(\widetilde{L}/L)\) and \(X=G/G^{\prime}\). **Theorem 2.1** (Greither-Pareigis theorem).: _The Hopf-Galois structures on \(L/K\) are in bijective correspondence with the regular subgroups of \(\operatorname{Perm}(X)\) normalized by \(\lambda(G)\)._ In this statement, we say that a subgroup \(N\) of \(\operatorname{Perm}(X)\) is: * Regular, if its action on \(X\) by evaluation is simply transitive. * Normalized by \(\lambda(G)\) if \(\lambda(g)\eta\lambda(g)^{-1}\in N\) for every \(g\in G\), where \(\lambda\colon G\longrightarrow\operatorname{Perm}(X)\), defined by \(\lambda(g)(hG^{\prime})=ghG^{\prime}\), is the left translation map of \(L/K\). 
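To make the correspondence of Theorem 2.1 concrete, the following minimal Python sketch (our own illustration) carries out the Greither-Pareigis computation for the extension \(\mathbb{Q}(\sqrt[3]{2})/\mathbb{Q}\) mentioned above: here \(G\cong S_{3}\) permutes the three cube roots of \(2\), \(G^{\prime}\) is the stabiliser of the real root, and the script enumerates the regular subgroups of \(\operatorname{Perm}(X)\) normalized by \(\lambda(G)\). It finds exactly one such subgroup (generated by a \(3\)-cycle), reflecting the fact that this extension admits a unique Hopf-Galois structure.

```python
from itertools import combinations, permutations

def compose(p, q):
    """Composition of permutations written as tuples: (p o q)(i) = p[q[i]]."""
    return tuple(p[i] for i in q)

def inverse(p):
    return tuple(p.index(i) for i in range(len(p)))

# G = Gal(Q(2^(1/3), zeta_3)/Q) permutes the three cube roots of 2 (labelled 0, 1, 2);
# G' = Gal(Ltilde/L) is the stabiliser of the real root 0.
G = [tuple(p) for p in permutations(range(3))]
G_prime = [g for g in G if g[0] == 0]

def coset(g):                                    # the left coset gG'
    return frozenset(compose(g, h) for h in G_prime)

X = sorted({coset(g) for g in G}, key=sorted)    # X = G/G', three cosets
index = {c: i for i, c in enumerate(X)}

def translation(g):                              # lambda(g): hG' -> ghG'
    return tuple(index[coset(compose(g, next(iter(c))))] for c in X)

lam_G = {translation(g) for g in G}
perm_X = [tuple(p) for p in permutations(range(len(X)))]
identity = tuple(range(len(X)))

def is_subgroup(N):
    return identity in N and all(compose(a, b) in N for a in N for b in N)

def is_regular(N):            # a subgroup of order |X| acting transitively is regular
    return len(N) == len(X) and {n[0] for n in N} == set(range(len(X)))

def is_normalized(N):
    return all(compose(compose(g, n), inverse(g)) in N for g in lam_G for n in N)

structures = [N for N in (set(c) for c in combinations(perm_X, len(X)))
              if is_subgroup(N) and is_regular(N) and is_normalized(N)]
print(len(structures))        # prints 1: a unique Hopf-Galois structure on Q(2^(1/3))/Q
```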
Moreover, there is an isomorphism of \(\widetilde{L}\)-algebras \(\widetilde{L}\otimes_{K}H\longrightarrow\widetilde{L}[N]\), and by descent theory this gives \[H=\widetilde{L}[N]^{G}=\{h\in\widetilde{L}[N]\,|\,g(h)=h\text{ for all }g\in G\},\] where \(G\) acts by evaluation on \(\widetilde{L}\) and by conjugation by \(\lambda(G)\) on \(N\). The action of \(H\) on \(L\) is as follows: if \(h=\sum_{i=1}^{n}h_{i}\eta_{i}\in H\) with \(h_{i}\in\widetilde{L}\) and \(\alpha\in L\), \[h\cdot\alpha=\sum_{i=1}^{n}h_{i}\eta_{i}^{-1}(1_{G}G^{\prime})(\alpha).\] In the case that \(L/K\) is Galois, we have that \(G^{\prime}\) is trivial, so the left translation map becomes \(\lambda\colon G\longrightarrow\operatorname{Perm}(G)\), the left regular representation of \(G\) in \(\operatorname{Perm}(G)\). Let \(\rho\colon G\longrightarrow\operatorname{Perm}(G)\) be defined as \(\rho(g)(g^{\prime})=g^{\prime}g^{-1}\). Then \(\rho(G)\) and \(\lambda(G)\) are regular subgroups both normalized by \(\lambda(G)\), giving rise to Hopf-Galois structures on \(L/K\). The one given by \(\rho(G)\) is the classical Galois structure on \(L/K\) (see [9, (6.10)]). #### 2.1.1 Induced Hopf-Galois structures The notion of induced Hopf-Galois structure was originally introduced by Crespo, Rio and Vela [13]. Let \(L/K\) be a Galois extension with group of the form \(G=J\rtimes G^{\prime}\) with \(J\) normal in \(G\), and let \(E=L^{G^{\prime}}\). The induction theorem essentially states that under these hypotheses, we can construct a Hopf-Galois structure on \(L/K\) from Hopf-Galois structures on \(E/K\) and \(L/E\) by carrying out the direct product of the corresponding permutation subgroups under the Greither-Pareigis correspondence (see [13, Theorem 3]). We present here the reformulation of this notion by Rio and the author [21, Section 5]. First of all, by Greither-Pareigis theorem, the Hopf-Galois structures on \(L/E\) are in one-to-one correspondence with the regular subgroups of \(\operatorname{Perm}(G^{\prime})\) normalized by the image of the left regular representation \(\lambda^{\prime}\colon G^{\prime}\longrightarrow\operatorname{Perm}(G^{\prime})\). On the other hand, the extension \(E/K\) is not Galois in general, and it can be proved that we can use Greither-Pareigis theorem to describe its Hopf-Galois structures as if \(L=\widetilde{E}\), even if it is not (in general, \(\widetilde{E}\subseteq L\)). Hence, the Hopf-Galois structures on \(E/K\) correspond bijectively to the regular subgroups of \(\operatorname{Perm}(G/G^{\prime})\) normalized by \(\overline{\lambda}\colon G\longrightarrow\operatorname{Perm}(G/G^{\prime})\). Now, since \(G=J\rtimes G^{\prime}\), \(J\) is a transversal of \(G/G^{\prime}\), and then \(\operatorname{Perm}(G/G^{\prime})\cong\operatorname{Perm}(J)\), which yields a map \(\lambda_{c}\colon G\longrightarrow\operatorname{Perm}(J)\). In this way, the Hopf-Galois structures on \(E/K\) correspond bijectively to the regular subgroups of \(\operatorname{Perm}(J)\) normalized by \(\lambda_{c}(G)\). Finally, the Hopf-Galois structures on \(L/K\) are in bijective correspondence with the regular subgroups of \(\operatorname{Perm}(G)\) normalized by the image of the left regular representation \(\lambda\colon G\longrightarrow\operatorname{Perm}(G)\). 
Now, it can be checked that \(\lambda=\iota\circ\chi\), where \(\iota\) and \(\chi\) are the group homomorphisms given by \[\begin{array}{ccccc}\chi\colon&G&\longrightarrow&\operatorname{Perm}(J)\times\operatorname{Perm}(G^{\prime})\\ &\sigma\tau&\longmapsto&(\lambda_{c}(\sigma\tau),\lambda^{\prime}(\tau)),\\ \iota\colon&\operatorname{Perm}(J)\times\operatorname{Perm}(G^{\prime})&\longrightarrow&\operatorname{Perm}(G)\\ &(\varphi,\psi)&\longmapsto&\sigma\tau\mapsto\varphi(\sigma)\psi(\tau).\end{array} \tag{1}\] Now, induced Hopf-Galois structures are introduced as follows: **Proposition 2.2**.: _If \(N_{1}\leq\operatorname{Perm}(J)\) gives \(E/K\) a Hopf-Galois structure and \(N_{2}\leq\operatorname{Perm}(G^{\prime})\) gives \(L/E\) a Hopf-Galois structure, then \(N=\iota(N_{1}\times N_{2})\) gives \(L/K\) a Hopf-Galois structure, which is called induced._ Actually, the Hopf-Galois structures on \(L/E\) are in bijective correspondence with the ones of \(F/K\), where \(F=L^{J}\) (see [21, Proposition 5.3]), so an induced Hopf-Galois structure on \(L/K\) can be built equivalently from Hopf-Galois structures on \(E/K\) and \(F/K\). This point of view is more convenient in order to study the underlying Hopf algebra and the underlying action on the induced Hopf-Galois structure. **Proposition 2.3**.: _Let \(H\) be an induced Hopf-Galois structure on \(L/K\) from Hopf-Galois structures \(H_{1}\) on \(E/K\) and \(H_{2}\) on \(F/K\). Then:_ 1. _[_21_, Proposition 5.5]_ \(H\cong H_{1}\otimes_{K}H_{2}\) _as_ \(K\)_-algebras._ 2. _[_21_, Proposition 5.8]_ _If_ \(h_{i}\in H_{i}\) _for_ \(i\in\{1,2\}\)_,_ \(\alpha_{1}\in E\) _and_ \(\alpha_{2}\in F\)_, then_ \((h_{1}h_{2})\cdot(\alpha_{1}\alpha_{2})=(h_{1}\cdot\alpha_{1})(h_{2}\cdot\alpha_{2})\)_._ ### Almost classically Galois extensions Almost classically Galois extensions were also introduced by Greither and Pareigis [23, Section 4]. **Theorem 2.4**.: _[_23_, Proposition 4.1]_ _Let \(L/K\) be a separable extension and let \(\widetilde{L}\) be its normal closure. Let \(G=\operatorname{Gal}(\widetilde{L}/K)\), \(G^{\prime}=\operatorname{Gal}(\widetilde{L}/L)\) and \(X=G/G^{\prime}\). Then, the following statements are equivalent:_ 1. _There is some Galois extension_ \(M/K\) _such that_ \(L\otimes_{K}M\) _is a field that contains_ \(\widetilde{L}\)_._ 2. _There is some Galois extension_ \(M/K\) _such that_ \(L\otimes_{K}M=\widetilde{L}\)_._ 3. _There is some normal complement_ \(J\) _of_ \(G^{\prime}\) _in_ \(G\)_._ 4. _There is a regular subgroup_ \(N\) _of_ \(\operatorname{Perm}(X)\) _normalized by_ \(\lambda(G)\) _such that_ \(N\subset\lambda(G)\)_._ **Definition 2.5**.: _Let \(L/K\) be a separable extension. We say that \(L/K\) is **almost classically Galois** if it satisfies one of the equivalent conditions of Theorem 2.4. An extension \(M/K\) as in 2 will be called a Galois complement for \(L/K\). A normal complement for \(L/K\) will be a subgroup \(J\) of \(G\) as in 3._ It follows from Theorem 2.4 (4) and the Greither-Pareigis theorem (Theorem 2.1) that every almost classically Galois extension is Hopf-Galois. Also, every Galois extension is almost classically Galois: In that case, we have that \(G^{\prime}\) is trivial, and hence it has the full Galois group as normal complement. **Remark 2.6**.: The relationship between the objects involved in the statements of Theorem 2.4 is as follows. If a subgroup \(J\) of \(G\) is a normal complement of \(G^{\prime}\), then \(N=\lambda(J)\) is as in 4. 
However, there might be a subgroup \(J\) for which \(\lambda(J)\) is as in 4 but \(J\) is not a normal complement of \(G^{\prime}\). On the other hand, a subgroup \(J\) is a normal complement of \(G^{\prime}\) if and only if for \(M=L^{J}\), \(M/K\) is a Galois complement for \(L/K\). Let \(L/K\) be an almost classically Galois extension with normal complement \(J\) and Galois complement \(M\). Then \(\widetilde{L}/M\) is a Galois extension with group \(J\), and it has the same degree as the extension \(L/K\). We can define properties of Galois extensions in this setting by means of the extension \(\widetilde{L}/M\). **Definition 2.7**.: _Let \(L/K\) be an almost classically Galois extension with normal complement \(J\) and let \(n\in\mathbb{Z}_{>0}\)._ 1. _We say that_ \(L/K\) _is almost cyclic (resp. abelian) if_ \(J\) _is cyclic (resp. abelian)._ 2. _We say that_ \(L/K\) _has exponent_ \(n\) _if the group_ \(J\) _has exponent_ \(n\)_._ 3. _We say that_ \(L/K\) _is almost Kummer with respect to_ \(n\) _if it is almost abelian with exponent dividing_ \(n\)_._ The notion of almost cyclic extension had already been introduced in [6, Definition 3.1]. It is also possible to define what we understand by an almost classical Galois structure. For a group \((N,\cdot)\), we write \(N^{\mathrm{opp}}\) for its opposite group, i.e. the group with the same underlying set \(N\) and operation \(\cdot^{\prime}\) given by \(a\cdot^{\prime}b=b\cdot a\). If \(N\) is abelian, we simply have that \(N^{\mathrm{opp}}=N\). Now, assume that \(N\) is a regular subgroup of \(\mathrm{Perm}(X)\). Then \(N^{\mathrm{opp}}\) is isomorphic to the centralizer of \(N\) in \(\mathrm{Perm}(G/G^{\prime})\), and as such, it is also a regular subgroup of \(\mathrm{Perm}(X)\) (see [23, Lemma 2.4.2]). Let us identify these groups. Then, the following definition makes sense. **Definition 2.8**.: _Let \(L/K\) be an almost classically Galois extension. Let \(H\) be a Hopf-Galois structure on \(L/K\) and let \(N\) be the corresponding regular subgroup of \(\mathrm{Perm}(X)\) normalized by \(\lambda(G)\). We say that \(H\) is an **almost classically Galois structure** if \(N^{\mathrm{opp}}\subset\lambda(G)\). When a subgroup giving an almost classically Galois structure \(H\) is of the form \(\lambda(J)^{\mathrm{opp}}\) for a normal complement \(J\) of \(L/K\), we will say that \(H\) corresponds to \(M\coloneqq\widetilde{L}^{J}\)._ As highlighted in Remark 2.6, not all almost classically Galois structures are as in the second part of Definition 2.8. However, they are especially well behaved. If \(H\) is the Hopf algebra in an almost classically Galois structure on \(L/K\), we have that \(H=(\widetilde{L}[N]^{J})^{G/J}\) (see [27, Proof of Theorem 3.1]). Now, if \(N=\lambda(J)^{\mathrm{opp}}\) for a normal complement \(J\) of \(L/K\), the action of \(J\) on \(N\) by conjugation by \(\lambda(J)\) is trivial. Then, we obtain that \(H=M[N]^{G^{\prime}}\). Identifying \(\sigma\) with \(\lambda(\sigma)\) (which defines an isomorphism of groups \(J\cong\lambda(J)\)), we conclude the following. **Proposition 2.9**.: _Let \(L/K\) be an almost classically Galois extension of fields, and let \(J\) be a normal complement as in Theorem 2.4. Let \(H\) be the almost classically Galois structure on \(L/K\) corresponding to \(J\). Then, \(H=M[J^{\mathrm{opp}}]^{G^{\prime}}\). If in addition \(J\) is abelian, then \(H=M[J]^{G^{\prime}}\)._ Let \(L/K\) be a Galois extension which we regard as an almost classically Galois extension. 
As aforesaid, the normal complement of \(L/K\) for this case is \(J=G\), and accordingly the Galois complement is \(M=K\). Then, the almost classically Galois structure on \(L/K\) corresponding to \(M\) is the one given by the permutation subgroup \(N=\lambda(G)^{\mathrm{opp}}\). Now, it is easily checked that \(\lambda(G)\) normalizes \(\rho(G)\), and since both are regular subgroups, we obtain that \(N=\rho(G)\). In other words, the almost classically Galois structure on a Galois extension corresponding to its Galois complement is just its classical Galois structure. ### Linearly disjoint extensions Let \(K\) be a field and let \(L_{1}\) and \(L_{2}\) be field extensions of \(K\) contained in the algebraic closure \(\overline{K}\) of \(K\). Then the morphism \[L_{1}\otimes_{K}L_{2}\longrightarrow L_{1}L_{2}\] defined by \(x_{1}\otimes x_{2}\mapsto x_{1}x_{2}\) and extended by \(K\)-linearity is always surjective. We say that \(L_{1}\) and \(L_{2}\) are \(K\)-linearly disjoint, or that \(L_{1}/K\) and \(L_{2}/K\) are linearly disjoint, if the map above is bijective, or equivalently, if \(L_{1}\otimes_{K}L_{2}\) is a field. It is easy to see that if \(L_{1}\) and \(L_{2}\) are \(K\)-linearly disjoint, then \(L_{1}\cap L_{2}=K\). The converse in general does not hold, but it is true under fairly general restrictions on \(L_{1}/K\) and \(L_{2}/K\). **Proposition 2.10**.: _[_11_, Theorem 5.5]_ _Let \(L_{1}/K\) and \(L_{2}/K\) be finite extensions of fields such that one of them is normal and one (possibly the same) is separable. Then \(L_{1}/K\) and \(L_{2}/K\) are linearly disjoint if and only if \(L_{1}\cap L_{2}=K\)._ In the case that none of the extensions is normal, proving the linear disjointness may be a tricky problem. The following result gives a sufficient condition for specific families of extensions (see [31, Theorem]). **Proposition 2.11**.: _Let \(n_{1},\dots,n_{k}\in\mathbb{Z}_{\geqslant 0}\) and let \(K\) be a number field. Let \(L\) be an extension of \(K\) generated by elements \(\alpha_{1},\dots,\alpha_{k}\in\overline{K}\) such that \(\alpha_{i}^{n_{i}}\in K\) for all \(1\leq i\leq k\) and \(\alpha_{i}^{k_{i}}\notin K\) for all \(1<k_{i}<n_{i}\). Assume that \(K\) is totally real or \(\zeta_{n_{i}}\in K\) for all \(1\leq i\leq k\). Then the fields \(K(\alpha_{1}),\dots,K(\alpha_{k})\) are pairwise \(K\)-linearly disjoint._ As for the relation between linear disjointness and almost classically Galois extensions, the following result is immediately deduced from Theorem 2.4: **Proposition 2.12**.: _Let \(L/K\) and \(M/K\) be separable extensions with \(M/K\) Galois. Then \(L/K\) is almost classically Galois with Galois complement \(M\) if and only if \(L/K\) and \(M/K\) are linearly disjoint and \(\widetilde{L}=LM\)._ **Remark 2.13**.: Two almost classically Galois extensions \(L_{1}/K\) and \(L_{2}/K\) such that \(\gcd([L_{1}:K],[L_{2}:K])=1\) are linearly disjoint, but the converse does not hold in general. For instance, the fields \(\mathbb{Q}(\sqrt[4]{2})\) and \(\mathbb{Q}(i)\) are \(\mathbb{Q}\)-linearly disjoint almost classically Galois extensions. In the case of extensions of number or \(p\)-adic fields, it is possible to consider a stronger notion than linear disjointness. 
**Proposition 2.14**.: _Two extensions of number or \(p\)-adic fields with the same ground field are said to be arithmetically disjoint if they are linearly disjoint and have coprime discriminants._ It is known that if \(L_{1}/K\) and \(L_{2}/K\) are arithmetically disjoint, then \(\mathcal{O}_{L_{1}L_{2}}=\mathcal{O}_{L_{1}}\otimes_{\mathcal{O}_{K}} \mathcal{O}_{L_{2}}\) (see [17, (2.13)]). ### Hopf-Galois module theory Let \(K\) be the fraction field of a Dedekind domain \(\mathcal{O}_{K}\), let \(L\) be a degree \(n\) Hopf-Galois extension of \(K\) and let \(\mathcal{O}_{L}\) be the integral closure of \(\mathcal{O}_{K}\) in \(L\). Let \((H,\cdot)\) be a Hopf-Galois structure on \(L/K\). The associated order of \(\mathcal{O}_{L}\) in \(H\) is defined as \[\mathfrak{A}_{H}=\{h\in H\,|\,h\cdot\alpha\in\mathcal{O}_{L}\text{ for every }\alpha\in\mathcal{O}_{L}\}.\] This is an \(\mathcal{O}_{K}\)-order in \(H\), and it is \(\mathcal{O}_{K}\)-free of rank \(n\), under the assumption that \(\mathcal{O}_{L}\) is \(\mathcal{O}_{K}\)-free. Moreover, it is known that if \(\mathfrak{A}\) is an \(\mathcal{O}_{K}\)-order in \(H\) such that \(\mathcal{O}_{L}\) is \(\mathfrak{A}\)-free, then \(\mathfrak{A}=\mathfrak{A}_{H}\). If \(L/K\) is Galois, we will write \(\mathfrak{A}_{L/K}\) for the associated order in the classical Galois structure on \(L/K\). When \(\mathcal{O}_{K}\) is a PID, Rio and the author [21, Section 3] established a constructive method to determine an \(\mathcal{O}_{K}\)-basis of \(\mathfrak{A}_{H}\). We summarize here the main lines. Let us fix \(K\)-bases \(W=\{w_{i}\}_{i=1}^{n}\) and \(B=\{\gamma_{j}\}_{j=1}^{n}\) of \(H\) and \(L\). Then, we can define a \(K\)-basis \(\Phi=\{\varphi_{i}\}_{i=1}^{n^{2}}\) of \(\operatorname{End}_{K}(L)\) as follows: For every \(1\leq i\leq n^{2}\), there are \(1\leq k,j\leq n\) such that \(i=k+(j-1)n\). Let \(\varphi_{i}\) be the map that sends \(\gamma_{j}\) to \(\gamma_{k}\) and the other \(\gamma_{l}\) to \(0\). The matrix of the action of \(H\) on \(L\) with respect to the bases \(W\) and \(B\) is the matrix \(M(H_{W},L_{B})\) of the linear map \(\rho_{H}\colon H\longrightarrow\operatorname{End}_{K}(L)\) arising from the choice of the basis \(W\) in \(H\) and the basis \(\Phi\) in \(\operatorname{End}_{K}(L)\). Equivalently, \[M(H,L)=\begin{pmatrix}M_{1}(H,L)\\ \hline\cdots\\ \hline M_{n}(H,L)\end{pmatrix}\in\mathcal{M}_{n^{2}\times n}(K),\] where \[M_{j}(H,L)\coloneqq\begin{pmatrix}|&|&\dots&|\\ (w_{1}\cdot\gamma_{j})_{B}&(w_{2}\cdot\gamma_{j})_{B}&\dots&(w_{n}\cdot\gamma_ {j})_{B}\\ |&|&\dots&|\end{pmatrix}\in\mathcal{M}_{n}(K) \tag{2}\] for every \(1\leq j\leq n\). Now, there is a matrix \(D\in\mathcal{M}_{n}(K)\) and a unimodular matrix \(U\in\operatorname{GL}_{n^{2}}(\mathcal{O}_{L})\) with the property that \[UM(H,L)=\begin{pmatrix}D\\ \hline O\end{pmatrix},\] where \(O\) is the zero matrix of \(\mathcal{M}_{(m-n)\times n}(K)\) (see [20, Theorem 2.3]). We will refer to such a matrix \(D\) as a **reduced matrix**. Note that the injectivity of \(\rho_{H}\) implies that \(M(H,L)\) has rank \(n\), and therefore any reduced matrix of \(M(H,L)\) is invertible. Then, an \(\mathcal{O}_{K}\)-basis of \(\mathfrak{A}_{H}\) is determined as follows. **Proposition 2.15**.: _Let \(W\) be a \(K\)-basis of \(H\) and let \(B\) be a \(K\)-integral basis of \(L\) (i.e, an \(\mathcal{O}_{K}\)-basis of \(\mathcal{O}_{L}\)). Let \(D\) be a reduced matrix for \(M(H,L)\). 
Then, the elements of \(H\) whose coordinates with respect to \(W\) are the columns of the matrix \(D^{-1}\) form an \(\mathcal{O}_{K}\)-basis of \(\mathfrak{A}_{H}\)._ A proof can be found in [21, Theorem 3.5]. In that reference the result is stated for a concrete reduced matrix, the Hermite normal form of \(M(H,L)\) (or more accurately, of the matrix obtained from \(M(H,L)\) by dropping out the denominators of its entries, which has coefficients in \(\mathcal{O}_{K}\)), but this does not make any difference, since any two reduced matrices differ by multiplication of an invertible matrix in \(\mathcal{M}_{n}(\mathcal{O}_{K})\). Let \(L/K\) be an extension with Galois group of the form \(G=J\rtimes G^{\prime}\). When \(L^{J}/K\) and \(L^{G^{\prime}}/K\) are arithmetically disjoint, it is possible to establish a relation between the associated order in an induced Hopf-Galois structure on \(L/K\) and the ones in the inducing Hopf-Galois structures, as well as the freeness of the rings of integers over the corresponding associated orders. **Proposition 2.16**.: _Let \(K\) be the fraction field of a PID \(\mathcal{O}_{K}\). Let \(H\) be an induced Hopf-Galois structure on \(L/K\) from Hopf-Galois structures \(H_{1}\) on \(E/K\) and \(H_{2}\) on \(F/K\). Then:_ 1. _[_21_, Theorem 5.11]_ \(\mathfrak{A}_{H}=\mathfrak{A}_{H_{1}}\otimes_{\mathcal{O}_{K}}\mathfrak{A}_{H_{ 2}}\)_._ 2. _[_21_, Theorem 5.16]_ _If_ \(\mathcal{O}_{E}\) _is_ \(\mathfrak{A}_{H_{1}}\)_-free and_ \(\mathcal{O}_{F}\) _is_ \(\mathfrak{A}_{H_{2}}\)_-free, then_ \(\mathcal{O}_{L}\) _is_ \(\mathfrak{A}_{H}\)_-free._ ## 3 Products of Hopf-Galois structures on almost classically Galois extensions In this section we consider the compositum \(L\) of almost classically Galois extensions \(L_{1}/K\), \(L_{2}/K\), which is isomorphic to their tensor product \(L_{1}\otimes_{K}L_{2}\) when \(L_{1}\) and \(L_{2}\) are \(K\)-linearly disjoint. We are interested in finding conditions for \(L_{1}\) and \(L_{2}\) in order to assure that \(L/K\) is almost classically Galois. This phenomenon is already known for Galois extensions by means of the following result, whose proof is straightforward. **Proposition 3.1**.: _Let \(E/K\) and \(F/K\) be Galois extensions of fields with Galois groups \(G_{E}\) and \(G_{F}\), respectively. Write \(L=EF\) for the compositum of \(E\) and \(F\). Then:_ 1. \(L/K\) _is Galois._ 2. _The Galois group_ \(G\) _of_ \(L/K\) _is such that the map_ \(f\colon G\longrightarrow G_{E}\times G_{F}\) _defined by_ \(f(\sigma)=(\sigma|_{E},\sigma|_{F})\) _is injective._ 3. _The map_ \(f\) _is bijective if and only if_ \(E/K\) _and_ \(F/K\) _are linearly disjoint._ The second statement means that the Galois group \(G\) of \(EF/K\) can be embedded in the direct product \(G_{E}\times G_{F}\). For each \(\sigma\in G\), there are unique \(\sigma_{E}\in G_{E}\) and \(\sigma_{F}\in G_{F}\) such that \(\sigma=\sigma_{E}\sigma_{F}\). Moreover, the action of \(G\) on \(L\) is the product of the actions of \(G_{E}\) and \(G_{F}\), meaning that for every \(\alpha_{E}\in L_{E}\) and \(\alpha_{F}\in L_{F}\) we have \[(\sigma_{E}\sigma_{F})(\alpha_{E}\alpha_{F})=\sigma_{E}(\alpha_{E})\sigma_{F}( \alpha_{F}),\] as \(\sigma_{E}=\sigma\mid_{L_{E}}\) and \(\sigma_{F}=\sigma\mid_{L_{F}}\). ### The compositum of almost classically Galois extensions Let \(L_{1}/K\) and \(L_{2}/K\) be two almost classically Galois extensions with Galois complements \(M_{1}\) and \(M_{2}\) respectively. 
Assume that \(L_{1}\) and \(L_{2}\) are \(K\)-linearly disjoint; in particular \(L_{1}\cap L_{2}=K\). We introduce the following terminology: **Definition 3.2**.: _Let \(L_{1}/K\) and \(L_{2}/K\) be almost classically Galois extensions with Galois complements \(M_{1}\) and \(M_{2}\) respectively. Assume that \(L_{1}\) and \(L_{2}\) are \(K\)-linearly disjoint. We say that \(L_{1}/K\) and \(L_{2}/K\) are **strongly disjoint** if \(L_{1}\cap M_{2}=L_{2}\cap M_{1}=K\). Any extension that can be written as the compositum of strongly disjoint extensions will be called **strongly decomposable**._ Write \(L=L_{1}L_{2}\) for the compositum of \(L_{1}\) and \(L_{2}\). Recall that we want to show that \(L/K\) is an almost classically Galois extension and build a Hopf-Galois structure on \(L/K\) from Hopf-Galois structures on \(L_{1}/K\) and \(L_{2}/K\). For \(i\in\{1,2\}\), we introduce the following notation: * \(\widetilde{L_{i}}\) is the normal closure of \(L_{i}/K\). * \(G_{i}\coloneqq\operatorname{Gal}(\widetilde{L_{i}}/K)\), \(G^{\prime}_{i}\coloneqq\operatorname{Gal}(\widetilde{L_{i}}/L_{i})\) and \(J_{i}\coloneqq\operatorname{Gal}(\widetilde{L_{i}}/M_{i})\). * \(\lambda_{i}\colon G_{i}\longrightarrow\operatorname{Perm}(G_{i}/G^{\prime}_ {i})\) is the left translation map for \(L_{i}/K\). * \(N_{i}\) is a regular subgroup of \(\operatorname{Perm}(G_{i}/G^{\prime}_{i})\) normalized by \(\lambda_{i}(G_{i})\) (therefore, giving a Hopf-Galois structure on \(L_{i}/K\)). Note that the normal closure of \(L/K\) is \(\widetilde{L}=\widetilde{L_{1}L_{2}}\). Indeed, if \(N/K\) is a normal extension such that \(L\subseteq N\), then \(L_{1},L_{2}\subseteq N\), and the normality of \(N\) gives that \(\widetilde{L_{1}},\widetilde{L_{2}}\subseteq N\), so \(\widetilde{L_{1}}\widetilde{L_{2}}\subseteq N\). **Proposition 3.3**.: _Let \(L_{1}/K\) and \(L_{2}/K\) be strongly disjoint almost classically Galois extensions with complements \(M_{1}\), \(M_{2}\), and call \(L=L_{1}L_{2}\). Then \(L/K\) is an almost classically Galois extension with Galois complement \(M=M_{1}M_{2}\). Consequently, any strongly decomposable extension is almost classically Galois._ Proof.: Since \(M_{1}/K\) and \(M_{2}/K\) are Galois, by Proposition 3.1, \(M/K\) is also Galois. Moreover, by definition of Galois complement, \(\widetilde{L_{i}}\cong L_{i}\otimes_{K}M_{i}\) for \(i\in\{1,2\}\). Now, we have that \[L\otimes_{K}M=(L_{1}\otimes_{K}M_{1})(L_{1}\otimes_{K}M_{2})(L_{2}\otimes_{K} M_{1})(L_{2}\otimes_{K}M_{2})\cong\widetilde{L}(L_{1}\otimes_{K}M_{2})(L_{2} \otimes_{K}M_{1}).\] On the other hand, the strong disjointness together with the fact that \(L_{1},L_{2},M_{1},M_{2}\subseteq\widetilde{L}\) allows us to apply Proposition 2.10, obtaining that \(L_{i}\otimes_{K}M_{j}\cong L_{i}M_{j}\) for every \(1\leq i,j\leq 2\). Then \(L_{1}\otimes_{K}M_{2}\) and \(L_{2}\otimes_{K}M_{1}\) can be embedded in \(\widetilde{L}\), proving that \(L\otimes_{K}M\cong\widetilde{L}\). Let us call \(G\coloneqq\operatorname{Gal}(\widetilde{L}/K)\), \(J\coloneqq\operatorname{Gal}(\widetilde{L}/M)\) and \(G^{\prime}\coloneqq\operatorname{Gal}(\widetilde{L}/L)\). Applying Proposition 3.1 to the extensions \(\widetilde{L_{1}}/K\), \(\widetilde{L_{2}}/K\), we obtain that there is a monomorphism \(G\hookrightarrow G_{1}\times G_{2}\). Therefore, for each \(g\in G\) there are unique \(g_{1}\in G_{1}\), \(g_{2}\in G_{2}\) such that \(g=g_{1}g_{2}\). **Lemma 3.4**.: _Under the embedding \(G\hookrightarrow G_{1}\times G_{2}\), we have that:_ 1. 
\(G^{\prime}\) _is embedded in_ \(G^{\prime}_{1}\times G^{\prime}_{2}\)_._ 2. \(J\) _is isomorphic to_ \(J_{1}\times J_{2}\)_._ Proof.: If \(\{i,j\}=\{1,2\}\), using the \(K\)-linear disjointness of \(L_{i}\) with \(L_{j}\) and \(M_{i}\), we have that \(\widetilde{L_{i}}\cap L=(L_{i}M_{i})\cap(L_{i}L_{j})=L_{i}(L_{j}\cap M_{i})=K\), the last equality due to \(L_{1}/K\) and \(L_{2}/K\) being strongly disjoint. Hence, every \(g\in G^{\prime}\) satisfies \(g|_{\widetilde{L_{i}}}\in\operatorname{Gal}(\widetilde{L_{i}}/L_{i})\). Then, by means of the monomorphism \(G\hookrightarrow G_{1}\times G_{2}\), \(G^{\prime}\) is embedded in \(G^{\prime}_{1}\times G^{\prime}_{2}\). Likewise, we have that \(\widetilde{L_{i}}\cap M=M_{i}\) for \(i\in\{1,2\}\), and \(J\) is embedded in \(J_{1}\times J_{2}\). Now, call \(n_{i}=[L_{i}:K]\) for \(i\in\{1,2\}\) and \(n=[L:K]\). Since \(L_{1}/K\) and \(L_{2}/K\) are linearly disjoint, \(n=n_{1}n_{2}\). Moreover, the \(K\)-linear disjointness of \(L_{i}/K\) and \(M_{i}/K\) gives that \(n_{i}=[\widetilde{L}_{i}:M_{i}]=|J_{i}|\), and similarly, the \(K\)-linear disjointness of \(L/K\) and \(M/K\) yields \(n=|J|\). We deduce that \(|J|=|J_{1}\times J_{2}|\), and (2) follows. **Remark 3.5**.: In the proof of Lemma 3.4 we have not used the strong disjointness. In fact, we have proved that it holds simply by assuming that \(L_{1}/K\) and \(L_{2}/K\) are linearly disjoint. Let \(\lambda\colon G\longrightarrow\operatorname{Perm}(G/G^{\prime})\) be the left translation map of \(L/K\), and let \(\lambda_{i}\colon G_{i}\longrightarrow\operatorname{Perm}(G_{i}/G^{\prime}_ {i})\) be the left translation map of \(L_{i}/K\) for \(i\in\{1,2\}\). We will prove that the definition of \(\lambda\) can be recovered from the ones of \(\lambda_{1}\) and \(\lambda_{2}\). Since \(G^{\prime}\) is embedded in \(G^{\prime}_{1}\times G^{\prime}_{2}\), there is a well defined map \[\begin{array}{rcl}\psi\colon&G/G^{\prime}&\longrightarrow&G_{1}/G^{\prime}_{1} \times G_{2}/G^{\prime}_{2}\\ &g_{1}g_{2}G^{\prime}&\longmapsto&(g_{1}G^{\prime}_{1},g_{2}G^{\prime}_{2})\\ \end{array}\] **Lemma 3.6**.: _The map \(\psi\) is a bijection._ Proof.: First, we will check that \(\psi\) is surjective. Let \((g_{1}G_{1}^{\prime},g_{2}G_{2}^{\prime})\in G_{1}/G_{1}^{\prime}\times G_{2}/G_{2} ^{\prime}\). We know that the elements of \(J_{i}\) form a transversal for \(G_{i}/G_{i}^{\prime}\), that is: if \(J_{i}=\{\sigma_{1}^{(i)},\ldots,\sigma_{n_{i}}^{(i)}\}\), then \(G_{i}/G_{i}^{\prime}=\{\sigma_{1}^{(i)}G_{i}^{\prime},\ldots,\sigma_{n_{i}}^{( i)}G_{i}^{\prime}\}\). Then for each \(i\in\{1,2\}\) there are a unique \(j_{i}\in\{1,\ldots,n_{i}\}\) and some \(g_{i}^{\prime}\in G_{i}^{\prime}\) such that \(g_{i}=\sigma_{j_{i}}^{(i)}g_{i}^{\prime}\). Now, using the operation of the direct product \(G_{1}\times G_{2}\), \[g_{1}g_{2}=\sigma_{j_{1}}^{(1)}g_{1}^{\prime}\sigma_{j_{2}}^{(2)}g_{2}^{\prime }=\sigma_{j_{1}}^{(1)}\sigma_{j_{2}}^{(2)}g_{1}^{\prime}g_{2}^{\prime},\] and since \(J\) is isomorphic to \(J_{1}\times J_{2}\) under the embedding \(G\hookrightarrow G_{1}\times G_{2}\), we have that \(\sigma_{j_{1}}^{(1)}\sigma_{j_{2}}^{(2)}\in J\subseteq G\). Now, \[\psi(\sigma_{j_{1}}^{(1)}\sigma_{j_{2}}^{(2)}G^{\prime})=(\sigma_{j_{1}}^{(1)} G_{1}^{\prime},\sigma_{j_{2}}^{(2)}G_{2}^{\prime})=(g_{1}G_{1}^{\prime},g_{2}G_{2} ^{\prime}),\] and the surjectivity follows. 
The \(K\)-linear disjointness of \(L_{1}\) and \(L_{2}\) gives that the domain and codomain have the same (finite) number of elements, so \(\psi\) is a bijection. Let us identify \(G/G^{\prime}\) with \(G_{1}/G_{1}^{\prime}\times G_{2}/G_{2}^{\prime}\) by writing \(g_{1}g_{2}G^{\prime}=g_{1}G_{1}^{\prime}g_{2}G_{2}^{\prime}\) for \(g_{1}\in G_{1},g_{2}\in G_{2}\). **Lemma 3.7**.: _With the notation above, \(\lambda=\iota\circ\chi\), where_ \[\begin{array}{cccc}\chi\colon&G&\longrightarrow&\operatorname{Perm}(G_{1}/ G_{1}^{\prime})\times\operatorname{Perm}(G_{2}/G_{2}^{\prime}),\\ g_{1}g_{2}&\longmapsto&(\lambda_{1}(g_{1}),\lambda_{2}(g_{2})),\\ \iota\colon&\operatorname{Perm}(G_{1}/G_{1}^{\prime})\times\operatorname{ Perm}(G_{2}/G_{2}^{\prime})&\longrightarrow&\operatorname{Perm}(G/G^{ \prime}),\\ (\varphi_{1},\varphi_{2})&\longmapsto&g_{1}g_{2}G^{\prime}\mapsto\varphi_{1}( g_{1}G_{1}^{\prime})\varphi_{2}(g_{2}G_{2}^{\prime}).\end{array}\] _Moreover, both \(\chi\) and \(\iota\) are group monomorphisms._ **Remark 3.8**.: If we remove the identification of \(G/G^{\prime}\) with its image by \(\psi\), then the definition of \(\iota(\varphi_{1},\varphi_{2})\) for \(\varphi_{i}\in\operatorname{Perm}(G_{i}/G_{i}^{\prime})\) is \(\iota(\varphi_{1},\varphi_{2})(g_{1}g_{2}G^{\prime})=\psi^{-1}(\varphi_{1}(g_ {1}G_{1}^{\prime}),\varphi_{2}(g_{2}G_{2}^{\prime}))\), which is well defined because of the bijectivity of \(\psi\). Proof.: Fix \(g=g_{1}g_{2}\in G\) for \(g_{i}\in G_{i}\). Given \(hG^{\prime}\in G/G^{\prime}\), write \(h=h_{1}h_{2}\) with \(h_{i}\in G_{i}\), so that \(h_{1}h_{2}G^{\prime}=h_{1}G_{1}^{\prime}h_{2}G_{2}^{\prime}\). Then we have \[\iota\circ\chi(g)(hG^{\prime}) =\iota(\lambda_{1}(g_{1}),\lambda_{2}(g_{2}))(h_{1}h_{2}G^{\prime})\] \[=\lambda_{1}(g_{1})(h_{1}G_{1}^{\prime})\lambda_{2}(g_{2})(h_{2}G_ {2}^{\prime})\] \[=(g_{1}h_{1}G_{1}^{\prime})(g_{2}h_{2}G_{2}^{\prime})=g_{1}g_{2} h_{1}h_{2}G^{\prime}=\lambda(g)(hG^{\prime}).\] That \(\chi\) is a monomorphism follows immediately from the fact that so are \(\lambda_{1}\) and \(\lambda_{2}\). As for \(\iota\), let \((\varphi_{1},\varphi_{2})\in\operatorname{Ker}(\iota)\), so that \(\iota(\varphi_{1},\varphi_{2})=\operatorname{Id}_{G/G^{\prime}}\). Given \((g_{1},g_{2})\in G_{1}\times G_{2}\), we know from Lemma 3.6 that there is \(g\in G\) such that \(\psi(gG^{\prime})=(g_{1}G_{1}^{\prime},g_{2}G_{2}^{\prime})\), that is \(gG^{\prime}=g_{1}G_{1}^{\prime}g_{2}G_{2}^{\prime}\). Then \(\iota(\varphi_{1},\varphi_{2})(gG^{\prime})=\varphi_{1}(g_{1}G_{1}^{\prime}) \varphi_{2}(g_{2}G_{2}^{\prime})\), but also \(\iota(\varphi_{1},\varphi_{2})(gG^{\prime})=gG^{\prime}=g_{1}G_{1}^{\prime}g_{2 }G_{2}^{\prime}\). Now, the injectivity of \(\psi\) gives \(\varphi_{1}(g_{1}G_{1}^{\prime})=g_{1}G_{1}^{\prime}\) and \(\varphi_{2}(g_{2}G_{2}^{\prime})=g_{2}G_{2}^{\prime}\). Since \(g_{1}\) and \(g_{2}\) are arbitrary, we conclude that \(\varphi_{i}=\operatorname{Id}_{G_{i}/G_{i}^{\prime}}\). **Proposition 3.9**.: _If \(N_{i}\) is a regular subgroup of \(\operatorname{Perm}(G_{i}/G_{i}^{\prime})\) normalized by \(\lambda_{i}(G_{i})\) for \(i\in\{1,2\}\), then \(N=\iota(N_{1}\times N_{2})\) is a regular subgroup of \(\operatorname{Perm}(G/G^{\prime})\) normalized by \(\lambda(G)\)._ Proof.: First, we prove that \(N\) is regular. Since \(N_{1}\) and \(N_{2}\) are regular, we have that \(|N|=|N_{1}||N_{2}|=|G_{1}/G_{1}^{\prime}||G_{2}/G_{2}^{\prime}|=|G/G^{\prime}|\), so it is enough to check that the action of \(N\) on \(G/G^{\prime}\) is transitive. 
Let \(gG\), \(hG^{\prime}\in G/G^{\prime}\) and write \(g=g_{1}g_{2}\), \(h=h_{1}h_{2}\) with \(g_{i},h_{i}\in G_{i}\). We know that \(N_{i}\) acts transitively on \(G_{i}/G_{i}^{\prime}\), so there is \(\eta_{i}\in N_{i}\) such that \(\eta_{i}(g_{i}G_{i}^{\prime})=h_{i}G_{i}^{\prime}\) for \(i\in\{1,2\}\). Therefore, \(\iota(\eta_{1},\eta_{2})(gG^{\prime})=\eta_{1}(g_{1}G_{1}^{\prime})\eta_{2}(g_{2} G_{2}^{\prime})=(h_{1}G_{1}^{\prime})(h_{2}G_{2}^{\prime})=hG^{\prime}\). Now, let us prove that \(N\) is normalized by \(\lambda(G)\). If \((\eta,\mu)\in N_{1}\times N_{2}\) and \(g=g_{1}g_{2}\in G\), we have that \[\chi(g)(\eta,\mu)\chi(g)^{-1}=(\lambda_{1}(g_{1})\eta\lambda(g_{1})^{-1}, \lambda_{2}(g_{2})\mu\lambda_{2}(g_{2})^{-1})\in N_{1}\times N_{2}.\] Thus, since \(\lambda=\iota\circ\chi\), we have \[\lambda(g)\iota(\eta,\mu)\lambda(g)^{-1}=\iota(\lambda(g)(\eta,\mu)\lambda(g)^{ -1})\in N.\] ### The product Hopf-Galois structure Applying the correspondence provided by Greither-Pareigis theorem to the statement of Proposition 3.9, we obtain a Hopf-Galois structure on \(L/K\) from Hopf-Galois structures on \(L_{1}/K\) and \(L_{2}/K\). **Definition 3.10**.: _Let \(L_{1}/K\) and \(L_{2}/K\) be strongly disjoint almost classically Galois extensions and, for \(i\in\{1,2\}\), let \(H_{i}\) be the Hopf-Galois structure on \(L_{i}/K\) given by a permutation subgroup \(N_{i}\). For \(L=L_{1}L_{2}\), the Hopf-Galois structure on \(L/K\) given by \(N=\iota(N_{1}\times N_{2})\) will be referred to as the **product Hopf-Galois structure** on \(L/K\) from \(H_{1}\) and \(H_{2}\)._ The name for this construction will be justified after Proposition 3.14 below, where we will prove that the product Hopf-Galois structure \(H\) from \(H_{1}\) and \(H_{2}\) can be seen as the tensor product of \(H_{1}\) and \(H_{2}\), both at the level of Hopf algebras and at the level of actions. We now consider the product of almost classically Galois structures. **Proposition 3.11**.: _Let \(L_{1}/K\) and \(L_{2}/K\) be strongly disjoint almost classically Galois extensions with complements \(M_{1}\) and \(M_{2}\), and call \(L=L_{1}L_{2}\) and \(M=M_{1}M_{2}\). If \(H_{i}\) is an almost classically Galois structure on \(L_{i}/K\), then the product Hopf-Galois structure \(H\) of \(H_{1}\) and \(H_{2}\) on \(L/K\) is almost classically Galois. Moreover, if \(H_{i}\) is the almost classically Galois structure corresponding to \(M_{i}\) for each \(i\in\{1,2\}\), then \(H\) is the almost classically Galois structure on \(L/K\) corresponding to \(M\)._ Proof.: For each \(i\in\{1,2\}\), let \(J_{i}=\operatorname{Gal}(\widetilde{L_{i}}/M_{i})\), \(G^{\prime}_{i}=\operatorname{Gal}(\widetilde{L_{i}}/L_{i})\) and \(\lambda_{i}\colon G_{i}\longrightarrow\operatorname{Perm}(G_{i}/G^{\prime}_{ i})\) be the left translation map for \(L_{i}/K\). Let \(N_{i}\) be the subgroup of \(\operatorname{Perm}(G_{i}/G^{\prime}_{i})\) giving an almost classically Galois structure \(H_{i}\) on \(L_{i}/K\), so that \(N^{\operatorname{opp}}_{i}\subset\lambda_{i}(G_{i})\). Let \(N=\iota(N_{1}\times N_{2})\), which is the subgroup of \(\operatorname{Perm}(G/G^{\prime})\) giving the product Hopf-Galois structure \(H\) on \(L/K\) from \(H_{1}\) and \(H_{2}\). We need to check that \(N^{\operatorname{opp}}\subset\lambda(G)\). First, note that \(N^{\operatorname{opp}}=\iota(N^{\operatorname{opp}}_{1}\times N^{ \operatorname{opp}}_{2})\). 
Indeed, each element in the right side member is centralized by \(N\), so \(\iota(N^{\operatorname{opp}}_{1}\times N^{\operatorname{opp}}_{2})\subseteq N ^{\operatorname{opp}}\). Since \(N^{\operatorname{opp}}_{i}\) is a regular subgroup of \(\operatorname{Perm}(G_{i}/G^{\prime}_{i})\), Proposition 3.9 gives that \(\iota(N^{\operatorname{opp}}_{1}\times N^{\operatorname{opp}}_{2})\) is a regular subgroup of \(\operatorname{Perm}(G/G^{\prime})\). But \(N^{\operatorname{opp}}\) also is, so they have the same order and then the equality holds. Since \(H_{i}\) is almost classically Galois, \(N^{\operatorname{opp}}_{i}\subset\lambda_{i}(G_{i})\). Then, given \(\eta\in N^{\operatorname{opp}}\), there are \(g_{i}\in G_{i}\) such that \(\eta=\iota(\lambda_{1}(g_{1}),\lambda_{2}(g_{2}))\). Now, given \(h_{i}\in G_{i}\), we have that \[\eta(h_{1}h_{2}G^{\prime})=\lambda(g_{1})(h_{1}G^{\prime}_{1})\lambda(g_{2})(h _{2}G^{\prime}_{2})=(g_{1}h_{1}G^{\prime}_{1})(g_{2}h_{2}G^{\prime}_{2})=g_{1}g _{2}h_{1}h_{2}G^{\prime}=\lambda(g_{1}g_{2})(h_{1}h_{2}G^{\prime}).\] Then \(\eta=\lambda(g_{1}g_{2})\in\lambda(G)\). We conclude that \(N^{\operatorname{opp}}\subset\lambda(G)\). Now, let us assume that \(H_{i}\) corresponds to \(M_{i}\), so \(N_{i}=\lambda_{i}(J_{i})^{\operatorname{opp}}\). Then \(N=\iota(\lambda_{1}(J_{1})^{\operatorname{opp}}\times\lambda_{2}(J_{2})^{ \operatorname{opp}})\). Let us write \(J=\operatorname{Gal}(\widetilde{L}/M)\), \(G^{\prime}=\operatorname{Gal}(\widetilde{L}/L)\) and \(\lambda\colon G\longrightarrow\operatorname{Perm}(G/G^{\prime})\). It is easy to check that \(\lambda(J)=\iota(\lambda_{1}(J_{1})\times\lambda_{2}(J_{2}))\), so \(N=\lambda(J)^{\operatorname{opp}}\). Hence \(H\) is the almost classically Galois structure on \(L/K\) corresponding to \(M\). As noticed at the end of Section 2.2, the almost classically Galois structure corresponding to a complement in a Galois extension is its classical Galois structure. Thus, we obtain the following. **Corollary 3.12**.: _Let \(L_{1}/K\) and \(L_{2}/K\) be two linearly disjoint Galois extensions. The product Hopf-Galois structure on \(L/K\) of the classical Galois structures on \(L_{1}/K\) and \(L_{2}/K\) is the classical Galois structure on \(L/K\)._ #### The likeness with induced Hopf-Galois structures In this section we compare the notion of product Hopf-Galois structure with the one of induced Hopf-Galois structure that we discussed in Section 2.1.1. Let \(L/K\) be a Galois extension with group \(G=J\rtimes G^{\prime}\), and let \(E=L^{G^{\prime}}\), \(F=L^{J}\). The extension \(E/K\) is almost classically Galois because its Galois closure \(\widetilde{E}\) satisfies \(\dot{\widetilde{E}}\subseteq L\cong E\otimes_{K}F\) and we apply Theorem 2.4 (1). The Galois complement of \(E/K\) is necessarily contained in \(F\). Hence, the extensions \(E/K\) and \(F/K\) are not strongly disjoint unless the extension \(L/K\) is Galois. Then, it only makes sense to consider both product Hopf-Galois structures and induced Hopf-Galois structures in compositums of linearly disjoint Galois extensions. In that case, both notions are the same. **Proposition 3.13**.: _Let \(E/K\) and \(F/K\) be Galois extensions with group \(J\) and \(G^{\prime}\), and let \(L=EF\). 
Then the induced Hopf-Galois structures on \(L/K\) are the product Hopf-Galois structures on \(L/K\) from Hopf-Galois structures on \(E/K\) and \(F/K\)._ Proof.: By Proposition 3.1, the extension \(L/K\) is Galois with Galois group \(G\) isomorphic to the direct product \(J\times G^{\prime}\). In order to build a Hopf-Galois structure on \(L/K\), we notice that \(J\) is a transversal for \(G/G^{\prime}\) and then Hopf-Galois structures on \(E/K\) are in one-to-one correspondence with regular subgroups of \(\operatorname{Perm}(J)\) normalized by \(\lambda_{c}\colon J\longrightarrow\operatorname{Perm}(J)\). In this case, this is just the left regular representation of \(G\). On the other hand, the Hopf-Galois structures on \(F/K\) are in one-to-one correspondence with regular subgroups of \(\operatorname{Perm}(G^{\prime})\) normalized by \(\lambda^{\prime}\colon G^{\prime}\longrightarrow\operatorname{Perm}(G^{ \prime})\). Let \(\lambda\colon G\longrightarrow\operatorname{Perm}(G)\) be the left regular representation of \(G\). It is checked that the decompositions of \(\lambda\) in (1) and Lemma 3.7 are exactly the same. Then, induced Hopf-Galois structures and product Hopf-Galois structures on \(L/K\) are built in the same way. From the previous discussion it holds that as soon as some of the extensions we consider is not Galois, at most one of the two notions apply. However, there are still some similarities. Namely, we have an analogue of Proposition 2.3 for product Hopf-Galois structures. **Proposition 3.14**.: _Let \(L/K\) be the compositum of two strongly disjoint almost classically Galois extensions \(L_{1}/K\) and \(L_{2}/K\). Let \(H_{i}\) be a Hopf-Galois structure on \(L_{i}/K\) and let \(H\) be the product Hopf-Galois structure on \(L/K\) from \(H_{1}\) and \(H_{2}\). Then:_ 1. \(H\cong H_{1}\otimes_{K}H_{2}\) _as_ \(K\)_-algebras._ 2. _If_ \(h_{i}\in H_{i}\) _and_ \(\alpha_{i}\in L_{i}\) _for_ \(i\in\{1,2\}\)_, then_ \((h_{1}h_{2})\cdot(\alpha_{1}\alpha_{2})=(h_{1}\cdot\alpha_{1})(h_{2}\cdot \alpha_{2})\)_._ Proof.: Let \(N_{i}\) be the permutation subgroup corresponding to \(H_{i}\) under the Greither-Pareigis correspondence, so that \(H_{i}=\widetilde{L}_{i}[N_{i}]^{G_{i}}\). Then \(H=\widetilde{L}_{i}[\iota(N_{1}\times N_{2})]^{G}\). 1. First of all, given \(g=g_{1}g_{2}\in G\) with \(g_{1}\in G_{1}\) and \(g_{2}\in G_{2}\), \(\eta_{1}\in N_{1}\) and \(\eta_{2}\in N_{2}\), we have that \[g\cdot\iota(\eta_{1},\eta_{2})=\lambda(g)\iota(\eta_{1},\eta_{2})\lambda(g^{- 1})=\iota(\chi(g)(\eta_{1},\eta_{2})\chi(g^{-1})),\] and since \(\chi(g)=(\lambda_{1}(g_{1}),\lambda_{2}(g_{2}))\), we have \[\chi(g)(\eta_{1},\eta_{2})\chi(g^{-1})=(\lambda_{1}(g_{1})\eta_{1}\lambda_{1}( g_{1}^{-1}),\lambda_{2}(g_{2})\eta_{2}\lambda_{2}(g_{2}^{-1}))=(g_{1}\cdot \eta_{1},g_{2}\cdot\eta_{2}).\] We conclude that \[g\cdot\iota(\eta_{1},\eta_{2})=\iota(g_{1}\cdot\eta_{1},g_{2}\cdot\eta_{2}).\] Let us consider the map \(f\colon H_{1}\otimes_{K}H_{2}\longrightarrow\widetilde{L}[N]\) defined by \[f(h_{1}\otimes h_{2})=\sum_{i=1}^{n_{1}}\sum_{j=1}^{n_{2}}h_{i}^{(1)}h_{j}^{(2) }\iota(\eta_{i}^{(1)},\eta_{j}^{(2)}),\] where \(h_{1}=\sum_{i=1}^{n_{1}}h_{i}^{(1)}\eta_{i}^{(1)}\in H_{1}\) and \(h_{2}=\sum_{j=1}^{n_{2}}h_{j}^{(2)}\eta_{j}^{(2)}\in H_{2}\), with \(h_{i}^{(1)}\in\widetilde{L}_{1}\) and \(h_{j}^{(2)}\in\widetilde{L}_{2}\). It is clear that \(f\) is a morphism of \(K\)-algebras. We will prove that \(f(H_{1}\otimes_{K}H_{2})=H\) and that it is an isomorphism over the image. 
Let \(h_{1}\in H_{1}\) and \(h_{2}\in H_{2}\) be as before and let \(g=g_{1}g_{2}\in G\). Then \[g\cdot(f(h_{1},h_{2})) =\sum_{i=1}^{n_{1}}\sum_{j=1}^{n_{2}}g_{1}(h_{i}^{(1)})g_{2}(h_{j}^ {(2)})\iota(g_{1}\cdot\eta_{i}^{(1)},g_{2}\cdot\eta_{j}^{(2)})\] \[=f(g_{1}\cdot h_{1},g_{2}\cdot h_{2})=f(h_{1},h_{2}),\] which proves that \(f(H_{1}\otimes_{K}H_{2})=H\). Now, it is immediate to check that \(\dim_{K}(H)=\dim_{K}(H_{1}\otimes_{K}H_{2})\), so \(f\colon H_{1}\otimes_{K}H_{2}\longrightarrow H\) is an isomorphism of \(K\)-algebras. 2. Note that in this statement we have identified \(H\) with \(H_{1}\otimes_{K}H_{2}\) via the isomorphism \(f\) in the first statement and \(L\) with \(L_{1}\otimes_{K}L_{2}\) via the canonical map (since \(L_{1}\) and \(L_{2}\) are \(K\)-linearly disjoint). Write \(h_{1}\) and \(h_{2}\) as in the previous proof. We have that \[f(h_{1},h_{2})\cdot(\alpha_{1}\alpha_{2}) =\sum_{i=1}^{n_{1}}\sum_{j=1}^{n_{2}}h_{i}^{(1)}h_{j}^{(2)}(\eta_{i }^{(1)},\eta_{j}^{(2)})^{-1}(1_{G}G^{\prime})(\alpha_{1}\alpha_{2})\] \[=\sum_{i=1}^{n_{1}}\sum_{j=1}^{n_{2}}h_{i}^{(1)}h_{j}^{(2)}(\eta_{ i}^{(1)})^{-1}(1_{G_{1}}G_{1}^{\prime})(\eta_{j}^{(2)})^{-1}(1_{G_{2}}G_{2}^{ \prime})(\alpha_{1}\alpha_{2})\] \[=\sum_{i=1}^{n_{1}}\sum_{j=1}^{n_{2}}h_{i}^{(1)}h_{j}^{(2)}(\eta_ {i}^{(1)})^{-1}(1_{G_{1}}G_{1}^{\prime})(\alpha_{1})(\eta_{j}^{(2)})^{-1}(1_{ G_{2}}G_{2}^{\prime})(\alpha_{2})\] \[=\Big{(}\sum_{i=1}^{n_{1}}h_{i}^{(1)}(\eta_{i}^{(1)})^{-1}(1_{G_{1 }}G_{1}^{\prime})(\alpha_{1})\Big{)}\Big{(}\sum_{j=1}^{n_{2}}h_{j}^{(2)}(\eta_ {j}^{(2)})^{-1}(1_{G_{2}}G_{2}^{\prime})(\alpha_{2})\Big{)}\] \[=(h_{1}\cdot\alpha_{1})(h_{2}\cdot\alpha_{2}).\] From now on, we identify \(H\) with \(H_{1}\otimes_{K}H_{2}\) by means of \(f\) and write \(f(h_{1}\otimes h_{2})=h_{1}h_{2}\) for \(h_{1}\in H_{1}\) and \(h_{2}\in H_{2}\). Hence, for each \(h\in H\) there are unique elements \(h_{1}\in H_{1}\) and \(h_{2}\in H_{2}\) such that \(h=h_{1}h_{2}\). Now, we want to find a relation between the involved matrices of the action (see Section 2.4). In [21, Theorem 5.10], it was shown that the matrix of the action of an induced Hopf-Galois structure is the Kronecker product of the matrices of the Hopf-Galois structures from which it is built, up to permutation of rows. In the proof of that result, we only use the fact from Proposition 2.3 that an induced Hopf-Galois structure is isomorphic to the tensor product of the inducing Hopf-Galois structures. Since the same is true for product Hopf-Galois structures from Proposition 3.14, we deduce the following. **Proposition 3.15**.: _Let \(L_{1}/K\) and \(L_{2}/K\) be strongly disjoint almost classically Galois extensions. For \(i\in\{1,2\}\), let \(H_{i}\) be a Hopf-Galois structure on \(L_{i}/K\) and let \(H\) be the product Hopf-Galois structure of these on \(L/K\). Then, there is a permutation matrix \(P\in\operatorname{GL}_{n^{2}}(\mathcal{O}_{K})\) such that_ \[PM(H,L)=M(H_{1},L_{1})\otimes M(H_{2},L_{2}).\] Let us consider the setting in Section 2.4. Using the method described therein, we can use the relation between the matrices of the action to obtain an analogue of Proposition 2.16 for product Hopf-Galois structures. **Corollary 3.16**.: _Let \(K\) be the fraction field of a PID \(\mathcal{O}_{K}\). Let \(L_{1}/K\) and \(L_{2}/K\) be strongly disjoint almost classically Galois extensions, and assume that they are also arithmetically disjoint. 
For \(i\in\{1,2\}\), let \(H_{i}\) be a Hopf-Galois structure on \(L_{i}/K\) and let \(H\) be the product Hopf-Galois structure of these on \(L/K\), where \(L=L_{1}L_{2}\)._

1. _\(\mathfrak{A}_{H}=\mathfrak{A}_{H_{1}}\otimes_{\mathcal{O}_{K}}\mathfrak{A}_{H_{2}}\)._
2. _If \(\mathcal{O}_{L_{i}}\) is \(\mathfrak{A}_{H_{i}}\)-free for \(i\in\{1,2\}\), then \(\mathcal{O}_{L}\) is \(\mathfrak{A}_{H}\)-free._

Proof.: Let \(B_{i}\) be an \(\mathcal{O}_{K}\)-basis of \(\mathcal{O}_{L_{i}}\) for \(i\in\{1,2\}\). The hypothesis that \(L_{1}/K\) and \(L_{2}/K\) are arithmetically disjoint gives that \(\mathcal{O}_{L}=\mathcal{O}_{L_{1}}\otimes_{\mathcal{O}_{K}}\mathcal{O}_{L_{2}}\), so the product of the bases \(B_{1}\) and \(B_{2}\), consisting of all the possible products of an element of \(B_{1}\) and an element of \(B_{2}\), is an \(\mathcal{O}_{K}\)-basis of \(\mathcal{O}_{L}\), which we denote by \(B\).

1. For \(i\in\{1,2\}\), let \(D_{i}\) be a reduced matrix of \(M(H_{i},L_{i})\). By definition, there is a unimodular matrix \(U_{i}\in\operatorname{GL}_{n_{i}^{2}}(\mathcal{O}_{K})\) such that \(U_{i}M(H_{i},L_{i})=\left(\begin{matrix}D_{i}\\ O\end{matrix}\right)\), where \(M(H_{i},L_{i})\) is written by fixing the basis \(B_{i}\) in \(L_{i}\) for \(i\in\{1,2\}\). Now, the Kronecker product \(U_{1}\otimes U_{2}\in\operatorname{GL}_{n^{2}}(\mathcal{O}_{K})\) is a unimodular matrix that, by the mixed-product property, satisfies
\[(U_{1}\otimes U_{2})(M(H_{1},L_{1})\otimes M(H_{2},L_{2}))=(U_{1}M(H_{1},L_{1}))\otimes(U_{2}M(H_{2},L_{2}))=\left(\begin{matrix}D_{1}\\ O\end{matrix}\right)\otimes\left(\begin{matrix}D_{2}\\ O\end{matrix}\right),\]
and there is a permutation matrix \(Q\in\operatorname{GL}_{n^{2}}(\mathcal{O}_{K})\) moving the zero rows of the last matrix to the bottom, so that \(Q\left(\left(\begin{matrix}D_{1}\\ O\end{matrix}\right)\otimes\left(\begin{matrix}D_{2}\\ O\end{matrix}\right)\right)=\left(\begin{matrix}D_{1}\otimes D_{2}\\ O\end{matrix}\right)\). By Proposition 3.15, there is a unimodular matrix \(P\in\operatorname{GL}_{n^{2}}(\mathcal{O}_{K})\) such that \(PM(H,L)=M(H_{1},L_{1})\otimes M(H_{2},L_{2})\), where \(M(H,L)\) is written by fixing the basis \(B\) in \(L\). Then,
\[[Q(U_{1}\otimes U_{2})P^{-1}]M(H,L)=\left(\begin{matrix}D_{1}\otimes D_{2}\\ O\end{matrix}\right),\]
and the matrix \(Q(U_{1}\otimes U_{2})P^{-1}\) is unimodular. Moreover, \(B\) is an integral basis of \(L\). Then, \(D_{1}\otimes D_{2}\) is a reduced matrix of \(M(H,L)\). Now, the result is obtained by applying Proposition 2.15.
2. For \(i\in\{1,2\}\), let \(V_{i}=\{v_{i_{j}}\}_{j=1}^{n_{i}}\) be an \(\mathcal{O}_{K}\)-basis of \(\mathfrak{A}_{H_{i}}\) and let \(\gamma_{i}\in\mathcal{O}_{L_{i}}\) be an \(\mathfrak{A}_{H_{i}}\)-free generator of \(\mathcal{O}_{L_{i}}\). Then \(\{v_{i_{j}}\cdot\gamma_{i}\}_{j=1}^{n_{i}}\) is an \(\mathcal{O}_{K}\)-basis of \(\mathcal{O}_{L_{i}}\) for \(i\in\{1,2\}\). Since \(L_{1}/K\) and \(L_{2}/K\) are arithmetically disjoint, the product of these bases is an \(\mathcal{O}_{K}\)-basis of \(\mathcal{O}_{L}\). Now, this basis is formed by the elements of the product of \(V_{1}\) and \(V_{2}\) acting on \(\gamma=\gamma_{1}\gamma_{2}\in\mathcal{O}_{L}\), and by the previous part that product is an \(\mathcal{O}_{K}\)-basis of \(\mathfrak{A}_{H}\). Therefore, \(\gamma\) is an \(\mathfrak{A}_{H}\)-free generator of \(\mathcal{O}_{L}\) and in particular \(\mathcal{O}_{L}\) is \(\mathfrak{A}_{H}\)-free.

For almost classically Galois extensions with associated rings of integers, the conditions of arithmetic and strong disjointness are not redundant. Clearly, two extensions that are strongly disjoint need not be arithmetically disjoint: a pair of linearly disjoint Galois extensions is automatically strongly disjoint, and there are many examples of such pairs that are not arithmetically disjoint.
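For instance, a classical pair of this kind is \(L_{1}=\mathbb{Q}(\sqrt{2})\) and \(L_{2}=\mathbb{Q}(\sqrt{3})\): both extensions are Galois and linearly disjoint over \(\mathbb{Q}\), hence strongly disjoint, but they are not arithmetically disjoint, since the algebraic integer \(\frac{\sqrt{2}+\sqrt{6}}{2}\), a root of \(x^{4}-4x^{2}+1\), lies in \(\mathcal{O}_{L}\) for \(L=\mathbb{Q}(\sqrt{2},\sqrt{3})\) but not in the subring \(\mathbb{Z}[\sqrt{2},\sqrt{3}]\) generated by \(\mathcal{O}_{L_{1}}\) and \(\mathcal{O}_{L_{2}}\), so that \(\mathcal{O}_{L}\neq\mathcal{O}_{L_{1}}\otimes_{\mathbb{Z}}\mathcal{O}_{L_{2}}\).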
On the other hand, in the following, we will exhibit two arithmetically disjoint extensions that are not strongly disjoint.

**Example 3.17**.: Let \(L_{1}=\mathbb{Q}_{3}(\alpha)\), where \(\alpha\) is a root of \(f_{1}(x)=x^{3}+3x^{2}+3\), and let \(L_{2}=\mathbb{Q}_{3}(\sqrt{2})\). At the LMFDB database, the extension \(L_{1}/\mathbb{Q}_{3}\) corresponds to p-adic field 3.3.4.4, while the extension \(L_{2}/\mathbb{Q}_{3}\) corresponds to p-adic field 3.2.0.1. Since the extensions \(L_{1}/\mathbb{Q}_{3}\) and \(L_{2}/\mathbb{Q}_{3}\) have coprime degrees, they are linearly disjoint. Moreover, since \(L_{1}/\mathbb{Q}_{3}\) is ramified and \(L_{2}/\mathbb{Q}_{3}\) is unramified, they are arithmetically disjoint. The extension \(L_{1}/\mathbb{Q}_{3}\) is almost classically Galois with complement \(M_{1}=\mathbb{Q}_{3}(\sqrt{-1})=\mathbb{Q}_{3}(\sqrt{2})\) (see [19, Example 3.4]), and \(L_{2}\cap M_{1}\neq\mathbb{Q}_{3}\). We conclude that \(L_{1}/\mathbb{Q}_{3}\) and \(L_{2}/\mathbb{Q}_{3}\) are not strongly disjoint.

## 4 Kummer theory for Galois extensions

From now on, we establish the following conventions. For each positive integer \(m\), we fix a primitive \(m\)-th root of unity \(\zeta_{m}\) such that if \(m_{1}\mid m_{2}\), then \(\zeta_{m_{2}}^{\frac{m_{2}}{m_{1}}}=\zeta_{m_{1}}\). Moreover, we write \(n\) for a positive integer and \(K\) for a field whose characteristic is coprime to \(n\). Let us fix an algebraic closure \(\overline{K}\) of \(K\), so that \(\zeta_{n}\in\overline{K}\).

Given \(a\in K^{*}\), the polynomial \(x^{n}-a\) has \(n\) different roots in \(\overline{K}\), namely the elements \(\zeta_{n}^{i}\alpha\) with \(0\leq i\leq n-1\), where \(\alpha\) is any fixed root. Let us assume that the polynomial \(x^{n}-a\) is irreducible over \(K\). Then, any of its roots generates a degree \(n\) extension of \(K\), which we denote by \(L=K(\sqrt[n]{a})\), or \(L=K(\alpha)\) with \(\alpha^{n}=a\in K\). The field \(L\) is determined only up to \(K\)-isomorphism. From now on, each time we consider an element \(\alpha\in\overline{L}\) with \(\alpha^{n}=a\in K\), we will assume that the polynomial \(x^{n}-a\) is irreducible over \(K\).

Now, assume that \(K\) contains the \(n\)-th roots of unity (equivalently \(\zeta_{n}\in K\)), so that \(L=K(\sqrt[n]{a})\) does not depend on the choice of the root of \(x^{n}-a\). Then \(L/K\) is Galois. Assume in addition that \(\alpha^{k}\notin K\) for every \(k<n\) and any such root \(\alpha\). If \(G\) is the Galois group of \(L/K\), the automorphism \(\sigma\in G\) such that \(\sigma(\alpha)=\zeta_{n}\alpha\) clearly has order \(n\), and therefore \(L/K\) is cyclic. Conversely, if \(L/K\) is cyclic with Galois group \(G=\langle\sigma\rangle\), we know from Hilbert's Theorem 90 that there is some \(\alpha\in L\) such that \(\sigma(\alpha)=\zeta_{n}\alpha\). Since \(\sigma\) is a generator of \(G\), it follows that \(\alpha^{n}\in K\) and no smaller power of \(\alpha\) is in \(K\). In summary, we have the following well known result.

**Proposition 4.1**.: _Assume that \(\zeta_{n}\in K\). Let \(L/K\) be a Galois extension of degree \(n\) with Galois group \(G\). The following statements are equivalent:_

1. _\(L/K\) is cyclic._
2. _\(L=K(\alpha)\) for some \(\alpha\in L\) such that \(\alpha^{n}\in K\) and no smaller power of \(\alpha\) is in \(K\)._

This provides a complete characterization of the cyclic extensions of the field \(K\) in terms of the adjunction of \(n\)-th roots of elements of \(K\).
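To fix ideas, here is a simple instance of Proposition 4.1: take \(K=\mathbb{Q}(i)\), so that \(\zeta_{4}=i\in K\), and \(a=2\). One checks that \(x^{4}-2\) is irreducible over \(K\), so \(L=K(\sqrt[4]{2})\) is a cyclic extension of degree \(4\), a generator of its Galois group being the automorphism determined by \(\sqrt[4]{2}\mapsto i\sqrt[4]{2}\).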
It is possible to extend this result to a characterization of all finite abelian extensions of \(K\). It is well known that any finite abelian group is a direct product of cyclic subgroups, and its exponent is the least common multiple of the orders of the cyclic subgroups. This motivates the notion of Kummer extension.

**Definition 4.2**.: _Let \(L/K\) be an extension of fields with characteristic coprime to \(n\). Assume that \(K\) contains the \(n\)-th roots of unity. We say that \(L/K\) is Kummer with respect to \(n\) if \(L/K\) is abelian and its Galois group has exponent dividing \(n\)._

That is, the Galois group of a Kummer extension with respect to \(n\) is a direct product of cyclic groups of order dividing \(n\). Using the fundamental theorem of Galois theory, the following is proved (see for instance [30, Theorem 11.4]).

**Proposition 4.3**.: _Assume that \(\zeta_{n}\in K\). Let \(L/K\) be a Galois extension with Galois group \(G\). The following statements are equivalent:_

1. _\(L/K\) is Kummer with exponent \(n\)._
2. _\(L=K(\alpha_{1},\dots,\alpha_{k})\) for some \(\alpha_{1},\dots,\alpha_{k}\in L\) such that \(\alpha_{i}^{n}\in K\) for every \(1\leq i\leq k\) and no positive integer smaller than \(n\) has this property._

Note that (2) can be restated as follows: given \(a_{1},\dots,a_{k}\in K\), \(n\) is the minimal positive integer with the property that \(L=K(\sqrt[n]{a_{1}},\dots,\sqrt[n]{a_{k}})\). Call \(L_{i}=K(\sqrt[n]{a_{i}})\), \(n_{i}=[L_{i}:K]\), and let \(\alpha_{i}\in L_{i}\) with \(\alpha_{i}^{n}=a_{i}\). Then there is some \(0\leq m\leq n\) such that \(\zeta_{n}^{m}\alpha_{i}^{n_{i}}\in K\) (see [2, Lemma 3.5]), whence \(\alpha_{i}^{n_{i}}\in K\). Since in addition no smaller power of \(\alpha_{i}\) belongs to \(K\), each \(L_{i}/K\) is a cyclic extension of degree exactly \(n_{i}\). If \(G_{i}:=\operatorname{Gal}(L_{i}/K)\) for each \(1\leq i\leq k\), it is shown that \(G=\prod_{i=1}^{k}G_{i}\) and \(G_{i}\cong G/H_{i}\), where \(H_{i}=\prod_{j=1,j\neq i}^{k}G_{j}\). We can assume without loss of generality that \(\alpha_{1},\dots,\alpha_{k}\) is a minimal set of generators. In that case, \(i\neq j\) implies that \(L_{i}\cap L_{j}=L^{H_{i}H_{j}}=L^{G}=K\), so the extensions \(L_{i}/K\) and \(L_{j}/K\) are linearly disjoint. Then, Kummer extensions with exponent \(n\) are the compositums of linearly disjoint cyclic extensions with degree dividing \(n\).

Each extension \(L/K\) as in (2) gives rise to a unique finitely generated subgroup of the multiplicative group \(K^{*}/(K^{*})^{n}\). Namely, if \(L=K(\sqrt[n]{a_{1}},\dots,\sqrt[n]{a_{k}})\), then \(\langle a_{1},\dots,a_{k}\rangle\) is a subgroup of \(K^{*}\). Now, multiplying any \(a_{i}\) by an \(n\)-th power of an element in \(K^{*}\) does not change the extension \(L/K\). Thus, the projection of such a subgroup onto \(K^{*}/(K^{*})^{n}\) is completely determined by \(L\). Conversely, each subgroup \(B\) of \(K^{*}/(K^{*})^{n}\) is assigned to the extension \(L=K(\sqrt[n]{B})\), where

\[\sqrt[n]{B}=\{\alpha\in\overline{K}\,|\,\alpha^{n}\in B\}.\]

Note that each \(\alpha\in\overline{K}\) such that \(\alpha^{n}\in B\) belongs to \(L\) because of the assumption that \(\zeta_{n}\in K\). The assignment described above is bijective because two extensions of the form \(K(\sqrt[n]{a})\), \(K(\sqrt[n]{b})\) with \(a,b\in K\) are the same if and only if there is some \(c\in K^{*}\) and \(r\in\mathbb{Z}_{>0}\) coprime to \(n\) such that \(a=b^{r}c^{n}\) (see [26, Chapter III, Lemma 3]).
Therefore, from Proposition 4.3 we recover the following well known result: **Corollary 4.4**.: _Assume that \(\zeta_{n}\in K\). Let \(L/K\) be a Galois extension with Galois group \(G\). There is a bijective correspondence between:_ 1. _Finitely generated subgroups of the multiplicative group_ \(K^{*}/(K^{*})^{n}\)_._ 2. _Finite abelian extensions of_ \(K\) _with exponent_ \(n\)_._ _Within this correspondence, degree \(n\) cyclic extensions of \(K\) correspond bijectively to cyclic subgroups of \(K^{*}/(K^{*})^{n}\)._ ### A characterization of Kummer Galois extensions In this section we will rewrite the Kummer condition for a Galois extension \(L/K\) in terms of the action of its Galois group. We will start with the cyclic case. We know from Proposition 4.1 that such an extension is of the form \(L=K(\alpha)\) with \(\alpha^{n}\in K\) whenever \(\zeta_{n}\in K\), since it has cyclic Galois group \(G\). Now, note that for every \(\sigma\in G\) there is a unique \(0\leq i_{\sigma}\leq n-1\) such that \(\sigma(\alpha)=\zeta_{n}^{i_{\sigma}}\alpha\). This means that the element \(\alpha\) is an eigenvector of all the elements \(\sigma\in G\), where these are regarded as \(K\)-endomorphisms of \(L\). This property is also a characterization of cyclic extensions of \(K\). **Proposition 4.5**.: _Let \(n\in\mathbb{Z}_{>0}\) and let \(K\) be a field with characteristic coprime to \(n\). Let \(L=K(\alpha)\) be a Galois extension of \(K\) with group \(G\). The following statements are equivalent:_ 1. \(\zeta_{n}\in K\)_,_ \(\alpha^{n}\in K\) _and no smaller power of_ \(\alpha\) _is in_ \(K\)_._ 2. \(\alpha\) _is an eigenvector of each automorphism_ \(\sigma\in G\) _and_ \(|G|=n\)_._ Proof.: We have already seen that (1) implies (2). Conversely, let us assume the situation in (2), so that \(|G|=n\) and for each \(\sigma\in G\) there is an element \(\lambda_{\sigma}\in K\) such that \(\sigma(\alpha)=\lambda_{\sigma}\alpha\). Since \(\alpha\) is a primitive element, all the elements \(\sigma(\alpha)\) are distinct as \(\sigma\) runs through \(G\), so the norm of \(\alpha\) is \(N(\alpha)=\prod_{\sigma\in G}\sigma(\alpha)=\prod_{\sigma\in G}\lambda_{ \sigma}\alpha^{n}\), which obviously belongs to \(K\). Since the \(\lambda_{\sigma}\) also do, we obtain that \(\alpha^{n}\in K\). In other words, \(L=K(\alpha)\) for \(\alpha^{n}\in K\). Moreover, the condition that \(|G|=n\) ensures that the minimal polynomial of \(\alpha\) has degree \(n\), so no smaller power of \(\alpha\) belongs to \(K\). Let us check that \(\zeta_{n}\in K\). Indeed, the minimal polynomial of \(\alpha\) over \(K\) is \(f(x)=x^{n}-a\) with \(a\coloneqq\alpha^{n}\), and hence the conjugates of \(\alpha\) are \(\alpha\), \(\zeta_{n}\alpha,\ldots,\zeta_{n}^{n-1}\alpha\). Thus, there is a unique \(0\leq i_{\sigma}\leq n-1\) such that \(\sigma(\alpha)=\zeta_{n}^{i_{\sigma}}\alpha\). But \(\sigma(\alpha)=\lambda_{\sigma}\alpha\), so \(\zeta_{n}^{i_{\sigma}}=\lambda_{\sigma}\in K\) for every \(\sigma\in G\). In particular, \(\zeta_{n}\in K\). We will refer to a primitive element \(\alpha\) as in Proposition 4.5 as a \(G\)-eigenvector. From the proof of this result, we see that the eigenvalues are \(n\)-th roots of unity. Now, we use the same idea to characterize arbitrary Kummer extensions in terms of the Galois action. We have seen that a Kummer extension is a product of cyclic extensions, each of which has a Galois eigenvector as a primitive element. 
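For instance, in the biquadratic extension \(\mathbb{Q}(\sqrt{2},\sqrt{3})/\mathbb{Q}\) every automorphism sends each of \(\sqrt{2}\), \(\sqrt{3}\) and \(\sqrt{6}\) to \(\pm\) itself, so \(\{1,\sqrt{2},\sqrt{3},\sqrt{6}\}\) is a \(\mathbb{Q}\)-basis of Galois eigenvectors, with eigenvalues \(\pm 1\).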
Accordingly, we will prove that Kummer extensions are those with a finite generating set of Galois eigenvectors. **Lemma 4.6**.: _Let \(L_{1}/K\) and \(L_{2}/K\) be Galois extensions with Galois groups \(G_{1}\) and \(G_{2}\). Let \(L=L_{1}L_{2}\) and let \(G=\operatorname{Gal}(L/K)\). If \(\alpha_{k}\in L_{k}\) is a \(G_{k}\)-eigenvector for \(k\in\{1,2\}\), then \(\alpha_{1}\alpha_{2}\) is a \(G\)-eigenvector in \(L\). Consequently, the product of generating sets of \(G_{k}\)-eigenvectors for \(L_{k}/K\) is a generating set of \(G\)-eigenvectors for \(L/K\)._ Proof.: By assumption we have that for each \(\sigma_{k}\in G_{k}\) there is \(\lambda_{\sigma_{k}}\in K\) such that \(\sigma_{k}(\alpha_{k})=\lambda_{\sigma_{k}}\alpha_{k}\). From Proposition 3.1, there are unique elements \(\sigma_{1}\in G_{1}\) and \(\sigma_{2}\in G_{2}\) such that \(\sigma=\sigma_{1}\sigma_{2}\in G\). Now, \[\sigma(\alpha_{1}\alpha_{2})=\sigma(\alpha_{1})\sigma(\alpha_{2})=\lambda_{ \sigma_{1}}\lambda_{\sigma_{2}}\alpha_{1}\alpha_{2}\] with \(\lambda_{\sigma_{1}}\lambda_{\sigma_{2}}\in K\). The last sentence follows from an easy induction. **Theorem 4.7**.: _Let \(n\in\mathbb{Z}_{>0}\) and let \(K\) be a field with characteristic coprime to \(n\). Let \(L=K(\alpha_{1},\ldots,\alpha_{k})\) be a finite Galois extension of \(K\) with group \(G\). The following statements are equivalent:_ 1. \(\zeta_{n}\in K\)_,_ \(\alpha_{i}^{n}\in K\) _for all_ \(1\leq i\leq k\) _and_ \(n\) _is minimal for this property._ 2. \(\{\alpha_{1},\ldots,\alpha_{k}\}\) _is a generating set of_ \(G\)_-eigenvectors for_ \(L/K\) _and_ \(\exp(G)=n\)_._ Proof.: Call \(L_{i}\coloneqq K(\alpha_{i})\) and \(n_{i}\coloneqq[L_{i}:K]\) for each \(1\leq i\leq k\). Assume the situation in (1). Then Proposition 4.3 gives that \(L/K\) is Kummer with exponent \(n\). Moreover, under this assumption we have that the extensions \(L_{i}/K\) are cyclic, and \(G=\prod_{i=1}^{k}G_{i}\) where \(G_{i}\coloneqq\operatorname{Gal}(L_{i}/K)\). Using Proposition 4.5, we obtain that \(\alpha_{i}\) is a \(G_{i}\)-eigenvector for each \(1\leq i\leq k\). By Lemma 4.6, \(\{\alpha_{1},\ldots,\alpha_{k}\}\) is a generating set of \(G\)-eigenvectors for \(L/K\). Now, suppose that \(L/K\) satisfies (2). Given \(1\leq i\leq k\), for each \(\sigma\in G\) there is a unique \(\lambda_{\sigma,i}\in K\) such that \(\sigma(\alpha_{i})=\lambda_{\sigma,i}\alpha_{i}\). Then each \(L_{i}/K\) is a Galois extension whose Galois group acts on \(L_{i}\) by restriction of the action of \(G\) on \(L\). Therefore, given \(1\leq i\leq k\), we have that \(\sigma_{i}(\alpha_{i})=\lambda_{\sigma_{i},i}\alpha_{i}\) for every \(\sigma_{i}\in G_{i}\coloneqq\operatorname{Gal}(L_{i}/K)\). Applying Proposition 4.5 with each \(L_{i}/K\), we have that \(\zeta_{ni_{i}}\in K\), \(\alpha_{i}^{n_{i}}\in K\) and no smaller power of \(\alpha_{i}\) belongs to \(K\), that is, \(L_{i}/K\) is cyclic of degree \(n_{i}\). Since \(L=\prod_{i=1}^{k}L_{i}\), Proposition 3.1 gives that \(G\) is a direct product of cyclic subgroups of order dividing \(n\). From the paragraph after Proposition 4.3 we conclude that \(L/K\) is Kummer with exponent \(n\), and hence \(\alpha_{1},\ldots,\alpha_{k}\) are as in (1). Let us prove that \(\zeta_{n}\in K\). Since \(G\) has exponent \(n\), \(n\) is the least common multiple of the numbers \(n_{i}=|G_{i}|\). Let \(\zeta=\zeta_{n_{1}}\ldots\zeta_{n_{k}}\in K\). 
The subgroup of \(K^{*}\) generated by \(\zeta_{n_{1}},\ldots,\zeta_{n_{k}}\) (which contains \(\zeta\)) is a finite subgroup of \(K^{*}\), hence cyclic, and its exponent is the least common multiple of the numbers \(n_{i}\), which is \(n\). Therefore this subgroup is cyclic of order \(n\) and contains a primitive \(n\)-th root of unity, so \(\zeta_{n}\in K\).

**Remark 4.8**.: If \(L/K\) is a Galois extension with some \(K\)-basis of \(G\)-eigenvectors \(\{\alpha_{1},\ldots,\alpha_{k}\}\), then the eigenvalues of the elements \(\alpha_{i}\) under the action of \(G\) are necessarily \(n\)-th roots of unity. Indeed, we have seen in the proof of Theorem 4.7 that \(G=G_{1}\times\cdots\times G_{k}\) for \(G_{i}=\operatorname{Gal}(K(\alpha_{i})/K)\), and each \(\alpha_{i}\) is a \(G_{i}\)-eigenvector with an \(n\)-th root of unity as eigenvalue. By Lemma 4.6, a \(G\)-eigenvalue of \(L\) is a product of these, hence again an \(n\)-th root of unity.

## 5 A Kummer condition for Hopf-Galois extensions

Let \(K\) be a field with characteristic coprime to \(n\in\mathbb{Z}_{>0}\). In Section 4 we have introduced the notion of Galois eigenvector and we have shown that it can be used to characterize the Kummer condition for Galois extensions. This notion only depends on the Galois group and its Galois action, so it makes sense to define an analogous concept for any Hopf-Galois structure on a given Hopf-Galois extension.

Let \(L/K\) be an \(H\)-Galois extension of fields and let \(\cdot\) be the action of \(H\) on \(L\). Then the map

\[\rho_{H}\colon H\longrightarrow\operatorname{End}_{K}(L)\]

defined as \(\rho_{H}(h)(x)=h\cdot x\) is a \(K\)-linear monomorphism. Indeed, we have that \(\rho_{H}=j\circ\iota\), where \(\iota\colon H\longrightarrow L\otimes_{K}H\) is the canonical inclusion defined by \(\iota(h)=1\otimes h\) and \(j\colon L\otimes_{K}H\longrightarrow\operatorname{End}_{K}(L)\) is the map from Section 2.1, which is a \(K\)-linear isomorphism by definition of Hopf-Galois structure.

**Definition 5.1**.: _Let \(L/K\) be an \(H\)-Galois extension of fields. We say that an element \(\alpha\in L\) is an eigenvector of the action of \(H\), or an \(H\)**-eigenvector**, if for every \(h\in H\) there exists some \(\lambda(h)\in K\) such that_

\[h\cdot\alpha=\lambda(h)\alpha,\]

_or equivalently, \(\alpha\) is an eigenvector of the \(K\)-endomorphism \(\rho_{H}(h)\) for every \(h\in H\). The element \(\lambda(h)\) is called an \(H\)**-eigenvalue**._

If \(L/K\) is Galois with group \(G\) and we write \(H_{c}\) for its classical Galois structure, the notion of \(G\)-eigenvector in Section 4 is just the one of \(H_{c}\)-eigenvector according to Definition 5.1.

There are some immediate remarks from Definition 5.1. The element \(\alpha=0\) is always an \(H\)-eigenvector in \(L\). Indeed, since \(\rho_{H}(h)\) is \(K\)-linear, \(h\cdot 0=0\) for every \(h\in H\). Another trivial example of eigenvector is \(\alpha=1\). In this case, we have that \(h\cdot 1=\epsilon_{H}(h)\), where \(\epsilon_{H}\) is the counit of \(H\) as a \(K\)-Hopf algebra. Note that if \(\alpha\neq 0\), the element \(\lambda(h)\) is completely determined by \(h\). In the sequel we will always assume that \(H\)-eigenvectors are not zero.

In order to check that an element \(\alpha\) is an \(H\)-eigenvector, it is enough to consider a \(K\)-basis of \(H\). Indeed, if \(W=\{w_{i}\}_{i=1}^{n}\) is a \(K\)-basis of \(H\), for \(h=\sum_{i=1}^{n}h_{i}w_{i}\in H\) we have that

\[h\cdot\alpha=\sum_{i=1}^{n}h_{i}w_{i}\cdot\alpha=\Big{(}\sum_{i=1}^{n}h_{i}\lambda(w_{i})\Big{)}\alpha=\lambda(h)\alpha,\]

where \(\lambda(h)=\sum_{i=1}^{n}h_{i}\lambda(w_{i})\in K\).
Under this terminology, Theorem 4.7 states that when \(\zeta_{n}\in K\), a Galois extension \(L/K\) is Kummer if and only if it admits some finite generating set of eigenvectors under the action of its Galois group, and therefore under the action of the classical Galois structure on \(L/K\). This motivates the following definition. **Definition 5.2**.: _Let \(L/K\) be a degree \(n\)\(H\)-Galois extension of fields. We say that \(L/K\) is \(H\)**-Kummer** if it admits some finite generating set of \(H\)-eigenvectors._ With this definition, any Kummer extension in the classical sense is Kummer with respect to its classical Galois structure. Sometimes it will be more convenient to work with \(K\)-bases of \(L\) rather than generating sets for \(L/K\). Actually, the existence of such a basis is equivalent to the \(H\)-Kummer property, due to the following result: **Proposition 5.3**.: _Let \(L/K\) be an \(H\)-Galois extension with some \(H\)-eigenvector. Then the product of \(H\)-eigenvectors is also an \(H\)-eigenvector._ Proof.: Let \(\alpha_{1},\alpha_{2}\in L\) be \(H\)-eigenvectors and let \(h\in H\). Then there are unique elements \(\lambda_{\alpha_{1}}(h),\lambda_{\alpha_{2}}(h)\in K\) such that \(h\cdot\alpha_{1}=\lambda_{\alpha_{1}}\alpha_{1}\) and \(h\cdot\alpha_{2}=\lambda_{\alpha_{2}}\alpha_{2}\). Using the Sweedler's notation for \(h\), \[h\cdot(\alpha_{1}\alpha_{2})=\sum_{(h)}(h_{(1)}\cdot\alpha_{1})(h_{(2)}\cdot \alpha_{2})=\sum_{(h)}\lambda(h_{(1)})\lambda(h_{(2)})\alpha_{1}\alpha_{2}= \lambda(h)\alpha_{1}\alpha_{2},\] where \(\lambda(h)=\sum_{(h)}\lambda(h_{(1)})\lambda(h_{(2)})\in K\). **Remark 5.4**.: In general, the sum of \(H\)-eigenvectors is not an \(H\)-eigenvector. For example, let \(L=\mathbb{Q}(\alpha)\) with \(\alpha^{3}=2\), which admits a unique Hopf-Galois structure \(H\) by Byott uniqueness theorem [7]. It is described as follows: Let \(G=\operatorname{Gal}(\widetilde{L}/\mathbb{Q})\), where \(\widetilde{L}=\mathbb{Q}(\alpha,\zeta_{3})\) is the normal closure of \(L/\mathbb{Q}\). Let \(\sigma\in G\) be defined by \(\sigma(\alpha)=\zeta_{3}\alpha\) and \(\sigma(\zeta_{3})=\zeta_{3}\), and let \(\tau\in G\) be the automorphism fixing \(\alpha\) and taking \(\zeta_{3}\) to its inverse, so that \(G\) is generated by \(\sigma\) and \(\tau\). In the notation of Section 2.1, let us identify \(\sigma\) with \(\lambda(\sigma)\). Then we can use Greither-Pareigis theorem to show that \(H=\mathbb{Q}[w]\) where \(w=\sqrt{-3}(\sigma-\sigma^{2})\) acts on \(L\) by means of the Galois action of \(\sigma\) on \(L\) (this is a straightforward calculation, see for instance [12, Section 1.2]). Now, \(\alpha\) is an \(H\)-eigenvector since \(w\cdot\alpha=-3\alpha\) and \(w^{2}\cdot\alpha=9\alpha\), and by Proposition 5.3, so is \(\alpha^{2}\). However, \(w\cdot(\alpha+\alpha^{2})=3(-\alpha+\alpha^{2})\), and there is no \(\lambda\in\mathbb{Q}\) such that \(w\cdot(\alpha+\alpha^{2})=\lambda(\alpha+\alpha^{2})\). **Corollary 5.5**.: _Let \(L/K\) be an \(H\)-Galois extension. Assume that there is some primitive element \(\alpha\) of \(L/K\) which is an \(H\)-eigenvector. Then \(L/K\) admits some \(K\)-basis of \(H\)-eigenvectors of \(L\)._ Proof.: Let \(n=[L:K]\). An easy induction on Proposition 5.3 shows that if \(\alpha\in L\) is an \(H\)-eigenvector, so is \(\alpha^{i}\) for every positive integer \(i\). Thus, when \(\alpha\) is in addition a primitive element, we have that \(\{1,\alpha,\ldots,\alpha^{n-1}\}\) is a \(K\)-basis of \(H\)-eigenvectors for \(L\). 
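For instance, in the extension of Remark 5.4 one obtains in this way the \(\mathbb{Q}\)-basis of \(H\)-eigenvectors \(\{1,\alpha,\alpha^{2}\}\): using the description of \(w\) given there, a direct computation yields \(w\cdot 1=0\), \(w\cdot\alpha=-3\alpha\) and \(w\cdot\alpha^{2}=3\alpha^{2}\).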
**Corollary 5.6**.: _Let \(L/K\) be an \(H\)-Galois extension. Then \(L/K\) is \(H\)-Kummer if and only if \(L\) has some \(K\)-basis of \(H\)-eigenvectors._ Proof.: If \(L\) has a \(K\)-basis of \(H\)-eigenvectors, then this is a generating set of \(H\)-eigenvectors for \(L/K\), so \(L/K\) is \(H\)-Kummer. Conversely, suppose that \(L/K\) is \(H\)-Kummer, and let \(\{\alpha_{1},\ldots,\alpha_{k}\}\) be a generating set of \(H\)-eigenvectors for \(L/K\). By Proposition 5.3, the powers of the elements \(\alpha_{i}\) are also \(H\)-eigenvectors. Now, since \(L\) is the compositum of the fields \(K(\alpha_{i})\), the elements \(\alpha_{i}\) and their powers form a system of generators for \(L\) as a \(K\)-vector space that are \(H\)-eigenvectors. Hence, \(L\) contains some \(K\)-basis of \(H\)-eigenvectors. ## 6 The correspondence with radical extensions In Section 4 we established a correspondence between Kummer Galois extensions and radical extensions of a field \(K\), and under this correspondence cyclic extensions correspond to simple radical ones. Moreover, we proved that this characterization can be rewritten in terms of the existence of a finite generating system of Galois eigenvectors. In this section we will generalize the results in Section 4 to the Hopf-Galois setting by means of the notion of \(H\)-eigenvector introduced in Section 5. As a consequence, we will establish a correspondence between \(H\)-Kummer extensions and radical extensions of a field \(K\) with characteristic coprime to \(n\in\mathbb{Z}_{>0}\). This correspondence will not include all \(H\)-Kummer extensions; in fact it will be defined for a subclass of almost classically Galois extensions of \(K\). ### The case of simple radical extensions We will consider first simple radical extensions of a field \(K\), i.e. those that are generated by a single \(n\)-th root of some element in \(K\). In the case that \(K\) contains the \(n\)-th roots of unity, this extension is cyclic and by Proposition 4.5 corresponds to an extension generated by a single Galois eigenvector. Accordingly, we introduce the following notion. **Definition 6.1**.: _Let \(L/K\) be an \(H\)-Galois extension. We say that \(L/K\) is \(H\)-cyclic if it has some primitive element which in addition is an \(H\)-eigenvector._ In this part we will prove Theorem 1.1 for \(k=1\), which corresponds to simple radical extensions. Namely: **Theorem 6.2**.: _Let \(n\in\mathbb{Z}_{>0}\), let \(K\) be a field with characteristic coprime to \(n\) and let \(M=K(\zeta_{n})\). Let \(L=K(\alpha)\) be a finite extension of \(K\). The following statements are equivalent:_ 1. \(L\cap M=K\)_,_ \(\alpha^{n}\in K\) _and_ \(n\) _is minimal for this property._ 2. \(L/K\) _is a degree_ \(n\) _almost cyclic extension with Galois complement_ \(M\) _and_ \(\alpha\) _is an_ \(H\)_-eigenvector of_ \(L\)_, where_ \(H\) _is the almost classically Galois structure on_ \(L/K\) _corresponding to_ \(M\)_._ _In particular, the simple radical degree \(n\) extensions of \(K\) that are linearly disjoint with \(M\) are the degree \(n\) almost cyclic extensions of \(K\) that are \(H\)-cyclic._ Note that if in Theorem 6.2 we impose that \(\zeta_{n}\in K\), then \(M=K\) and the condition in (2) becomes that \(L/K\) is Galois with some primitive element as eigenvector of its Galois action, recovering the statement of Proposition 4.5. In Theorem 6.2, that (2) implies (1) is an immediate consequence of the following. 
**Proposition 6.3**.: _Let \(n\in\mathbb{Z}_{>0}\) and let \(K\) be a field with characteristic coprime to \(n\). Let \(L/K\) be a degree \(n\) almost abelian extension and let \(H\) be an almost classically Galois structure corresponding to some Galois complement \(M\). Assume that \(L/K\) is \(H\)-cyclic and let \(\alpha\) be a primitive element of \(L/K\) which is an \(H\)-eigenvector of \(L\). Then \(M=K(\zeta_{n})\), \(\alpha^{n}\in K\) and no smaller power of \(\alpha\) is in \(K\). In particular, \(L/K\) is an almost cyclic extension._ Proof.: Since \(M\) is the Galois complement of \(L/K\), we know that \(L\) and \(M\) are \(K\)-linearly disjoint and \(\widetilde{L}\cong L\otimes_{K}M\). On the other hand, that \(\alpha\) is an \(H\)-eigenvector means that for each \(h\in H\) there is a unique \(\lambda(h)\in K\) such that \(h\cdot\alpha=\lambda(h)\alpha\). Call \(J=\operatorname{Gal}(\widetilde{L}/M)\), which is abelian by hypothesis. From Proposition 2.9 we obtain that \(H=M[J]^{G^{\prime}}\), so \(M\otimes_{K}H=M[J]\), and this is the classical Galois structure on \(\widetilde{L}/M\). Since elements in \(M\otimes_{K}H\) are \(M\)-linear combinations of elements in \(H\) and \(h\cdot\alpha=\lambda(h)\alpha\) for every \(h\in H\), we have that \(\alpha\) is an eigenvector under the action of \(M\otimes_{K}H\). Therefore, \(\alpha\) is an eigenvector with respect to the classical Galois structure on \(\widetilde{L}/M\), i.e. a \(J\)-eigenvector. Since \(L/K\) has degree \(n\), we know that \(|J|=n\). Applying Proposition 4.5, we obtain that \(\zeta_{n}\in M\), \(\alpha^{n}\in M\) and no smaller power of \(\alpha\) is in \(M\). Thus \(\alpha^{n}\in L\cap M=K\), and if \(\alpha^{k}\in K\) with \(1<k\leq n\), the fact that \(\alpha^{k}\in M\) implies that \(k=n\). Then the minimal polynomial of \(\alpha\) over \(K\) is \(x^{n}-a\) with \(a\coloneqq\alpha^{n}\in K\), so the conjugates of \(\alpha\) are of the form \(\zeta_{n}^{k}\alpha\), \(0\leq k\leq n-1\). We deduce that \(\widetilde{L}=L(\zeta_{n})=LK(\zeta_{n})\). Since in addition \(\widetilde{L}=LM\) with \(L,M\)\(K\)-linearly disjoint and \(K(\zeta_{n})\subseteq M\), we conclude that \(M=K(\zeta_{n})\). It is remarkable that unlike in Theorem 6.2 (2), in Proposition 6.3 we do not need to assume that the extension \(L/K\) is almost cyclic and that its complement is \(K(\zeta_{n})\). Instead, this is obtained as a consequence of the assumption that the almost classically Galois extension is almost abelian and \(H\)-cyclic, where \(H\) is the almost classically Galois structure corresponding to the fixed complement. Next, we prove that the first statement of Theorem 6.2 implies the second one. **Proposition 6.4**.: _Let \(n\in\mathbb{Z}_{>0}\), let \(K\) be a field with characteristic coprime to \(n\) and let \(M=K(\zeta_{n})\). Let \(L=K(\alpha)\) with \(\alpha^{n}\in K\) and such that \(n\) is minimal for this property, and assume that \(L\cap M=K\). Then \(L/K\) is an almost cyclic extension and \(\alpha\) is an \(H\)-eigenvector of \(L\)._ Proof.: The conjugates of \(\alpha\) are \(\zeta_{n}^{i}\alpha\) with \(0\leq i\leq n\), so we have that \(\widetilde{L}=LM\). In addition \(L\cap M=K\) and \(M/K\) is Galois, so \(L\) and \(M\) are \(K\)-linearly disjoint. Therefore \(L/K\) is almost classically Galois with complement \(M\). On the other hand, we have that the normal closure of \(L\) is \(\widetilde{L}=M(\alpha)\) with \(\alpha^{n}\in M\). 
If \(0<k\leq n\) and \(\alpha^{k}\in M\), then \(\alpha^{k}\in L\cap M=K\), so \(k=n\). Hence no power of \(\alpha\) smaller than \(n\) belongs to \(M\), and \(\zeta_{n}\in M\). This proves that \(\widetilde{L}/M\) is cyclic, that is, \(L/K\) is almost cyclic.

On the other hand, from Proposition 4.5 we see that for each \(\sigma\in J\) there is a unique \(\lambda_{\sigma}\in M\) such that \(\sigma(\alpha)=\lambda_{\sigma}\alpha\). Now, Proposition 2.9 gives that \(H=M[J]^{G^{\prime}}\). Hence for each \(h\in H\) there are \(h_{1},\dots,h_{n}\in M\) such that \(h=\sum_{i=1}^{n}h_{i}\sigma_{i}\). Then

\[h\cdot\alpha=\Big{(}\sum_{i=1}^{n}h_{i}\sigma_{i}\Big{)}\cdot\alpha=\Big{(}\sum_{i=1}^{n}h_{i}\lambda_{\sigma_{i}}\Big{)}\alpha.\]

Write \(\lambda(h)=\sum_{i=1}^{n}h_{i}\lambda_{\sigma_{i}}\in M\). Then \(h\cdot\alpha=\lambda(h)\alpha\in L\), so \(\lambda(h)\in L\cap M=K\) and \(\alpha\) is an \(H\)-eigenvector of \(L\).

The proof of Theorem 6.2 is immediate from Propositions 6.3 and 6.4.

If \(n\) is a Burnside number (i.e., with the property that \(\gcd(n,\varphi(n))=1\)), then the condition \(L\cap M=K\) is always fulfilled, as \([M:K]\) is always a divisor of \(\varphi(n)\), where \(\varphi\) is the Euler totient function, and hence \([L:K]\) and \([M:K]\) are coprime. In that case, the almost classically Galois structure \(H\) on \(L/K\) corresponding to \(K(\zeta_{n})\) is the only one (see [7, Theorem 2]), and all simple radical degree \(n\) extensions of \(K\) are the degree \(n\) almost cyclic \(H\)-cyclic extensions of \(K\).

It is possible to use Theorem 6.2 to derive an injective correspondence from a subset of degree \(n\) almost cyclic extensions of \(K\) to cyclic subgroups of \(K^{*}/(K^{*})^{n}\). We follow the same idea as in Corollary 4.4: a simple radical extension \(K(\sqrt[n]{a})\) determines a cyclic subgroup \(\langle a\rangle\) of \(K^{*}/(K^{*})^{n}\). The following result is a generalization of [26, Chapter III, Lemma 3] in which the assumption \(\zeta_{n}\in K\) is removed.

**Lemma 6.5**.: _Let \(n\in\mathbb{Z}_{>0}\) and let \(K\) be a field with characteristic coprime to \(n\). Two degree \(n\) almost cyclic extensions \(K(\sqrt[n]{a})/K\) and \(K(\sqrt[n]{b})/K\) with \(a,b\in K\) and complement \(M=K(\zeta_{n})\) are \(K\)-isomorphic if and only if \(a=b^{r}c^{n}\) with \(c\in K^{*}\) and \(r\in\mathbb{Z}_{>0}\) coprime to \(n\)._

Proof.: We can assume without loss of generality that \(a,b\notin(K^{*})^{k}\) for every \(1<k\leq n\). Then the normal closures of \(K(\sqrt[n]{a})/K\) and \(K(\sqrt[n]{b})/K\) are obtained by adjoining the primitive \(n\)-th root of unity \(\zeta_{n}\). Thus, for \(M=K(\zeta_{n})\), it is enough to notice that the fields \(M(\sqrt[n]{a})\) and \(M(\sqrt[n]{b})\) are in the conditions of [26, Chapter III, Lemma 3] since \(\zeta_{n}\in M\).

The main difference with respect to the Galois case is that if \(\zeta_{n}\notin K\), the label \(K(\sqrt[n]{a})\) does not determine a unique extension of \(K\), but a \(K\)-isomorphism class of these. Hence, Lemma 6.5 means that \(a\) and \(b\) generate a rank \(2\) subgroup of \(K^{*}/(K^{*})^{n}\) if and only if they generate extensions of \(K\) lying in different \(K\)-isomorphism classes. Thus, an extension \(L=K(\alpha)\) with \(\alpha^{n}=a\in K\) is assigned to the projection of the cyclic subgroup \(\langle a\rangle\) in \(K^{*}/(K^{*})^{n}\), but the extensions generated by the conjugates of \(\alpha\) are also sent to the same subgroup.
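For instance, for \(K=\mathbb{Q}\) and \(n=3\), the three fields \(\mathbb{Q}(\sqrt[3]{2})\), \(\mathbb{Q}(\zeta_{3}\sqrt[3]{2})\) and \(\mathbb{Q}(\zeta_{3}^{2}\sqrt[3]{2})\) are pairwise distinct but mutually \(\mathbb{Q}\)-isomorphic, and all of them are sent to the same cyclic subgroup \(\langle 2\rangle\) of \(\mathbb{Q}^{*}/(\mathbb{Q}^{*})^{3}\).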
In order to obtain an injective correspondence, we need to identify all such extensions. On the other hand, recall that when a simple radical extension \(K(\sqrt[n]{a})\) is linearly disjoint with \(K(\zeta_{n})\), then by Theorem 6.2, it uniquely determines an almost cyclic extension of \(K\) that is \(H\)-cyclic, where \(H\) corresponds to \(K(\zeta_{n})\). A cyclic subgroup \(\langle a\rangle\) of \(K^{*}/(K^{*})^{n}\) with that property will be said to be coprime with \(\zeta_{n}\). All together, we obtain the following: **Corollary 6.6**.: _Let \(n\in\mathbb{Z}_{>0}\) and let \(K\) be a field with characteristic coprime to \(n\). There is an injective correspondence from the \(K\)-isomorphism classes of degree \(n\) almost cyclic extensions \(L/K\) that are \(H\)-cyclic, where \(H\) is the almost classically Galois structure on \(L/K\) corresponding to \(K(\zeta_{n})\), to the cyclic subgroups of \(K^{*}/(K^{*})^{n}\). Moreover, a cyclic subgroup \(\langle a\rangle\) of \(K^{*}/(K^{*})^{n}\) lies in the image of this correspondence if and only if \(K(\sqrt[n]{a})\cap K(\zeta_{n})=K\)._ ### The general case: radical extensions In this part we will provide a complete proof of Theorem 1.1. We want to apply the construction in Section 3 to classes of Kummer extensions of \(K\), either Galois or Hopf-Galois, to obtain Hopf-Galois structures in the compositum of those. We have already characterized almost cyclic extensions of \(K\) as \(H\)-Kummer extensions with some primitive element as \(H\)-eigenvector. The following result can be seen as a generalization of Proposition 6.3 for almost classically Galois extensions that are \(H\)-Kummer. **Proposition 6.7**.: _Let \(n\in\mathbb{Z}_{>0}\) and let \(K\) be a field with characteristic coprime to \(n\). Let \(L/K\) be an almost abelian extension of exponent \(n\) and let \(H\) be an almost classically Galois structure corresponding to some Galois complement \(M\). Assume that \(L/K\) is \(H\)-Kummer and let \(\{\alpha_{1},\dots,\alpha_{k}\}\) be a generating set of \(H\)-eigenvectors for \(L/K\). Then \(M=K(\zeta_{n})\), \(\alpha_{i}^{n}\in K\) for all \(1\leq i\leq k\), and \(n\) is minimal for this property. In particular, \(L/K\) is almost Kummer._ Proof.: By definition we have that \(L\cap M=K\) and \(\widetilde{L}\cong L\otimes_{K}M\). Then we have that \(\widetilde{L}=M(\alpha_{1},\dots,\alpha_{k})\). Let \(J=\operatorname{Gal}(\widetilde{L}/M)\), so that \(H=M[J]^{G^{\prime}}\) and \(M\otimes_{K}H=M[J]\), which is the classical Galois structure on the Galois extension \(\widetilde{L}/M\). Then \(\{\alpha_{1},\dots,\alpha_{k}\}\) is a generating set of \(J\)-eigenvectors for \(\widetilde{L}/M\). Moreover, by assumption, the group \(J\) has exponent \(n\). Using Theorem 4.7, we obtain that \(\zeta_{n}\in M\) and \(\widetilde{L}/M\) is Kummer with generators \(\alpha_{1},\dots,\alpha_{k}\). Thus \(\alpha_{i}^{n}\in M\) for all \(1\leq i\leq k\) and \(n\) is minimal for this property, meaning that for each \(1<l<n\) there is some \(1\leq i\leq k\) such that \(\alpha_{i}^{l}\notin M\). Hence \(\alpha_{i}^{n}\in L\cap M=K\) and if there is some \(1<l<n\) such that for every \(1\leq i\leq k\) we have that \(\alpha_{i}^{l}\in K\), then \(\alpha_{i}^{l}\in M\), which is a contradiction. Finally, we have that \(\widetilde{L}=LK(\zeta_{n})=LM\) with \(K(\zeta_{n})\subseteq M\), so necessarily \(M=K(\zeta_{n})\). As an immediate consequence, in Theorem 1.1, (2) implies (1). 
Moreover, as in the case of simple radical extensions, it is not necessary to assume that the extension is almost Kummer and that \(M=K(\zeta_{n})\), as these are implied by the hypothesis that the almost classically Galois extension is almost abelian and \(H\)-Kummer. As for the converse, we will need the following natural generalization of Lemma 4.6 to this setting. **Lemma 6.8**.: _Let \(L_{1}/K\) and \(L_{2}/K\) be strongly disjoint almost classically Galois extensions. For each \(i\in\{1,2\}\), let \(H_{i}\) be a Hopf-Galois structure on \(L_{i}/K\) and let \(\alpha_{i}\in L_{i}\) be an \(H_{i}\)-eigenvector. Let \(L=L_{1}L_{2}\) and let \(H\) be the product Hopf-Galois structure of \(H_{1}\) and \(H_{2}\) on \(L\). Then \(\alpha_{1}\alpha_{2}\) is an \(H\)-eigenvector of \(L\). In particular, the union of generating sets of \(H_{i}\)-eigenvectors for \(L_{i}\) with \(i\in\{1,2\}\) is a generating set of \(H\)-eigenvectors for \(L\)._ Proof.: Given \(h\in H\), there are unique \(h_{1}\in H_{1}\) and \(h_{2}\in H_{2}\) such that \(h=h_{1}h_{2}\). Since \(\alpha_{i}\) is an \(H_{i}\)-eigenvector of \(L_{i}\), there are \(\lambda_{1}(h_{1}),\lambda_{2}(h_{2})\in K\) such that \(h_{1}\cdot\alpha_{1}=\lambda_{1}(h_{1})\alpha_{1}\) and \(h_{2}\cdot\alpha_{2}=\lambda_{2}(h_{2})\alpha_{2}\). Now, from Proposition 3.14 we obtain that \[h\cdot(\alpha_{1}\alpha_{2})=(h_{1}\cdot\alpha_{1})(h_{2}\cdot\alpha_{2})= \lambda_{1}(h_{1})\lambda_{2}(h_{2})\alpha_{1}\alpha_{2}=\lambda(h)\alpha_{1} \alpha_{2},\] where \(\lambda(h)=\lambda_{1}(h_{1})\lambda_{2}(h_{2})\in K\). For the last sentence, note that since \(1\) is an eigenvector for a Hopf-Galois structure on any extension, any \(H_{i}\)-eigenvector of \(L_{i}\) is also an \(H\)-eigenvector of \(L\). Now, the other implication is proved. **Proposition 6.9**.: _Let \(n\in\mathbb{Z}_{>0}\), let \(K\) be a field with characteristic coprime to \(n\) and let \(M=K(\zeta_{n})\). Let \(L/K\) be a strongly decomposable extension and let \(\alpha_{1},\ldots,\alpha_{k}\in L\) be such that \(L=K(\alpha_{1},\ldots,\alpha_{k})\) and \(K(\alpha_{i})\), \(K(\alpha_{j})\) are strongly disjoint whenever \(i\neq j\). Assume that \(L\cap M=K\), \(\alpha_{i}^{n}\in K\) for all \(1\leq i\leq k\), and \(n\) is minimal for this property. Then \(L/K\) is almost Kummer of exponent \(n\) with complement \(M\) and \(\alpha_{1},\ldots,\alpha_{k}\) are \(H\)-eigenvectors, where \(H\) is the almost classically Galois structure corresponding to \(M\)._ Proof.: Let us call \(L_{i}=K(\alpha_{i})\) and \(n_{i}=[L_{i}:K]\) for every \(1\leq i\leq k\). Let us fix some such an \(i\). By [2, Lemma 3.5] there is some \(m\) such that \(\zeta_{n}^{m}\alpha_{i}^{n_{i}}\in K\). Since \(\alpha_{i}^{n_{i}}\in L\) and \(\zeta_{n}^{m}\in M\) with \(L\) and \(M\)\(K\)-linearly disjoint, necessarily \(\alpha_{i}^{n_{i}}\in K\). Moreover, it is immediate that no smaller power of \(\alpha_{i}\) belongs to \(K\). In addition to this, we have that \(M_{i}\coloneqq K(\zeta_{n_{i}})\subseteq M\) and \(L\cap M=K\), whence \(L_{i}\cap M_{i}=K\). Therefore, we can apply Theorem 6.2, which gives that \(L_{i}/K\) is an almost cyclic extension with complement \(M_{i}\) and \(\alpha_{i}\) is an \(H_{i}\)-eigenvector, where \(H_{i}\) is the almost classically Galois structure on \(L_{i}/K\) corresponding to \(M_{i}\). On the other hand, since \(n\) is the minimal integer such that \(\alpha_{i}^{n}\in K\) for all \(1\leq i\leq k\), we have that \(n\) is the least common multiple of \(n_{1},\ldots,n_{k}\). 
Arguing as in the proof of (2) implies (1) in Theorem 4.7, we see that \(\zeta_{n}\) lies in the compositum of the fields \(M_{i}\); since \(n_{i}\mid n\) for every \(1\leq i\leq k\), each \(M_{i}\subseteq M\), so \(M=\prod_{i=1}^{k}M_{i}\). In particular, \(\widetilde{L}=LM\). By the assumption, \(L_{i}/K\) and \(L_{j}/K\) are strongly disjoint whenever \(i\neq j\). Applying successively Proposition 3.11 and Lemma 6.8, we obtain that \(L/K\) is almost classically Galois with complement \(M\) and \(\{\alpha_{1},\ldots,\alpha_{k}\}\) is a generating system of \(H\)-eigenvectors for \(L/K\). Finally, applying Lemma 3.4 successively, the group \(J=\operatorname{Gal}(\widetilde{L}/M)\) is isomorphic to the direct product of the groups \(J_{i}=\operatorname{Gal}(\widetilde{L_{i}}/M_{i})\), and hence abelian of exponent \(n\). Therefore, the extension \(\widetilde{L}/M\) is Kummer, so \(L/K\) is almost Kummer.

Theorem 1.1 follows immediately from Propositions 6.7 and 6.9.

If in the statement of Theorem 1.1 we choose \(k=1\), we recover Theorem 6.2. On the other hand, if we assume that \(\zeta_{n}\in K\), then \(M=K\) and \(L\cap M=K\), so the strong disjointness condition translates just to linear disjointness, while (1) is just that \(L/K\) is Kummer with exponent \(n\). Moreover, (2) means that \(L/K\) is almost classically Galois with some \(K\)-basis of \(G\)-eigenvectors, so we recover Theorem 4.7.

Next, we discuss how to extend the correspondence of Corollary 6.6 to an injective correspondence from a subset of almost Kummer extensions of \(K\) to finitely generated subgroups of \(K^{*}/(K^{*})^{n}\). First of all, an \(n\)-radical extension \(L=K\left(\sqrt[n]{a_{1}},\ldots,\sqrt[n]{a_{k}}\right)\) gives rise to a finitely generated subgroup \(\langle a_{1},\ldots,a_{k}\rangle\) of \(K^{*}/(K^{*})^{n}\). But Theorem 1.1 only applies to strongly decomposable extensions, so we would need that \(K(\sqrt[n]{a_{i}})\) and \(K(\sqrt[n]{a_{j}})\) are strongly disjoint when \(i\neq j\). Moreover, if \(L\cap K(\zeta_{n})=K\), \(L/K\) is a strongly decomposable almost Kummer extension that is \(H\)-Kummer, where \(H\) corresponds to \(K(\zeta_{n})\). We obtain:

**Corollary 6.10**.: _Let \(n\in\mathbb{Z}_{>0}\) and let \(K\) be a field with characteristic coprime to \(n\). There is an injective correspondence from the \(K\)-isomorphism classes of strongly decomposable almost Kummer extensions \(L/K\) of exponent \(n\) that are \(H\)-Kummer, where \(H\) is the almost classically Galois structure on \(L/K\) corresponding to \(K(\zeta_{n})\), to the finitely generated subgroups of \(K^{*}/(K^{*})^{n}\). Moreover, a finitely generated subgroup \(B\) of \(K^{*}/(K^{*})^{n}\) is in the image of this correspondence if and only if \(K(\sqrt[n]{B})\) is strongly decomposable and \(K(\sqrt[n]{B})\cap K(\zeta_{n})=K\)._

Finally, let us discuss the applicability of Theorem 1.1 to classes of extensions \(L/K\). First of all, we need the simple radical subextensions generating \(L/K\) to be pairwise \(K\)-linearly disjoint. To this end, we will use Proposition 2.11. For \(k=2\), this says that two simple radical extensions \(L_{1}/K\), \(L_{2}/K\) of degrees \(n_{1}\), \(n_{2}\) respectively are linearly disjoint if \(K\) is totally real or \(\zeta_{n_{1}},\zeta_{n_{2}}\in K\). The case that \(\zeta_{n_{i}}\in K\) for \(i\in\{1,2\}\) does not represent any improvement with respect to Galois theory, as these conditions imply that the extension \(L=K(\alpha_{1},\alpha_{2})\) is Galois over \(K\), and therefore it is already covered by Theorem 4.7.
In the other case of Proposition 2.11, we have strong disjointness whenever we consider positive numbers. **Proposition 6.11**.: _Let \(K\) be a totally real number field and let \(L_{i}=K(\sqrt[q]{a_{i}})\), \(i\in\{1,2\}\), with \(a_{1},a_{2}\in K\), \(a_{1},a_{2}>0\), such that \(\langle a_{1},a_{2}\rangle\) generates a rank \(2\) subgroup of \(K^{*}/(K^{*})^{n}\). Then, \(L_{1}/K\) and \(L_{2}/K\) are strongly disjoint._ Proof.: First of all, the assumption that \(\langle a_{1},a_{2}\rangle\) has rank \(2\) ensures that \(L_{1}\) and \(L_{2}\) are \(K\)-linearly disjoint. Since \(a_{1},a_{2}>0\), we can assume without loss of generality that \(L_{i}=K(\alpha_{i})\) for the real root \(\alpha_{i}\) of \(x^{n}-a_{i}\), \(i\in\{1,2\}\). Indeed, for each \(i\in\{1,2\}\) the extensions defining \(L_{i}\) are \(K\)-isomorphic to \(K(\alpha_{i})\), and the strong disjointness property does not depend on this choice. Then, we have that \(L_{1},L_{2}\subset\mathbb{R}\). Let \(M_{1}\) and \(M_{2}\) be the Galois complements of \(L_{1}/K\) and \(L_{2}/K\), respectively. From Lemma 6.5, the hypothesis that \(\langle a_{1},a_{2}\rangle\) generates a rank \(2\) subgroup of \(K^{*}/(K^{*})^{n}\) implies that \(L_{1}\) and \(L_{2}\) are not \(K\)-isomorphic. Moreover, \(M_{i}=K(\zeta_{n_{i}})\), where \(n_{i}\coloneqq[L_{i}:K]\) for \(i\in\{1,2\}\). Then \(M_{1},M_{2}\not\subset\mathbb{R}\) while \(L_{1},L_{2}\subset\mathbb{R}\) and \(K\) is totally real. Necessarily, \(L_{1}\cap M_{2}=L_{2}\cap M_{1}=K\). **Remark 6.12**.: If we drop the assumption that \(a_{1},a_{2}>0\), the result is not true. For instance, for \(L_{1}=\mathbb{Q}(\sqrt[q]{2})\) and \(L_{2}=\mathbb{Q}(\sqrt[q]{-3})\), we have that \(\langle 2,-3\rangle=\langle 2,3\rangle\) is a rank \(2\) subgroup of \(\mathbb{Q}^{*}/(\mathbb{Q}^{*})^{3}\), but the Galois complement of \(L_{1}/\mathbb{Q}\) is \(M_{1}=\mathbb{Q}(\sqrt{-3})\subset L_{2}\), and hence \(L_{2}\cap M_{1}\neq\mathbb{Q}\). ## 7 Module structure of the ring of integers From now on, we work with the following setting: \(K\) will be the fraction field of a Dedekind domain \(\mathcal{O}_{K}\), \(L\) will be an \(H\)-Galois extension of \(K\) and \(\mathcal{O}_{L}\) will be the integral closure of \(\mathcal{O}_{K}\) in \(L\). We also assume that \(\mathcal{O}_{L}\) is \(\mathcal{O}_{K}\)-free. This is not always the case, see for example [29]. However, it is implied for instance under the condition that \(\mathcal{O}_{K}\) is a PID. This includes extensions of \(p\)-adic fields and many extensions of number fields. For the latter ones, \(\mathcal{O}_{K}\) is a PID if and only if \(K\) has class number one. In the case that \(L/K\) is \(H\)-Kummer, we are interested in finding an \(\mathcal{O}_{K}\)-basis of \(\mathfrak{A}_{H}\) and studying the freeness of \(\mathcal{O}_{L}\) as module over the associated order \(\mathfrak{A}_{H}\). **Definition 7.1**.: _Let \(L/K\) be an \(H\)-Galois extension and fix a \(K\)-basis \(W=\{w_{i}\}_{i=1}^{n}\) of \(H\). Assume that \(B=\{\gamma_{j}\}_{j=1}^{n}\) is a \(K\)-basis of \(H\)-eigenvectors, so that for each \(1\leq i,j\leq n\) there is a unique \(\lambda_{ij}\in K\) such that \(w_{i}\cdot\gamma_{j}=\lambda_{ij}\gamma_{j}\). 
The matrix \(\Lambda_{W}=(\lambda_{ji})_{i,j=1}^{n}\) is called the **matrix of \(H\)-eigenvalues** with respect to \(W\)._ It is easily checked that the matrix of eigenvalues \(\Lambda_{W}\) is obtained from removing the zero rows of the matrix \(M(H,L)\) introduced in Section 2.4, where we fix the \(K\)-basis \(W\) at \(H\) and the basis of \(H\)-eigenvectors \(B\) at \(L\). Hence, we can describe completely the effect of the change of basis of \(H\) on the matrix of \(H\)-eigenvalues as follows: if \(W^{\prime}=\{w_{i}^{\prime}\}_{i=1}^{n}\) is another \(K\)-basis of \(H\), then \(\Lambda_{W^{\prime}}=\Lambda_{W}P_{W}^{W^{\prime}}\), where \(P_{W}^{W^{\prime}}\) is the matrix whose columns are the coordinates of the elements of \(W^{\prime}\) with respect to \(W\). Due to the simplicity of the action, following the idea of the method in Section 2.4, we can prove Theorem 1.2. Proof.: _(of Theorem 1.2)_ Let \(W=\{w_{i}\}_{i=1}^{n}\) be any \(K\)-basis of \(H\), and write \(w_{i}\cdot\gamma_{j}=\lambda_{ij}\gamma_{j}\), \(\lambda_{ij}\in K\), for every \(1\leq i,j\leq n\), so that \(\Lambda_{W}=\Lambda=(\lambda_{ij})_{i,j=1}^{n}\). 1. Let us call \(V=\{v_{i}\}_{i=1}^{n}\). Clearly, since \(W\) is a \(K\)-basis of \(H\), so is \(V\). Let \(P_{W}^{V}\) be the change basis matrix whose columns are the coordinates of elements of \(W\) with respect to \(V\). By definition of the elements \(v_{i}\) we have that \(P_{W}^{V}=\Omega\), so \(\Lambda_{V}=\Lambda\Omega=\operatorname{Id}_{n}\). This means that \(v_{i}\cdot\gamma_{j}=\delta_{ij}\gamma_{j}\) for every \(1\leq i,j\leq n\). Since \(B\) is an integral basis for \(L/K\), \(v_{i}\in\mathfrak{A}_{H}\) for every \(1\leq i\leq n\). Let us check that the elements \(v_{i}\) form a \(K\)-basis of \(L\). For a given \(h\in H\), write \(h=\sum_{i=1}^{n}h_{i}v_{i}\), \(h_{i}\in K\). Then \(h\cdot\gamma_{j}=\sum_{i=1}^{n}h_{i}\delta_{ij}\gamma_{j}=h_{j}\gamma_{j}\). Now, we have that \[h\in\mathfrak{A}_{H}\Longleftrightarrow h\cdot x\in\mathcal{O}_{L}\text{ for all }x\in\mathcal{O}_{L},\] \[\Longleftrightarrow h\cdot\gamma_{j}\in\mathcal{O}_{L}\text{ for all }1\leq j\leq n,\] \[\Longleftrightarrow h_{j}\gamma_{j}\in\mathcal{O}_{L}\text{ for all }1\leq j\leq n,\] \[\Longleftrightarrow h_{j}\in\mathcal{O}_{L}\cap K=\mathcal{O}_{K}\text{ for all }1\leq j\leq n.\] Hence \(\{v_{i}\}_{i=1}^{n}\) is an \(\mathcal{O}_{K}\)-basis of \(\mathfrak{A}_{H}\). Now, we check that these are pairwise orthogonal idempotents (and since \(V=\{v_{i}\}_{i=1}^{n}\) is a basis we will then obtain that it is a primitive system of idempotents). Let \(1\leq i,j\leq n\). Then, for every \(1\leq k\leq n\), \[(v_{i}v_{j})\cdot\gamma_{k}=v_{i}\cdot(v_{j}\cdot\gamma_{k})=v_{i}\cdot(\delta _{jk}\gamma_{k})=\delta_{jk}v_{i}\cdot\gamma_{k}=\delta_{ik}\delta_{jk}\gamma_{ k}.\] If \(i=j\), this says that \(v_{i}^{2}\cdot\gamma_{k}=\delta_{ik}\gamma_{k}=v_{i}\cdot\gamma_{k}\) for every \(k\), and since \(\rho_{H}\) is injective (because \(L/K\) is \(H\)-Galois), \(v_{i}^{2}=v_{i}\). Otherwise, if \(i\neq j\), \((v_{i}v_{j})\cdot\gamma_{k}=0\) for every \(k\), so again the injectivity of \(\rho_{H}\) gives that \(v_{i}v_{j}=0\). 2. Let \(\beta=\sum_{j=1}^{n}\beta_{j}\gamma_{j}\in\mathcal{O}_{L}\). Since the elements \(v_{i}\) form an \(\mathcal{O}_{K}\)-basis of \(\mathfrak{A}_{H}\), \(\beta\) is an \(\mathfrak{A}_{H}\)-free generator of \(\mathcal{O}_{L}\) if and only if \(\{v_{i}\cdot\beta\}_{i=1}^{n}\) is an \(\mathcal{O}_{K}\)-basis of \(\mathcal{O}_{L}\). 
Now, we have that \[v_{i}\cdot\beta=\sum_{j=1}^{n}\beta_{j}\delta_{ij}\gamma_{j}=\beta_{i}\gamma_ {i}\] for every \(1\leq i\leq n\). Since the elements \(\gamma_{i}\) form an \(\mathcal{O}_{K}\)-basis of \(\mathcal{O}_{L}\), this will happen if and only if \(\beta_{i}\in\mathcal{O}_{K}^{*}\) for every \(1\leq i\leq n\). Hence any \(\mathfrak{A}_{H}\)-free generator of \(\mathcal{O}_{L}\) is an element \(\beta=\sum_{j=1}^{n}\beta_{j}\gamma_{j}\) such that \(\beta_{j}\in\mathcal{O}_{K}^{*}\) for every \(1\leq j\leq n\). In particular, \(\mathcal{O}_{L}\) is \(\mathfrak{A}_{H}\)-free. **Remark 7.2**.: If \(L/K\) is \(H\)-Kummer but does not admit any integral basis of \(H\)-eigenvectors, \(\mathcal{O}_{L}\) is not necessarily \(\mathfrak{A}_{H}\)-free. For instance, if \(L/K\) is a cyclic degree \(p\) extension of \(p\)-adic fields and \(\mathfrak{A}_{L/K}\) is the associated order in its classical Galois structure, it is known that \(\mathcal{O}_{L}\) is not in general \(\mathfrak{A}_{L/K}\)-free (in fact, complete criteria for the \(\mathfrak{A}_{L/K}\)-freeness is known, see [5, 4]), while \(L/K\) is Kummer. As a first application of Theorem 1.2, we can find a sufficient condition for the freeness of the ring of integers in almost cyclic extensions of \(K\) as in Theorem 6.2. **Proposition 7.3**.: _Let \(K\) be the fraction field of a Dedekind domain \(\mathcal{O}_{K}\). Let \(L=K(\alpha)\) with \(\alpha\in L\) such that \(\alpha^{n}\in K\) and no smaller power of \(\alpha\) belongs to \(K\). Assume that \(L\cap K(\zeta_{n})=K\) and that \(\mathcal{O}_{L}=\mathcal{O}_{K}[\alpha]\) (in particular, \(\mathcal{O}_{L}\) is \(\mathcal{O}_{K}\)-free). Then \(\mathcal{O}_{L}\) is \(\mathfrak{A}_{H}\)-free, where \(H\) is the almost classical Galois structure corresponding to \(K(\zeta_{n})\)._ Proof.: Since \(L\cap K(\zeta_{n})=K\), from Theorem 6.2 we obtain that \(L/K\) is almost cyclic with Galois complement \(K(\zeta_{n})\) and that \(\alpha\) is an \(H\)-eigenvector, where \(H\) is the almost classically Galois structure corresponding to \(K(\zeta_{n})\). Now, from Corollary 5.5 the powers of \(\alpha\) are also \(H\)-eigenvectors; in particular \(\{1,\alpha,\ldots,\alpha^{n-1}\}\) is a \(K\)-basis of \(H\)-eigenvectors of \(L\). Now, the condition that \(\mathcal{O}_{L}=\mathcal{O}_{K}[\alpha]\) ensures that this is also an \(\mathcal{O}_{K}\)-basis of \(\mathcal{O}_{L}\). Hence, the statement follows by applying Theorem 1.2. The condition that \(L\cap K(\zeta_{n})=K\) is in particular satisfied when \(\zeta_{n}\in K\), which corresponds to classical Kummer extensions. In that case, Proposition 7.3 becomes: **Corollary 7.4**.: _Let \(K\) be the fraction field of a Dedekind domain \(\mathcal{O}_{K}\). Let \(L=K(\alpha)\) with \(\alpha\in L\) such that \(\alpha^{n}\in K\) and no smaller power of \(\alpha\) belongs to \(K\). Assume that \(\zeta_{n}\in K\) and that \(\mathcal{O}_{L}=\mathcal{O}_{K}[\alpha]\). Then \(\mathcal{O}_{L}\) is \(\mathfrak{A}_{L/K}\)-free._ We know that \([K(\zeta_{n}):K]\) is a divisor of \(\varphi(n)\), where \(\varphi\) is the Euler totient function. Hence, when \(n\) is a Burnside number, by definition \(n\) and \(\varphi(n)\) are coprime, and \(L\cap K(\zeta_{n})=K\). Another sufficient condition implying this one is that \(K\) is a totally real number field (to see this we can follow the idea in the proof of Proposition 6.11). These considerations lead to the following. **Corollary 7.5**.: _Let \(K\) be the fraction field of a Dedekind domain \(\mathcal{O}_{K}\). 
Let \(L=K(\alpha)\) with \(\alpha\in L\) such that \(\alpha^{n}\in K\) and no smaller power of \(\alpha\) belongs to \(K\). Assume that either \(n\) is a Burnside number or that \(K\) is a totally real number field, and that \(\mathcal{O}_{L}=\mathcal{O}_{K}[\alpha]\). Then \(\mathcal{O}_{L}\) is \(\mathfrak{A}_{H}\)-free, where \(H\) is the almost classical Galois structure corresponding to \(K(\zeta_{n})\)._ In the case that \(n\) is Burnside, by Byott's uniqueness theorem [7, Theorem 2], the almost classically Galois structure \(H\) on \(L/K\) is its unique Hopf-Galois structure. Next, we would like to obtain an analogue of Proposition 7.3 for radical extensions, which by definition are a compositum of simple radical extensions. In addition to the restrictions in Theorem 1.1, we will assume that the simple radical extensions involved are arithmetically disjoint, so that the freeness property lifts to the almost classically Galois structure on the radical extension. This means that if \(L_{1},\ldots,L_{k}\) are simple radical extensions of \(K\) such that \(\mathcal{O}_{L_{i}}\) is \(\mathfrak{A}_{H_{i}}\)-free for \(i\in\{1,\ldots,k\}\), then \(\mathcal{O}_{L_{i}\ldots L_{k}}\) is \(\mathfrak{A}_{H}\)-free, where \(H\) is the product Hopf-Galois structure on the compositum from \(H_{1},\ldots,H_{k}\) as defined in Definition 3.10. **Proposition 7.6**.: _Let \(K\) be the fraction field of a PID \(\mathcal{O}_{K}\) such that \(\operatorname{char}(K)\) is coprime to \(n\) and let \(L=K(\sqrt[n]{a_{1}},\ldots,\sqrt[n]{a_{k}})\), where \(a_{1},\ldots,a_{k}\in K^{*}/(K^{*})^{n}\) are algebraic integers. Call \(L_{i}=K(\sqrt[n]{a_{i}})\) for every \(1\leq i\leq k\) and assume that:_ 1. \(L_{i}\cap K(\zeta_{n_{i}})=K\) _for every_ \(1\leq i\leq k\) _(so that_ \(L_{i}/K\) _is almost classically Galois)._ 2. _The extensions_ \(L_{i}/K\) _are pairwise strongly disjoint as almost classically Galois extensions._ 3. \(\gcd(a_{1}n_{1},\ldots,a_{k}n_{k})=1\)_._ 4. \(\mathcal{O}_{L_{i}}=\mathcal{O}_{K}[\sqrt[n]{a_{i}}]\) _for every_ \(1\leq i\leq k\)_._ _Let \(H_{i}\) be the almost classically Galois structure on \(L_{i}/K\) corresponding to \(K(\zeta_{n_{i}})\). We know from Theorem 1.1 that \(L/K\) is almost Kummer with complement \(K(\zeta_{n})\) where \(n=\operatorname{lcm}(n_{1},\ldots,n_{k})\); let \(H\) be the almost classically Galois structure on \(L/K\) corresponding to \(K(\zeta_{n})\). Then \(\mathfrak{A}_{H}=\bigotimes_{i=1}^{k}\mathfrak{A}_{H_{i}}\) and \(\mathcal{O}_{L}\) is \(\mathfrak{A}_{H}\)-free._ Proof.: First, note that there is no loss of generality in assuming that \(a_{1},\ldots,a_{k}\) are algebraic integers, as otherwise we can multiply by the least common multiple of their denominators. Now, we note that the discriminant of a polynomial \(x^{n}-a\) with \(a\in K\) is \((-1)^{\binom{n}{2}}n^{n}(-a)^{n-1}\) (this is a particular case of [24, Theorem 4]), and hence the condition (3) is equivalent to the extensions \(L_{i}/K\) being pairwise arithmetically disjoint. We prove the result by a finite induction on \(1\leq i<k\). For \(i=1\), (2) is automatically satisfied and the statement is just Proposition 7.3. Now, assume that the statement holds for \(L_{1},\ldots,L_{i}\), with \(1\leq i<k\), and call \(\mathcal{L}_{i}=L_{1}\ldots L_{i}\). 
The condition (2) ensures that \(\mathcal{L}_{i}/K\) is strongly decomposable, so it is almost classically Galois with complement \(\mathcal{M}_{i}=K(\zeta_{n_{1}})\ldots K(\zeta_{n_{i}})=K(\zeta_{N_{i}})\), where \(N_{i}=\operatorname{lcm}(n_{1},\ldots,n_{i})\). Let \(\mathcal{H}_{i}\) be the almost classically Galois structure on \(\mathcal{L}_{i}/K\) corresponding to \(\mathcal{M}_{i}\). From the induction, we have that: * \(\mathfrak{A}_{\mathcal{H}_{i}}=\bigotimes_{l=1}^{i}\mathfrak{A}_{H_{l}}\). * \(\mathcal{O}_{\mathcal{L}_{i}}\) is \(\mathfrak{A}_{\mathcal{H}_{i}}\)-free. By Proposition 3.11, \(\mathcal{H}_{i}\) is the product Hopf-Galois structure of \(H_{1},\ldots,H_{i}\) on \(\mathcal{L}_{i}/K\). Moreover, we know that \(\mathcal{L}_{i}\cap K(\zeta_{N_{i}})=K\) with \(\alpha_{l}^{N_{i}}\in K\) for every \(1\leq l\leq i\), and \(N_{i}\) is minimal for that property. From Theorem 1.1, we obtain that \(\mathcal{L}_{i}/K\) is almost Kummer of exponent \(N_{i}\) with complement \(K(\zeta_{\operatorname{lcm}(n_{1},\ldots,n_{i})})\), and that elements \(\alpha_{l}\in L_{l}\), \(1\leq l\leq i\), with \(\alpha_{l}^{n_{l}}=a_{l}\) are \(\mathcal{H}_{i}\)-eigenvectors. On the other hand, since \(L_{i+1}\cap K(\zeta_{n_{i+1}})=K\), Theorem 6.2 gives that \(L_{i+1}/K\) is almost cyclic with complement \(K(\zeta_{n_{i+1}})\), and that an element \(\alpha_{i+1}\in L_{i+1}\) such that \(n_{i+1}\) is the minimal integer with \(\alpha_{i+1}^{n_{i+1}}=a_{i+1}\) is an \(H_{i+1}\)-eigenvector. Using the hypothesis that \(\mathcal{O}_{L_{i+1}}=\mathcal{O}_{K}[\sqrt[n_{i+1}]{a_{i+1}}]\), Proposition 7.3 gives that \(\mathcal{O}_{L_{i+1}}\) is \(\mathfrak{A}_{H_{i+1}}\)-free. Let \(\mathcal{L}_{i+1}=\mathcal{L}_{i}L_{i+1}=L_{1}\ldots L_{i+1}\). It follows from (2) that the extensions \(\mathcal{L}_{i}/K\) and \(L_{i+1}/K\) are strongly disjoint. By Proposition 3.3, \(\mathcal{L}_{i+1}/K\) is almost classically Galois with complement \(\mathcal{M}_{i+1}=K(\zeta_{\operatorname{lcm}(n_{1},\ldots,n_{i+1})})\). Hence, Proposition 3.11 gives that the almost classically Galois structure \(\mathcal{H}_{i+1}\) on \(\mathcal{L}_{i+1}/K\) corresponding to \(\mathcal{M}_{i+1}\) is just the product Hopf-Galois structure of \(\mathcal{H}_{i}\) and \(H_{i+1}\) on \(\mathcal{L}_{i+1}/K\). Then from Corollary 3.16 it follows that the statement is satisfied for \(L_{1},\ldots,L_{i+1}\). ### The case of number fields Let \(L/K\) be an \(n\)-radical extension of number fields such that \(L\cap K(\zeta_{n})=K\), so that \(L/K\) is almost Kummer. Let \(H\) be its almost classically Galois structure corresponding to \(K(\zeta_{n})\). Recall that the number fields \(K\) such that \(\mathcal{O}_{K}\) is a PID are just the fields with class number one. In this situation, Proposition 7.6 gives sufficient conditions for \(\mathcal{O}_{L}\) being \(\mathfrak{A}_{H}\)-free. Note that in all this discussion we do not need any assumption on the ramification of \(L/K\). In particular, this includes tamely ramified extensions of \(K\). However, the existing results on that case are characterizations of the freeness for subclasses of these extensions. Namely, Del Corso and Rossi [15, Theorem 11] characterized the existence of a normal integral basis (in other words, the freeness of the ring of integers over the associated order in the classical Galois structure) for tamely ramified Kummer extensions. 
Moreover, Truman [37, Theorem 5.5] characterized the freeness over the associated order in the unique Hopf-Galois structure for tamely ramified simple radical extensions of prime degree. However, Proposition 7.6 is the first result involving the freeness of the ring of integers for wildly ramified radical extensions of number fields. Let us derive some interesting particular cases of this result and of the other results in Section 7. For a simple radical extension \(L=K(\sqrt[n]{a})\), the condition that \(\mathcal{O}_{L}=\mathcal{O}_{K}[\,\sqrt[n]{a}]\) means that an element \(\alpha\in\mathcal{O}_{L}\) with \(\alpha^{n}=a\) generates a power integral basis of \(L/K\). When \(K=\mathbb{Q}\), Gassert [18] proved that this is equivalent to \(a^{p}\not\equiv a\,(\operatorname{mod}p^{2})\) for every prime divisor \(p\) of \(n\). Applying Corollary 7.5, we conclude: **Corollary 7.7**.: _Let \(L=\mathbb{Q}(\sqrt[n]{a})\), where \(a\in\mathbb{Q}\). Assume that \(a^{p}\not\equiv a\,(\operatorname{mod}p^{2})\) for every prime divisor \(p\) of \(n\). Then, \(\mathcal{O}_{L}\) is \(\mathfrak{A}_{H}\)-free, where \(H\) is the almost classically Galois structure corresponding to \(\mathbb{Q}(\zeta_{n})\)._ Likewise, by specializing Proposition 7.6 suitably we obtain: **Corollary 7.8**.: _Let \(L=\mathbb{Q}(\sqrt[n]{a_{1}},\dots,\sqrt[n]{a_{k}})\), with \(a_{1},\dots,a_{k}\in\mathbb{Z}\) such that \(a_{i}^{p_{i}}\not\equiv a_{i}\,(\operatorname{mod}p_{i}^{2})\) for every prime divisor \(p_{i}\) of \(n_{i}\) and \(\gcd(a_{1}n_{1},\dots,a_{k}n_{k})=1\). Call \(L_{i}=\mathbb{Q}(\,\sqrt[n]{a_{i}})\) for every \(1\leq i\leq k\) and assume that the extensions \(L_{i}/\mathbb{Q}\) are pairwise strongly disjoint as almost classically Galois extensions. Then, \(\mathcal{O}_{L}\) is \(\mathfrak{A}_{H}\)-free, where \(H\) is the almost classically Galois structure corresponding to \(\mathbb{Q}(\zeta_{n_{1}\ldots n_{k}})\)._ Next, consider a simple radical extension \(L=K(\sqrt[p]{a})\) with \(p\) prime. If \(K=\mathbb{Q}(\zeta_{p})\), Smith [34] proved that \(\mathcal{O}_{L}=\mathcal{O}_{K}[\,\sqrt[p]{a}]\) if and only if the ideal generated by \(a\) in \(\mathbb{Z}[\zeta_{p}]\) is square-free and \(a^{p}\not\equiv a\,(\operatorname{mod}(1-\zeta_{p})^{2})\). From Corollary 7.4, we deduce: **Corollary 7.9**.: _Let \(K=\mathbb{Q}(\zeta_{p})\) and let \(L=K(\sqrt[p]{a})\), where \(a\in K\). Assume that the ideal \(\langle a\rangle\) of \(\mathbb{Z}[\zeta_{p}]\) is square-free and that \(a^{p}\not\equiv a\,(\operatorname{mod}(1-\zeta_{p})^{2})\). Then \(\mathcal{O}_{L}\) is \(\mathfrak{A}_{L/K}\)-free._ ### The case of \(p\)-adic fields In this part we consider simple radical degree \(n\) extensions \(L/K\) of \(p\)-adic fields that are linearly disjoint from \(K(\zeta_{n})\). As in the case of number fields, we know sufficient conditions for the freeness of \(\mathcal{O}_{L}\) as \(\mathfrak{A}_{H}\)-module, where \(H\) is the almost classically Galois structure on \(L/K\) corresponding to \(K(\zeta_{n})\). For simplicity, we will assume that \(L/K\) is totally ramified. First, assume that \(p\nmid n\), in which case \(L/K\) is tamely ramified. By definition, \(H=\widetilde{L}[J]^{G}\), where \(J\) is the Galois group of the cyclic extension \(\widetilde{L}/M\), which is in particular abelian. Hence \(H\) is commutative, and applying [36, Theorem 5.3] gives that \(\mathcal{O}_{L}\) is \(\mathfrak{A}_{H}\)-free. 
Hence, from now on we will mainly be interested in wildly ramified extensions, for which \(p\mid n\). Recall that totally ramified extensions of \(p\)-adic fields are those for which there is a uniformizer which is a root of some \(p\)-Eisenstein polynomial. If we assume that such a polynomial is in addition a radical one, we obtain the following: **Proposition 7.10**.: _Let \(K\) be a \(p\)-adic field and let \(L=K(\sqrt[n]{a})\) of degree \(n\) with \(a\in K\) and \(v_{K}(a)=1\). Suppose in addition that \(L\cap K(\zeta_{n})=K\). Then \(\mathcal{O}_{L}\) is \(\mathfrak{A}_{H}\)-free, where \(H\) is the almost classically Galois structure on \(L/K\) corresponding to \(K(\zeta_{n})\)._ Proof.: Let \(\alpha\in L\) with \(\alpha^{n}=a\). Since \(v_{K}(a)=1\), \(\alpha\) is a root of a \(p\)-Eisenstein polynomial and \(L/K\) is totally ramified. Then, we have that \(v_{L}(\alpha)=\frac{v_{L}(a)}{n}=1\) and hence \(\alpha\) is a uniformizer of \(L\). Thus, \(\mathcal{O}_{L}=\mathcal{O}_{K}[\sqrt[n]{a}]\), so we can apply Proposition 7.3 to obtain that \(\mathcal{O}_{L}\) is \(\mathfrak{A}_{H}\)-free. Again, if \(n\) is a Burnside number (in particular, if \(n\) is prime), the condition that \(L\cap K(\zeta_{n})=K\) is automatically satisfied. #### Maximally ramified extensions An interesting particular case of the previous result is when \(L/K\) has degree \(p\) and its normal closure is maximally ramified. This means that the ramification jump of the normal closure (where the only jump we take into account is from the cyclic group of order \(p\) to the trivial group, see for instance [3, §1.2]) is as high as possible. Our objective is to prove the following. **Proposition 7.11**.: _Let \(L/K\) be a degree \(p\) extension of \(p\)-adic fields whose normal closure is maximally ramified over \(K\). Then \(L/K\) is almost classically Galois with complement \(K(\zeta_{p})\) and \(\mathcal{O}_{L}\) is \(\mathfrak{A}_{H}\)-free, where \(H\) is the almost classically Galois structure on \(L/K\) corresponding to \(K(\zeta_{p})\)._ We will need some preparations. Let \(E/F\) be a cyclic degree \(p\) extension of \(p\)-adic fields. We assume that \(E/F\) is ramified, in which case it is totally ramified and it has a unique ramification jump \(t\). Let \(e=e(F/\mathbb{Q}_{p})\). Then, it is known that \(1\leq t\leq\frac{pe}{p-1}\) (see [33, Chapter IV, Proposition 2]). The condition that \(E/F\) is maximally ramified is that \(t=\frac{pe}{p-1}\). It is also known that if \(E/F\) is maximally ramified then \(F\) contains the \(p\)-th roots of unity and \(E=F(\sqrt[p]{\pi_{F}})\) (see for instance [14, Proposition 6.3]). Now, let \(L/K\) be a degree \(p\) extension of \(p\)-adic fields and let \(\widetilde{L}\) be its normal closure. Let \(G=\operatorname{Gal}(\widetilde{L}/K)\) and consider the chain of ramification groups \(\{G_{i}\}_{i=0}^{\infty}\) for \(\widetilde{L}/K\). Assume that \(\widetilde{L}/K\) is totally ramified. By [33, Chapter IV, Corollary 5], \(G=G_{0}\) is solvable, and hence by [9, (7.5)], \(L/K\) is Hopf-Galois. Since \(p\) is Burnside, Byott's uniqueness theorem [7, Theorem 2] gives that \(L/K\) admits a unique Hopf-Galois structure and it is almost classically Galois. On the other hand, [33, Chapter IV, Corollaries 3, 4] give that \(G_{1}\) is cyclic of order \(p\) and \(G=G_{1}\rtimes C\), where \(C\) is cyclic of order \(r\) coprime to \(p\). 
Let us establish a presentation \[G=\langle\sigma,\tau\,|\,\sigma^{p}=\tau^{r}=1,\,\tau\sigma\tau^{-1}=\sigma^{g}\rangle,\] where \(g\in\mathbb{Z}\) has order \(r\) modulo \(p\), such that \(G_{1}=\langle\sigma\rangle\) and \(C=\langle\tau\rangle\). We can assume without loss of generality that \(L=\widetilde{L}^{C}\). Moreover, note that there is a unique \(t\geq 1\) such that \(G_{t}\cong C_{p}\) and \(G_{t+1}=\{1\}\). In analogy with the cyclic case, \(t\) is also called the ramification jump of the extension \(\widetilde{L}/K\). From [33, Chapter IV, Proposition 2] we deduce that \(t\) is the unique ramification jump of the cyclic degree \(p\) extension \(\widetilde{L}/M\). Since \(M/K\) is totally ramified, the ramification index of \(M/\mathbb{Q}_{p}\) is \(re\), where \(e=e(K/\mathbb{Q}_{p})\). Then, we have that \[1\leq t\leq\frac{rpe}{p-1},\] and \(\widetilde{L}/K\) is maximally ramified if and only if \(t=\frac{rpe}{p-1}\). **Lemma 7.12**.: _Let \(L/K\) be a maximally ramified degree \(p\) extension of \(p\)-adic fields. Then \(M=K(\zeta_{p})\) and there is \(\alpha\in\mathcal{O}_{L}\) such that \(v_{L}(\alpha)=1\) and \(\alpha^{p}\in\mathcal{O}_{K}\)._ Proof.: Let \(\zeta_{p}\) be a primitive \(p\)-th root of unity. Since \(\widetilde{L}/M\) is maximally ramified, \(\zeta_{p}\in M\) and \(\widetilde{L}=M(\sqrt[p]{\pi_{M}})\), that is, there is \(\gamma\in\mathcal{O}_{\widetilde{L}}\) with \(v_{\widetilde{L}}(\gamma)=1\) and \(\gamma^{p}\in\mathcal{O}_{M}\) such that \(\widetilde{L}=M(\gamma)\). Let \(\alpha=N_{\widetilde{L}/L}(\gamma)\in L\). Then \(v_{\widetilde{L}}(\alpha)=rv_{L}(\alpha)\), and on the other hand, using that \(\gamma\) is a uniformizer of \(\widetilde{L}\), we have \(v_{\widetilde{L}}(\alpha)=r\), so \(v_{L}(\alpha)=1\). In particular, \(\alpha\) is a primitive element of \(L/K\). Moreover, \(\alpha^{p}=N_{\widetilde{L}/L}(\gamma^{p})\in L\cap M=K\), and it is clear that \(\alpha\) is an algebraic integer, so \(\alpha^{p}\in\mathcal{O}_{K}\). Finally, we prove that \(M=K(\zeta_{p})\). Since \(\alpha^{p}\in K\), the normal closure of \(L/K\) is \(\widetilde{L}=K(\alpha,\zeta_{p})=LK(\zeta_{p})\), and since we also have that \(\widetilde{L}=LM\) with \(L\) and \(M\) being \(K\)-linearly disjoint and \(K(\zeta_{p})\subseteq M\), we conclude that \(M=K(\zeta_{p})\). To sum up, a degree \(p\) extension of \(p\)-adic fields \(L/K\) with maximally ramified normal closure is almost classically Galois with complement \(K(\zeta_{p})\), and in particular \(L\cap K(\zeta_{p})=K\). Moreover, \(L=K(\,\sqrt[p]{a})\) with \(v_{L}(\,\sqrt[p]{a})=1\), that is, \(v_{K}(a)=1\). Hence we are under the hypotheses of Proposition 7.10, and Proposition 7.11 is established. ## Acknowledgements The author is grateful to Ilaria Del Corso, Paul Truman and Lorenzo Stefanello for their insightful comments. This work was supported by Czech Science Foundation, grant 21-00420M, and by Charles University Research Centre program UNCE/SCI/022.
2306.09006
Functional Dependencies with Predicates: What Makes the $g_3$-error Easy to Compute?
The notion of functional dependencies (FDs) can be used by data scientists and domain experts to confront background knowledge against data. To overcome the classical, too restrictive, satisfaction of FDs, it is possible to replace equality with more meaningful binary predicates, and use a coverage measure such as the $g_3$-error to estimate the degree to which a FD matches the data. It is known that the $g_3$-error can be computed in polynomial time if equality is used, but unfortunately, the problem becomes NP-complete when relying on more general predicates instead. However, there has been no analysis of which class of predicates or which properties alter the complexity of the problem, especially when going from equality to more general predicates. In this work, we provide such an analysis. We focus on the properties of commonly used predicates such as equality, similarity relations, and partial orders. These properties are: reflexivity, transitivity, symmetry, and antisymmetry. We show that symmetry and transitivity together are sufficient to guarantee that the $g_3$-error can be computed in polynomial time. However, dropping either of them makes the problem NP-complete.
Simon Vilmin, Pierre Faure--Giovagnoli, Jean-Marc Petit, Vasile-Marian Scuturici
2023-06-15T10:04:19Z
http://arxiv.org/abs/2306.09006v1
# Functional Dependencies with Predicates: ###### Abstract The notion of functional dependencies (FDs) can be used by data scientists and domain experts to confront background knowledge against data. To overcome the classical, too restrictive, satisfaction of FDs, it is possible to replace equality with more meaningful binary predicates, and use a coverage measure such as the \(g_{3}\)-error to estimate the degree to which a FD matches the data. It is known that the \(g_{3}\)-error can be computed in polynomial time if equality is used, but unfortunately, the problem becomes \(\NP\)-complete when relying on more general predicates instead. However, there has been no analysis of which class of predicates or which properties alter the complexity of the problem, especially when going from equality to more general predicates. In this work, we provide such an analysis. We focus on the properties of commonly used predicates such as equality, similarity relations, and partial orders. These properties are: reflexivity, transitivity, symmetry, and antisymmetry. We show that symmetry and transitivity together are sufficient to guarantee that the \(g_{3}\)-error can be computed in polynomial time. However, dropping either of them makes the problem \(\NP\)-complete. **Keywords:** functional dependencies, \(g_{3}\)-error, predicates ## 1 Introduction Functional dependencies (FDs) are database constraints initially devoted to database design [14]. Since then, they have been used for numerous tasks ranging from data cleaning [1] to data mining [13]. However, when dealing with real world data, FDs are also a simple yet powerful way to syntactically express background knowledge coming from domain experts [10]. More precisely, a FD \(X\to A\) between a set of attributes (or features) \(X\) and another attribute \(A\) depicts a _function_ of the form \(f(X)=A\). In this context, asserting the existence of a function which determines \(A\) from \(X\) in a dataset amounts to testing the validity of \(X\to A\) in a relation, _i.e._ to checking that _every pair_ of tuples that are _equal_ on \(X\) are also _equal_ on \(A\). Unfortunately, this semantics of satisfaction suffers from two major drawbacks which makes it inadequate to capture the complexity of real world data: (i) it must be checked on the whole dataset, and (ii) it uses equality. Drawback (i) does not take into account data quality issues such as outliers, mismeasurements or mistakes, which should not impact the relevance of a FD in the data. To tackle this problem, it is customary to estimate the partial validity of a given FD with a _coverage_ measure, rather than its total satisfaction. The most common of these measures is the \(g_{3}\)-error [1, 2, 3, 1], introduced by Kivinen and Mannila [15]. It is the minimum proportion of tuples to remove from a relation in order to satisfy a given FD. As shown for instance by Huhtala et al. [1], the \(g_{3}\)-error can be computed in polynomial time for a single (classical) FD. As for drawback (ii), equality does not always witness efficiently the closeness of two real-world values. It screens imprecisions and uncertainties that are inherent to every observation. In order to handle closeness (or difference) in a more appropriate way, numerous researches have replaced equality by _binary predicates_, as witnessed by recent surveys on relaxed FDs [1, 2]. However, if predicates extend FDs in a powerful and meaningful way with respect to real-world applications, they also make computations harder. 
In fact, contrary to strict equality, computing the \(g_{3}\)-error with binary predicates becomes \(\NP\)-complete [11, 2]. In particular, it has been proven for differential [14], matching [13], metric [10], neighborhood [15], and comparable dependencies [2]. Still, there is no detailed analysis of what makes the \(g_{3}\)-error hard to compute when dropping equality for more flexible predicates. As a consequence, domain experts are left without any insights on which predicates they can use in order to estimate the validity of their background knowledge in their data quickly and efficiently. This last problem constitutes the motivation for our contribution. In this work, we study the following question: _which properties of predicates make the \(g_{3}\)-error easy to compute?_ To do so, we introduce binary predicates on each attribute of a relation scheme. Binary predicates take two values as input and return true or false depending on whether the values match a given comparison criteria. Predicates are a convenient framework to study the impact of common properties such as reflexivity, transitivity, symmetry, and antisymmetry (the properties of equality) on the hardness of computing the \(g_{3}\)-error. In this setting, we make the following contributions. First, we show that dropping reflexivity and antisymmetry does not make the \(g_{3}\)-error hard to compute. When removing transitivity, the problem becomes \(\NP\)-complete. This result is intuitive as transitivity plays a crucial role in the computation of the \(g_{3}\)-error for dependencies based on similarity/distance relations [1, 2]. Second, we focus on symmetry. Symmetry has attracted less attention, despite its importance in partial orders and order FDs [13, 14]. Even though symmetry seems to have less impact than transitivity in the computation of the \(g_{3}\)-error, we show that when it is removed the problem also becomes \(\NP\)-complete. This result holds in particular for ordered dependencies. **Paper Organization.** In Section 2, we recall some preliminary definitions. Section 3 is devoted to the usual \(g_{3}\)-error. In Section 4, we introduce predicates, along with definitions for the relaxed satisfaction of a functional dependency. Section 5 investigates the problem of computing the \(g_{3}\)-error when equality is replaced by predicates on each attribute. In Section 6 we relate our results with existing extensions of FDs. We conclude in Section 7 with some remarks and open questions for further research. ## 2 Preliminaries All the objects we consider are finite. We begin with some definitions on graphs [1] and ordered sets [12]. A _graph_\(G\) is a pair \((V,E)\) where \(V\) is a set of _vertices_ and \(E\) is a collection of pairs of vertices called _edges_. An edge of the form \((u,u)\) is called a _loop_. The graph \(G\) is _directed_ if edges are ordered pairs of elements. Unless otherwise stated, we consider _loopless undirected_ graphs. Let \(G=(V,E)\) be an undirected graph, and let \(V^{\prime}\subseteq V\) The graph \(G[V^{\prime}]=(V^{\prime},E^{\prime})\) with \(E^{\prime}=\{(u,v)\in E\mid\{u,v\}\subseteq V^{\prime}\}\) is the graph _induced_ by \(V^{\prime}\) with respect to \(G\). A _path_ in \(G\) is a sequence \(e_{1},\ldots,e_{m}\) of pairwise distinct edges such that \(e_{i}\) and \(e_{i+1}\) share a common vertex for each \(1\leq i<m\). The _length_ of a path is its number of edges. 
An _independent set_ of \(G\) is a subset \(I\) of \(V\) such that no two vertices in \(I\) are connected by an edge of \(G\). An independent set is _maximal_ if it is inclusion-wise maximal among all independent sets. It is _maximum_ if it is an independent set of maximal cardinality. Dually, a _clique_ of \(G\) is a subset \(K\) of \(V\) such that every pair of distinct vertices in \(K\) are connected by an edge of \(G\). A graph \(G\) is a _co-graph_ if it has no induced subgraph corresponding to a path of length \(3\) (called \(P_{4}\)). A _partially ordered set_ or _poset_ is a pair \(P=(V,\leq)\) where \(V\) is a set and \(\leq\) a reflexive, transitive, and antisymmetric binary relation. The relation \(\leq\) is called a _partial order_. If for every \(x,y\in V\), \(x\leq y\) or \(y\leq x\) holds, \(\leq\) is a _total order_. A poset \(P\) is associated to a directed graph \(G(P)=(V,E)\) where \((u_{i},u_{j})\in E\) exactly when \(u_{i}\neq u_{j}\) and \(u_{i}\leq u_{j}\). An undirected graph \(G=(V,E)\) is a _comparability graph_ if its edges can be directed so that the resulting directed graph corresponds to a poset. We move to terminology from database theory [11]. We use capital first letters of the alphabet (\(A\), \(B\), \(C\),...) to denote attributes and capital last letters (..., \(X\), \(Y\), \(Z\)) for attribute sets. Let \(U\) be a universe of attributes, and \(R\subseteq U\) a relation scheme. Each attribute \(A\) in \(R\) takes value in a domain \(\mathsf{dom}(A)\). The domain of \(R\) is \(\mathsf{dom}(R)=\bigcup_{A\in R}\mathsf{dom}(A)\). Sometimes, especially in examples, we write a set as a concatenation of its elements (e.g. \(AB\) corresponds to \(\{A,B\}\)). A _tuple_ over \(R\) is a mapping \(t\colon R\to\mathsf{dom}(R)\) such that \(t(A)\in\mathsf{dom}(A)\) for every \(A\in R\). The _projection_ of a tuple \(t\) on a subset \(X\) of \(R\) is the restriction of \(t\) to \(X\), written \(t[X]\). We write \(t[A]\) as a shortcut for \(t[\{A\}]\). A _relation_\(r\) over \(R\) is a finite set of tuples over \(R\). A _functional dependency_ (FD) over \(R\) is an expression \(X\to A\) where \(X\cup\{A\}\subseteq R\). Given a relation \(r\) over \(R\), we say that \(r\)_satisfies_\(X\to A\), denoted by \(r\models X\to A\), if for every pair of tuples \((t_{1},t_{2})\) of \(r\), \(t_{1}[X]=t_{2}[X]\) implies \(t_{1}[A]=t_{2}[A]\). In case when \(r\) does not satisfy \(X\to A\), we write \(r\not\models X\to A\). ## 3 The \(g_{3}\)-error This section introduces the \(g_{3}\)-error, along with its connection with independent sets in graphs through counterexamples and conflict-graphs [1]. Let \(r\) be a relation over \(R\) and \(X\to A\) a functional dependency. The \(g_{3}\)_-error_ quantifies the degree to which \(X\to A\) holds in \(r\). We write it as \(g_{3}(r,X\to A)\). It was introduced by Kivinen and Mannila [14], and it is frequently used to estimate the partial validity of a FD in a dataset [1, 15, 16, 17]. It is the minimum proportion of tuples to remove from \(r\) to satisfy \(X\to A\), or more formally: **Definition 1**.: _Let \(R\) be a relation scheme, \(r\) a relation over \(R\) and \(X\to A\) a functional dependency over \(R\). The \(g_{3}\)-error of \(X\to A\) with respect to \(r\), denoted by \(g_{3}(r,X\to A)\) is defined as:_ \[g_{3}(r,X\to A)=1-\frac{\mathsf{max}(\{|s|\mid s\subseteq r,s\models X\to A \})}{|r|}\] In particular, if \(r\models X\to A\), we have \(g_{3}(r,X\to A)=0\). 
We refer to the problem of computing \(g_{3}(r,X\to A)\) as the _error validation problem_[14, 15]. Its decision version reads as follows: Error Validation Problem (EVP) _Input:_ A relation \(r\) over a relation scheme \(R\), a FD \(X\to A\) over \(R\), \(k\in\mathbb{R}\). _Output:_ yes if \(g_{3}(r,X\to A)\leq k\), no otherwise. It is known [1, 12] that there is a strong relationship between this problem and the task of computing the size of a maximum independent set in a graph: Maximum Independent Set (MIS) _Input:_ A graph \(G=(V,E)\), \(k\in\mathbb{N}\). _Output:_ yes if \(G\) has a maximal independent set \(I\) such that \(|I|\geq k\), no otherwise. To see the relationship between EVP and MIS, we need the notions of _counterexample_ and _conflict-graph_[1, 12]. A _counterexample_ to \(X\to A\) in \(r\) is a pair of tuples \((t_{1},t_{2})\) such that \(t_{1}[X]=t_{2}[X]\) but \(t_{1}[A]\neq t_{2}[A]\). The _conflict-graph_ of \(X\to A\) with respect to \(r\) is the graph \(\mathsf{CG}(r,X\to A)=(r,E)\) where a (possibly ordered) pair of tuples \((t_{1},t_{2})\) in \(r\) belongs to \(E\) when it is a counterexample to \(X\to A\) in \(r\). An independent set of \(\mathsf{CG}(r,X\to A)\) is precisely a subrelation of \(r\) which satisfies \(X\to A\). Therefore, computing \(g_{3}(r,X\to A)\) reduces to finding the size of a maximum independent set in \(\mathsf{CG}(r,X\to A)\). More precisely, \(g_{3}(r,X\to A)=1-\frac{|I|}{|r|}\) where \(I\) is a maximum independent set of \(\mathsf{CG}(r,X\to A)\). _Example 1_.: Consider the relation scheme \(R=\{A,B,C,D\}\) with \(\mathsf{dom}(R)=\mathbb{N}\). Let \(r\) be the relation over \(R\) on the left of Figure 1. It satisfies \(BC\to A\) but not \(D\to A\). Indeed, \((t_{1},t_{3})\) is a counterexample to \(D\to A\). The conflict-graph \(\mathsf{CG}(r,D\to A)\) is given on the right of Figure 1. For example, \(\{t_{1},t_{2},t_{6}\}\) is a maximum independent set of \(\mathsf{CG}(r,D\to A)\). We obtain: \[g_{3}(r,D\to A)=1-\frac{|\{t_{1},t_{2},t_{6}\}|}{|r|}=0.5\] In other words, we must remove half of the tuples of \(r\) in order to satisfy \(D\to A\). Figure 1: The relation \(r\) and the conflict-graph \(\mathsf{CG}(r,D\to A)\) of Example 1. However, MIS is an **NP**-complete problem [1] while computing \(g_{3}(r,X\to A)\) takes polynomial time in the size of \(r\) and \(X\to A\)[13]. This difference is due to the properties of equality, namely reflexivity, transitivity, symmetry and antisymmetry. They make \(\mathsf{CG}(r,X\to A)\) a disjoint union of complete \(k\)-partite graphs, and hence a co-graph [12]. In this class of graphs, solving MIS is polynomial [1]. This observation suggests studying in greater detail the impact of such properties on the structure of conflict-graphs. First, we need to introduce predicates to relax equality, and to define a more general version of the error validation problem accordingly. ## 4 Predicates to relax equality In this section, in line with previous research on extensions of functional dependencies [1, 1], we equip each attribute of a relation scheme with a binary predicate. We define the new \(g_{3}\)-error and the corresponding error validation problem. Let \(R\) be a relation scheme. For each \(A\in R\), let \(\phi_{A}\colon\mathsf{dom}(A)\times\mathsf{dom}(A)\to\{\mathtt{true},\mathtt{false}\}\) be a predicate. For instance, the predicate \(\phi_{A}\) can be equality, a distance, or a similarity relation. We assume that predicates are black-box oracles that can be computed in polynomial time in the size of their input. Let \(\Phi\) be a set of predicates, one for each attribute in \(R\). 
The pair \((R,\Phi)\) is a _relation scheme with predicates_. In a relation scheme with predicates, relations and FDs are unchanged. However, the way a relation satisfies (or not) a FD can easily be adapted to \(\Phi\). **Definition 2** (Satisfaction with predicates).: _Let \((R,\Phi)\) be a relation scheme with predicates, \(r\) a relation and \(X\to A\) a functional dependency both over \((R,\Phi)\). The relation \(r\) satisfies \(X\to A\) with respect to \(\Phi\), denoted by \(r\models_{\Phi}X\to A\), if for every pair of tuples \((t_{1},t_{2})\) of \(r\), the following formula holds:_ \[\left(\bigwedge_{B\in X}\phi_{B}(t_{1}[B],t_{2}[B])\right)\implies\phi_{A}(t _{1}[A],t_{2}[A])\] An new version of the \(g_{3}\)-error adapted to \(\Phi\) is presented in the following definition. **Definition 3**.: _Let \((R,\Phi)\) be a relation scheme with predicates, \(r\) be a relation over \((R,\Phi)\) and \(X\to A\) a functional dependency over \((R,\Phi)\). The \(g_{3}\)-error with predicates of \(X\to A\) with respect to \(r\), denoted by \(g_{3}^{\Phi}(r,X\to A)\) is defined as:_ \[g_{3}^{\Phi}(r,X\to A)=1-\frac{\mathsf{max}(\{|s|\mid s\subseteq r,s\models_{ \Phi}X\to A\})}{|r|}\] From the definition of \(g_{3}^{\Phi}(r,X\to A)\), we derive the extension of the error validation problem from equality to predicates: Error Validation Problem with Predicates (EVPP) -- _Input:_ A relation \(r\) over a relation scheme with predicates \((R,\Phi)\), a FD \(X\to A\) over \((R,\Phi)\), \(k\in\mathbb{R}\). _Output:_ yes if \(g_{3}^{\Phi}(r,X\to A)\leq k\), no otherwise. Observe that according to the definition of satisfaction with predicates (Definition 2), counterexamples and conflict-graphs remain well-defined. However, for a given predicate \(\phi_{A}\), \(\phi_{A}(x,y)=\phi_{A}(y,x)\) needs not be true in general, meaning that we have to consider ordered pairs of tuples. That is, an ordered pair of tuples \((t_{1},t_{2})\) in \(r\) is a counterexample to \(X\to A\) if \(\bigwedge_{B\in X}\phi_{B}(t_{1}[B],t_{2}[B])=\mathtt{true}\) but \(\phi_{A}(t_{1}[A],t_{2}[A])\neq\mathtt{true}\). We call \(\mathsf{CG}_{\Phi}(r,X\to A)\) the conflict-graph of \(X\to A\) in \(r\). In general, \(\mathsf{CG}_{\Phi}(r,X\to A)\) is directed. It is undirected if the predicates of \(\Phi\) are symmetric (see Section 5). In particular, computing \(g_{3}^{\Phi}(r,X\to A)\) still amounts to finding the size of a maximum independent set in \(\mathsf{CG}_{\Phi}(r,X\to A)\). _Example 2_.: We use the relation of Figure 1. Let \(\Phi=\{\phi_{A},\phi_{B},\phi_{C},\phi_{D}\}\) be the collection of predicates defined as follows, for every \(x,y\in\mathbb{N}\): * \(\phi_{A}(x,y)=\phi_{B}(x,y)=\phi_{C}(x,y)=\mathtt{true}\) if and only if \(|x-y|\leq 1\). Thus, \(\phi_{A}\) is reflexive and symmetric but not transitive (see Section 5), * \(\phi_{D}\) is the equality. The pair \((R,\Phi)\) is a relation scheme with predicates. We have \(r\models_{\Phi}AB\to D\) but \(r\not\models_{\Phi}C\to A\). In Figure 2, we depict \(\mathsf{CG}_{\Phi}(r,C\to A)\). A maximum independent set of this graph is \(\{t_{1},t_{2},t_{3},t_{5}\}\). We deduce \[g_{3}^{\Phi}(r,C\to A)=1-\frac{|\{t_{1},t_{2},t_{3},t_{5}\}|}{|r|}=\frac{1}{3}\] Thus, there is also a strong relationship between EVPP and MIS, similar to the one between EVP and MIS. Nonetheless, unlike EVP, the problem EVPP is \(\mathsf{NP}\)-complete [1]. In the next section, we study this gap of complexity between EVP and EVPP via different properties of predicates. 
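To make Definitions 2 and 3 concrete, the following Python sketch builds the conflict-graph \(\mathsf{CG}_{\Phi}(r,X\to A)\) and computes \(g_{3}^{\Phi}(r,X\to A)\) by exhaustive search for a largest subrelation satisfying the FD. It is meant purely as an illustration: the exhaustive search is exponential in \(|r|\), in line with the \(\mathsf{NP}\)-completeness of EVPP, and the small relation and predicate definitions below are hypothetical examples in the spirit of Example 2, not the data of Figure 1.

```python
from itertools import combinations

def conflict_graph(r, X, A, phi):
    """Ordered pairs of tuple indices (i, j) that are counterexamples to X -> A
    with respect to the predicates phi (Definition 2)."""
    edges = set()
    for i, t1 in enumerate(r):
        for j, t2 in enumerate(r):
            if all(phi[B](t1[B], t2[B]) for B in X) and not phi[A](t1[A], t2[A]):
                edges.add((i, j))
    return edges

def g3_error_with_predicates(r, X, A, phi):
    """Exact g3-error of Definition 3, computed by brute force: search for a
    maximum independent set of the conflict-graph (exponential time)."""
    edges = conflict_graph(r, X, A, phi)
    n = len(r)
    best = 0
    for size in range(n, 0, -1):
        for subset in combinations(range(n), size):
            s = set(subset)
            # s is a subrelation satisfying X -> A iff it contains no counterexample.
            if not any(i in s and j in s for (i, j) in edges):
                best = size
                break
        if best:
            break
    return 1 - best / n if n else 0.0

# Hypothetical predicates: values match when they differ by at most 1 on A and C,
# and when they are equal on B and D.
close = lambda x, y: abs(x - y) <= 1
equal = lambda x, y: x == y
phi = {"A": close, "B": equal, "C": close, "D": equal}

# A small hypothetical relation (not the relation of Figure 1).
r = [
    {"A": 1, "B": 0, "C": 3, "D": 7},
    {"A": 2, "B": 0, "C": 3, "D": 7},
    {"A": 5, "B": 1, "C": 4, "D": 8},
]
print(g3_error_with_predicates(r, X=["C"], A="A", phi=phi))  # ~0.333: one tuple must be removed
```

Under plain equality, the same quantity can instead be obtained in polynomial time by grouping the tuples on their \(X\)-values and keeping, in each group, the most frequent \(A\)-value, which is essentially why computing \(g_{3}(r,X\to A)\) is tractable [13]; Section 5 determines which predicate properties preserve this tractability.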
## 5 Predicates properties in the \(g_{3}\)-error In this section, we study properties of binary predicates that are commonly used to replace equality. We show how each of them affects the error validation problem. First, we define the properties of interest in this paper. Let \((R,\Phi)\) be a relation scheme with predicates. Let \(A\in R\) and \(\phi_{A}\) be the corresponding predicate. We consider the following properties: 1. \(\phi_{A}(x,x)=\mathtt{true}\) for all \(x\in\mathsf{dom}(A)\) (reflexivity) 2. for all \(x,y,z\in\mathsf{dom}(A)\), \(\phi_{A}(x,y)=\phi_{A}(y,z)=\mathtt{true}\) implies \(\phi_{A}(x,z)=\mathtt{true}\) (transitivity) 3. for all \(x,y\in\mathsf{dom}(A)\), \(\phi_{A}(x,y)=\phi_{A}(y,x)\) (symmetry) 4. for all \(x,y\in\mathsf{dom}(A)\), \(\phi_{A}(x,y)=\phi_{A}(y,x)=\mathtt{true}\) implies \(x=y\) (antisymmetry). Note that symmetry and antisymmetry together imply transitivity as \(\phi_{A}(x,y)=\mathtt{true}\) entails \(x=y\). As a first step, we show that symmetry and transitivity are sufficient to make EVPP solvable in polynomial time. In fact, we prove that the resulting conflict-graph is a co-graph, as with equality. **Theorem 1**.: _The problem EVPP can be solved in polynomial time if the predicates used on each attribute are transitive (tra) and symmetric (sym)._ Proof.: Let \((R,\Phi)\) be a relation scheme with predicates. Let \(r\) be relation over \((R,\Phi)\) and \(X\to A\) be a functional dependency, also over \((R,\Phi)\). We assume that each predicate in \(\Phi\) is transitive and symmetric. We show how to compute the size of a maximum independent set of \(\mathsf{CG}_{\Phi}(r,X\to A)\) in polynomial time. As \(\phi_{A}\) is not necessarily reflexive, a tuple \(t\) in \(r\) can produce a counter-example \((t,t)\) to \(X\to A\). Indeed, it may happen that \(\phi_{B}(t[B],t[B])=\mathtt{true}\) for each \(B\in X\), but \(\phi_{A}(t[A],t[A])=\texttt{false}\). However, it follows that \(t\) never belongs to a subrelation \(s\) of \(r\) satisfying \(s\models_{\Phi}X\to A\). Thus, let \(r^{\prime}=r\setminus\{t\in r\mid\{t\}\not\models_{\Phi}X\to A\}\). Then, a subrelation of \(r\) satisfies \(X\to A\) if and only if it is an independent set of \(\mathsf{CG}_{\Phi}(r,X\to A)\) if and only if it is an independent set of \(\mathsf{CG}_{\Phi}(r^{\prime},X\to A)\). Consequently, computing \(g_{3}^{\Phi}(r,X\to A)\) is solving MIS in \(\mathsf{CG}_{\Phi}(r^{\prime},X\to A)\). We prove now that \(\mathsf{CG}_{\Phi}(r^{\prime},X\to A)\) is a co-graph. Assume for contradiction that \(\mathsf{CG}_{\Phi}(r^{\prime},X\to A)\) has an induced path \(P\) with \(4\) elements, say \(t_{1},t_{2},t_{3},t_{4}\) with edges \((t_{1},t_{2})\), \((t_{2},t_{3})\) and \((t_{3},t_{4})\). Remind that edges of \(\mathsf{CG}_{\Phi}(r^{\prime},X\to A)\) are counterexamples to \(X\to A\) in \(r^{\prime}\). Hence, by symmetry and transitivity of the predicates of \(\Phi\), we deduce that for each pair \((i,j)\) in \(\{1,2,3,4\}\), \(\bigwedge_{B\in X}\phi_{B}(t_{i}[B],t_{j}[B])=\texttt{true}\). Thus, we have \(\bigwedge_{B\in X}\phi_{B}(t_{3}[B],t_{1}[B])=\bigwedge_{B\in X}\phi_{B}(t_{1} [B],t_{4}[B])=\texttt{true}\). However, neither \((t_{1},t_{3})\) nor \((t_{1},t_{4})\) belong to \(\mathsf{CG}_{\Phi}(r^{\prime},X\to A)\) since \(P\) is an induced path by assumption. Thus, \(\phi_{A}(t_{3}[A],t_{1}[A])=\phi_{A}(t_{1}[A],t_{4}[A])=\texttt{true}\) must hold. 
Nonetheless, the transitivity of \(\phi_{A}\) implies \(\phi_{A}(t_{3}[A],t_{4}[A])=\texttt{true}\), a contradiction with \((t_{3},t_{4})\) being an edge of \(\mathsf{CG}_{\Phi}(r^{\prime},X\to A)\). We deduce that \(\mathsf{CG}_{\Phi}(r^{\prime},X\to A)\) cannot contain an induced \(P_{4}\), and that it is indeed a co-graph. As MIS can be solved in polynomial time for co-graphs [1], the theorem follows. One may encounter non-reflexive predicates when dealing with strict orders or with binary predicates derived from SQL equality. In the 3-valued logic of SQL, comparing the null value with itself evaluates to false rather than true. With this regard, it could be natural for domain experts to use a predicate which is transitive, symmetric and reflexive almost everywhere but on the null value. This would allow to deal with missing information without altering the data. The previous proof heavily makes use of transitivity, which has a strong impact on the edges belonging to the conflict-graph. Intuitively, conflict-graphs can become much more complex when transitivity is dropped. Indeed, we prove an intuitive case: when predicates are not required to be transitive, EVPP becomes intractable. **Theorem 2**.: _The problem EVPP is **NP**-complete even when the predicates used on each attribute are symmetric (sym) and reflexive (ref)._ The proof is given in Appendix A. It is a reduction from the problem (dual to MIS) of finding the size of a maximum clique in general graphs. It uses arguments similar to the proof of Song et al. [1] showing the **NP**-completeness of EVPP for comparable dependencies. We turn our attention to the case where symmetry is dropped from the predicates. In this context, conflict-graphs are directed. Indeed, an ordered pair of tuples \((t_{1},t_{2})\) may be a counterexample to a functional dependency, but not \((t_{2},t_{1})\). Yet, transitivity still contributes to constraining the structure of conflict-graphs, as suggested by the following example. _Example 3_.: We consider the relation of Example 1. We equip \(A,B,C,D\) with the following predicates: * \(\phi_{C}(x,y)=\texttt{true}\) if and only if \(x\leq y\) * \(\phi_{A}(x,y)\) is defined by \[\phi_{A}(x,y)=\begin{cases}\texttt{true}&\text{if $x=y$}\\ \texttt{true}&\text{if $x=1$ and $y\in\{2,4\}$}\\ \texttt{true}&\text{if $x=3$ and $y=4$}\\ \texttt{false}&\text{otherwise.}\end{cases}\] * \(\phi_{B}\) and \(\phi_{D}\) are the equality. Let \(\Phi=\{\phi_{A},\phi_{B},\phi_{C},\phi_{D}\}\). The conflict-graph \(\mathsf{CG}_{\Phi}(C\to A)\) is represented in Figure 3. Since \(\phi_{C}\) is transitive, we have \(\phi_{C}(t_{3}[C],t_{j}[C])=\mathtt{true}\) for each tuple \(t_{j}\) of \(r\). Moreover, \(\phi_{A}(t_{3}[A],t_{6}[A])=\mathtt{false}\) since \((t_{3},t_{6})\) is a counterexample to \(C\to A\). Therefore, the transitivity of \(\phi_{A}\) implies either \(\phi_{A}(t_{3}[A],t_{4}[A])=\mathtt{false}\) or \(\phi_{A}(t_{4}[A],t_{6}[A])=\mathtt{false}\). Hence, at least one of \((t_{3},t_{4})\) and \((t_{4},t_{6})\) must be a counterexample to \(C\to A\) too. In the example, this is \((t_{3},t_{4})\). Nevertheless, if transitivity constrains the complexity of the graph, dropping symmetry still allows new kinds of graph structures. Indeed, in the presence of symmetry, a conflict-graph cannot contain induced paths with more than 3 elements because of transitivity. However, such paths may exist when symmetry is removed. 
_Example 4_.: In the previous example, the tuples \(t_{2},t_{4},t_{5},t_{6}\) form an induced \(P_{4}\) of the underlying undirected graph of \(\mathsf{CG}_{\Phi}(r,C\to A)\), even though \(\phi_{A}\) and \(\phi_{C}\) enjoy transitivity. Therefore, we are left with the following intriguing question: can the loss of symmetry be used to break transitivity, and offer conflict-graphs a structure sufficiently complex to make EVPP intractable? The next theorem answers this question affirmatively. **Theorem 3**.: _The problem \(\textsc{EVPP}\) is **NP**-complete even when the predicates used on each attribute are transitive (tra), reflexive (ref), and antisymmetric (asym)._ The proof is given in Appendix B. It is a reduction from MIS in 2-subdivision graphs [14]. Theorem 1, Theorem 2 and Theorem 3 characterize the complexity of EVPP for each combination of predicates properties. In the next section, we discuss the granularity of these, and we use them as a framework to compare the complexity of EVPP for some known extensions of functional dependencies. ## 6 Discussions Replacing equality with various predicates to extend the semantics of classical functional dependencies is frequent [13, 15]. Our approach offers to compare these extensions on EVPP within a unifying framework based on the properties of the predicates they use. We can summarize our results with the hierarchy of classes of predicates given in Figure 4. Regarding the computation of the \(g_{3}\)-error, most existing works have focused on similarity/distance predicates. First, the \(g_{3}\)-error can be computed in polynomial time for classical functional dependencies [10]. Then, Song et al. [13] show that EVPP is **NP**-complete for a broad range of extensions of FDs which happen to be reflexive (ref) and symmetric (sym) predicates, which coincides with Theorem 2. However, they do not study predicate properties as we do in this paper. More precisely, they identify the hardness of EVPP for differential [12], matching [11], metric [14], neighborhood [15], and comparable dependencies [13]. For some of these dependencies, predicates may be defined over sets of attributes. Using one predicate per attribute and taking their conjunction is a particular case of predicate on attribute sets. Some extensions of FDs use partial orders as predicates. This is the case of ordered dependencies [11, 12], ordered FDs [13], and also of some sequential dependencies [10] and denial constraints [1] for instance. To our knowledge, the role of symmetry in EVPP has received little attention. For sequential dependencies [10], a measure different than the \(g_{3}\)-error have been used. The predicates of Theorem 3 are reflexive, transitive and antisymmetric. Hence they are partial orders. Consequently, the FDs in this context are _ordered functional dependencies_ as defined by Ng [13]. We obtain the following corollary: **Corollary 1**.: \(\mathrm{EVPP}\) _is \(\mathbf{NP}\)-complete for ordered functional dependencies._ Ordered functional dependencies are a restricted case of ordered dependencies [12], sequential dependencies [10], and denial constraints [1] (see [11]). The hardness of computing the \(g_{3}\)-error for these dependencies follows from Corollary 1. The hierarchy depicts quite accurately the current knowledge about EVPP and the delimitation between tractable and intractable cases. However, this analysis may require further refinements. 
Indeed, there may be particular types of FDs with predicates where EVPP is tractable in polynomial time, even though their predicates belong to a class for which the problem is \(\mathbf{NP}\)-complete. For instance, assume that each attribute \(A\) in \(R\) is equipped with a _total_ order \(\phi_{A}\). We show in Proposition 1 and Corollary 2 that in this case, EVPP can be solved in polynomial time, even though the predicates are reflexive, transitive and antisymmetric. Figure 4: Complexity of EVPP with respect to the properties of predicates. **Proposition 1**.: _Let \((R,\Phi)\) be a relation scheme with predicates. Then, EVPP can be solved in polynomial time for a given FD \(X\to A\) if \(\phi_{B}\) is transitive for each \(B\in X\) and \(\phi_{A}\) is a total order._ Proof.: Let \((R,\Phi)\) be a relation scheme with predicates and \(X\to A\) a functional dependency. Assume that \(\phi_{B}\) is transitive for each \(B\in X\) and that \(\phi_{A}\) is a total order. Let \(r\) be a relation over \((R,\Phi)\). Let \(G=(r,E)\) be the undirected graph underlying \(\mathsf{CG}_{\Phi}(r,X\to A)\), that is, \((t_{i},t_{j})\in E\) if and only if \((t_{i},t_{j})\) or \((t_{j},t_{i})\) is an edge of \(\mathsf{CG}_{\Phi}(r,X\to A)\). We show that \(G\) is a comparability graph. To do so, we associate the following predicate \(\leq\) to \(\mathsf{CG}_{\Phi}(r,X\to A)\): for each pair \(t_{i},t_{j}\) of tuples of \(r\), \(t_{i}\leq t_{i}\) and \(t_{i}\leq t_{j}\) if \((t_{i},t_{j})\) is a counterexample to \(X\to A\). We show that \(\leq\) is a partial order: * _reflexivity_. It follows by definition. * _antisymmetry_. We use contrapositive. Let \(t_{i},t_{j}\) be two distinct tuples of \(r\) and assume that \((t_{i},t_{j})\) belongs to \(\mathsf{CG}_{\Phi}(r,X\to A)\). We need to prove that \((t_{j},t_{i})\) does not belong to \(\mathsf{CG}_{\Phi}(r,X\to A)\), _i.e._ it is not a counterexample to \(X\to A\). First, \((t_{i},t_{j})\in\mathsf{CG}_{\Phi}(r,X\to A)\) implies that \(\phi_{A}(t_{i}[A],t_{j}[A])=\mathtt{false}\). Then, since \(\phi_{A}\) is a total order, \(\phi_{A}(t_{j}[A],t_{i}[A])=\mathtt{true}\). Consequently, \((t_{j},t_{i})\) cannot belong to \(\mathsf{CG}_{\Phi}(r,X\to A)\) and \(\leq\) is antisymmetric.
Since MIS can be solved in polynomial time for comparability graphs [1], the result follows. We can deduce the following corollary on total orders, that can be used for ordered dependencies. **Corollary 2**.: _Let \((R,\Phi)\) be a relation scheme with predicates. Then, \(\operatorname{EVPP}\) can be solved in polynomial time if each predicate in \(\Phi\) is a total order._ In particular, Golab et al. [1] proposed a polynomial-time algorithm for a variant of \(g_{3}\) applied to a restricted type of sequential dependencies using total orders on each attribute. ## 7 Conclusion and future work In this work, we have studied the complexity of computing the \(g_{3}\)-error when equality is replaced by more general predicates. We studied four common properties of binary predicates: reflexivity, symmetry, transitivity, and antisymmetry. We have shown that when symmetry and transitivity are taken together, the \(g_{3}\)-error can be computed in polynomial time. Transitivity strongly impacts the structure of the conflict-graph of the counterexamples to a functional dependency in a relation. Thus, it comes as no surprise that dropping transitivity makes the \(g_{3}\)-error hard to compute. More surprisingly, removing symmetry instead of transitivity leads to the same conclusion. This is because deleting symmetry makes the conflict-graph directed. In this case, the orientation of the edges weakens the impact of transitivity, thus allowing the conflict-graph to be complex enough to make the \(g_{3}\)-error computation problem intractable. We believe our approach sheds new light on the problem of computing the \(g_{3}\)-error, and that it is suitable for estimating the complexity of this problem when defining new types of FDs, by looking at the properties of predicates used to compare values. We highlight now some research directions for future works. In a recent paper [14], Livshits et al. study the problem of computing optimal repairs in a relation with respect to a set of functional dependencies. A repair is a collection of tuples which does not violate a prescribed set of FDs. It is optimal if it is of maximal size among all possible repairs. Henceforth, there is a strong connection between the problem of computing repairs and computing the \(g_{3}\)-error with respect to a collection of FDs. In their work, the authors give a dichotomy between tractable and intractable cases based on the structure of FDs. In particular, they use previous results from Gribkoff et al. [1] to show that the problem is already \(\NP\)-complete for 2 FDs in general. In the case where computing an optimal repair can be done in polynomial time, it would be interesting to use our approach and relax equality with predicates in order to study the tractability of computing the \(g_{3}\)-error on a collection of FDs with relaxed equality. From a practical point of view, the exact computation of the \(g_{3}\)-error is extremely expensive in large datasets. Recent works [11, 12] have proposed to use approximation algorithms to compute the \(g_{3}\)-error both for equality and predicates. It could be of interest to identify properties or classes of predicates where more efficient algorithms can be adopted. It is also possible to extend the existing algorithms calculating the classical \(g_{3}\)-error (see _e.g._[10]). They use the projection to identify equivalence classes among values of \(A\) and \(X\). 
However, when dropping transitivity (for instance with similarity predicates), separating the values of a relation into _"similar classes"_ requires devising a new projection operation, a seemingly tough but fascinating problem to investigate. Acknowledgment. We thank the reviewers for their valuable feedback. We also thank the Datavalor initiative of Insavalor (subsidiary of INSA Lyon) for funding part of this work.
2305.10007
$L^p$ cohomology and Hodge decomposition for ALE manifolds
We relate the dimensions of $L^p$ reduced cohomology spaces in degree k of an ALE manifold to the dimension of some spaces of decaying harmonic forms, depending both on p and on k. In this class of manifolds, this provides an extension to $p\neq 2$ of the well-known result of Hodge. In particular, we prove that for fixed $k\notin\left\{1,n-1\right\}$, the dimension of the $L^p$ reduced cohomology spaces in degree k is independent of $p\in (1,\infty)$, while for $k\in\{1,n-1\}$, the dimension jumps exactly once by $N-1$ (N being the number of ends) when $p$ varies in $(1,\infty)$. We also prove $L^p$ Hodge decompositions for k-forms on such manifolds, for the optimal values of k and p. When these are not available, we provide a substitute (a modified Hodge decomposition).
Baptiste Devyver, Klaus Kroencke
2023-05-17T07:31:38Z
http://arxiv.org/abs/2305.10007v1
# \(L^{p}\) cohomology and Hodge decomposition for ALE manifolds ###### Abstract. In this article, we prove that the dimension of \(H^{k}_{p}(M)\), the \(L^{p}\) reduced cohomology space in degree \(k\) on an ALE manifold \(M\) of dimension \(n\geq 3\), is, for any values \(p\in(1,+\infty)\) and \(k\in\{0,\cdots,n\}\), equal to the dimension of a space of decaying harmonic forms that depends on \(p\) and \(k\). In this class of manifolds, this provides an extension to \(p\neq 2\) of the well-known result of Hodge. In particular, we prove that for fixed \(k\notin\{1,n-1\}\), the dimension is independent of \(p\in(1,\infty)\), while for \(k\in\{1,n-1\}\), the dimension jumps exactly once by \(N-1\) (where \(N\) is the number of ends) as \(p\) varies in \((1,\infty)\). We also prove \(L^{p}\) Hodge decompositions for \(k\)-forms on such manifolds, for the optimal values of \(k\) and \(p\). ###### Contents * 1 Introduction * 2 The setting, and decay of harmonic forms * 3 Harmonic functions on ALE manifolds * 4 Hodge projectors * 5 \(L^{p}\) cohomology and Hodge decomposition * 6 Boundedness of Riesz transforms on forms * 7 \(L^{p}\) Hodge-Sobolev decompositions * A A uniqueness lemma for Hodge projectors * B Weighted Sobolev spaces and decay of harmonic forms ## 1. Introduction In this paper, we look at \(L^{p}\) reduced cohomology spaces and \(L^{p}\) Hodge decompositions in the class of asymptotically locally Euclidean manifolds (ALE manifolds, in short). In the case \(p=2\), there is a very rich literature about \(L^{2}\) cohomology and \(L^{2}\) Betti numbers for non-compact manifolds. Much less is known in the case \(p\neq 2\); see for instance the survey [33] for pointers to the literature, both in the case \(p=2\) and \(p\neq 2\). A general question is to relate the \(L^{p}\) Betti numbers with the topology of the manifold, and its geometry at infinity. For \(p=2\), an important fact for \(L^{2}\) cohomology is the Hodge theorem, which asserts that for a general complete manifold, the dimension of the \(k^{th}\) space of reduced \(L^{2}\) cohomology is equal to the dimension of the space \(\mathcal{H}_{k}(M)\) of \(L^{2}\) harmonic \(k\)-forms. Throughout the introduction, \(M\) denotes a connected, oriented ALE manifold of dimension \(n\geq 3\). **Remark 1.2**.: In fact, we will show that \(\mathcal{H}_{k}(M)=\ker_{1-n}(\Delta_{k})\) for every \(k\) and \(\ker_{1-n}(\Delta_{k})=\ker_{-n}(\Delta_{k})\) for \(k\notin\{1,n-1\}\). For \(k\in\{1,n-1\}\), we will in turn show that \(\ker_{-n}(\Delta_{k})\subset\ker_{1-n}(\Delta_{k})\) is a subspace of codimension \(N-1\). **Remark 1.3**.: According to G. Carron in [10], the reduced \(L^{2}\) cohomology spaces have a topological interpretation, for manifolds with ends that are quasi-isometric to flat ends; this class of manifolds includes in particular all ALE manifolds. Thus, as a corollary to Theorem 1.1, for any \(p\in(1,+\infty)\), one can relate the reduced \(L^{p}\) cohomology spaces of an ALE manifold with some topological invariants. In the special case of AE (asymptotically Euclidean) manifolds of dimension \(n\geq 3\), the topological interpretation of the reduced \(L^{2}\) cohomology spaces is as follows: these spaces can be identified with the relative cohomology spaces of a manifold with boundary, obtained from \(M\) by removing in each end the complement of a large Euclidean ball.
If \(M\) is AE, then the dimension of \(\mathcal{H}_{k}(M)\) is also equal to \[\begin{cases}\dim H^{k}(\bar{M}),&k\notin\{1,n-1\}\\ \dim H^{k}(\bar{M})+N-1,&k\in\{1,n-1\}\end{cases}\] where \(\bar{M}\) is a compact manifold without boundary, obtained by one-point compactifying each end of \(M\). Let us now compare briefly the result of Theorem 1.1 for \(p\neq 2\) with the results available in the literature. For \(p\neq 2\), the reduced \(L^{p}\) cohomology spaces of ALE manifolds have been studied by Gold'shtein, Kuzminov and Shvedov in [23]; more specifically, they show (see [23, Theorem 1]) that the long exact sequence in relative cohomology induces pieces of exact sequences in reduced \(L^{p}\) cohomology. They also compute the reduced \(L^{p}\) cohomology of cylinders (see [23, Theorem 2]), which can then be used to get some information about the spaces \(H^{k}_{p}(M)\) (see for instance the proof of [23, Theorem 5]). Let us explain what their approach yields in the particular case where the manifold \(M\) has Euclidean ends; fix a smooth, open relatively compact set \(U\) such that \(M\setminus U\) is the union of \(N\) cones \(E_{i}:=[a,\infty)\times S^{n-1}\), \(1\leq i\leq N\), each one being endowed with the cone metric \(dt^{2}+t^{2}g_{S^{n-1}}\). Note that \(\partial U\) consists of the disjoint union of \(N\) spheres \(S^{n-1}\). Let \(1\leq k\leq n-1\). According to [23, Theorem 1], the following two sequences are exact: \[H^{k-1}(U)\to\oplus_{i=1}^{N}H^{k}_{p}(E_{i},\partial E_{i})\to H^{k}_{p}(M)\to H^{k}(U), \tag{1.1}\] and \[H^{k}(U,\partial U)\to H^{k}_{p}(M)\to\oplus_{i=1}^{N}H^{k}_{p}(E_{i})\to H^{k+1}(U,\partial U) \tag{1.2}\] (all the \(L^{p}\) cohomology spaces here are reduced, and we recall that the relative cohomology spaces above are defined using similar definitions as the non-relative ones, but using forms which vanish at the boundary in a certain sense). We warn the reader that the exactness of the above two sequences stops on the right and on the left, and does not extend to a long exact sequence. According to [23, Theorem 2], one has \[p\geq\frac{n}{k}\Rightarrow H^{k}_{p}(E_{i},\partial E_{i})=\{0\} \tag{1.3}\] and \[p\leq\frac{n}{k}\Rightarrow H^{k}_{p}(E_{i})=\{0\}. \tag{1.4}\] Hence, \[p\geq\frac{n}{k}\Rightarrow\dim H^{k}_{p}(M)\leq\dim H^{k}(U), \tag{1.5}\] and \[p\leq\frac{n}{k}\Rightarrow\dim H^{k}_{p}(M)\leq\dim H^{k}(U,\partial U). \tag{1.6}\] Moreover, Carron's result in [10] implies that \[H^{k}_{2}(M)\simeq H^{k}(U,\partial U).\] The exact sequence in relative cohomology of the pair \((U,\partial U)\) and the well-known computation of the cohomology of spheres imply that \[\dim H^{k}(U,\partial U)=\begin{cases}\dim H^{k}(U),&k\neq 1,\\ \dim H^{k}(U)+(N-1),&k=1.\end{cases}\] Therefore, \[\dim H^{k}_{2}(M)=\dim H^{k}(U,\partial U)=\begin{cases}\dim H^{k}(U),&k\neq 1,\\ \dim H^{k}(U)+(N-1),&k=1.\end{cases} \tag{1.7}\] Assume first that \(k\neq 1\), then by (1.5), (1.6) and (1.7), one has \[\dim H^{k}_{p}(M)\leq\dim H^{k}_{2}(M). \tag{1.8}\] And for \(k=1\), one has \[\dim H^{1}_{p}(M)\leq\begin{cases}\dim H^{1}_{2}(M),&p<n,\\ \dim H^{1}_{2}(M)-(N-1),&p\geq n.\end{cases} \tag{1.9}\] Using duality (see Proposition 5.11), one can in fact obtain for \(k=n-1\) that \[\dim H^{n-1}_{p}(M)\leq\begin{cases}\dim H^{1}_{2}(M),&p>\frac{n}{n-1},\\ \dim H^{1}_{2}(M)-(N-1),&p\leq\frac{n}{n-1}.\end{cases} \tag{1.10}\] It does not seem possible to relate more precisely reduced \(L^{2}\) and \(L^{p}\) cohomology using directly the results in [23].
Our Theorem 1.1 therefore appears as a substantial improvement. Another main result of this paper concerns \(L^{p}\) Hodge decompositions on ALE manifolds. For this, we need two definitions: **Definition 1.4**.: For \(p\in(1,+\infty)\) and \(k\in\{0,\cdots,n\}\), we say that the \(L^{p}\)_Hodge decomposition_ holds for \(k\)-forms, provided \(\operatorname{im}_{L^{p}}(d_{k-1})+\operatorname{im}_{L^{p}}(d_{k+1}^{*})\) is closed, and \[L^{p}(\Lambda^{k}M)=\operatorname{im}_{L^{p}}(d_{k-1})\oplus\operatorname{im }_{L^{p}}(d_{k+1}^{*})\oplus\ker_{L^{p}}(\Delta_{k}).\] We also consider a "modified" \(L^{p}\) Hodge decomposition: **Definition 1.5**.: For \(p\in(1,+\infty)\) and \(k\in\{0,\cdots,n\}\), we say that a _modified \(L^{p}\) Hodge decomposition_ holds for \(k\)-forms, provided \(\operatorname{im}_{L^{p}}(d_{k-1})+\operatorname{im}_{L^{p}}(d_{k+1}^{*})\) is closed and \[L^{p}(\Lambda^{k}M)=\operatorname{im}_{L^{p}}(d_{k-1})\oplus\operatorname{im }_{L^{p}}(d_{k+1}^{*})\oplus\ker_{-n}(\Delta_{k}).\] The celebrated De Rham-Kodaira theorem asserts that \((\mathscr{H}_{p})\) holds for \(p=2\) on _any_ complete Riemannian manifold. However, for \(p\neq 2\), the question of the existence or non-existence of a \(L^{p}\) Hodge decomposition, or -what is closely related- of the \(L^{p}\) boundedness of the Hodge projectors, is notoriously already difficult in the case of forms of degree \(1\). In the case of the Euclidean space itself, the \(L^{p}\) Hodge decomposition \((\mathscr{H}_{p})\) has first been proved (even in a stronger form) by T. Iwaniec and G. Martin in [29]. It is well-known that the question of the \(L^{p}\) boundedness of the Hodge projectors is itself related to the \(L^{p}\) boundedness of Riesz transforms (on forms), that is of the operators \(d\Delta_{k}^{-1/2}\) and \(d^{*}\Delta_{k}^{-1/2}\) (see Section 6 for some details on this). This is in fact the approach of [29] to prove the \(L^{p}\) Hodge decomposition on \(\mathbb{R}^{n}\). The same approach has been used by X.D. Li in order to prove \((\mathscr{H}_{p})\) in the case the Bochner-Weitzenbock curvature tensors in degree \(k-1\), \(k\) and \(k+1\) are non-negative (see [36]), building on the celebrated work of Bakry about the Riesz transforms on Riemannian manifolds ([6]). X.D. Li also generalizes results due to Lohoue, and obtains stronger \(L^{p}\) Hodge decompositions under assumptions that are essentially equivalent to the positivity of the bottom of the spectrum of the Hodge Laplacian (see [37]), however these assumptions are not satisfied for ALE manifolds. Let us also warn the reader that the terminology used by X. D. Li (weak and strong \(L^{p}\) Hodge decomposition) has a different meaning than that used in the present paper. Concerning \(L^{p}\) cohomology in degree \(1\), more is known: according to P. Auscher and T. Coulhon in [1], on ALE manifolds, the \(L^{p}\) boundedness of the Hodge projector on exact forms is essentially equivalent to the boundedness of the scalar Riesz transform \(\nabla\Delta^{-1/2}\). The boundedness on \(L^{p}\) spaces of the latter has only relatively recently been elucidated (see [14]). For the boundedness on \(L^{p}\) of the Riesz transforms on forms, partial results on asymptotically conical manifolds can be found in [27]; even for ALE manifolds, these results are somehow incomplete (but they can be made complete by using results that we will prove in the present paper). 
They however rely on a difficult, precise blow-up analysis of the Schwartz kernel of the resolvent of the Hodge Laplacian. To summarize, even for manifolds with Euclidean ends, the only results known to the authors about the \(L^{p}\) Hodge decomposition problem use the boundedness of the Riesz transform either on functions, or on forms. In their 2003 paper [17, p.5], T. Coulhon and X.T. Duong write in this respect: "the problem of finding sufficient conditions [for the \(L^{p}\) boundedness of the Hodge projector on exact 1-forms] is not easier than the Riesz transform problem". However, in the present paper, we want to convey the idea that the former _can_ indeed be easier to prove, at least in some situations: indeed, we prove _directly_ the \(L^{p}\) boundedness or unboundedness of the Hodge projectors of any degree on ALE manifolds, _via_ the \(L^{p}\) Hodge decomposition, without using any result on Riesz transforms. As a consequence of our detailed analysis of the decay properties of harmonic forms on ALE manifolds, we are in fact also able to complete the results of [27] and to completely characterize on any ALE manifold the exponents \(p\in(1,+\infty)\) and \(k\in\{0,\cdots,n\}\) for which the Riesz transforms on \(k\)-forms are bounded on \(L^{p}\). See Corollary 6.3. However, we stress again that these results are not used to obtain the various \(L^{p}\) Hodge decompositions, and in fact, in the case where \(p\) and \(k\) are such that the Riesz transforms on \(k\)-forms are unbounded on \(L^{p}\), we still obtain a modified Hodge decomposition, a result that seems inaccessible using only results about the Riesz transforms and the Hodge projectors. Our method ultimately relies on Fredholm properties of the Hodge Laplacian in weighted Sobolev spaces, which are well-known on ALE manifolds. Thus, we prove the \(L^{p}\) Hodge decomposition on \(k\)-forms for the optimal values of \(p\) and \(k\) on ALE manifolds, and moreover we give a substitute (the modified \(L^{p}\) Hodge decomposition) for the values of \(p\) when this decomposition fails; our result is as follows: **Theorem 1.6**.: _Let \(M\) be a connected, oriented ALE manifold with dimension \(n\geq 3\), and let \(p\in(1,+\infty)\), \(k\in\{0,\cdots,n\}\). Then, the \(L^{p}\) Hodge decomposition \((\mathscr{H}_{p})\) holds if and only if one of the following holds:_ * _(a)_ \(k\notin\{1,n-1\}\)_;_ * _(b)_ \(k\in\{1,n-1\}\) _and_ \(p\in(\frac{n}{n-1},n)\)_;_ * _(c)_ \(k\in\{1,n-1\}\) _and_ \(M\) _has only one end._ _Moreover,_ * _(d) if_ \(k\in\{1,n-1\}\)_,_ \(M\) _has_ \(N\geq 2\) _ends and_ \(p\geq n\)_, then the modified_ \(L^{p}\) _Hodge decomposition_ \((\widehat{\mathscr{H}_{p}})\) _holds._ * _(e) if_ \(k\in\{1,n-1\}\)_,_ \(M\) _has_ \(N\geq 2\) _ends and_ \(p\leq\frac{n}{n-1}\)_, then the space_ \(\operatorname{im}_{L^{p}}(d_{k-1})\oplus\operatorname{im}_{L^{p}}(d_{k+1}^{*})\oplus\ker_{L^{p}}(\Delta_{k})\) _is closed and has codimension_ \(N-1\) _in_ \(L^{p}(\Lambda^{k}M)\)_._ **Remark 1.7**.: In case (e), a complement of dimension \(N-1\) can be found explicitly, see Proposition 5.16. This result has interesting consequences for the \(L^{p}\) boundedness of the Hodge projectors, and the Riesz transforms on forms (see Sections 4 and 6). In particular, it easily implies that on an ALE manifold of dimension \(n\geq 3\), the Riesz transform on functions \(\nabla\Delta^{-1/2}\) is bounded on \(L^{p}\) if and only if \(p\in(1,p^{*})\), where \(p^{*}=n\) if \(M\) has at least two ends, \(p^{*}=+\infty\) otherwise. See Corollary 6.1.
This provides arguably one of the shortest and most elementary proofs of this well-known fact (see [14], [26]). As the careful reader will have noticed, our results do not apply to ALE manifolds of dimension \(n=2\). We thus state as an interesting open problem: **Open problem:** extend Theorems 1.6 and 1.1 to ALE manifolds of dimension \(n=2\). The plan of this article is as follows: in Section 2, we set the stage and present some general results concerning the decay of harmonic forms on ALE manifolds. In Section 3, we concentrate more specifically on the case of 1-forms, which is special. In Section 4, we give some general results concerning \(L^{p}\) Hodge decompositions. In Section 5, we prove the main results of this paper. In Section 6, we give applications to Hodge projectors. The appendix contains mostly material related to the theory of weighted Sobolev spaces on ALE manifolds, which is not presented elsewhere in the manuscript; it also contains the proof of some of the key technical results of the paper (some of which appear for the first time in the literature). **Acknowledgements** The authors wish to thank G. Carron for having pointed out an important mistake in an early version of this article. B. Devyver was partly supported by the French ANR through the project RAGE ANR-18-CE40-0012, as well as in the framework of the "Investissements d'avenir" program (ANR-15-IDEX-02) and the LabEx PERSYVAL (ANR-11-LABX-0025-01). K. Kroncke was partly supported by the DFG through the projects 338891943 and 441564857 in the framework of the priority program "Geometry at Infinity" (SPP2026). ## 2. The setting, and decay of harmonic forms _Notation:_ if \(\alpha\in\mathbb{R}\) and \(f:\mathbb{R}^{n}\to\mathbb{R}\), we write \(f=\mathcal{O}_{\infty}(r^{\alpha})\) or \(f\in\mathcal{O}_{\infty}(r^{\alpha})\) to mean that for every \(k\in\mathbb{N}\), \[\partial^{k}f(x)=O(r(x)^{\alpha-k})\quad\text{as }r(x)\to\infty.\] Here, \(r(x)=d(0,x)\) is the usual radial coordinate. Throughout this paper, we consider \((M^{n},g)\) a connected, oriented ALE manifold of dimension \(n\geq 3\) and order \(\tau>0\). By definition, this means that there exists a compact set \(K\subset M\) and a finite number of ends \((E_{i})_{i=1,\cdots,N}\), finite subgroups \((\Gamma_{i})_{i=1,\cdots,N}\) of \(SO(n,\mathbb{R})\) acting freely on \(S^{n-1}\), and diffeomorphisms \(\phi_{i}:E_{i}\to\left(\mathbb{R}^{n}\setminus\overline{B(0,1)}\right)/\Gamma_{i}\), \(i=1,\cdots,N\), such that \(M\setminus K=\sqcup_{i=1}^{N}E_{i}\) and \((\phi_{i})^{*}g_{eucl}-g\in\mathcal{O}_{\infty}(r^{-\tau})\). Here, by abuse of notation, we denote by \(r\) a smooth, positive function on \(M\), which in each end \(E_{i}\) is equal to \(r_{i}\circ\phi_{i}\), where \(r_{i}\) is the distance function to \(0\) in \(\mathbb{R}^{n}/\Gamma_{i}\), and \(\mathcal{O}_{\infty}(r^{-\tau})\) has an obvious meaning despite \(r\) not being the Euclidean radial coordinate. We denote by \[d_{k}:C^{\infty}(\Lambda^{k}M)\to C^{\infty}(\Lambda^{k+1}M)\] the exterior derivative on k-forms.
Its formal adjoint is denoted by \[d_{k+1}^{*}=(d_{k})^{*}:C^{\infty}(\Lambda^{k+1}M)\to C^{\infty}(\Lambda^{k}M)\] and the Hodge Laplacian on k-forms is \[\Delta_{k}=d_{k-1}\circ d_{k}^{*}+d_{k+1}^{*}\circ d_{k}:C^{\infty}(\Lambda^{k }M)\to C^{\infty}(\Lambda^{k}M).\] Sometimes, we will work with the vector bundle of differential forms of any degree: \[\Lambda^{*}M=\oplus_{k=0}^{n}\Lambda^{k}M,\] and the differential, co-differential and Hodge Laplacian acting on sections of \(\Lambda^{*}M\) will be simply denoted \(d\), \(d^{*}\) and \(\Delta\) respectively. We also define the Hodge-Dirac operator \[\mathcal{D}=d+d^{*}\] acting on sections of \(\Lambda^{*}M\). Because of \(d^{2}=0\) and \((d^{*})^{2}=0\), it is obvious that \(\mathcal{D}^{2}=\Delta\). Furthermore, we introduce the notations \[\operatorname{im}_{L^{p}}(d_{k-1})=\overline{d(C_{c}^{\infty}(\Lambda^{k-1}M )}^{L^{p}},\qquad\operatorname{im}_{L^{p}}(d_{k+1}^{*})=\overline{d^{*}(C_{c} ^{\infty}(\Lambda^{k+1}M)}^{L^{p}}\] and \[\ker_{L^{p}}(d_{k}) =\left\{\omega\in L^{p}(\Lambda^{k}M)\mid d\omega=0\right\},\] \[\ker_{L^{p}}(d_{k}^{*}) =\left\{\omega\in L^{p}(\Lambda^{k}M)\mid d^{*}\omega=0\right\},\] \[\ker_{L^{p}}(\Delta_{k}) =\left\{\omega\in L^{p}(\Lambda^{k}M)\mid\Delta_{k}\omega=0\right\}.\] Here, the equations \(d_{k}\omega=0\), \(d_{k}^{*}\omega=0\) and \(\Delta_{k}\omega=0\) are intended in the weak sense. These are closed subspaces of \(L^{p}(\Lambda^{k}M)\). Note that the following inclusions take place: \[\operatorname{im}_{L^{p}}(d_{k-1})\subset\ker_{L^{p}}(d_{k}),\qquad \operatorname{im}_{L^{p}}(d_{k+1}^{*})\subset\ker_{L^{p}}(d_{k}^{*}),\] \[\operatorname{im}_{L^{p}}(d_{k-1})\cap\operatorname{im}_{L^{p}}(d_ {k+1}^{*})\subset\ker_{L^{p}}(d_{k})\cap\ker_{L^{p}}(d_{k}^{*})\subset\ker_{L^ {p}}(\Delta_{k}).\] Finally, for \(\alpha\in\mathbb{R}\), we introduce the notation \[\ker_{\alpha}(\Delta_{k})=\left\{\omega\in C^{\infty}(\Lambda^{k}M)\mid\Delta _{k}\omega=0,\omega=\mathcal{O}_{\infty}(r^{\alpha})\text{ as }r\to\infty\right\}.\] With these notations settled, one can quote a first result, which is well-known and is due to Yau (see [38]): **Proposition 2.1**.: _Let \(p\in(1,\infty)\) and \(k\in\{0,n\}\). Then,_ \[\ker_{L^{p}}(\Delta_{k})=\{0\}.\] Furthermore, it is easily seen that: **Proposition 2.2**.: _Let \(p\in(1,\infty)\), then_ \[L^{p}(\Lambda^{0}M)=\operatorname{im}_{L^{p}}(d_{1}^{*}),\quad L^{p}(\Lambda^{ n}M)=\operatorname{im}_{L^{p}}(d_{n-1}).\] Proof.: Let us prove the identity for the \(0\)-forms, the other case being completely analogous. Denote \(q=p^{\prime}\) the conjugate Holder exponent of \(p\). Since \(\operatorname{im}_{L^{p}}(d_{1}^{*})\) is closed, by \(L^{p}-L^{q}\) duality it is enough to show that the annihilator in \(L^{q}\) of \(\operatorname{im}_{L^{p}}(d_{1}^{*})\) is \(\{0\}\). But if \(\varphi\in L^{q}(\Lambda^{0}M)\) belongs to this annihilator, then by definition one has \[(\varphi,\omega)=0,\quad\omega\in\operatorname{im}_{L^{p}}(d_{1}^{*}),\] and in particular \(d\varphi=0\) in the distribution sense. Since \(\varphi\in L^{1}_{loc}\) and \(M\) is connected, it follows that \(\varphi\) is constant. But since \(\varphi\in L^{q}\), \(q\in(1,\infty)\) and \(M\) has infinite volume, necessarily \(\varphi\equiv 0\). This concludes the proof. Thus, in the case \(k\in\{0,n\}\), we have a (trivial) \(L^{p}\) Hodge decomposition for every \(p\in(1,\infty)\), and the \(L^{p}\) cohomology spaces \(H^{0}_{p}(M)\), \(H^{n}_{p}(M)\) vanish (recall \(M\) is assumed to be connected). 
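For orientation, let us sketch the elementary argument behind Proposition 2.1 in the model case \(M=\mathbb{R}^{n}\) (this sketch is ours and is only meant as an illustration; Yau's theorem quoted above covers general complete manifolds). If \(u\in L^{p}(\mathbb{R}^{n})\) is harmonic, the mean value property and Hölder's inequality give, for every \(x\in\mathbb{R}^{n}\) and every \(R>0\),
\[|u(x)|=\left|\frac{1}{\operatorname{Vol}(B_{R}(x))}\int_{B_{R}(x)}u\right|\leq\operatorname{Vol}(B_{R}(x))^{-1/p}\,\|u\|_{L^{p}},\]
and letting \(R\to\infty\) yields \(u\equiv 0\). For \(n\)-forms, one argues in the same way after applying the Hodge star operator.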
In the remaining part of the paper, we will thus only deal with forms of degree \(k\in\{1,\cdots,n-1\}\). The following proposition is one of the main ingredients of this paper: **Proposition 2.3** (see also [30, Proposition 4.3]).: _Let \(p\in(1,\infty)\) and \(k\in\{1,\cdots,n-1\}\). The following hold true:_ * _(a)_ \[\ker_{L^{p}}(\Delta_{k})=\ker_{L^{p}}(d_{k})\cap\ker_{L^{p}}(d_{k}^{*}).\] * _(b)_ \[\ker_{L^{p}}(\Delta_{k})\subset\ker_{1-n}(\Delta_{k}).\] * _(c) if_ \(k\notin\{1,n-1\}\) _or_ \(p\leq\frac{n}{n-1}\)_, then_ \[\ker_{L^{p}}(\Delta_{k})=\ker_{-n}(\Delta_{k}).\] * _(d) if_ \(p>\frac{n}{n-1}\)_, then_ \[\ker_{L^{p}}(\Delta_{k})=\ker_{1-n}(\Delta_{k}).\] * _(e) Furthermore, both_ \(\ker_{1-n}(\Delta_{1})\) _and_ \(\ker_{-n}(\Delta_{1})\) _are finite dimensional._ The proof of (a)-(d) in Proposition 2.3 relies on a standard iteration procedure in weighted Sobolev spaces. For readability of the paper, and since some of the arguments of the proof will be used again later in the paper, we have decided to include a sketch of the proof in the appendix. From the proof of Proposition 2.3, we also extract the following result which will be used later: **Corollary 2.4**.: _Let \(\omega\) be a harmonic form on \(M\), and assume that there exists \(\epsilon>0\) such that, as \(r\to\infty\),_ \[\omega\in\mathcal{O}_{\infty}(r^{1-n-\epsilon}).\] _Then, as \(r\to\infty\),_ \[\omega\in\mathcal{O}_{\infty}(r^{-n}).\] We will also need two results about the asymptotics of harmonic forms at infinity. These two results are also proved in the appendix. The first one is an expansion for bounded harmonic functions: **Lemma 2.5**.: _Let \(u\in\ker_{0}(\Delta_{0})\). Then, there exists \(\epsilon>0\) such that in each end \(E_{i}\), there exist constants \(c_{i},A_{i}\in\mathbb{R}\) such that the following expansion holds:_ \[u=c_{i}+A_{i}r^{2-n}+\mathcal{O}_{\infty}(r^{2-n-\epsilon}),\quad r\to\infty. \tag{2.1}\] The second lemma is an expansion for decaying harmonic one-forms: **Lemma 2.6**.: _Let \(\omega\in\ker_{-\alpha}(\Delta_{1})\) for some \(\alpha>0\). Then, there exists \(\epsilon>0\) such that in each end \(E_{i}\), there exists a constant \(B_{i}\in\mathbb{R}\) such that the following expansion holds:_ \[\omega=B_{i}r^{1-n}dr+\mathcal{O}_{\infty}(r^{1-n-\epsilon}),\quad r\to\infty. \tag{2.2}\] ## 3. Harmonic functions on ALE manifolds In this section, we investigate in more detail the spaces of harmonic forms of degrees \(1\) and \(n-1\). In order to do that, we will first have to look at the space of bounded harmonic functions. In the following, we will use the Green operator \(\Delta^{-1}\) on \(M\), defined by \[\Delta^{-1}w(x)=\int_{M}G(x,y)w(y)\,dy,\] where \(G(x,y)\) is the positive Green function on \(M\), which exists thanks to the assumptions \(n\geq 3\) and \(M\) being ALE (see [24]). Moreover, the following asymptotics hold: for all \(y\in M\), \[G(x,y)\in O(r^{2-n}(x)),\quad\text{as }r(x)\to\infty. \tag{3.1}\] Also, the above estimate is uniform for all \(y\) staying in a fixed compact set. For the sake of completeness, we will explain these two points below. The proof of (3.1) relies on the concept of _minimal growth at infinity_ for positive harmonic functions that we now introduce: **Definition 3.1**.: Let \(u\) be a positive harmonic function defined in an end \(E_{i}\).
We say that \(u\) has _minimal growth at infinity in the end \(E_{i}\)_, if the following holds: for every positive harmonic function \(v\) in the end \(E_{i}\), there exists a constant \(C>0\) such that \[u\leq Cv.\] If \(u\) is a positive harmonic function outside a compact set and has minimal growth at infinity in every end, then we shall simply say that \(u\) has _minimal growth at infinity_. It follows right away that if \(u\) and \(w\) are positive harmonic functions in an end \(E_{i}\) which both have minimal growth, then there is a positive constant \(C\) such that \[C^{-1}u\leq w\leq Cu.\] It is also standard and not hard to show that for every \(o\in M\), the Green function \(G(o,\cdot)\) with pole \(o\) (provided it is finite) has minimal growth at infinity: indeed, this is a consequence of the construction of the Green function through an exhaustion sequence of compact sets in \(M\), and of the maximum principle. See for instance [34]. Here is the two-sided estimate we need for the Green function: **Lemma 3.2**.: _Let \(K\Subset M\) be a compact set; then, there exists a constant \(C>0\) such that for all \(x\in K\) and \(r(y)\to\infty\),_ \[C^{-1}r(y)^{2-n}\leq G(x,y)\leq Cr(y)^{2-n}.\] Proof.: The hypothesis on the metric \(g\) implies that \[\Delta r^{2-n}=\mathcal{O}_{\infty}(r^{-\kappa}),\] with \(\kappa=n+\tau>3\). Let \(\chi\in C_{c}^{\infty}(\mathbb{R}^{n})\) with \(\chi\equiv 1\) in restriction to \(B_{\mathbb{R}^{n}}(0,1)\), \(\chi\equiv 0\) in restriction to \(\mathbb{R}^{n}\setminus B_{\mathbb{R}^{n}}(0,2)\). For \(R>0\) large enough let \(\xi_{R}\) be the radial function on \(M\) which, in each end \(E_{i}\), identifies with the function \(1-\chi\left(\frac{\cdot}{R}\right)\). Finally, let \(f=\Delta r^{2-n}\) and \(f_{R}=f\xi_{R}\); it is plain to see that the support of \(f_{R}\) is inside \(\Omega_{R}^{i}=(\mathbb{R}^{n}\setminus B_{\mathbb{R}^{n}}(0,R))/\Gamma_{i}\) in each end \(E_{i}\), and that there is a constant \(C>0\) independent of \(R\) such that \[|f_{R}|\leq Cr^{-\kappa}.\] Consider the function \(\varphi_{R}:=r^{2-n}-\Delta^{-1}f_{R}\). Then, \(\Delta\varphi_{R}=0\) in \(\mathbb{R}^{n}\setminus B_{\mathbb{R}^{n}/\Gamma_{i}}(0,2R)\) in each end. We claim that it is enough to prove that for some choice of \(R\) large enough, there exists a constant \(c>0\) such that, as \(x\to\infty\), \[c^{-1}r(x)^{2-n}\leq\varphi_{R}(x)\leq cr(x)^{2-n}. \tag{3.2}\] Indeed, (3.2) first implies that \[\lim_{x\to\infty}\varphi_{R}(x)=0,\] and according to [21, Proposition 6.1] (with \(u_{0}=\varphi_{R}\), \(u_{1}=1\)), \(\varphi_{R}\) has minimal growth at infinity, hence there is a constant \(C>0\) such that, as \(y\to\infty\), \[C^{-1}\varphi_{R}(y)\leq G(o,y)\leq C\varphi_{R}(y),\] where \(o\in K\) is a fixed point. Given the asymptotics on \(\varphi_{R}\), one gets \[G(o,y)\simeq r(y)^{2-n},\quad\text{ as }y\to\infty.\] The Harnack principle then yields the existence of a uniform constant \(\tilde{C}>0\) such that for all \(x\in K\), \[\tilde{C}^{-1}r(y)^{2-n}\leq G(x,y)\leq\tilde{C}r(y)^{2-n},\quad\text{ as }y\to\infty.\] Hence, in order to prove Lemma 3.2, it is enough to prove (3.2), which is what we do now. We first estimate, thanks to the positivity of the Green operator: \[|\Delta^{-1}f_{R}(x)|\leq\Delta^{-1}|f_{R}|(x)\lesssim(\Delta^{-1}(\mathbf{1}_{\Omega_{R}^{i}}r^{-\kappa}))(x).\] We claim that on \(M\) the following Sobolev inequality holds: \[||u||_{\frac{2n}{n-2}}\lesssim||\nabla u||_{2},\quad\forall u\in C_{c}^{\infty}(M). 
\tag{3.3}\] Indeed, such an inequality when \(u\) has compact support in an end of \(M\) comes from the fact that (3.3) holds on \(\mathbb{R}^{n}\), hence also on \(\mathbb{R}^{n}\) endowed with a metric which is bi-Lipschitz to the Euclidean one, and the fact that \(u\) identifies with a smooth, \(\Gamma\)-periodic function on \(\mathbb{R}^{n}\) with compact support (this uses the fact that \(\Gamma\) is finite). Hence, (3.3) holds in the ends of \(M\), and the validity of (3.3) on \(M\) itself then follows from [9, Proposition 2.4]. The Sobolev inequality (3.3) implies estimates \(p_{t}(x,y)\leq Ct^{-n/2}e^{-\frac{d^{2}(x,y)}{ct}}\) for some positive constants \(C,c\), and by integration this yields the following uniform bound for the Green function on \(M\): \[G(x,y)\lesssim d(x,y)^{2-n}.\] See for instance [25, Exercice 15.8] for details. Hence, one has the following estimate: \[|\Delta^{-1}f_{R}(x)|\lesssim(k*\mathbf{1}_{\Omega_{R}^{i}}r^{-\kappa})(x),\] where the kernel \(k\) is by definition \(k(x,y)=d(x,y)^{2-n}\). Since (up to a constant) the Riemannian measure on \(M\) is bounded from above by the Euclidean measure in each end and \(d\) is comparable to the Euclidean distance, it follows that \(|\Delta^{-1}f_{R}(x)|\) is bounded by a constant multiple of the following integral in \(\mathbb{R}^{n}\): \[I(x):=\int_{|y|\geq R}\frac{dy}{|\bar{x}-y|^{n-2}|y|^{\kappa}},\] where \(\bar{x}\in\mathbb{R}^{n}\) is such that \(\pi(\bar{x})=x\). In order to estimate \(I(x)\), we let \(\delta=\frac{|\bar{x}|}{2}\), and we split the domain of integration into three parts (if \(R<4\delta\)) or two parts (if \(R\geq 4\delta\)): in the case \(R<4\delta\), we write \[I\leq I_{1}+I_{2}+I_{3},\] where \[I_{1}(x)=\int_{|\bar{x}-y|\leq\delta}\frac{dy}{|\bar{x}-y|^{n-2}|y|^{\kappa}},\] \[I_{2}(x)=\int_{R\leq|y|\leq 4\delta\,;\,|\bar{x}-y|\geq\delta}\frac{dy}{|\bar{ x}-y|^{n-2}|y|^{\kappa}},\] and \[I_{3}(x)=\int_{|y|\geq 4\delta,\,|\bar{x}-y|\geq\delta}\frac{dy}{|\bar{x}-y| ^{n-2}|y|^{\kappa}}.\] In the case \(R\geq 4\delta\), we do not need \(I_{2}(x)\) so by convention in this case we set \(I_{2}(x)=0\). We first estimate \(I_{1}(x)\): notice that by the definition of \(\delta\), \(|\bar{x}-y|\leq\delta\) implies \(|y|\geq\delta\), so \[I_{1}(x)\lesssim\delta^{-\kappa}\int_{0}^{\delta}\frac{t^{n-1}dt}{t^{n-2}} \simeq\delta^{2-\kappa}.\] Next, if \(R<4\delta\), we estimate \(I_{2}(x)\) by \[I_{2}(x)\lesssim\delta^{2-n}\int_{R}^{4\delta}t^{n-1-\kappa}\,dt\lesssim \delta^{2-n}R^{n-2-\kappa},\] (since \(n-2-\kappa<0\)). Finally, we estimate \(I_{3}(x)\): note that if \(|y|\geq 4\delta=2|\bar{x}|\) then \(|\bar{x}-y|\simeq|y|\), so \[I_{3}(x)\lesssim\int_{|y|\geq 4\delta}\frac{dy}{|y|^{n-2+\kappa}}\simeq\delta^{2- \kappa}.\] Hence, recalling that \(\delta\simeq r(x)\), one obtains the existence of a constant \(C>0\) such that for all \(x\in M\), \[I(x)\leq C(R^{n-2-\kappa}+r(x)^{n-\kappa})r(x)^{2-n}.\] Therefore, recalling that \(\kappa>n\), one can choose \(R>0\) so that \(CR^{n-2-\kappa}<\frac{1}{4}\), one gets that for all \(x\in M\) such that \(r(x)\geq(4C)^{\frac{1}{\kappa-n}}\), \[|\Delta^{-1}f_{R}(x)|\leq I(x)<\frac{1}{2}r(x)^{2-n}.\] Let \(r_{0}:=(4C)^{\frac{1}{\kappa-n}}\). Then, for the above choice of \(R\) we get the following estimate on \(\varphi_{R}\): \[\frac{1}{2}r(x)^{2-n}\leq\varphi_{R}(x)\leq\frac{3}{2}r(x)^{2-n},\quad r(x) \geq r_{0},\] which proves (3.2), and the proof is complete. We now describe the space of bounded harmonic functions on \(M\). 
It is well-known that the dimension of this space is closely related to the number of ends of \(M\), even if \(M\) is not ALE, see [31], [32]. Recall that \(\ker_{0}(\Delta_{0})\) denotes the space of bounded harmonic functions. Then we have: **Lemma 3.3**.: _For each tuple \(c=(c_{1},\ldots c_{N})\in\mathbb{R}^{N}\), there exists a unique bounded harmonic function \(u\in C^{\infty}(M)\) such that in each end \(E_{i}\), \(i=1,\ldots,N\), we have \(u\to c_{i}\), as \(r\to\infty\). In particular,_ \[\dim(\ker_{0}(\Delta_{0}))=N.\] Proof.: We first consider uniqueness: assume that \(u,v\in\ker_{0}(\Delta_{0})\) are both converging to the same constant \(c_{i}\) at each end \(E_{i}\), then \(u-v\) is a harmonic function converging to \(0\) at each end. By the maximum principle, this implies that \(u-v\equiv 0\), so \(u\equiv v\). For the existence, let \(c=(c_{1},\ldots c_{N})\in\mathbb{R}^{N}\) and consider a bounded function \(v\in C^{\infty}(M)\) such that \(v\equiv c_{i}\) at each end \(E_{i}\). Then, \(\Delta v\) is compactly supported and by (3.1), we have at each end \(\Delta^{-1}(\Delta v)=O(r^{2-n})\) as \(r\to\infty\), so that \(u:=v-\Delta^{-1}(\Delta v)\) is the desired bounded harmonic function. Because \(M\) is connected, the kernel of \(d_{0}\) is given by the constant functions and the isomorphism theorem from linear algebra directly yields **Corollary 3.4**.: _The subspace_ \[d_{0}(\ker_{0}(\Delta_{0}))\subset C^{\infty}(\Lambda^{1}M)\] _is \(N-1\)-dimensional_ **Lemma 3.5**.: _Let \(u\) be a bounded harmonic function; by (2.1) in every end \(E_{i}\), one has for some \(\varepsilon>0\),_ \[u=c_{i}+A_{i}r^{2-n}+\mathcal{O}_{\infty}(r^{2-n-\varepsilon}).\] _Then the following two properties hold concerning the real numbers \(A_{i}\):_ * \[\sum_{i=1}^{N}\frac{A_{i}}{\operatorname{Card}(\Gamma_{i})}=0.\] * _If_ \(u\) _is nonconstant, then there is_ \(i\in\{1,\cdots,N\}\) _such that_ \(A_{i}\neq 0\)_._ Proof.: The proof of (i) follows from arguments similar to [20, Proposition 4.5]. Denote \(\omega=du\), and let \(f\in C^{\infty}(M)\) be such that for every \(i=1,\cdots,N\) and every \(x\) in the end \(E_{i}\), \(f(x)=c_{i}+A_{i}r^{2-n}\). Then, \[\omega=df+\eta,\] with \(\eta\in\mathcal{O}_{\infty}(r^{1-n-\epsilon})\) for some \(\epsilon>0\). Let \(R\geq 1\) and let \(D(R)\) be the smooth open set defined by \[D(R)=K\sqcup_{i=1}^{N}\phi_{i}^{-1}(B^{\mathbb{R}^{n}/\Gamma_{i}}(0,R)\setminus \overline{B^{\mathbb{R}^{n}/\Gamma_{i}}(0,1)}).\] By the Green formula, \[\int_{D(R)}\Delta f\,\mathrm{d}v=\int_{\partial D(R)}\frac{\partial f}{ \partial\nu}\,\mathrm{d}S.\] Given the asymptotic of the metric on \(M\), and the definition of \(f\), one has, as \(R\to\infty\), \[\int_{\partial D(R)}\frac{\partial f}{\partial\nu}\,\mathrm{d}S=-(n-2)\left( \sum_{i=1}^{N}\frac{A_{i}}{\operatorname{Card}(\Gamma_{i})}\right)\omega_{n}+ \varepsilon(R),\] where \(\lim_{R\to\infty}\varepsilon(R)=0\) and \(\omega_{n}\) denotes the \((n-1)\)-dimensional volume of the unit sphere in \(\mathbb{R}^{n}\) (we have used here that the volume of the unit sphere \(S^{n-1}/\Gamma_{i}\) is equal to \(\frac{\operatorname{Vol}(S^{n-1})}{\operatorname{Card}(\Gamma_{i})}\)). On the other hand, since \(d^{*}\omega=\Delta u=0\), one has \(\Delta f=-d^{*}\eta\). 
Let \(X=\eta^{\sharp}\) be the vector field canonically associated to the \(1\)-form \(\eta\), then \(-d^{*}\eta=\operatorname{div}(X)\), so Gauss' theorem implies that, as \(R\to\infty\), \[\int_{D(R)}\Delta f\,\mathrm{d}v = \int_{D(R)}\operatorname{div}(X)\,\mathrm{d}v\] \[= \int_{\partial D(R)}\langle X,\nu\rangle\,\mathrm{d}S\] \[= O(R^{-\epsilon}),\] since \(X\in O(r^{1-n-\epsilon})\). Comparing the two estimates, one obtains that \[\lim_{R\to\infty}\int_{\partial D(R)}\frac{\partial f}{\partial\nu}\,\mathrm{d}S=0,\] which implies (i). For the proof of (ii), we choose \(i\in\{1,\cdots,N\}\) such that \(c_{i}=\inf_{j=1,\cdots,N}c_{j}\). Up to changing \(u\) into \(u-c_{i}\), we can -and will- assume that \(c_{i}=0\). Then, \[\liminf_{x\to\infty}u(x)=0,\] which implies, by the maximum principle and the assumption that \(u\) is non-constant, that \(u>0\) on \(M\). On the end \(E_{i}\), one thus has a positive harmonic function \(u\), whose limit is zero at infinity in \(E_{i}\). According to [21, Proposition 6.1], it follows that \(u\) has minimal growth at infinity in the end \(E_{i}\). But the metric in \(E_{i}\) being ALE, the minimal growth at infinity for positive harmonic functions on \(E_{i}\) is \(\simeq r^{2-n}\) (Lemma 3.2). Therefore, one obtains \[u(x)\gtrsim r(x)^{2-n},\quad x\in E_{i},\,x\to\infty.\] Hence, \(A_{i}\neq 0\), which concludes the proof of (ii). **Corollary 3.6**.: _For each tuple \(B=(B_{1},\ldots,B_{N})\) with \(\sum_{i=1}^{N}\frac{B_{i}}{\operatorname{Card}(\Gamma_{i})}=0\), there exists a function \(u\in\ker_{0}(\Delta_{0})\), unique up to the addition of a constant, such that for every \(i=1,\cdots,N\), \(A_{i}=B_{i}\), where \(A_{i}\) are the coefficients in the expansion (2.1) of \(u\) in the end \(E_{i}\)._ Proof.: Denote \(\gamma_{i}=\operatorname{Card}(\Gamma_{i})\), and consider the linear map \[\Phi:\ker_{0}(\Delta_{0}) \to\left\{(x_{1},\ldots,x_{N})\in\mathbb{R}^{N}\mid\gamma_{1}x_{1}+\cdots+\gamma_{N}x_{N}=0\right\}\] \[u \mapsto(A_{1},\ldots,A_{N}),\] where the \(A_{i}\) are as in (2.1). By Lemma 3.5, \(\Phi\) is well-defined and \(\ker(\Phi)\) is exactly given by the constant functions, hence is \(1\)-dimensional. Therefore, since the domain of \(\Phi\) is N-dimensional by Lemma 3.3, \(\operatorname{im}(\Phi)\) is (N-1)-dimensional and hence, \(\Phi\) is surjective. We will also need later the following result concerning the asymptotic behaviour of harmonic \(1\)-forms: **Lemma 3.7**.: _Let \(\omega\in\ker_{1-n}(\Delta_{1})\). Then, there are real numbers \((B_{j})_{j=1,\cdots,N}\) such that, in each end \(E_{j}\),_ \[\omega=B_{j}r^{1-n}dr+\mathcal{O}_{\infty}(r^{1-n-\epsilon})\] _for some \(\epsilon>0\). Furthermore, \(\sum_{j=1}^{N}\frac{B_{j}}{\operatorname{Card}(\Gamma_{j})}=0\)._ Proof.: The asymptotic expansion follows directly from Lemma 2.6. Let \(f\in C^{\infty}(M)\) be such that \(f=-\frac{B_{j}}{n-2}r^{2-n}\) outside a compact set in the end \(E_{j}\), then \[\omega=df+\mathcal{O}_{\infty}(r^{1-n-\epsilon})\] for some \(\epsilon>0\). Since \(d^{*}\omega=0\) by Proposition 2.3, the proof of part (i) in Lemma 3.5 now implies that \(\sum_{j=1}^{N}\frac{B_{j}}{\operatorname{Card}(\Gamma_{j})}=0\). **Corollary 3.8**.: _We have_ \[\ker_{1-n}(\Delta_{1})=d(\ker_{0}(\Delta_{0}))\oplus\ker_{-n}(\Delta_{1})\] _and this sum is \(L^{2}\)-orthogonal._ Proof.: Let \(\omega\in\ker_{1-n}(\Delta_{1})\).
By Lemma 3.7, we may write \[\omega=B_{j}r^{1-n}dr+\mathcal{O}_{\infty}(r^{1-n-\epsilon})\] for some \(\epsilon>0\) and a tuple \((B_{j})_{j=1,\cdots,N}\) with \(\sum_{j=1}^{N}\frac{B_{j}}{\operatorname{Card}(\Gamma_{j})}=0\). By Corollary 3.6, there exists a harmonic function \(u\in\ker_{0}(\Delta_{0})\), such that the terms \(A_{i}\) in the expansion (2.1) of \(u\) are at each end equal to \(B_{i}\). Because \(u\) is unique up to a constant, \(du\in\ker_{1-n}(\Delta_{1})\) is uniquely determined. We obtain the expansion \[du=B_{j}r^{1-n}dr+\mathcal{O}_{\infty}(r^{1-n-\epsilon}).\] Therefore, \(\eta:=\omega-du=\mathcal{O}_{\infty}(r^{1-n-\epsilon})\) and \(\eta\) is uniquely determined. Because \(\eta\) is a harmonic form (since \(du\) and \(\omega\) are), we conclude from Corollary 2.4 that \(\eta=\mathcal{O}_{\infty}(r^{-n})\). This finishes the first part of the proof. To show \(L^{2}\)-orthogonality of this decomposition, let \(u\in\ker_{0}(\Delta_{0})\) and \(\omega\in\ker_{-n}(\Delta_{1})\). Furthermore, let \(D(R)\), as in the proof of Lemma 3.5, be given by \[D(R)=K\sqcup_{i=1}^{N}\phi_{i}^{-1}(B^{\mathbb{R}^{n}/\Gamma_{i}}(0,R)\setminus\overline{B^{\mathbb{R}^{n}/\Gamma_{i}}(0,1)}).\] Because \(d_{1}^{*}\omega=0\) (Proposition 2.3), we get \[\int_{D(R)}\langle du,\omega\rangle\,\mathrm{d}v=\int_{\partial D(R)}u(*\omega)=O(R^{-1}),\] since \(\omega=O(r^{-n})\) and \(u\) is bounded. Letting \(R\to\infty\) yields \[(du,\omega)_{L^{2}}=0,\] which is exactly what we needed to show. Proposition 2.3, Corollary 3.4, Corollary 3.8 and Hodge duality imply: **Corollary 3.9**.: _Assume that \(M\) is ALE with only one end; let \(k\in\{1,n-1\}\) and \(p\in(1,\infty)\). Then,_ \[\ker_{L^{p}}(\Delta_{k})=\ker_{-n}(\Delta_{k}).\] We conclude this section with the following result: **Proposition 3.10**.: _Let \(p\in[n,+\infty)\). Then,_ \[d(\ker_{0}(\Delta_{0}))\subset\operatorname{im}_{L^{p}}(d_{0})\cap\ker_{1-n}(\Delta_{1}).\] Proof.: Clearly, the expansion (2.1) for bounded harmonic functions provides that \(d(\ker_{0}(\Delta_{0}))\subset\ker_{1-n}(\Delta_{1})\), so in order to finish the proof of the proposition, it is enough to prove that \[d(\ker_{0}(\Delta_{0}))\subset\operatorname{im}_{L^{p}}(d_{0}).\] Since \(p\geq n\), \(M\) is \(p\)-parabolic, hence there is a sequence \((\chi_{n})_{n\in\mathbb{N}}\subset C_{c}^{\infty}(M)\) of cut-off functions such that for every \(n\geq 0\), \(0\leq\chi_{n}\leq 1\), \(\chi_{n}\to 1\) pointwise and \(\int_{M}|\nabla\chi_{n}|^{p}\to 0\) (see [18, Section 3.2]). Let \(\omega=du\in d(\ker_{0}(\Delta_{0}))\), and denote \(\omega_{n}=d(\chi_{n}u)\in dC_{c}^{\infty}(M)\). We are going to prove that \((\omega_{n})_{n\in\mathbb{N}}\) converges to \(\omega\) in \(L^{p}\). One has \[||\omega-\omega_{n}||_{p}\leq||(1-\chi_{n})du||_{p}+||u(d\chi_{n})||_{p}.\] Given that \(u\in L^{\infty}\) and \(||\nabla\chi_{n}||_{p}\to 0\), the second term on the right hand side converges to zero. The first one also converges to zero, as follows from the Dominated Convergence theorem, since \((1-\chi_{n})du\) converges pointwise to zero and \(du\in L^{p}\). Finally, we have \(||\omega-\omega_{n}||_{p}\to 0\), which implies that \(\omega\in\operatorname{im}_{L^{p}}(d_{0})\). This completes the proof. ## 4. Hodge projectors In this section, we introduce the Hodge projectors and make the connection with the \(L^{p}\)-Hodge decomposition. In part of this section, we will work with general complete Riemannian manifolds, which are not assumed to be ALE unless explicitly stated.
We first recall the \(L^{2}\)-Hodge decomposition for \(k\)-forms, which holds true for any complete Riemannian manifold: \[L^{2}(\Lambda^{k}M)=\operatorname{im}_{L^{2}}(d_{k-1})\oplus \operatorname{im}_{L^{2}}(d_{k+1}^{*})\oplus\ker_{L^{2}}(\Delta_{k}),\qquad \qquad(\mathscr{H}_{2})\] and the sum is orthogonal. We will denote by \(\Pi_{k,d}\), \(\Pi_{k,d^{*}}\) and \(\Pi_{k,0}\) the three orthogonal projectors from \(L^{2}(\Lambda^{k}M)\) onto \(\operatorname{im}_{L^{2}}(d_{k-1})\), \(\operatorname{im}_{L^{2}}(d_{k+1}^{*})\), \(\ker_{L^{2}}(\Delta_{k})\) respectively. If now \(p\in(1,\infty)\), we say that the \(L^{p}\)_-Hodge decomposition_ holds for \(k\)-forms, provided \(\operatorname{im}_{L^{p}}(d_{k-1})+\operatorname{im}_{L^{p}}(d_{k+1}^{*})\) is _closed_ and \[L^{p}(\Lambda^{k}M)=\operatorname{im}_{L^{p}}(d_{k-1})\oplus \operatorname{im}_{L^{p}}(d_{k+1}^{*})\oplus\ker_{L^{p}}(\Delta_{k}).\qquad \qquad(\mathscr{H}_{p})\] If the \(L^{p}\) Hodge decomposition holds, we will denote by \(\Pi_{k,d}^{p}\), \(\Pi_{k,d^{*}}^{p}\) and \(\Pi_{k,0}^{p}\) the three projectors from \(L^{p}(\Lambda^{k}M)\) onto the three subspaces \(\operatorname{im}_{L^{p}}(d_{k-1})\), \(\operatorname{im}_{L^{p}}(d_{k+1}^{*})\), and \(\ker_{L^{p}}(\Delta_{k})\) respectively. The assumption that \(\operatorname{im}_{L^{p}}(d_{k-1})+\operatorname{im}_{L^{p}}(d_{k+1}^{*})\) is closed is equivalent to the three \(L^{p}\)-Hodge projectors being bounded on \(L^{p}\). We warn here the reader that in general, despite \(\operatorname{im}_{L^{p}}(d_{k-1})\) and \(\operatorname{im}_{L^{p}}(d_{k+1})\) being closed subspaces, their sum \(\operatorname{im}_{L^{p}}(d_{k-1})+\operatorname{im}_{L^{p}}(d_{k+1}^{*})\) is not necessarily closed. As will be apparent, sometimes it will be also useful to consider the following "modified" \(L^{p}\)- Hodge decomposition, which is said to hold provided \(\operatorname{im}_{L^{p}}(d_{k-1})+\operatorname{im}_{L^{p}}(d_{k+1}^{*})\) is closed and \[L^{p}(\Lambda^{k}M)=\operatorname{im}_{L^{p}}(d_{k-1})\oplus \operatorname{im}_{L^{p}}(d_{k+1}^{*})\oplus\ker_{-n}(\Delta_{k}).\qquad( \tilde{\mathscr{H}_{p}})\] In case \((\tilde{\mathscr{H}_{p}})\) holds, we will call \(\tilde{\Pi}_{k,d}^{p}\), \(\tilde{\Pi}_{k,d^{*}}^{p}\) and \(\tilde{\Pi}_{k,0}^{p}\) the three "modified" Hodge projectors, that is the projectors asociated with the decomposition \((\tilde{\mathscr{H}_{p}})\). It will be interesting to compare the Hodge projectors on \(L^{2}\) and on \(L^{p}\). However, in order to be able to do this, one implicitly needs that \[\ker_{L^{p}}(\Delta_{k})=\ker_{L^{2}}(\Delta_{k}),\] and we know from Proposition 2.3 that this does not always hold. We now introduce an assumption that will allow us to compare the kernels \(\ker_{L^{p}}(\Delta_{k})\) for different values of \(p\). Recall first that there is a Bochner formula: \[\Delta_{k}=\nabla^{*}\nabla+\mathscr{R}_{k},\] where \(\nabla\) is the connection induced on \(\Lambda^{*}M\) by the Levi-Civita connection, and \(\mathscr{R}_{k}\in\operatorname{End}(\Lambda^{k}M)\) is self-adjoint and is related to the curvature operator. See [22]. We say that \(M\) satisfies Assumption \((H_{k})\), provided \(||\mathscr{R}_{k}||_{L^{\infty}(M)}<\infty\), and for every \(1\leq p\leq q\leq+\infty\), \(e^{-\Delta_{0}}:L^{p}\to L^{q}\) is bounded. Note that the latter holds in particular if the Sobolev inequality holds true on \(M\) (see [35]). If \((H_{k})\) holds for any \(k=0,\cdots,n\), we say that \(M\) satisfies Assumption (H). 
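Let us briefly sketch why the Sobolev inequality implies the mapping property of \(e^{-\Delta_{0}}\) required in Assumption \((H_{k})\); this is a standard ultracontractivity argument, included here only for the reader's convenience (see [35] for details). The Sobolev inequality \(\|u\|_{\frac{2n}{n-2}}\lesssim\|\nabla u\|_{2}\) is equivalent to the heat kernel bound
\[\|e^{-t\Delta_{0}}\|_{L^{1}\to L^{\infty}}\leq Ct^{-n/2},\quad t>0.\]
Since \(e^{-t\Delta_{0}}\) is also a contraction on every \(L^{s}\), \(1\leq s\leq\infty\), interpolation first gives \(\|e^{-t\Delta_{0}}\|_{L^{p}\to L^{\infty}}\leq Ct^{-n/(2p)}\), and then
\[\|e^{-t\Delta_{0}}\|_{L^{p}\to L^{q}}\leq Ct^{-\frac{n}{2}\left(\frac{1}{p}-\frac{1}{q}\right)},\quad 1\leq p\leq q\leq\infty;\]
taking \(t=1\) yields the boundedness of \(e^{-\Delta_{0}}:L^{p}\to L^{q}\) for every \(1\leq p\leq q\leq\infty\).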
It is well-known that ALE manifolds satisfy Assumption (H). The relevance of this assumption stands from the following result, whose proof (in the case \(k=1\)) can be found in [14, p.87]: **Lemma 4.1**.: _Let \(M\) be a complete Riemannian manifold satisfying (\(H_{k}\)) for some \(k\in\{0,\cdots,n\}\). Then, for every \(1\leq q\leq p\leq+\infty\), one has_ \[\ker_{L^{q}}(\Delta_{k})\subset\ker_{L^{p}}(\Delta_{k}).\] We have the following consequence of the \(L^{p}\) Hodge decomposition: **Proposition 4.2**.: _Let \(p\in(1,\infty)\) and \(k\in\{0,\cdots,n\}\) such that the \(L^{p}\) Hodge-De Rham decomposition \((\mathscr{H}_{p})\) holds. Assume moreover that_ \[\ker_{L^{p}}(\Delta_{k})=\ker_{L^{2}}(\Delta_{k}).\] _Then, each one of the three \(L^{2}\) Hodge projectors \(\Pi_{k,d}\), \(\Pi_{k,d^{*}}\) and \(\Pi_{k,0}\), in restriction to \(dC_{c}^{\infty}(\Lambda^{k-1}T^{*}M)\oplus d^{*}C_{c}^{\infty}(\Lambda^{k+1}T ^{*}M)\oplus\ker_{L^{2}}(\Delta_{k})\), extends uniquely to a bounded operator on \(L^{p}(\Lambda^{k}M)\)._ Proof.: Denote \[E=dC_{c}^{\infty}(\Lambda^{k-1}M)\oplus d^{*}C_{c}^{\infty}(\Lambda^{k+1}M) \oplus\ker(\Delta_{k}),\] where \(\ker(\Delta_{k}):=\ker_{L^{2}}(\Delta_{k})=\ker_{L^{p}}(\Delta_{k})\) by assumption. Then, by definition of the \(L^{2}\) Hodge projectors, the restriction to \(E\) of the three projectors \(\Pi_{k,d}^{p}\), \(\Pi_{k,d^{*}}^{p}\) and \(\Pi_{k,0}^{p}\) coincide with the restriction to \(E\) of their \(L^{2}\) counterparts; indeed, this is an easy consequence of the facts that \(dC_{c}^{\infty}(\Lambda^{k-1}M)\subset\operatorname{im}_{L^{2}}(d_{k-1})\cap \operatorname{im}_{L^{p}}(d_{k-1})\), \(d^{*}C_{c}^{\infty}(\Lambda^{k+1}M)\subset\operatorname{im}_{L^{2}}(d_{k+1}^ {*})\cap\operatorname{im}_{L^{p}}(d_{k+1}^{*})\) and the assumption on the \(L^{2}\) and \(L^{p}\) kernel. Since \(E\) is dense in \(L^{p}(\Lambda^{k}M)\) by \((\mathscr{H}_{p})\), we conclude that \(\Pi_{k,d}^{p}\), \(\Pi_{k,d^{*}}^{p}\) and \(\Pi_{k,0}^{p}\) are \(L^{p}\) bounded extensions of their \(L^{2}\) counterparts defined on \(E\). There is a subtlety in the extent to which by Proposition 4.2 the \(L^{2}\) projectors have a unique extension to \(L^{p}\), which we now explain. By Proposition 4.2, starting from one of the three \(L^{2}\) Hodge projectors \(\rho\), we have first restricted \(\rho\) to a projector \(\bar{\rho}\) on \[E=dC_{c}^{\infty}(\Lambda^{k-1}M)\oplus d^{*}C_{c}^{\infty}(\Lambda^{k+1}M) \oplus\ker(\Delta_{k}),\] then we have extended \(\bar{\rho}\) by density to a projector \(\hat{\rho}\) on \(L^{p}(\Lambda^{k}M)\). However, it is desirable but not clear _a priori_ that the extended projector \(\hat{\rho}\) agrees with the original projector \(\rho\) on \(L^{2}\cap L^{p}\), that is: \[\hat{\rho}|_{L^{2}\cap L^{p}}=\rho|_{L^{2}\cap L^{p}},\] Of course, \(L^{2}(\Lambda^{k}M)\cap L^{p}(\Lambda^{k}M)\) is dense both in \(L^{2}(\Lambda^{k}M)\) and in \(L^{p}(\Lambda^{k}M)\) separately, but this is not enough in order that the above equality holds. Consider the following diagram in which each _black_ arrow is a dense inclusion: (4.1) It turns out that the above uniqueness issue is related to the question whether the middle red arrow in the above diagram is a dense inclusion. 
In the appendix (see Lemma A.1 and Corollary A.2 therein), we give conditions ensuring that this is indeed the case, and we prove that \[\hat{\rho}|_{L^{2}\cap L^{p}}=\rho|_{L^{2}\cap L^{p}}.\] In particular, we have the following criterion for the uniqueness of the \(L^{p}\) extension of \(L^{2}\) Hodge projectors: **Corollary 4.3**.: _Let \(M\) be a complete manifold satisfying (H); let \(p\in(1,\infty)\) and \(q=p^{\prime}\), and assume that_ \[\ker_{L^{p}}(\Delta_{k})=\ker_{L^{q}}(\Delta_{k})=\ker_{L^{2}}(\Delta_{k}).\] _Suppose that \((\mathscr{H}_{p})\), the \(L^{p}\) Hodge decomposition for forms of degree \(k\), holds on \(M\). Then, each one of the three \(L^{2}\) Hodge projectors \(\Pi_{k,d}\), \(\Pi_{k,d^{*}}\) and \(\Pi_{k,0}\) coincides with its \(L^{p}\) counterpart given by Proposition 4.2, in restriction to \(L^{2}(\Lambda^{k}M)\cap L^{p}(\Lambda^{k}M)\). In short, we will say that the \(L^{2}\) projectors extend uniquely to \(L^{p}\) bounded projectors._ Another situation that we will encounter in the case of ALE manifolds is that of a modified \(L^{p}\) Hodge decomposition \((\tilde{\mathscr{H}}_{p})\). This case will happen only for \(k\in\{1,n-1\}\), \(p\geq n\) and \(N\geq 2\) (\(N\) being the number of ends of \(M\)). The case \(k=n-1\) will follow from the case \(k=1\) by Hodge duality, so in the remaining part of this section, we assume that \(M\) is an ALE manifold, that \(p\in[n,\infty)\), that \(k=1\) and that \(N\geq 2\). Recall from Proposition 2.3 and Corollary 3.8 that for \(k=1\), we have the orthogonal decomposition: \[\ker_{L^{2}}(\Delta_{1})=d(\ker_{0}(\Delta_{0}))\oplus\ker_{-n}(\Delta_{1}).\] Denote by \(\Pi_{d\ker_{0}}\) and \(\Pi_{-n}\) the \(L^{2}\) orthogonal projectors onto \(d(\ker_{0}(\Delta_{0}))\) and \(\ker_{-n}(\Delta_{1})\) respectively. Denote by \(q=p^{\prime}\) the conjugate exponent, then according to Proposition 2.3, one has \[\ker_{L^{p}}(\Delta_{1})=\ker_{L^{2}}(\Delta_{1})\] and \[\ker_{L^{q}}(\Delta_{1})=\ker_{-n}(\Delta_{1})\varsubsetneq\ker_{L^{2}}(\Delta_{1}).\] We now have the following modified version of Corollary 4.3: **Corollary 4.4**.: _Let \(M\) be ALE, \(p\in[n,\infty)\); suppose that the modified Hodge decomposition \((\mathscr{\hat{H}}_{p})\) for forms of degree \(1\) holds on \(M\). Then, each one of the three \(L^{2}\) projectors \(\Pi_{1,d}+\Pi_{d\ker_{0}}\), \(\Pi_{1,d^{*}}\) and \(\Pi_{-n}\) coincides with its \(L^{p}\) counterpart \(\tilde{\Pi}^{p}_{1,d}\), \(\tilde{\Pi}^{p}_{1,d^{*}}\), \(\tilde{\Pi}^{p}_{1,0}\) in restriction to \(L^{2}(\Lambda^{1}M)\cap L^{p}(\Lambda^{1}M)\).
Hence, these three \(L^{2}\) projectors extend uniquely to \(L^{p}\) bounded projectors._ Proof.: Let \[\mathscr{G}=dC^{\infty}_{c}(M)\oplus d^{*}C^{\infty}_{c}(\Lambda^{2}M)\oplus\ker_{L^{2}}(\Delta_{1}).\] According to the remarks above, one can also write \[\mathscr{G}=(dC^{\infty}_{c}(M)\oplus d(\ker_{0}(\Delta_{0})))\oplus d^{*}C^{\infty}_{c}(\Lambda^{2}M)\oplus\ker_{-n}(\Delta_{1}).\] On the other hand, the assumed modified \(L^{p}\) Hodge decomposition reads \[L^{p}(\Lambda^{1}M)=\operatorname{im}_{L^{p}}(d_{0})\oplus\operatorname{im}_{L^{p}}(d_{2}^{*})\oplus\ker_{-n}(\Delta_{1}).\] We first notice that in restriction to \(\mathscr{G}\), one has \[\Pi_{1,d}+\Pi_{d\ker_{0}}=\tilde{\Pi}^{p}_{1,d},\ \Pi_{1,d^{*}}=\tilde{\Pi}^{p}_{1,d^{*}},\ \Pi_{-n}=\tilde{\Pi}^{p}_{1,0}.\] Indeed, the only difference with Proposition 4.2 is that, according to Proposition 3.10, we have \[d(\ker_{0}(\Delta_{0}))\subset\operatorname{im}_{L^{p}}(d_{0}),\] so the modified \(L^{p}\) Hodge decomposition of \(\omega\in d(\ker_{0}(\Delta_{0}))\) reads \[\omega=\omega\oplus 0\oplus 0.\] This entails that \[\Pi_{1,d}+\Pi_{d\,\mathrm{ker}_{0}}=\tilde{\Pi}_{1,d}^{p},\] and the other equalities between Hodge projectors are easy to prove and left to the reader. The result now follows from Lemma A.1. Finally, we conclude this section with an interpolation result that will be useful later in order to prove the closedness of \(\mathrm{im}_{L^{p}}(d_{k-1})+\mathrm{im}_{L^{p}}(d_{k+1}^{*})\). One will see in the next section that on an ALE manifold, for all \(k\in\{1,\cdots,n-1\}\) one can find a finite dimensional space \(E_{k}\subset L^{n}(\Lambda^{k}M)\) such that the following _weak_ \(L^{n}\) Hodge decomposition holds: \[L^{n}(\Lambda^{k}M)=\overline{\mathrm{im}_{L^{n}}(d_{k-1})\oplus\mathrm{im}_{L^{n}}(d_{k+1}^{*})}^{L^{n}}\oplus E_{k}.\qquad\qquad(\mathrm{w}\mathscr{H}_{n})\] Moreover, \(E_{k}=\ker_{L^{2}}(\Delta_{k})\) if \(k\notin\{1,n-1\}\) or if \(M\) has only one end, while \(E_{k}=\ker_{-n}(\Delta_{k})\) if \(k\in\{1,n-1\}\) and \(M\) has at least two ends. Here, the term "weak" refers to the fact that one needs a closure on the exact/co-exact factor. Compare with the \(L^{p}\)-Hodge decomposition \((\mathscr{H}_{p})\) and the modified \(L^{p}\)-Hodge decomposition \((\hat{\mathscr{H}}_{p})\), which we may sometimes call _strong_ decompositions to emphasize the contrast with \((\mathrm{w}\mathscr{H}_{n})\). With this settled, we show: **Proposition 4.5**.: _Let \(M\) be an ALE manifold, \(1<p<n<q<+\infty\), \(k\in\{1,\cdots,n-1\}\). Assume that the weak \(L^{n}\)-Hodge decomposition \((\mathrm{w}\mathscr{H}_{n})\) holds, and moreover assume one of the following:_ _(a) the_ \(L^{p}\)_- and_ \(L^{q}\)_-Hodge decompositions hold;_ _(b)_ \(k\in\{1,n-1\}\)_,_ \(2\leq p<n\)_, the_ \(L^{p}\) _Hodge decomposition holds, and the_ \(L^{q}\) _modified Hodge decomposition holds._ _Then, the direct sum_ \[\mathrm{im}_{L^{n}}(d_{k-1})\oplus\mathrm{im}_{L^{n}}(d_{k+1}^{*})\] _is closed in \(L^{n}(\Lambda^{k}M)\), and the following (strong) \(L^{n}\)-Hodge decomposition holds:_ \[L^{n}(\Lambda^{k}M)=\mathrm{im}_{L^{n}}(d_{k-1})\oplus\mathrm{im}_{L^{n}}(d_{k+1}^{*})\oplus E_{k}.\] Proof.: We treat only the more complicated case (b) for \(k=1\). The case (b) for \(k=n-1\) follows by Hodge duality, and the other case (a) is similar and is left to the reader.
By assumption, we have the \(L^{p}\) Hodge decomposition, which can be written, using Proposition 2.3 and Corollary 3.8, \[L^{p}(\Lambda^{1}M) = \mathrm{im}_{L^{p}}(d_{0})\oplus\mathrm{im}_{L^{p}}(d_{2}^{*})\oplus\ker_{L^{p}}(\Delta_{1})\] \[= (\mathrm{im}_{L^{p}}(d_{0})\oplus d(\ker_{0}(\Delta_{0})))\oplus\mathrm{im}_{L^{p}}(d_{2}^{*})\oplus\ker_{-n}(\Delta_{1}).\] Also, by assumption, one has the modified \(L^{q}\) Hodge decomposition: \[L^{q}(\Lambda^{1}M)=\operatorname{im}_{L^{q}}(d_{0})\oplus\operatorname{im}_{L^{q}}(d_{2}^{*})\oplus\ker_{-n}(\Delta_{1}),\] and the weak \(L^{n}\) Hodge decomposition: \[L^{n}(\Lambda^{1}M)=\overline{\operatorname{im}_{L^{n}}(d_{0})\oplus\operatorname{im}_{L^{n}}(d_{2}^{*})}^{L^{n}}\oplus\ker_{-n}(\Delta_{1}).\] Recall also that according to Proposition 3.10, \[d(\ker_{0}(\Delta_{0}))\subset\operatorname{im}_{L^{n}}(d_{0}).\] Consider the \(L^{2}\) Hodge projector \[\Pi:=\Pi_{1,d}+\Pi_{d\ker_{0}}\] onto \[\operatorname{im}_{L^{2}}(d_{0})\oplus d(\ker_{0}(\Delta_{0})).\] According to Corollary 4.4, \(\Pi|_{L^{2}\cap L^{q}}\) extends uniquely to a bounded \(L^{q}\) projector, which we will denote \(\Pi^{q}\). Furthermore, according to Corollary 4.3, the \(L^{2}\) projector \(\Pi_{1,d}\), in restriction to \(L^{2}\cap L^{p}\), extends uniquely to a bounded \(L^{p}\) projector. Moreover, \(\Pi_{d\ker_{0}}\) is bounded on \(L^{p}\) if and only if \[d(\ker_{0}(\Delta_{0}))\subset L^{p}\cap L^{p^{\prime}},\] and the latter holds since \(d(\ker_{0}(\Delta_{0}))\subset\ker_{1-n}(\Delta_{1})\) (Lemma 2.5) and \(2\leq p<n\). So, \(\Pi|_{L^{2}\cap L^{p}}\) extends uniquely to a bounded projector \(\Pi^{p}\) on \(L^{p}\). If \(\omega\in L^{p}(\Lambda^{1}M)\cap L^{q}(\Lambda^{1}M)\), there exists a sequence \((\omega_{n})_{n\in\mathbb{N}}\) with \(\omega_{n}\in L^{2}\cap L^{p}\cap L^{q}\) such that \[L^{p}-\lim_{n\to\infty}\omega_{n}=\omega,\quad L^{q}-\lim_{n\to\infty}\omega_{n}=\omega.\] Indeed, just take a non-decreasing exhaustion \((\Omega_{n})_{n\in\mathbb{N}}\) of \(M\), and define \(\omega_{n}=\omega\cdot\mathbf{1}_{\Omega_{n}}\). Since \[\Pi^{p}|_{L^{2}\cap L^{p}\cap L^{q}}=\Pi^{q}|_{L^{2}\cap L^{p}\cap L^{q}}=\Pi|_{L^{2}\cap L^{p}\cap L^{q}},\] one has for every \(n\in\mathbb{N}\), \[\Pi^{p}(\omega_{n})=\Pi^{q}(\omega_{n}),\] and passing to the limit as \(n\to\infty\), one gets \[\Pi^{p}(\omega)=\Pi^{q}(\omega).\] Thus, \[\Pi^{p}|_{L^{p}\cap L^{q}}=\Pi^{q}|_{L^{p}\cap L^{q}},\] so \(\Pi\) extends uniquely to a bounded operator both on \(L^{p}\) and on \(L^{q}\). The Riesz-Thorin interpolation theorem now implies that \(\Pi\) extends uniquely to a bounded projector on \(L^{n}\). A completely similar interpolation argument shows that \(\Pi_{1,d^{*}}\) extends uniquely to a bounded projector on \(L^{n}\). We have the following lemma, whose proof is postponed to the end of the proof of the proposition: **Lemma 4.6**.: _The following identities hold:_ \[\operatorname{im}_{L^{n}}(\Pi)=\operatorname{im}_{L^{n}}(d_{0}),\quad\operatorname{im}_{L^{n}}(\Pi_{1,d^{*}})=\operatorname{im}_{L^{n}}(d_{2}^{*}),\] _and_ \[\operatorname{im}_{L^{n}}(d_{2}^{*})\subset\ker_{L^{n}}(\Pi),\quad\operatorname{im}_{L^{n}}(d_{0})\subset\ker_{L^{n}}(\Pi_{1,d^{*}}).\] Now, let \(\omega\in\overline{\operatorname{im}_{L^{n}}(d_{0})\oplus\operatorname{im}_{L^{n}}(d_{2}^{*})}^{L^{n}}\).
One finds a sequence \((\omega_{j})_{j\in\mathbb{N}}\subset\operatorname{im}_{L^{n}}(d_{0})\oplus \operatorname{im}_{L^{n}}(d_{2}^{*})\) and forms \(\eta_{j}\in\operatorname{im}_{L^{n}}(d_{0})\), \(\nu_{j}\in\operatorname{im}_{L^{n}}(d_{2}^{*})\) such that for all \(j\in\mathbb{N}\), \(\omega_{j}=\eta_{j}+\nu_{j}\) and \[||\omega_{j}-\omega||_{n}\to 0,\quad\text{as $j\to\infty$}.\] By Lemma 4.6, since the projectors \(\Pi\) and \(\Pi_{1,d^{*}}\) are bounded on \(L^{n}\), \[\eta_{j}=\Pi(\omega_{j})\underset{L^{n}}{\longrightarrow}\Pi(\omega)=:\eta\in \operatorname{im}_{L^{n}}(d_{0})\quad\text{as $j\to\infty$},\] and \[\nu_{j}=\Pi_{1,d^{*}}(\omega_{j})\underset{L^{n}}{\longrightarrow}\Pi_{1,d^{* }}(\omega)=:\nu\in\operatorname{im}_{L^{n}}(d_{2}^{*})\quad\text{as $j\to\infty$}.\] Passing to the limit in the identity \(\omega_{j}=\eta_{j}+\nu_{j}\) as \(j\to\infty\), one gets \[\omega=\eta+\nu\in\operatorname{im}_{L^{n}}(d_{0})\oplus\operatorname{im}_{L^ {n}}(d_{2}^{*}).\] Thus, \[\overline{\operatorname{im}_{L^{n}}(d_{0})\oplus\operatorname{im}_{L^{n}}(d_ {2}^{*})}^{L^{n}}=\operatorname{im}_{L^{n}}(d_{0})\oplus\operatorname{im}_{L^ {n}}(d_{2}^{*}),\] which shows that \(\operatorname{im}_{L^{n}}(d_{0})\oplus\operatorname{im}_{L^{n}}(d_{2}^{*})\) is closed. The strong \(L^{n}\) Hodge decomposition now follows directly from the weak one. Proof of Lemma 4.6:.: We first show that \(\operatorname{im}_{L^{n}}(\Pi)=\operatorname{im}_{L^{n}}(d_{0})\). Since the projector \(\Pi\) is bounded on \(L^{n}\), \(\operatorname{im}_{L^{n}}(\Pi)=\ker_{L^{n}}(I-\Pi)\) is a closed subspace of \(L^{n}\). Furthermore, since \(\Pi\) coincide with \(\Pi_{1,d}+\Pi_{d\ker_{0}}\) on \(L^{2}\cap L^{n}\), \[dC_{c}^{\infty}(M)\oplus d(\ker_{0}(\Delta_{0}))\subset\operatorname{im}_{L^{ n}}(\Pi),\] thus by taking the closure in \(L^{n}\) and using the fact that according to Proposition 3.10, \[d(\ker_{0}(\Delta_{0}))\subset\operatorname{im}_{L^{n}}(d_{0}),\] one concludes that \[\operatorname{im}_{L^{n}}(d_{0})\subset\operatorname{im}_{L^{n}}(\Pi).\] We now prove the converse inclusion. Denote \[E:=dC_{c}^{\infty}(M)\oplus d(\ker_{0}(\Delta_{0})),\quad F=d^{*}C_{c}^{\infty}( \Lambda^{2}M)\oplus\ker_{-n}(\Delta_{1}).\] By definition of \(\Pi\), \[\Pi(E\oplus F)=E\subset\operatorname{im}_{L^{n}}(d_{0}).\] Since by definition \(\operatorname{im}_{L^{n}}(d_{0})\) is closed in \(L^{n}\), by taking the closure in \(L^{n}\) of the above inclusion one gets \[\overline{\Pi(E\oplus F)}^{L^{n}}\subset\operatorname{im}_{L^{n}}(d_{0}).\] However, since \(\Pi\) is bounded, \[\Pi\left(\overline{E\oplus F}^{L^{n}}\right)\subset\overline{\Pi(E\oplus F)}^{ L^{n}},\] hence \[\Pi\left(\overline{E\oplus F}^{L^{n}}\right)\subset\operatorname{im}_{L^{n}}(d _{0}).\] But the weak \(L^{n}\) Hodge decomposition implies that \(L^{n}(\Lambda^{1}M)=\overline{E\oplus F}^{L^{n}}\), so the above inclusion yields \[\operatorname{im}_{L^{n}}(\Pi)\subset\operatorname{im}_{L^{n}}(d_{0}).\] Therefore, we get the equality \(\operatorname{im}_{L^{n}}(\Pi)=\operatorname{im}_{L^{n}}(d_{0})\) which ends the first part of the proof. The proof that \(\operatorname{im}_{L^{n}}(\Pi_{1,d^{*}})=\operatorname{im}_{L^{n}}(d_{2}^{*})\) is completely similar and is skipped. 
Concerning the last two inclusions: we have \[d_{0}C_{c}^{\infty}(M)\subset\ker_{L^{n}}(\Pi_{1,d^{*}}),\quad d_{2}^{*}C_{c}^{\infty}(\Lambda^{2}M)\subset\ker_{L^{n}}(\Pi),\] and since the two projectors \(\Pi\), \(\Pi_{1,d^{*}}\) are bounded on \(L^{n}\), their kernels are closed, so by taking the closure of the above inclusions, one gets \[\operatorname{im}_{L^{n}}(d_{2}^{*})\subset\ker_{L^{n}}(\Pi),\quad\operatorname{im}_{L^{n}}(d_{0})\subset\ker_{L^{n}}(\Pi_{1,d^{*}}),\] and this completes the proof of Lemma 4.6. ## 5. \(L^{p}\) cohomology and Hodge decomposition Recall that the reduced \(L^{p}\)-cohomology vector spaces are defined as \[H_{p}^{k}(M):=\frac{\ker_{L^{p}}(d_{k})}{\operatorname{im}_{L^{p}}(d_{k-1})},\] where \(k\in\{0,\cdots,n\}\) is an integer. In the following, we wish to compute these \(L^{p}\) cohomology spaces, and more precisely we wish to relate \(H_{p}^{k}(M)\) to \(L^{p}\)-harmonic \(k\)-forms. We will first establish _weak_ \(L^{p}\)-Hodge decompositions, where the term "weak" refers to the fact that the factor \(\operatorname{im}_{L^{p}}(d_{k-1})\oplus\operatorname{im}_{L^{p}}(d_{k+1}^{*})\) is replaced by \(\overline{\operatorname{im}_{L^{p}}(d_{k-1})\oplus\operatorname{im}_{L^{p}}(d_{k+1}^{*})}^{L^{p}}\). Then, we will show how to turn these into _strong_ \(L^{p}\)-Hodge decompositions, and study the consequences for the \(L^{p}\)-cohomology. First we recall some notation: for a subspace \(V\) of a Banach space \(X\), its _annihilator_ is defined by \[\operatorname{Ann}(V)=\{x^{*}\in X^{*}\mid x^{*}(v)=0\text{ for all }v\in V\}\subset X^{*}.\] If \(p\in(1,\infty)\), \(X\) is an \(L^{p}\) space, \(q=p^{\prime}\) is the conjugate exponent and \(V\subset X\) is a subspace, we use for the sake of clarity the notation \[\operatorname{Ann}_{L^{q}}(V):=\operatorname{Ann}(V)\subset L^{q}\] for the annihilator, in order to emphasise that it is a subspace of \(L^{q}\). If \(p,q\in(1,\infty)\) with \(\frac{1}{p}+\frac{1}{q}=1\), we clearly have \[\operatorname{Ann}_{L^{q}}(\operatorname{im}_{L^{p}}(d_{k-1}))=\ker_{L^{q}}(d_{k}^{*}),\qquad\operatorname{Ann}_{L^{q}}(\operatorname{im}_{L^{p}}(d_{k+1}^{*}))=\ker_{L^{q}}(d_{k}).\] In this section, we will prove both weak and strong \(L^{p}\) Hodge decompositions; the proofs will rely in particular on \(L^{q}-L^{p}\) duality. In order to prove weak \(L^{p}\) Hodge decompositions, one will often use the following well-known lemma: **Lemma 5.1**.: _Let \(\mathscr{B}\) be a Banach space, \(E\) and \(F\) two closed subspaces of \(\mathscr{B}\) such that \(E\cap F=\{0\}\) and \(F\) is finite dimensional. Then, the sum \(E\oplus F\) is closed._ We point out that in general, the result of the lemma is _false_ if \(F\) is not finite dimensional: for instance, in \(\ell^{2}\oplus\ell^{2}\), the closed subspaces \(E=\ell^{2}\oplus\{0\}\) and \(F=\{(x,Tx)\,;\,x\in\ell^{2}\}\), where \(T(x_{n})_{n}=(x_{n}/n)_{n}\), satisfy \(E\cap F=\{0\}\), yet \(E\oplus F=\ell^{2}\times\operatorname{im}(T)\) is dense but not closed. Since Lemma 5.1 is instrumental in the present article, for the sake of completeness we give a sketch of its proof. Proof of Lemma 5.1.: It is enough to prove that the projection \[\pi:E\oplus F\to F\] which is defined by \(\pi(z)=y\) for all \(z=x+y\in E\oplus F\), is continuous. 
Indeed, if it is the case, then \(\pi\) and \(id-\pi\) both extend to bounded projectors on \(\mathscr{B}=\overline{E\oplus F}\), which satisfy \(\operatorname{im}(\pi)=F\), \(\operatorname{im}(id-\pi)=E\) since \(E\) and \(F\) are closed; thus, if \(\{z_{n}\}_{n\in\mathbb{N}}\) is a sequence of \(E\oplus F\) which converges to \(z\), then \(\pi(z_{n})\) and \((id-\pi)(z_{n})\) converge respectively to \(e\in E\) and \(f\in F\), and passing to the limit in the equality \(z_{n}=\pi(z_{n})+(id-\pi)(z_{n})\), we have \(z=e+f\in E\oplus F\), which concludes the proof. To prove that \(\pi\) is continuous on \(E\oplus F\), we estimate its operator norm: \[|||\pi|||=\sup_{x+y\in E\oplus F\setminus\{0\}}\frac{||y||}{||x+y||},\] and by homogeneity and the symmetry \(y\mapsto-y\) (which preserves \(F\)), it is also equal to \[|||\pi|||=\sup_{x+y\in E\oplus F,\,||y||=1}\frac{1}{||x-y||}=\left(\inf_{x\in E,\,y\in F,\,||y||=1}||x-y||\right)^{-1}.\] The above infimum is precisely equal to the distance from \(E\) to the unit sphere in \(F\), which is compact since \(F\) is finite dimensional. But the distance between a compact set and a closed set that are disjoint being \(>0\), we conclude that \(|||\pi|||<+\infty\), and \(\pi\) is bounded. Let us now continue with a useful lemma: **Lemma 5.2**.: _Let \(p,q\in(1,\infty)\) such that \(\frac{1}{p}+\frac{1}{q}=1\). Then,_ \[\overline{\operatorname{im}_{L^{p}}(d_{k-1})+\operatorname{im}_{L^{p}}(d_{k+1 }^{*})}^{L^{p}}\cap\ker_{L^{p}}(\Delta_{k})\cap\ker_{L^{q}}(\Delta_{k})=\{0\} \tag{5.1}\] _Moreover, if \(\ker_{L^{p}}(\Delta_{k})\subset\ker_{L^{q}}(\Delta_{k})\), we have_ \[\operatorname{im}_{L^{p}}(d_{k-1})\cap\operatorname{im}_{L^{p}}(d_{k+1}^{*})= \{0\}\,. \tag{5.2}\] Proof.: Let \(\omega\in\ker_{L^{p}}(\Delta_{k})\cap\ker_{L^{q}}(\Delta_{k})\), and suppose that there exist sequences \(\alpha_{i}\in C_{c}^{\infty}(\Lambda^{k-1}M)\), \(\beta_{i}\in C_{c}^{\infty}(\Lambda^{k+1}M)\), \(i\in\mathbb{N}\) such that we have \(d\alpha_{i}+d^{*}\beta_{i}\to\omega\) in \(L^{p}\). By integration by parts and Proposition 2.3, \[(d\alpha_{i},\omega)_{L^{2}}=(\alpha_{i},d^{*}\omega)_{L^{2}}=0,\qquad(d^{*} \beta_{i},\omega)_{L^{2}}=(\beta_{i},d\omega)_{L^{2}}=0.\] Therefore, \[\|\omega\|_{L^{2}}^{2}=(\omega-d\alpha_{i}-d^{*}\beta_{i},\omega)_{L^{2}}\leq \|\omega-(d\alpha_{i}+d^{*}\beta_{i})\|_{L^{p}}\,\|\omega\|_{L^{q}}\to 0.\] Thus, \(\omega=0\), which proves (5.1). Concerning (5.2), we notice that we have the inclusions \(\operatorname{im}_{L^{p}}(d_{k+1}^{*})\subset\ker_{L^{p}}(d_{k})\) and \(\operatorname{im}_{L^{p}}(d_{k-1})\subset\ker_{L^{p}}(d_{k}^{*})\), so \[\operatorname{im}_{L^{p}}(d_{k-1})\cap\operatorname{im}_{L^{p}}(d_{k+1}^{*}) \subset\ker_{L^{p}}(\Delta_{k}),\] hence \[\operatorname{im}_{L^{p}}(d_{k-1})\cap\operatorname{im}_{L^{p}}(d_{k+1}^{*}) \subset(\operatorname{im}_{L^{p}}(d_{k-1})+\operatorname{im}_{L^{p}}(d_{k+1}^ {*}))\cap\ker_{L^{p}}(\Delta_{k})\cap\ker_{L^{q}}(\Delta_{k}),\] and the right hand side is \(\{0\}\) according to (5.1). The following theorem provides a weak Hodge-De Rham decomposition, for some values of \(p\) and \(k\): **Lemma 5.3**.: _Let \(p\in(1,\infty)\) and \(1\leq k\leq n-1\) be such that one of the following holds:_ * \(2\leq k\leq n-2\)_._ * \(k\in\{1,n-1\}\) _and_ \(p\in(\frac{n}{n-1},n)\)_._ * \(k\in\{1,n-1\}\)_,_ \(p\in(1,\infty)\)_. 
and_ \(M\) _has only one end._ _Then we have_ \[L^{p}(\Lambda^{k}M)=\overline{\operatorname{im}_{L^{p}}(d_{k-1})\oplus\operatorname{im}_{L^{p}}(d_{k+1}^{*})}^{L^{p}}\oplus\ker_{L^{p}}(\Delta_{k}). \tag{5.3}\] Proof.: By Proposition 2.3, Corollary 3.8 and Corollary 3.4, if one lets \(q=p^{\prime}\) be the conjugate exponent, we have \(\ker_{L^{p}}(\Delta_{k})=\ker_{L^{q}}(\Delta_{k})\) by the assumptions on \(p\), \(k\) and \(M\). By Lemma 5.2, the sum on the right hand side of (5.3) is indeed direct. Moreover, according to Lemma 5.1, since \(\ker_{L^{p}}(\Delta_{k})\) is finite dimensional, the direct sum of closed subspaces \(\overline{\operatorname{im}_{L^{p}}(d_{k-1})\oplus\operatorname{im}_{L^{p}}(d_{k+1}^{*})}^{L^{p}}\oplus\ker_{L^{p}}(\Delta_{k})\) is closed. Hence, in order to finish the proof, it suffices to show that the annihilator (in \(L^{q}\)) of the direct sum vanishes. We have \[\operatorname{Ann}_{L^{q}}(\overline{\operatorname{im}_{L^{p}}(d_{k-1})\oplus\operatorname{im}_{L^{p}}(d_{k+1}^{*})}^{L^{p}}\oplus\ker_{L^{p}}(\Delta_{k}))\] \[\qquad=\operatorname{Ann}_{L^{q}}(\operatorname{im}_{L^{p}}(d_{k-1})\oplus\operatorname{im}_{L^{p}}(d_{k+1}^{*})\oplus\ker_{L^{p}}(\Delta_{k}))\] \[\qquad=\ker_{L^{q}}(d_{k}^{*})\cap\ker_{L^{q}}(d_{k})\cap\operatorname{Ann}_{L^{q}}(\ker_{L^{p}}(\Delta_{k}))\] \[\qquad=\ker_{L^{q}}(\Delta_{k})\cap\operatorname{Ann}_{L^{q}}(\ker_{L^{p}}(\Delta_{k})).\] However, the right hand side is zero as \(\ker_{L^{p}}(\Delta_{k})=\ker_{L^{q}}(\Delta_{k})\). This finishes the proof of the lemma. Lemma 5.3 indicates that the hardest task for proving the (strong) \(L^{p}\) Hodge decomposition is to prove that the direct sum \(\operatorname{im}_{L^{p}}(d_{k-1})\oplus\operatorname{im}_{L^{p}}(d_{k+1}^{*})\) is closed. We now focus on one of the remaining cases: \(k\in\{1,n-1\}\), \(p\in[n,\infty)\) (and \(M\) has \(N\geq 2\) ends). Note that, as a consequence of Proposition 2.3, we have \[\ker_{L^{p}}(\Delta_{1}) =\ker_{1-n}(\Delta_{1}),\text{ if }p\in\left(\frac{n}{n-1},\infty\right), \tag{5.4}\] \[\ker_{L^{p}}(\Delta_{1}) =\ker_{-n}(\Delta_{1}),\text{ if }p\in\left(1,\frac{n}{n-1}\right].\] The same is true for \(\ker_{L^{p}}(\Delta_{n-1})\) by Hodge duality. Similarly to the previous lemma, one has: **Lemma 5.4**.: _Let \(p\in[n,\infty)\). Then we have, for \(k\in\{1,n-1\}\):_ \[L^{p}(\Lambda^{k}M)=\overline{\operatorname{im}_{L^{p}}(d_{k-1})+\operatorname{im}_{L^{p}}(d_{k+1}^{*})}^{L^{p}}\oplus\ker_{-n}(\Delta_{k}). \tag{5.5}\] Proof.: By Hodge duality, it suffices to look at the case \(k=1\). Let \(q\in(1,\frac{n}{n-1}]\) be such that \(1=\frac{1}{p}+\frac{1}{q}\). Then, \(\ker_{-n}(\Delta_{1})=\ker_{L^{q}}(\Delta_{1})=\ker_{L^{p}}(\Delta_{1})\cap\ker_{L^{q}}(\Delta_{1})\) and from Lemma 5.2, we get that \[\overline{\operatorname{im}_{L^{p}}(d_{0})+\operatorname{im}_{L^{p}}(d_{2}^{*})}^{L^{p}}\cap\ker_{-n}(\Delta_{1})=\{0\}\,,\] so that the sum on the right hand side of (5.5) is indeed direct; it is also closed, according to Lemma 5.1 since \(\ker_{-n}(\Delta_{1})\) is finite dimensional (cf point (e) in Proposition 2.3). 
According to Proposition 2.3 (a), the annihilator of the sum is given by \[\operatorname{Ann}_{L^{q}}(\overline{\operatorname{im}_{L^{p}}(d_{0})+\operatorname{im}_{L^{p}}(d_{2}^{*})}^{L^{p}}\oplus\ker_{-n}(\Delta_{1}))\] \[\qquad=\operatorname{Ann}_{L^{q}}((\operatorname{im}_{L^{p}}(d_{0})+\operatorname{im}_{L^{p}}(d_{2}^{*}))\oplus\ker_{-n}(\Delta_{1}))\] \[\qquad=\ker_{L^{q}}(d_{1})\cap\ker_{L^{q}}(d_{1}^{*})\cap\operatorname{Ann}_{L^{q}}(\ker_{-n}(\Delta_{1}))\] \[\qquad=\ker_{L^{q}}(\Delta_{1})\cap\operatorname{Ann}_{L^{q}}(\ker_{-n}(\Delta_{1}))\] \[\qquad=\ker_{-n}(\Delta_{1})\cap\operatorname{Ann}_{L^{q}}(\ker_{-n}(\Delta_{1}))=\{0\}\,.\] The latter implies that (5.5) holds. We now wish to prove that for \(k\in\{1,n-1\}\), the sum \(\operatorname{im}_{L^{p}}(d_{k-1})+\operatorname{im}_{L^{p}}(d_{k+1}^{*})\) is direct, so that (5.5) is a weak modified \(L^{p}\) Hodge decomposition. It is done in points (iii) and (iv) of the next proposition: **Proposition 5.5**.: _Let \(p\in[n,+\infty)\). Then, the following hold:_ * \(\ker_{1-n}(\Delta_{1})\cap\operatorname{im}_{L^{p}}(d_{2}^{*})\subset\ker_{-n}(\Delta_{1})\)_._ * \(\ker_{L^{p}}(\Delta_{1})\cap\operatorname{im}_{L^{p}}(d_{2}^{*})=\{0\}\)_._ * \(\operatorname{im}_{L^{p}}(d_{2}^{*})\cap\operatorname{im}_{L^{p}}(d_{0})=\{0\}\)_._ * \(\operatorname{im}_{L^{p}}(d_{n-2})\cap\operatorname{im}_{L^{p}}(d_{n}^{*})=\{0\}\)_._ Proof.: Let \(\omega\in\ker_{1-n}(\Delta_{1})\cap\operatorname{im}_{L^{p}}(d_{2}^{*})\) (in particular \(\omega\) is smooth). By Hodge duality, we have \(*\omega\in\operatorname{im}_{L^{p}}(d_{n-2})\), and according to Proposition 2.3, \(\omega\) is closed and co-closed, hence \(*\omega\) too. According to the proof of [13, Lemma 1.1], there exists \(\eta\in C^{\infty}(\Lambda^{n-2}M)\) such that \(*\omega=d\eta\). For the sake of completeness, let us detail this point. Recall the Poincare duality map for oriented manifolds: \[F\,:\,\,\,H^{k}(M)\times H^{n-k}_{c}(M) \to \mathbb{R}\] \[([\alpha],[\beta]) \mapsto \int_{M}\alpha\wedge\beta\] Here, by definition \[H^{j}(M)=\frac{\{\alpha\in C^{\infty}(\Lambda^{j}M)\,;\,d\alpha=0\}}{dC^{\infty}(\Lambda^{j-1}M)},\] is the usual cohomology space of \(M\), and \[H^{j}_{c}(M)=\frac{\{\alpha\in C^{\infty}_{c}(\Lambda^{j}M)\,;\,d\alpha=0\}}{dC^{\infty}_{c}(\Lambda^{j-1}M)},\] is the cohomology space with compact support. Poincare duality for non-compact manifolds (see [28, p.248] and [11, Section 1.1.2]) asserts that if \(M\) is oriented then \(F\) is well-defined and is a duality pairing between \(H^{k}(M)\) and \(H^{n-k}_{c}(M)\). By hypothesis, there is a sequence \((\eta_{j})_{j\in\mathbb{N}}\) of forms in \(C^{\infty}_{c}(\Lambda^{2}M)\) such that \(d^{*}\eta_{j}\) converges in \(L^{p}\) to \(\omega\). Let \(\beta\in C^{\infty}_{c}(\Lambda^{1}M)\) be such that \(d\beta=0\); then \[F(*\omega,\beta) = \int_{M}*\omega\wedge\beta\] \[= \langle\omega,\beta\rangle_{L^{2}}\] \[= \lim_{j\to\infty}\langle d^{*}\eta_{j},\beta\rangle_{L^{2}}\] \[= \lim_{j\to\infty}\langle\eta_{j},d\beta\rangle_{L^{2}}\] \[= 0,\] where we have used the fact that since \(p\geq 2\), \(L^{p}\) convergence on a compact set implies \(L^{2}\) convergence. Thus, one concludes that \(*\omega\) is zero in \(H^{n-1}(M)\), that is there exists \(\eta\in C^{\infty}(\Lambda^{n-2}M)\) such that \(*\omega=d\eta\). 
Let \(\Sigma^{n-1}\subset M\) be a _closed_, smooth hypersurface, then by Stokes' theorem \[\int_{\Sigma^{n-1}}*\omega=\int_{\Sigma^{n-1}}d\eta=\int_{\partial\Sigma^{n-1}}\eta=0.\] By Lemma 3.7, we can expand \(\omega\) at each end \(E_{i}\) as \[\omega=B_{i}r^{1-n}dr+\mathcal{O}_{\infty}(r^{1-n-\epsilon}).\] Thus, if we fix an end \(E_{i}\), and we take the hypersurface \[\Sigma^{n-1}=\phi_{i}^{-1}(S^{n-1}(0,R)/\Gamma_{i}),\] for some large enough \(R>0\) so that this is well-defined inside \(E_{i}\), then we get, using the asymptotics of the metric \(g\) and of the form \(\omega\), \[0=\int_{\Sigma^{n-1}}*\omega=B_{i}\cdot\operatorname{Vol}(\mathbf{S}^{n-1}/\Gamma_{i})+O(R^{-\epsilon}).\] Consequently, by letting \(R\to\infty\), one concludes that \(B_{i}=0\) for every \(i\in\{1,\cdots,N\}\), and therefore \(\omega\in\ker_{1-n-\epsilon}(\Delta_{1})\). According to Corollary 2.4, we can conclude that \(\omega\in\ker_{-n}(\Delta_{1})\). Thus, part (i) is proved. In order to prove part (ii), we first notice that (i) implies that \[\ker_{1-n}(\Delta_{1})\cap\operatorname{im}_{L^{p}}(d_{2}^{*})\subset\operatorname{im}_{L^{p}}(d_{2}^{*})\cap\ker_{L^{p}}(\Delta_{1})\cap\ker_{L^{q}}(\Delta_{1}),\] with \(q\) the conjugate exponent of \(p\). But according to Lemma 5.2, the intersection on the right hand side is \(\{0\}\), and part (ii) of the proposition follows. Concerning point (iii), we clearly have \[\operatorname{im}_{L^{p}}(d_{2}^{*})\cap\operatorname{im}_{L^{p}}(d_{0})\subset\ker_{L^{p}}(\Delta_{1})\cap\operatorname{im}_{L^{p}}(d_{2}^{*}),\] and the right hand side is zero by part (ii). Finally, for point (iv), we rely on (iii) and Hodge duality: indeed, using that \(d^{*}=\pm*d*\) and \(*^{2}=\pm id\) (where the signs depend on the degree of the form), one easily sees that \[*\operatorname{im}_{L^{p}}(d_{0})=\operatorname{im}_{L^{p}}(d_{n}^{*}),\quad*\operatorname{im}_{L^{p}}(d_{2}^{*})=\operatorname{im}_{L^{p}}(d_{n-2}),\] and since the Hodge star is an isometry, we see that (iii) is equivalent to (iv). Note that, according to Corollary 3.9, if \(M\) has only one end and \(k\in\{1,n-1\}\) then \[\ker_{-n}(\Delta_{k})=\ker_{L^{p}}(\Delta_{k})\] for any \(p\in(1,\infty)\). According to Lemma 5.2, this implies \[\operatorname{im}_{L^{p}}(d_{0})\cap\operatorname{im}_{L^{p}}(d_{2}^{*})=\{0\}\] for \(p>\frac{n}{n-1}\). And moreover, by Corollaries 3.4 and 3.8, if \(M\) has at least two ends and \(p\geq n\), then \[\ker_{-n}(\Delta_{k})\subsetneqq\ker_{L^{p}}(\Delta_{k}),\] so the weak modified \(L^{p}\) Hodge decomposition is not a weak \(L^{p}\) Hodge decomposition. 
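To keep a concrete picture in mind, here is a heuristic illustration of this phenomenon (it is not used in the sequel). Suppose that \(M\) has \(N\geq 2\) ends and let \(u\in\ker_{0}(\Delta_{0})\) be a non-constant bounded harmonic function. According to Lemma 2.5, the \(1\)-form \(du\) belongs to \(\ker_{1-n}(\Delta)\), so that, roughly, \[du=\mathcal{O}_{\infty}(r^{1-n}),\qquad\text{hence}\qquad du\in L^{p}(\Lambda^{1}M)\ \text{ for every }\ p>\frac{n}{n-1},\] while \(du\) does not, in general, decay like \(r^{-n}\), and hence does not belong to \(\ker_{-n}(\Delta_{1})\). By Corollaries 3.4 and 3.8, the \((N-1)\)-dimensional space \(d(\ker_{0}(\Delta_{0}))\) spanned by such differentials accounts exactly for the gap between \(\ker_{-n}(\Delta_{1})\) and \(\ker_{L^{p}}(\Delta_{1})=\ker_{1-n}(\Delta_{1})\) when \(p\geq n\) (and, dually, for the corresponding gap in degree \(n-1\)). 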
Therefore, we can summarize the results of Lemmas 5.3 and 5.4 and Proposition 5.5 as follows: **Corollary 5.6**.: _Let \(M\) be an ALE manifold, then, the weak \(L^{p}\)-Hodge decomposition_ \[L^{p}(\Lambda^{k}M)=\overline{\operatorname{im}_{L^{p}}(d_{k-1})\oplus\operatorname{im}_{L^{p}}(d_{k+1}^{*})}^{L^{p}}\oplus\ker_{L^{p}}(\Delta_{k})\] _for forms of degree \(k\) on \(M\) holds if either_ * \(p\in(1,\infty)\)_,_ \(k\in\{2,\cdots,n-2\}\)_,_ * _or_ \(p\in(\frac{n}{n-1},n)\)_,_ \(k\in\{1,n-1\}\)_,_ * _or_ \(p\in[n,\infty)\)_,_ \(k\in\{1,n-1\}\) _and_ \(M\) _has only one end._ _Moreover, if \(p\in[n,\infty)\), \(k\in\{1,n-1\}\), and \(M\) has at least two ends, then the weak modified \(L^{p}\)-Hodge decomposition_ \[L^{p}(\Lambda^{k}M)=\overline{\operatorname{im}_{L^{p}}(d_{k-1})\oplus\operatorname{im}_{L^{p}}(d_{k+1}^{*})}^{L^{p}}\oplus\ker_{-n}(\Delta_{k})\] _holds, while the weak \(L^{p}\)-Hodge decomposition does not._ Note that the case \(k\in\{1,n-1\}\), \(p\in(1,\frac{n}{n-1}]\) is not covered by the above corollary. The next result, which is key, will allow us to get corresponding _strong_ Hodge decompositions from the weak ones: **Theorem 5.7**.: _For every \(p\in(1,\infty)\) and \(k\in\{1,\cdots,n-1\}\), the space \(\operatorname{im}_{L^{p}}(d_{k-1})+\operatorname{im}_{L^{p}}(d_{k+1}^{*})\) is closed in \(L^{p}(\Lambda^{k}M)\)._ Proof.: The proof is split into two parts. First, we prove the result for \(p\neq n\). In this case, the result follows from: **Proposition 5.8**.: _We have, for \(p\in(1,\infty)\) with \(p\neq n\), and \(q=p^{\prime}\) the conjugate exponent, that_ \[\operatorname{Ann}_{L^{p}}(\ker_{L^{q}}(\Delta_{k}))=\operatorname{im}_{L^{p}}(d_{k-1})+\operatorname{im}_{L^{p}}(d_{k+1}^{*}).\] The proof of Proposition 5.8 (which is new) uses the theory of weighted Sobolev spaces; in order not to disturb the flow of the presentation, we chose to postpone it to Appendix B, where the theory of weighted Sobolev spaces on ALE manifolds will also be briefly recalled for the convenience of the reader. Since an annihilator is always closed, the result follows in the case \(p\neq n\). Let us now prove the result for \(p=n\). Note that Proposition 5.8 implies strong \(L^{p}\) Hodge decompositions and modified Hodge decompositions in any case of Corollary 5.6 if \(p\neq n\). Using Proposition 4.5 with \(2\leq p<n<q<\infty\), one concludes that \[\operatorname{im}_{L^{n}}(d_{k-1})+\operatorname{im}_{L^{n}}(d_{k+1}^{*})\] is also closed in \(L^{n}\), for every \(k\in\{1,\cdots,n-1\}\). This concludes the proof. **Corollary 5.9**.: _Let \(M\) be an ALE manifold, then, the strong \(L^{p}\) Hodge decomposition for an ALE manifold \((\mathscr{H}_{p})\) holds if either_ 1. \(p\in(1,\infty)\)_,_ \(k\in\{2,\cdots,n-2\}\)_,_ 2. _or_ \(p\in(\frac{n}{n-1},n)\)_,_ \(k\in\{1,n-1\}\)_,_ 3. _or_ \(p\in(1,\infty)\)_,_ \(k\in\{1,n-1\}\) _and_ \(M\) _has only one end._ _Moreover, if \(k\in\{1,n-1\}\), \(p\in[n,\infty)\), and \(M\) has at least two ends, then the strong modified \(L^{p}\) Hodge decomposition \((\hat{\mathscr{H}}_{p})\) holds._ From this result, one can already compute the \(L^{p}\) cohomology spaces in most of the cases: **Theorem 5.10**.: _Let \(M\) be ALE, and let \(p\in(1,\infty)\) and \(k\in\{1,\cdots,n-1\}\) such that one of the following holds:_ 1. \(p\in(1,\infty)\)_,_ \(k\in\{2,\cdots,n-2\}\)_,_ 2. _or_ \(p\in(\frac{n}{n-1},n)\)_,_ \(k\in\{1,n-1\}\)_,_ 3. 
_or_ \(p\in[n,\infty)\)_,_ \(k\in\{1,n-1\}\) _and_ \(M\) _has only one end._ _Then,_ \[H_{p}^{k}(M)\cong\mathcal{H}_{k}(M),\] _(recall that this latter space is by definition \(\ker_{L^{2}}(\Delta_{k})\))._ Proof.: By the assumptions on \(p\) and \(k\), \[\mathcal{H}_{k}(M)=\ker_{L^{2}}(\Delta_{k})=\ker_{L^{p}}(\Delta_{k}).\] Thus by definition of the \(L^{p}\)-cohomology, it suffices to show \[\ker_{L^{p}}(d_{k})=\operatorname{im}_{L^{p}}(d_{k-1})\oplus\ker_{L^{p}}( \Delta_{k}). \tag{5.6}\] We clearly have \(\operatorname{im}_{L^{p}}(d_{k-1})\oplus\ker_{L^{p}}(\Delta_{k})\subset\ker_ {L^{p}}(d_{k})\). By Theorem 5.7 and (5.3), we have \[L^{p}(\Lambda^{k}M)=\operatorname{im}_{L^{p}}(d_{k-1})\oplus\operatorname{im }_{L^{p}}(d_{k+1}^{*})\oplus\ker_{L^{p}}(\Delta_{k}),\] which, intersected with \(\ker_{L^{p}}(d_{k})\) yields (5.6), provided that \(\ker_{L^{p}}(d_{k})\cap\operatorname{im}_{L^{p}}(d_{k+1}^{*})=\{0\}\). To show the latter, observe that \[\ker_{L^{p}}(d_{k})\cap\operatorname{im}_{L^{p}}(d_{k+1}^{*})\subset\ker_{L^{ p}}(d_{k})\cap\ker_{L^{p}}(d_{k}^{*})\subset\ker_{L^{p}}(\Delta_{k}).\] But again by the assumptions on \(p\) and \(k\), \(\ker_{L^{p}}(\Delta_{k})=\ker_{L^{q}}(\Delta_{k})\), where \(q\) is the conjugate Holder exponent. Thus, \[\ker_{L^{p}}(d_{k})\cap\operatorname{im}_{L^{p}}(d_{k+1}^{*})=\ker_{L^{p}}(d_{k })\cap\operatorname{im}_{L^{p}}(d_{k+1}^{*})\cap\ker_{L^{q}}(\Delta_{k})\cap \ker_{L^{p}}(\Delta_{k}),\] which is \(\{0\}\) by (5.1). This finishes the proof of the theorem. The rest of the section is devoted to the computation of \(H_{p}^{k}(M)\) and the proof of the \(L^{p}\) Hodge decompositions in the remaining cases. Concerning the cohomology spaces, by Hodge duality one sees that it is enough to focus on the case \(k=1\). Indeed, one has the following proposition: **Proposition 5.11**.: _Let \(p\in(1,\infty)\), \(q=p^{\prime}\) the conjugate exponent, and \(k\in\{0,\cdots,n\}\). Then,_ \[H_{p}^{k}(M)\simeq H_{q}^{n-k}(M).\] Proof.: Using the fact that \(d^{*}=\pm*d*\) and \(*^{2}=\pm\mathrm{id}\) (where the signs depend on the degree of the forms), one finds that \[*\ker_{L^{p}}(d_{k})=\ker_{L^{p}}(d_{n-k}^{*}),\] and \[*\operatorname{im}_{L^{p}}(d_{k-1})=\operatorname{im}_{L^{p}}(d_{n-k+1}^{*}).\] Therefore, since the Hodge star is an isomorphism, \[H_{p}^{k}(M)\simeq\frac{\ker_{L^{p}}(d_{n-k}^{*})}{\operatorname{im}_{L^{p}}( d_{n-k+1}^{*})}.\] Using \(L^{p}-L^{q}\) duality, one has \[\frac{\ker_{L^{p}}(d_{n-k}^{*})}{\operatorname{im}_{L^{p}}(d_{n-k+1}^{*})}= \frac{\operatorname{Ann}_{L^{q}}\left(\operatorname{im}_{L^{p}}(d_{n-k+1}^{* })\right)}{\operatorname{Ann}_{L^{q}}\left(\ker_{L^{p}}(d_{n-k}^{*})\right)},\] and the result now follows from the two following facts: \[\operatorname{Ann}_{L^{q}}\left(\operatorname{im}_{L^{p}}(d_{n-k+1}^{*}) \right)=\ker_{L^{q}}(d_{n-k}),\] and \[\operatorname{Ann}_{L^{q}}\left(\ker_{L^{p}}(d_{n-k}^{*})\right)=\operatorname {Ann}_{L^{q}}\left(\operatorname{Ann}_{L^{p}}\left(\operatorname{im}_{L^{q}} (d_{n-k-1})\right)\right)=\operatorname{im}_{L^{q}}(d_{n-k-1}),\] where for the last equality we have used the fact that \[\operatorname{Ann}_{L^{q}}\left(\operatorname{Ann}_{L^{p}}(V)\right)=V,\] for a closed subspace \(V\subset L^{q}\). Let \[Z:=\operatorname{im}_{L^{p}}(d_{0})\cap\operatorname{im}_{L^{p}}(d_{2}^{*})\subset \ker_{L^{p}}(\Delta_{1})\] Furthermore, we let 1. \(X_{1}\) be a complement of \(Z\) in \(\operatorname{im}_{L^{p}}(d_{0})\cap\ker_{L^{p}}(\Delta_{1})\), 2. 
\(X_{2}\) be a complement of \(Z\oplus X_{1}\) in \(\operatorname{im}_{L^{p}}(d_{0})\), 3. \(Y_{1}\) be a complement of \(Z\) in \(\operatorname{im}_{L^{p}}(d_{2}^{*})\cap\ker_{L^{p}}(\Delta_{1})\) and 4. \(Y_{2}\) be a complement of \(Z\oplus Y_{1}\) in \(\operatorname{im}_{L^{p}}(d_{2}^{*})\). Here, by _complement_ of a closed subspace \(V\subset E\) where \(E\) is a Banach space, we mean a _closed_ vector subspace \(W\subset E\) such that \(V\oplus W=E\). Recall that finite-dimensional subspaces of Banach spaces can always be complemented. Therefore, since \(\ker_{L^{p}}(\Delta_{1})\) is finite-dimensional, all the above complements do actually exist. **Proposition 5.12**.: _Let \(p\in[n,\infty)\) and let the vector spaces \(X_{i},Y_{i},Z\subset L^{p}(\Lambda^{1}M)\), \(i=1,2\), be defined as above. Then we have_ \[L^{p}(\Lambda^{1}M) =X_{2}\oplus X_{1}\oplus Z\oplus Y_{1}\oplus Y_{2}\oplus\ker_{-n}(\Delta_{1}), \tag{5.7}\] \[\operatorname{im}_{L^{p}}(d_{0}) =X_{2}\oplus X_{1}\oplus Z,\] (5.8) \[\operatorname{im}_{L^{p}}(d_{2}^{*}) =Z\oplus Y_{1}\oplus Y_{2},\] (5.9) \[\ker_{L^{p}}(d_{1}) =X_{2}\oplus X_{1}\oplus Z\oplus Y_{1}\oplus\ker_{-n}(\Delta_{1}),\] (5.10) \[\ker_{L^{p}}(d_{1}^{*}) =X_{1}\oplus Z\oplus Y_{1}\oplus Y_{2}\oplus\ker_{-n}(\Delta_{1}),\] (5.11) \[\ker_{1-n}(\Delta_{1}) =X_{1}\oplus Z\oplus Y_{1}\oplus\ker_{-n}(\Delta_{1}). \tag{5.12}\] Proof.: The equations (5.8) and (5.9) follow directly from construction. Then, (5.7) follows from the strong Hodge decompositions in Corollary 5.9. Now we are going to show (5.10). First, notice that if \(q=p^{\prime}\) is the conjugate exponent, then \(\ker_{L^{q}}(\Delta_{1})=\ker_{-n}(\Delta_{1})\), so \[\ker_{L^{p}}(\Delta_{1})\cap\ker_{L^{q}}(\Delta_{1})=\ker_{-n}(\Delta_{1}).\] It then follows from Lemma 5.2 that \[Z\cap\ker_{-n}(\Delta_{1})=\{0\}.\] Next, note that \[X_{2}\subset\operatorname{im}_{L^{p}}(d_{0})\subset\ker_{L^{p}}(d_{1}),\] \[X_{1}\oplus Z\oplus Y_{1}\oplus\ker_{-n}(\Delta_{1})\subset\ker_{L^{p}}(\Delta_{1})\subset\ker_{L^{p}}(d_{1})\] (according to (a) in Proposition 2.3). Because \(Y_{2}\subset\operatorname{im}_{L^{p}}(d_{2}^{*})\subset\ker_{L^{p}}(d_{1}^{*})\), we have \[Y_{2}\cap\ker_{L^{p}}(d_{1})=Y_{2}\cap\ker_{L^{p}}(d_{1})\cap\ker_{L^{p}}(d_{1}^{*})=Y_{2}\cap\ker_{L^{p}}(\Delta_{1})=\{0\},\] because \(Y_{2}\) complements \(Z\oplus Y_{1}=\operatorname{im}_{L^{p}}(d_{2}^{*})\cap\ker_{L^{p}}(\Delta_{1})\) in \(\operatorname{im}_{L^{p}}(d_{2}^{*})\). Therefore, we get (5.10) from intersecting \(\ker_{L^{p}}(d_{1})\) with (5.7). The proof of (5.11) is completely analogous. It remains to show (5.12). First, we have \[X_{1}\oplus Z\oplus Y_{1}\oplus\ker_{-n}(\Delta_{1})\subset\ker_{1-n}(\Delta_{1})\] by construction and since \(\ker_{1-n}(\Delta_{1})=\ker_{L^{p}}(\Delta_{1})\). To show equality, it suffices by (5.7) to show \[(X_{2}\oplus Y_{2})\cap\ker_{1-n}(\Delta_{1})=\{0\}.\] From Proposition 2.3, we obtain \[(X_{2}\oplus Y_{2})\cap\ker_{1-n}(\Delta_{1}) =(X_{2}\oplus Y_{2})\cap\ker_{L^{p}}(\Delta_{1})\] \[=(X_{2}\oplus Y_{2})\cap\ker_{L^{p}}(d_{1})\cap\ker_{L^{p}}(d_{1}^{*})\] and (5.7), (5.10) and (5.11) show that the intersection on the right hand side is the zero space. This finishes the proof. This proposition allows us to make the remaining cohomology spaces more explicit: **Corollary 5.13**.: _Let \(p\in[n,\infty)\) and \(q\in(1,\frac{n}{n-1}]\) such that \(1=\frac{1}{p}+\frac{1}{q}\). 
Let \(X_{i}\) and \(Y_{i}\), \(i=1,2\) be defined as in Proposition 5.12, then we have identifications_ \[H^{1}_{p}(M)\cong Y_{1}\oplus\ker_{-n}(\Delta_{1}) \cong H^{n-1}_{q}(M),\] \[H^{1}_{q}(M)\cong X_{1}\oplus\ker_{-n}(\Delta_{1}) \cong H^{n-1}_{p}(M).\] Proof.: We treat only the case \(k=1\), the case \(k=n-1\) following from Proposition 5.11. The description of \(H^{1}_{p}(M)\) follows from (5.8) and (5.10). For \(H^{1}_{q}(M)\), recall that \[\ker_{L^{q}}(d_{1})=\operatorname{Ann}_{L^{q}}(\operatorname{im}_{L^{p}}(d_{2}^{*})),\qquad\operatorname{im}_{L^{q}}(d_{0})=\operatorname{Ann}_{L^{q}}(\ker_{L^{p}}(d_{1}^{*})),\] which allows us to identify \[H^{1}_{q}(M)=\frac{\ker_{L^{q}}(d_{1})}{\operatorname{im}_{L^{q}}(d_{0})}=\frac{\operatorname{Ann}_{L^{q}}(\operatorname{im}_{L^{p}}(d_{2}^{*}))}{\operatorname{Ann}_{L^{q}}(\ker_{L^{p}}(d_{1}^{*}))}\cong\left(\frac{\ker_{L^{p}}(d_{1}^{*})}{\operatorname{im}_{L^{p}}(d_{2}^{*})}\right)^{*},\] where \({}^{*}\) denotes the dual space. From (5.9) and (5.11), we get \[\frac{\ker_{L^{p}}(d_{1}^{*})}{\operatorname{im}_{L^{p}}(d_{2}^{*})}\cong X_{1}\oplus\ker_{-n}(\Delta_{1}).\] According to Proposition 4.2, \(X_{1}\oplus\ker_{-n}(\Delta_{1})\subset\ker_{1-n}(\Delta_{1})\) is finite dimensional, hence isomorphic to its dual space. This finishes the proof. Let us now analyse in more detail the spaces \(X_{i},Y_{i},Z\) which appeared in Proposition 5.12. **Proposition 5.14**.: _Let \(p\in[n,+\infty)\). Then, the following hold:_ * \(Z=\{0\}\)_._ * \(Y_{1}=\{0\}\)_._ * \(X_{1}\) _is given by the following equalities:_ \[X_{1}=\operatorname{im}_{L^{p}}(d_{0})\cap\ker_{1-n}(\Delta_{1})=d(\ker_{0}(\Delta_{0})).\] Proof.: The fact that \(Z=\{0\}\) follows immediately from point (iii) in Proposition 5.5. Concerning (ii), by intersecting (5.9) and (5.12), we get \[\ker_{L^{p}}(\Delta_{1})\cap\operatorname{im}_{L^{p}}(d_{2}^{*})=\ker_{1-n}(\Delta_{1})\cap\operatorname{im}_{L^{p}}(d_{2}^{*})=Z\oplus Y_{1}=Y_{1}.\] However, point (ii) in Proposition 5.5 states that \[\ker_{L^{p}}(\Delta_{1})\cap\operatorname{im}_{L^{p}}(d_{2}^{*})=\{0\},\] hence \(Y_{1}=\{0\}\). Let us now prove (iii). The first equality in (iii) follows from intersecting (5.8) and (5.12) and using that \(Y_{1}=\{0\}\) and \(Z=\{0\}\). Additionally, (5.12) and the conditions \(Y_{1}=\{0\}\) and \(Z=\{0\}\) directly yield \[\ker_{1-n}(\Delta_{1})=X_{1}\oplus\ker_{-n}(\Delta_{1}).\] On the other hand, we know from Corollary 3.8 that \[\ker_{1-n}(\Delta_{1})=d(\ker_{0}(\Delta_{0}))\oplus\ker_{-n}(\Delta_{1}).\] Therefore, it suffices to show the inclusion \[d(\ker_{0}(\Delta_{0}))\subset X_{1}=\operatorname{im}_{L^{p}}(d_{0})\cap\ker_{1-n}(\Delta_{1}),\] and this follows from Proposition 3.10. We are now ready to state and prove the result concerning the remaining \(L^{p}\) cohomology spaces: **Theorem 5.15**.: _Let \(p\in[n,+\infty)\) and let \(q\in(1,\frac{n}{n-1}]\) be the conjugate exponent. Then,_ (i) \[H^{1}_{p}(M)\cong\ker_{-n}(\Delta_{1})\cong H^{n-1}_{q}(M),\] (ii) \[H^{1}_{q}(M)\cong\ker_{1-n}(\Delta_{1})\cong H^{n-1}_{p}(M).\] Proof.: As before, we only prove the case \(k=1\). By Corollary 5.13, we have \[H^{1}_{p}(M)\cong Y_{1}\oplus\ker_{-n}(\Delta_{1}),\] but \(Y_{1}=\{0\}\) by Proposition 5.5. This proves (i). 
Again by Corollary 5.13, we have \[H^{1}_{q}(M)\cong X_{1}\oplus\ker_{-n}(\Delta_{1}).\] Combining this with Proposition 5.14 and Corollary 3.8 directly yields \[X_{1}\oplus\ker_{-n}(\Delta_{1})=d(\ker_{0}(\Delta_{0}))\oplus\ker_{-n}(\Delta_{1})=\ker_{1-n}(\Delta_{1}),\] which shows (ii). This completes the study of \(L^{p}\) cohomology spaces. It remains to look at the \(L^{q}\) Hodge decomposition for \(q\in(1,\frac{n}{n-1}]\). In this respect, we can show: **Proposition 5.16**.: _Let \(k\in\{1,n-1\}\) and \(q\in(1,\frac{n}{n-1}]\). Then we have direct sums_ \[\operatorname{im}_{L^{q}}(d_{2}^{*})\oplus\operatorname{im}_{L^{q}}(d_{0})\oplus\ker_{L^{q}}(\Delta_{1})\subset L^{q}(\Lambda^{1}M)\] _and_ \[\operatorname{im}_{L^{q}}(d_{n}^{*})\oplus\operatorname{im}_{L^{q}}(d_{n-2})\oplus\ker_{L^{q}}(\Delta_{n-1})\subset L^{q}(\Lambda^{n-1}M)\] _which are closed subspaces of codimension \(N-1\), \(N\) being the number of ends._ Proof.: As usual, it is enough to prove the result for \(k=1\). We have \(\ker_{-n}(\Delta_{1})=\ker_{L^{q}}(\Delta_{1})\subset\ker_{L^{p}}(\Delta_{1})=\ker_{1-n}(\Delta_{1})\), where \(p\in[n,\infty)\) is the conjugate exponent. Therefore, the sum on the left hand side is direct by Lemma 5.2. According to Theorem 5.7 and Lemma 5.1, since \(\ker_{L^{q}}(\Delta_{1})\) is finite dimensional, this direct sum is closed. Its codimension equals the dimension of its annihilator in \(L^{p}\), which we compute in the remainder of this proof. We have \[\operatorname{Ann}_{L^{p}} (\operatorname{im}_{L^{q}}(d_{2}^{*})\oplus\operatorname{im}_{L^{q}}(d_{0})\oplus\ker_{-n}(\Delta_{1}))\] \[=\ker_{L^{p}}(d_{1})\cap\ker_{L^{p}}(d_{1}^{*})\cap\operatorname{Ann}_{L^{p}}(\ker_{-n}(\Delta_{1}))\] \[=\ker_{L^{p}}(\Delta_{1})\cap\operatorname{Ann}_{L^{p}}(\ker_{-n}(\Delta_{1}))\] \[=\ker_{1-n}(\Delta_{1})\cap\operatorname{Ann}_{L^{p}}(\ker_{-n}(\Delta_{1})).\] Because \(L^{p}\) and \(L^{q}\) are dual by the \(L^{2}\)-pairing, the \(L^{2}\)-orthogonal decomposition in Corollary 3.8 directly implies \[\ker_{1-n}(\Delta_{1})\cap\operatorname{Ann}_{L^{p}}(\ker_{-n}(\Delta_{1}))=d(\ker_{0}(\Delta_{0})).\] By Corollary 3.4, the space on the right hand side is \((N-1)\)-dimensional, which concludes the proof. Finally, we can conclude this section with the proof of two main results of the present article, Theorem 1.6 and Theorem 1.1. Proof of Theorem 1.6.: The cases \(2\leq k\leq n-2\), or \(k\in\{1,n-1\}\) and \(\frac{n}{n-1}<p<n\), or \(k\in\{1,n-1\}\), \(p\in(1,\infty)\) and \(M\) has only one end, or \(k\in\{1,n-1\}\), \(p\geq n\) and \(M\) has at least two ends follow from Corollary 5.9. The case \(k\in\{1,n-1\}\), \(p\in(1,\frac{n}{n-1}]\) and \(M\) has at least two ends follows from Proposition 5.16. It remains to see the cases \(k=0\) and \(k=n\). One can obtain the \(k=n\) case from the \(k=0\) case by Hodge duality, so let us prove the latter. For \(k=0\), one wishes to show that \[L^{p}(M)=\operatorname{im}_{L^{p}}(d_{1}^{*}), \tag{5.13}\] since it is well-known that \(\ker_{L^{p}}(\Delta_{0})=\{0\}\) on any complete manifold. Let \(q\) be the conjugate exponent to \(p\). Then we have \[\operatorname{Ann}_{L^{q}}(\operatorname{im}_{L^{p}}(d_{1}^{*}))=\ker_{L^{q}}(d_{0}).\] However if \(f\in\ker_{L^{q}}(d_{0})\), then first \(f\) is constant (since \(M\) is connected), and because \(f\in L^{q}(M)\), \(f\) must vanish identically. Thus, \[\operatorname{Ann}_{L^{q}}(\operatorname{im}_{L^{p}}(d_{1}^{*}))=\{0\},\] and (5.13) follows by duality. This concludes the proof. 
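For the reader's convenience, let us spell out the duality step used at the end of the above proof; this is a standard Hahn-Banach argument, recorded here only as a reminder. Under the identification \((L^{p})^{*}=L^{q}\) given by the \(L^{2}\)-pairing, any closed subspace \(V\subset L^{p}(\Lambda^{k}M)\) satisfies \[V=\left\{\omega\in L^{p}(\Lambda^{k}M)\,;\,(\omega,\eta)_{L^{2}}=0\ \text{for all}\ \eta\in\operatorname{Ann}_{L^{q}}(V)\right\}.\] Applying this to \(V=\operatorname{im}_{L^{p}}(d_{1}^{*})\), which is closed by definition, the vanishing of \(\operatorname{Ann}_{L^{q}}(\operatorname{im}_{L^{p}}(d_{1}^{*}))\) forces \(V=L^{p}(M)\), which is exactly (5.13). 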
_Proof of Theorem 1.1:_ According to Theorem 5.10, if either \(k\notin\{1,n-1\}\) or \(p\in\left(\frac{n}{n-1},n\right)\), then \(H_{p}^{k}(M)\simeq\mathcal{H}_{k}(M)\simeq H_{2}^{k}(M)\). Let us now assume that \(k=1\); then, if \(p\leq\frac{n}{n-1}\), part (ii) of Theorem 5.15 yields \[H_{p}^{1}(M)\simeq\ker_{1-n}(\Delta_{1})=\ker_{L^{2}}(\Delta_{1})=\mathcal{H}_{1}(M).\] Now, if \(p\geq n\), then according to Theorem 5.15, \[H_{p}^{1}(M)\simeq\ker_{-n}(\Delta_{1}),\] which has codimension \(N-1\) in \(\ker_{1-n}(\Delta_{1})\) by Corollaries 3.4 and 3.8. According to Theorem 5.15, if \(\frac{1}{p}+\frac{1}{q}=1\), then \[H_{p}^{1}(M)\cong H_{q}^{n-1}(M),\] and this easily implies the result for \(H_{q}^{n-1}(M)\). ## 6. Boundedness of Riesz transforms on forms In this section, we explain how the Hodge projectors can also be expressed by means of Riesz transforms on forms (see also [2, Section 2]), and this allows us to recover some known results concerning the Riesz transforms on manifolds with Euclidean ends. It is convenient for this purpose to consider the vector bundle of forms of all degrees: \[\Lambda^{*}M=\oplus_{k=0}^{n}\Lambda^{k}M.\] Recall that we denote by \(\Delta=dd^{*}+d^{*}d\) the Hodge Laplacian on \(\Lambda^{*}M\). We will also denote \(\Pi_{>}=I-\Pi_{0}\), where \(\Pi_{0}\) is the orthogonal projector onto \(\ker_{L^{2}}(\Delta)\); and \(\Pi_{d}=\oplus\Pi_{k,d}\), \(\Pi_{d^{*}}=\oplus\Pi_{k,d^{*}}\) the Hodge projectors onto closed and co-closed forms of any degree respectively. It is well-known (and due to Gaffney) that if \(M\) is complete, then the Hodge-Dirac operator \(\mathcal{D}=d+d^{*}\), defined as a self-adjoint unbounded operator on \(L^{2}(\Lambda T^{*}M)\) whose domain is given as the domain of the closed quadratic form \[q(\omega,\omega)=\int_{M}|d\omega|^{2}+|d^{*}\omega|^{2},\] has \(C_{c}^{\infty}(\Lambda^{*}M)\) as a core. In fact, if one assumes that \(\omega\) is smooth, then \(\mathcal{D}\omega\) is easily shown to be the limit in \(L^{2}\) as \(n\to\infty\) of \(\mathcal{D}(\chi_{n}\omega)\), where \(0\leq\chi_{n}\leq 1\), \(\chi_{n}\) is \(1\) in restriction to \(B(o,n)\), \(0\) in restriction to \(M\setminus B(o,2n)\), and \(||\nabla\chi_{n}||_{\infty}\lesssim 1\). The same result holds for general \(\omega\) in the domain of \(\mathcal{D}\) by first approximating \(\omega\) by smooth forms on compact sets, using standard approximation of unity arguments in local charts. As a consequence, the _range_ of \((d+d^{*})\), namely \[\operatorname{Rg}(d+d^{*}):=\{(d+d^{*})\omega\,;\,\omega\in\mathcal{D}om(d+d^{*})\}\] is closed, and equal to the \(L^{2}\) closure of \((d+d^{*})C_{c}^{\infty}(\Lambda^{*}M)\). Moreover, since \((\operatorname{Rg}(d+d^{*}))^{\perp}=\ker_{L^{2}}(d+d^{*})\), one has that \[L^{2}(\Lambda^{*}M)=\operatorname{Rg}(d+d^{*})\oplus_{\perp}\ker_{L^{2}}(d+d^{*}).\] Define \(\mathcal{D}om(d)=\{\omega\in L^{2}(\Lambda^{*}M)\,;d\omega\in L^{2}\}\), and analogously \(\mathcal{D}om(d^{*})\). 
Similar arguments to the ones presented above for \(\mathcal{D}\) show that \(C_{c}^{\infty}(\Lambda^{*}M)\) is a core for both \(d\) and \(d^{*}\), and this yields that \[\operatorname{im}_{L^{2}}(d_{k-1})=\overline{dC_{c}^{\infty}(\Lambda^{k-1}M)}^{L^{2}}=\operatorname{Rg}(d_{k-1}), \tag{6.1}\] and \[\operatorname{im}_{L^{2}}(d_{k+1}^{*})=\overline{d^{*}C_{c}^{\infty}(\Lambda^{k+1}M)}^{L^{2}}=\operatorname{Rg}(d_{k+1}^{*}), \tag{6.2}\] where the first equalities are by definition, and the range \(\operatorname{Rg}(A)\) of an unbounded operator \(A\) on \(L^{2}\) is defined in general as \(\{A\omega\,;\,\omega\in\mathcal{D}om(A)\}\). The fact that \(d^{2}=0\), together with (6.1) and (6.2), yields that the vector spaces \(\operatorname{Rg}(d_{k-1})\) and \(\operatorname{Rg}(d_{k+1}^{*})\) are orthogonal to one another. Note moreover that if \(\omega\in\operatorname{Rg}(d+d^{*})\), then one can write \[\omega=(d+d^{*})\alpha,\] with \(\alpha\in\mathcal{D}om(d+d^{*})\) such that \(\alpha\in(\ker_{L^{2}}(d+d^{*}))^{\perp}=\operatorname{Rg}(d+d^{*})\). Define \[(d+d^{*})^{-1}\omega:=\alpha\] (this is also the operator associated by the spectral calculus with the multiplier \(\lambda^{-1}\mathbf{1}_{\mathbb{R}^{*}}(\lambda)\)). The definition of the quadratic form \(q\) implies immediately that \[\mathcal{D}om(d+d^{*})\subset\mathcal{D}om(d)\cap\mathcal{D}om(d^{*}),\] so \[\alpha\in\mathcal{D}om(d)\cap\mathcal{D}om(d^{*}),\] and as a consequence, \[d(d+d^{*})^{-1}\omega=d\alpha\in\operatorname{Rg}(d),\quad d^{*}(d+d^{*})^{-1}\omega=d^{*}\alpha\in\operatorname{Rg}(d^{*}).\] Since \(\omega=(d+d^{*})(d+d^{*})^{-1}\omega\) for \(\omega\in\operatorname{Rg}(d+d^{*})=(\ker_{L^{2}}(d+d^{*}))^{\perp}\), one gets that \[\operatorname{Rg}(d+d^{*})\subset\operatorname{Rg}(d)\oplus_{\perp}\operatorname{Rg}(d^{*}).\] Using this, it is not hard to see that the Hodge projectors on exact and coexact forms are given by the following formulae: \[\Pi_{d}=d(d+d^{*})^{-1},\quad\Pi_{d^{*}}=d^{*}(d+d^{*})^{-1}.\] Noticing that \(\Delta=(d+d^{*})^{2}\) (at least on \(C_{c}^{\infty}(\Lambda^{*}M)\)), one can define a self-adjoint extension of \(\Delta|_{C_{c}^{\infty}(\Lambda^{*}M)}\) as \((d+d^{*})^{2}\) (this is the operator associated with the multiplier \(|\lambda|^{2}\) in a spectral resolution of \((d+d^{*})\)). However, it is well-known that \(\Delta\) is essentially self-adjoint on \(C_{c}^{\infty}(\Lambda^{*}M)\) on any ALE manifold (in fact, a bounded Riemann curvature tensor is enough). Hence, the above self-adjoint operator \((d+d^{*})^{2}\) is the self-adjoint Hodge Laplacian \(\Delta\) that we have considered throughout this article. From now on we will thus freely use the formula \(\Delta=(d+d^{*})^{2}\) as self-adjoint operators. One can then define \((\Delta\Pi_{>})^{-1/2}\) as \(\varphi(d+d^{*})\), for the spectral multiplier \(\varphi(x)=|x|^{-1}{\bf 1}_{\mathbb{R}^{*}}(x)\). We consider the _Riesz transform on forms_: \[R=(d+d^{*})(\Delta\Pi_{>})^{-1/2},\] as well as its exact and co-exact parts: \[R_{d}=d(\Delta\Pi_{>})^{-1/2},\quad R_{d^{*}}=d^{*}(\Delta\Pi_{>})^{-1/2}.\] Then, \(R\) is \(L^{2}\) bounded (it is the operator associated with the bounded spectral multiplier \({\rm sgn}(x){\bf 1}_{\mathbb{R}^{*}}(x)\) in a spectral resolution of \((d+d^{*})\)), and thus \(R_{d}\), \(R_{d^{*}}\) are \(L^{2}\) bounded as well. 
The spectral theorem entails that \(R^{2}=\Pi_{>}\), and since \(R^{2}=(R_{d}+R_{d^{*}})^{2}\), expanding the square and keeping track of degrees yields: \[(R_{d})^{2}=0,\quad(R_{d^{*}})^{2}=0,\] and \[\Pi_{>}=R_{d}R_{d^{*}}+R_{d^{*}}R_{d}.\] From the fact that \(R^{*}=R\), it is also easily seen that \((R_{d})^{*}=R_{d^{*}}\). Moreover, if \(\eta\in L^{2}(\Lambda^{*}M)\) and \(\varphi\in L^{2}(\Lambda^{*}M)\), self-adjointness of \(R\) and the fact that \((d+d^{*})\) and \((\Delta\Pi_{>})^{-1/2}\) commute (by the functional calculus) imply that \[((d+d^{*})(\Delta\Pi_{>})^{-1/2}\eta,\varphi)=(\eta,(\Delta\Pi_{>})^{-1/2}(d+d^{*})\varphi).\] Taking \(\eta\) of degree \(k\) and \(\varphi\) of degree \(k+1\) then gives \[(d(\Delta\Pi_{>})^{-1/2}\eta,\varphi)=(\eta,(\Delta\Pi_{>})^{-1/2}d^{*}\varphi),\] hence \((R_{d})^{*}=(\Delta\Pi_{>})^{-1/2}d^{*}\); similarly, \((R_{d^{*}})^{*}=(\Delta\Pi_{>})^{-1/2}d.\) If \(\omega=d\eta+d^{*}\varphi\in L^{2}\) and \(\eta,\varphi\in L^{2},\) then \[R_{d}\omega=d(\Delta\Pi_{>})^{-1/2}(d\eta+d^{*}\varphi)=d\left[(R_{d^{*}})^{*}\eta+(R_{d})^{*}\varphi\right]\in{\rm im}_{L^{2}}(d),\] hence \(R_{d}\) sends \({\rm im}_{L^{2}}(d^{*})\oplus{\rm im}_{L^{2}}(d)\) into \({\rm im}_{L^{2}}(d)\), and therefore \({\rm im}_{L^{2}}(R_{d})\subset{\rm im}_{L^{2}}(d)\). Similarly, \({\rm im}_{L^{2}}(R_{d^{*}})\subset{\rm im}_{L^{2}}(d^{*})\). As a consequence, for \(\omega\in L^{2}(\Lambda^{*}M)\), \[\Pi_{>}\omega=R_{d}R_{d^{*}}\omega+R_{d^{*}}R_{d}\omega\] is necessarily the Hodge decomposition of \(\Pi_{>}\omega\). It follows that \[\Pi_{d}=R_{d}R_{d^{*}},\quad\Pi_{d^{*}}=R_{d^{*}}R_{d}.\] Hence, the boundedness on \(L^{p}\) of the Hodge projectors is closely related to the boundedness on \(L^{p}\) of the Riesz transforms on forms. In this direction, let us also mention that it is proved in [1, Theorem 2.1] that for \(k=1\), \(R_{d^{*}}\) and \(\Pi_{d}\) bounded on \(L^{p}\) imply that \(R_{d}\) is bounded on \(L^{p}\); since \(R_{d^{*}}\) is bounded on \(L^{p}(\Lambda^{1}M)\) for any \(p\in[2,+\infty)\) on an ALE manifold (see [16]), on such a manifold the boundedness on \(L^{p}(M)\) of \(R_{d}\) is equivalent to the boundedness on \(L^{p}(\Lambda^{1}M)\) of \(\Pi_{d}\) for any \(p\in[2,+\infty)\). Thus we get as a corollary: **Corollary 6.1**.: _Let \(M\) be an ALE manifold with dimension \(n\geq 3\). Then, the Riesz transform on functions \(R_{d}\) is bounded from \(L^{p}(M)\) to \(L^{p}(\Lambda^{1}M)\) for every \(p\in(1,p^{*})\), where \(p^{*}=+\infty\) if \(M\) has only one end, \(p^{*}=n\) otherwise._ Of course, this result is well-known, at least if \(M\) is AE (asymptotically Euclidean): if \(M\) has Euclidean ends, this is the famous result of [14]; and the result for a merely AE manifold can be deduced from this by using the perturbation result [15]. In the case where \(M\) has Euclidean ends, there are several proofs of this result: apart from the original proof in [14], it is also a consequence of the combination of [12] and [19]; and in [13] there is yet another proof (which also applies to manifolds that are locally Euclidean outside a compact set). However, our proof in the present paper is arguably the shortest and most elementary of all these proofs. We also mention that the \(L^{p}\) boundedness of \(\Pi_{>}\) is easily characterized: **Proposition 6.2**.: _Let \(k\in\{0,\cdots,n\}\), \(p\in(1,\infty)\), and let \(q=p^{\prime}\) be the conjugate exponent. Assume that \(M\) satisfies assumption (H). 
Then, the following are equivalent:_ * \(\Pi_{>}\)_, defined on_ \(L^{2}(\Lambda^{k}M)\cap L^{p}(\Lambda^{k}M)\)_, extends uniquely to a bounded operator on_ \(L^{p}(\Lambda^{k}M)\)_._ * \(\ker_{L^{2}}(\Delta_{k})=\ker_{L^{\min(q,p)}}(\Delta_{k})\)_._ Proof.: Since \(\Pi_{>}\) is self-adjoint, it is bounded on \(L^{p}\) if and only if it is bounded on \(L^{q}\), so without loss of generality we can assume that \(q\leq 2\leq p\). Take \((\omega_{i})_{i=1,\cdots,N}\) an orthonormal basis of \(\ker_{L^{2}}(\Delta_{k})\). Recall from Lemma 4.1 that since \(M\) satisfies assumption (H), then \(\ker_{L^{q}}(\Delta_{k})\subset\ker_{L^{2}}(\Delta_{k})\subset\ker_{L^{p}}(\Delta_{k})\). Next, \(\Pi_{0}:=I-\Pi_{>}\) is given by \[\Pi_{0}=\sum_{i=1}^{N}(\omega_{i},\cdot)\omega_{i}.\] From this formula, it is clear that (b)\(\Rightarrow\)(a). Let us show the converse. Since \(C_{c}^{\infty}(\Lambda^{k}M)\) is dense in \(L^{2}(\Lambda^{k}M)\) and \(\ker_{L^{2}}(\Delta_{k})\) is finite dimensional, it follows that \(\Pi_{0}(C_{c}^{\infty}(\Lambda^{k}M))=\ker_{L^{2}}(\Delta_{k})\). Thus, for any \(i\in\{1,\cdots,N\}\), there exists \(\varphi_{i}\in C_{c}^{\infty}(\Lambda^{k}M)\subset L^{2}\cap L^{q}(\Lambda^{k}M)\) such that \[\Pi_{0}(\varphi_{i})=\omega_{i}.\] But since \(\Pi_{>}\) is bounded on \(L^{q}\), \(\Pi_{0}(L^{2}\cap L^{q}(\Lambda^{k}M))\subset L^{q}(\Lambda^{k}M)\), hence \(\omega_{i}\in L^{q}\), therefore \(\ker_{L^{2}}(\Delta_{k})\subset L^{q}\). Thus, we conclude that \(\ker_{L^{q}}(\Delta_{k})=\ker_{L^{2}}(\Delta_{k})\). The boundedness on \(L^{p}\) of the Riesz transform on forms on manifolds that are conical at infinity (and, in particular, on ALE manifolds) has also been studied in [27], using the tools of pseudo-differential calculus on manifolds with corners. More precisely, in [27, Corollary 9], the set of \(p\in(1,+\infty)\) such that \(R\) is bounded on \(L^{p}\) is characterized in terms of the decay at infinity of \(L^{2}\) harmonic forms. However, in [27] the authors do not characterize the decay rates, in terms of the degree of the form and/or the topology at infinity of the manifold. In contrast, Proposition 2.3 and Corollary 3.8 yield the complete characterization of the set of \(p\)'s such that \(R\) is bounded on \(L^{p}\): **Corollary 6.3**.: _Let \(M\) be an ALE manifold with dimension \(n\geq 3\), and let \(k\in\{0,\cdots,n\}\). Then the following holds:_ 1. _If_ \(k\notin\{0,1,n-1,n\}\) _then_ \(R\) _is bounded on_ \(L^{p}(\Lambda^{k}M)\) _for every_ \(p\in(1,+\infty)\)_._ 2. _If_ \(k=0\) _or_ \(k=n\)_, then_ \(R\) _is bounded on_ \(L^{p}(\Lambda^{k}M)\) _if and only if_ \(p\in(1,p^{*})\) _where_ \(p^{*}=+\infty\) _if_ \(M\) _has only one end,_ \(p^{*}=n\) _otherwise._ 3. _If_ \(k=1\) _or_ \(k=n-1\)_, then_ \(R\) _is bounded on_ \(L^{p}(\Lambda^{k}M)\) _if and only if_ \(p\in(p_{*},+\infty)\) _where_ \(p_{*}=1\) _if_ \(M\) _has only one end,_ \(p_{*}=\frac{n}{n-1}\) _otherwise._ Remembering that \(R^{2}=\Pi_{>}\), Corollary 6.3 can also be used to recover the fact, which is also a consequence of our results (Theorem 1.1 and Corollary 4.3), that \(\Pi_{>}\) is bounded on \(L^{p}(\Lambda^{k}M)\) if either \(k\notin\{1,n-1\}\), or \(k\in\{1,n-1\}\) and \(M\) has only one end, or \(p\in\left(\frac{n}{n-1},n\right)\). 
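As a simple sanity check (this example is only illustrative and is not used elsewhere), consider \(M=\mathbb{R}^{n}\) itself, which is ALE with a single end. Since \(\mathbb{R}^{n}\) carries no non-zero \(L^{2}\) harmonic forms, \(\ker_{L^{2}}(\Delta_{k})=\{0\}\) for every \(k\), so \(\Pi_{>}=I\) and Corollary 6.3 simply asserts that \[R=(d+d^{*})\Delta^{-1/2}\] is bounded on \(L^{p}(\Lambda^{k}\mathbb{R}^{n})\) for every \(k\in\{0,\cdots,n\}\) and every \(p\in(1,+\infty)\), recovering in particular the classical \(L^{p}\) boundedness of the Euclidean Riesz transform \(d\Delta^{-1/2}\) on functions. 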
The boundedness on \(L^{p}\) of the Hodge projectors could be used to establish an \(L^{p}\) Hodge decomposition, and thus this provides an alternative approach to some of our results; however, this is far less direct than the approach we took in the present paper, and moreover one cannot recover in this way the results of Theorem 5.15. ## 7. \(L^{p}\) Hodge-Sobolev decompositions In this section, we consider in some sense the strongest possible \(L^{p}\) Hodge decompositions, which we call \(L^{p}\) _Hodge-Sobolev_ decompositions. For that, we denote \(\dot{W}^{1,p}(\Lambda^{k}M)\) the homogeneous Sobolev space, that is the closure of \(C_{c}^{\infty}(\Lambda^{k}M)\) under the norm \(||\nabla\omega||_{L^{p}}\). Here, \(\nabla\) is the natural Levi-Civita connection on tensors. **Definition 7.1**.: We say that the \(L^{p}\) _Hodge-Sobolev decomposition_ holds for forms of degree \(k\), if every \(\omega\in L^{p}(\Lambda^{k}M)\) can be written in a unique way as \[\omega=d\alpha+d^{*}\beta+\eta,\] with \(\alpha\in\dot{W}^{1,p}(\Lambda^{k-1}M)\), \(\beta\in\dot{W}^{1,p}(\Lambda^{k+1}M)\) and \(\eta\in\ker_{L^{p}}(\Delta_{k})\), and moreover the following estimates hold: \[||\alpha||_{\dot{W}^{1,p}}\lesssim||\omega||_{p},\quad||\beta||_{\dot{W}^{1,p}}\lesssim||\omega||_{p}.\] Analogously, we define a _modified \(L^{p}\) Hodge-Sobolev decomposition_, by replacing in the above definition the condition \(\eta\in\ker_{L^{p}}(\Delta_{k})\) by \(\eta\in\ker_{-n}(\Delta_{k})\). T. Iwaniec and G. Martin have proved in [29] that for the Euclidean space itself, the \(L^{p}\) Hodge-Sobolev decomposition holds in every degree and for every \(p\in(1,\infty)\). We prove: **Theorem 7.2**.: _Let \(M\) be a connected, oriented ALE manifold of dimension \(n\geq 3\). Let \(p\in(1,+\infty)\), \(p\neq n\) and \(k\in\{0,\cdots,n\}\). Then, in cases (a), (b), (c) of Theorem 1.6, the \(L^{p}\) Hodge-Sobolev decomposition for forms of degree \(k\) holds. And in case (d) of Theorem 1.6, the modified \(L^{p}\) Hodge-Sobolev decomposition for forms of degree \(k\) holds._ Proof.: The proof follows from Corollary B.10 and the \(L^{p}\) (modified) Hodge decomposition of Theorem 1.6. **Remark 7.3**.: It is possible that the interpolation arguments used in Section 4 can be adapted to show that the result of Theorem 7.2 also holds for \(p=n\). One difficulty is that the homogeneous Sobolev spaces are not known to interpolate by the real method on manifolds that do not support Poincare inequalities (e.g. ALE manifolds with \(N\geq 2\) ends); see [3]. We leave this question for future work. ## Appendix A A uniqueness lemma for Hodge projectors Let \(p\in(1,\infty)\), \(q=p^{\prime}\) its conjugate exponent, and let \(\mathscr{E}=L^{2}(\Lambda T^{*}M)\cap L^{p}(\Lambda T^{*}M)\), \(\mathscr{F}=L^{2}(\Lambda T^{*}M)+L^{q}(\Lambda T^{*}M)\), endowed with the norms \[||\omega||_{\mathscr{E}}:=\max(||\omega||_{2},||\omega||_{p}),\] and \[||\omega||_{\mathscr{F}}:=\inf\{||\varphi||_{2}+||\eta||_{q}\,;\,\omega=\varphi+\eta\}.\] Then (see [4, Section 3, Exercise 6]), \(\mathscr{E}\) and \(\mathscr{F}\) are Banach spaces, and \(\mathscr{F}\) is the dual of \(\mathscr{E}\). We will denote \(\mathscr{E}_{k}\) and \(\mathscr{F}_{k}\) the subsets of \(\mathscr{E}\) and \(\mathscr{F}\) consisting of forms of degree \(k\). We let \(\mathscr{G}_{k}=dC_{c}^{\infty}(\Lambda^{k-1}T^{*}M)\oplus d^{*}C_{c}^{\infty}(\Lambda^{k+1}T^{*}M)\oplus\ker_{L^{2}}(\Delta_{k})\). 
If \(\ker_{L^{2}}(\Delta_{k})\subset\ker_{L^{p}}(\Delta_{k})\) then it follows that \(\mathscr{G}_{k}\subset\mathscr{E}_{k}\). We have the following lemma: **Lemma A.1**.: _Let \(M\) be a complete manifold satisfying assumption (\(H_{k}\)), and assume that \(\ker_{L^{q}}(\Delta_{k})\subset\ker_{L^{2}}(\Delta_{k})\). Then, \(\mathscr{G}_{k}\) is a dense subspace of \(\mathscr{E}_{k}\)._ Proof.: Let \(f\in\mathscr{E}_{k}^{*}\) be such that \(f|_{\mathscr{G}_{k}}\equiv 0\). It is enough to show that \(f\equiv 0\). Since \(\mathscr{E}_{k}^{*}=\mathscr{F}_{k}\), there exists \(\omega\in\mathscr{F}_{k}\) such that \[f(\eta)=(\eta,\omega),\quad\forall\eta\in\mathscr{E}_{k}.\] Since \(f\) vanishes on \(dC_{c}^{\infty}(\Lambda^{k-1}T^{*}M)\oplus d^{*}C_{c}^{\infty}(\Lambda^{k+1}T^{*}M)\subset\mathscr{G}_{k}\), we get \(d\omega=d^{*}\omega=0\) in the distribution sense. Thus, \(\omega\in\ker_{L^{2}+L^{q}}(\Delta_{k})\). But, since the curvature term \(\mathscr{R}_{k}\) in the Bochner formula for \(k\)-forms is bounded, there exists a constant \(C>0\) such that one has the domination \[|e^{-t\Delta_{k}}\eta|\leq e^{Ct}e^{-t\Delta_{0}}|\eta|.\] As a consequence of assumption (\(H_{k}\)), the scalar heat semi-group is ultra-contractive, therefore, for all \(s\in[1,\infty]\) and all \(t>0\), \[e^{-t\Delta_{0}}:L^{s}\to L^{s}\cap L^{\infty}.\] Hence, \(e^{-t\Delta_{k}}:L^{s}(\Lambda T^{*}M)\to L^{s}(\Lambda T^{*}M)\cap L^{\infty}(\Lambda T^{*}M)\) on \(M\). In particular, \[e^{-\Delta_{k}}:L^{2}(\Lambda T^{*}M)+L^{q}(\Lambda T^{*}M)\to L^{\max(2,q)}(\Lambda T^{*}M).\] But since \(\omega\) is harmonic, we have \(e^{-t\Delta_{k}}\omega=\omega\), for all \(t\geq 0\): indeed, for all \(\varphi\in\mathscr{E}_{k}=\mathscr{F}_{k}^{*}\), one has \[\frac{d}{dt}(e^{-t\Delta_{k}}\omega,\varphi)=-(e^{-t\Delta_{k}}\Delta_{k}\omega,\varphi)=0.\] Thus, we conclude that \[\ker_{L^{2}+L^{q}}(\Delta_{k})\subset\ker_{L^{\max(2,q)}}(\Delta_{k}).\] But by assumption, \[\ker_{L^{\max(2,q)}}(\Delta_{k})\subset\ker_{L^{2}}(\Delta_{k}).\] Thus, we conclude that \(\omega\in\ker_{L^{2}}(\Delta_{k})\). However, \(\ker_{L^{2}}(\Delta_{k})\subset\mathscr{G}_{k}\), and \(f\) is assumed to vanish on \(\mathscr{G}_{k}\), therefore we finally conclude that \(\omega=0\), and so \(f\equiv 0\). This completes the proof. **Corollary A.2**.: _Let \(M\) be a complete manifold satisfying assumption (H), and let \(k\in\{0,\cdots,n\}\) and \(p\in(1,\infty)\), \(q=p^{\prime}\) be such that \(\ker_{L^{q}}(\Delta_{k})=\ker_{L^{p}}(\Delta_{k})=\ker_{L^{2}}(\Delta_{k})\). Let \(\mathscr{L}\) be a bounded operator on \(L^{2}(\Lambda^{k}M)\). Assume that \(\mathscr{L}|_{\mathscr{G}_{k}}\), the restriction to \(\mathscr{G}_{k}\) of \(\mathscr{L}\), extends uniquely to a bounded operator \(\tilde{\mathscr{L}}\) on \(L^{p}(\Lambda^{k}M)\). Then, the restrictions to \(L^{2}(\Lambda^{k}M)\cap L^{p}(\Lambda^{k}M)\) of \(\mathscr{L}\) and \(\tilde{\mathscr{L}}\) coincide._ This applies in particular to the Hodge projectors, under the assumptions of Proposition 4.2. Proof.: Let \(\omega\in L^{2}(\Lambda^{k}M)\cap L^{p}(\Lambda^{k}M)\). According to Lemma A.1, there is a sequence \(\{\omega_{n}\}_{n\in\mathbb{N}}\) of elements of \(\mathscr{G}_{k}\) which converges to \(\omega\) in the \(||\cdot||_{\mathscr{E}}\) norm; in particular, by definition of \(||\cdot||_{\mathscr{E}}\), \(\omega_{n}\to\omega\) both in \(L^{2}\) and in \(L^{p}\) norm. 
Since \(\mathscr{L}|_{\mathscr{G}_{k}}=\tilde{\mathscr{L}}|_{\mathscr{G}_{k}}\), we have for every \(n\in\mathbb{N}\) that \[\mathscr{L}(\omega_{n})=\tilde{\mathscr{L}}(\omega_{n}).\] Using the convergence of \(\{\omega_{n}\}_{n\in\mathbb{N}}\) to \(\omega\) in \(L^{2}\) (resp. in \(L^{p}\)) and the continuity of \(\mathscr{L}\) in \(L^{2}\) (resp. of \(\tilde{\mathscr{L}}\) in \(L^{p}\)), we get by passing to the limit \(n\to\infty\) in the above equality: \[\mathscr{L}(\omega)=\tilde{\mathscr{L}}(\omega).\] This concludes the proof. ## Appendix B Weighted Sobolev spaces and decay of harmonic forms In this appendix, we briefly present the theory of weighted Sobolev spaces on ALE manifolds, and use them to prove a few results that are used in the present paper. In particular, we will give a proof of Propositions 2.3 and 5.8. For more details on weighted Sobolev spaces, we refer to the presentation in the paper [7]. In what follows, we denote \(\mathbb{R}_{*}^{n}=\mathbb{R}^{n}\setminus\{0\}\) and \(r=|x|\), \(\sigma=(1+|x|^{2})^{1/2}\). **Definition B.1**.: _Let \(\delta\in\mathbb{R}\) and \(p\in(1,+\infty)\), then the weighted Lebesgue space \(L^{p}_{\delta}\) (resp. \(L^{\prime p}_{\delta}\)) is defined as the set of functions \(u\) in \(L^{p}_{loc}(\mathbb{R}^{n})\) (resp. \(L^{p}_{loc}(\mathbb{R}_{*}^{n})\)), whose norms \(||u||_{p,\delta}\) (resp. \(||u||^{\prime}_{p,\delta}\)) are finite. Here,_ \[||u||_{p,\delta}=\left(\int_{\mathbb{R}^{n}}|u|^{p}\sigma^{-\delta p-n}\,dx\right)^{1/p},\] _and_ \[||u||^{\prime}_{p,\delta}=\left(\int_{\mathbb{R}^{n}}|u|^{p}r^{-\delta p-n}\,dx\right)^{1/p}.\] _Then, for \(k\in\mathbb{N}\), \(k\geq 1\), the Sobolev spaces \(W^{k,p}_{\delta}(\mathbb{R}^{n})\) and \(W^{\prime k,p}_{\delta}(\mathbb{R}_{*}^{n})\) are defined using the norms_ \[||u||_{k,p,\delta}=\sum_{j=0}^{k}\sum_{|\alpha|=j}||\partial^{\alpha}u||_{p,\delta-j},\] _and_ \[||u||^{\prime}_{k,p,\delta}=\sum_{j=0}^{k}\sum_{|\alpha|=j}||\partial^{\alpha}u||^{\prime}_{p,\delta-j}.\] If \(M\) is ALE, then using coordinate systems at infinity and a smooth weight function which agrees with \(\sigma\) in the coordinate neighbourhoods, one can define weighted Sobolev spaces \(W^{k,p}_{\delta}\) on \(M\). Their definition is independent of the chosen coordinates and the chosen weight function. One can also define weighted Sobolev spaces \(W^{k,p}_{\delta}(\Lambda^{*}M)\) since the fiber bundle \(\Lambda^{*}M\) can be trivialized at infinity and one can thus look at the regularity componentwise. These weighted Sobolev spaces satisfy properties that are analogous to classical properties of the usual Sobolev spaces on \(\mathbb{R}^{n}\), for instance Sobolev embeddings, the Rellich compactness theorem, etc. See [7] for more details. For the moment, we limit ourselves to point out the following regularity result, which is a consequence of the Sobolev embeddings: **Proposition B.2**.: _Let \(M\) be ALE, \(\delta\in\mathbb{R}\) and \(p\in(1,\infty)\). Let \(\omega\in W^{k,p}_{\delta}(\Lambda^{*}M)\) for all \(k\in\mathbb{N}\), then \(\omega=o_{\infty}(r^{\delta})\)._ Recall from [7, Def. 1.5] and [30, Def. 1.5] the notion of operator _asymptotic to the Euclidean Laplacian or to the Euclidean Dirac operator_. 
We will not write down the definition explicitly (it is quite intuitive), but the main examples are as follows: if \(M\) is ALE then the Hodge-De Rham Laplacian is asymptotic to the Euclidean Hodge Laplacian \(\Delta_{\mathbb{R}^{n}}\), while \(\mathcal{D}:=d+d^{*}\) acting on \(\Lambda^{*}M\) is asymptotic to the Euclidean Dirac operator \(\mathcal{D}_{\mathbb{R}^{n}}=d_{\mathbb{R}^{n}}+d_{\mathbb{R}^{n}}^{*}\). If \(M\) is ALE of order \(\tau\), then this implies that for any \(k\geq 2\), \(\delta\in\mathbb{R}\) and \(p\in(1,\infty)\), \[\Delta-\Delta_{\mathbb{R}^{n}}:W^{k,p}_{\delta}(\Lambda^{*}M)\to W^{k-2,p}_{\delta-2-\tau}(\Lambda^{*}M),\] (B.1) and for \(k\geq 1\), \[\mathcal{D}-\mathcal{D}_{\mathbb{R}^{n}}:W^{k,p}_{\delta}(\Lambda^{*}M)\to W^{k-1,p}_{\delta-1-\tau}(\Lambda^{*}M).\] (B.2) Also, similarly, \[\Delta-\Delta_{\mathbb{R}^{n}}:\mathcal{O}_{\infty}(r^{\delta})\to\mathcal{O}_{\infty}(r^{\delta-2-\tau}),\] (B.3) and \[\mathcal{D}-\mathcal{D}_{\mathbb{R}^{n}}:\mathcal{O}_{\infty}(r^{\delta})\to\mathcal{O}_{\infty}(r^{\delta-1-\tau}).\] (B.4) In fact, all this relies on \[d^{*}-d_{\mathbb{R}^{n}}^{*}:W^{k,p}_{\delta}(\Lambda^{*}M)\to W^{k-1,p}_{\delta-1-\tau}(\Lambda^{*}M).\] (B.5) and \[d^{*}-d_{\mathbb{R}^{n}}^{*}:\mathcal{O}_{\infty}(r^{\delta})\to\mathcal{O}_{\infty}(r^{\delta-1-\tau}).\] (B.6) We will use the following elliptic regularity result (see [30, Prop. 2.10]): **Proposition B.3**.: _Suppose that \(M\) is an ALE manifold, \(p\in(1,\infty)\), \(\delta\in\mathbb{R}\) and \(k\geq 2\). Then, there exists a constant \(C>0\) such that for every \(\omega\in C^{\infty}(\Lambda^{*}M)\),_ \[||\omega||_{k,p,\delta}\leq C(||\Delta\omega||_{k-2,p,\delta-2}+||\omega||_{k-2,p,\delta}),\] (B.7) _and_ \[||\omega||_{k,p,\delta}\leq C(||\mathcal{D}\omega||_{k-1,p,\delta-1}+||\omega||_{k-1,p,\delta}).\] (B.8) These estimates, together with the Sobolev embeddings (Proposition B.2), easily imply the following: **Corollary B.4**.: _Assume that \(M\) is ALE, and \(\omega\in L^{p}_{\delta}\), \(p\in(1,\infty)\), \(\delta\in\mathbb{R}\), such that \(\Delta\omega=0\) or \(\mathcal{D}\omega=0\). Then,_ \[\omega=o_{\infty}(r^{\delta}).\] The elliptic regularity estimates of Proposition B.3 can further be refined if the weight \(\delta\) is _non-exceptional_. More precisely, when considering the Hodge Laplacian, we say that \(\delta\) is _exceptional_ if \(\delta\in\{2-n,1-n,-n,\cdots\}\cup\mathbb{N}\), while, when considering the Hodge-Dirac operator, \(\delta\) is exceptional if \(\delta\in\{1-n,-n,\cdots\}\cup\mathbb{N}\). The refined result is as follows: if \(\delta\) is non-exceptional for the Hodge Laplacian (resp. for the Hodge-Dirac operator), then there exists \(R>0\) such that (B.7) (resp. (B.8)) can be refined into \[||\omega||_{k,p,\delta}\leq C(||\Delta\omega||_{k-2,p,\delta-2}+||\omega||_{L^{p}(B_{R})}),\] (B.9) resp. \[||\omega||_{k,p,\delta}\leq C(||\mathcal{D}\omega||_{k-1,p,\delta-1}+||\omega||_{L^{p}(B_{R})}).\] (B.10) See [30, Prop. 2.7]. Another corollary of these estimates concerns the Fredholmness of \(\Delta\) and \(\mathcal{D}\): **Corollary B.5**.: _Assume that \(M\) is ALE, \(p\in(1,\infty)\) and \(\delta\) is non exceptional for the Hodge Laplacian (resp. for the Hodge-Dirac operator). Then,_ \[\Delta:W^{2,p}_{\delta}(\Lambda^{*}M)\to W^{0,p}_{\delta-2}(\Lambda^{*}M),\] _resp._ \[\mathcal{D}:W^{1,p}_{\delta}(\Lambda^{*}M)\to W^{0,p}_{\delta-1}(\Lambda^{*}M),\] _is a Fredholm operator._ Sketch of the proof.: The estimates (B.9) and (B.10) together with Rellich compactness imply that \(\mathcal{D}\) and \(\Delta\) are semi-Fredholm, i.e. have finite dimensional kernel and closed range.
Since these two operators are also self-adjoint, using duality of the weighted Sobolev spaces we see that their adjoints are also semi-Fredholm, hence they also have finite dimensional cokernels. Therefore, they are Fredholm. See the proof of [7, Prop. 1.14] for more details. A key property for investigating the decay of harmonic forms is that the behaviour as \(r\to\infty\) of functions on \(\mathbb{R}^{n}\) which are harmonic outside a compact set is known; for \(\delta\in\mathbb{R}\), we denote by \(k_{-}(\delta)\) the largest exceptional weight \(k\) for the Laplacian such that \(k\leq\delta\). Then, one has: **Proposition B.6**.: _Let \(n\geq 3\), and \(f\) be a harmonic function on \(\mathbb{R}^{n}\setminus B_{R}\), \(R>0\). Assume that \(f=\mathcal{O}_{\infty}(r^{\delta})\). Then, actually \(f=\mathcal{O}_{\infty}(r^{k_{-}(\delta)})\)._ Indeed, this follows by decomposing the function \(f\) into spherical harmonics, and using the fact that the exceptional weights are precisely the powers of \(r\) appearing in this decomposition. Note that the (scalar) components of a harmonic differential form on \(\mathbb{R}^{n}\) are harmonic functions, so Proposition B.6 also applies to differential forms that are harmonic on \(\mathbb{R}^{n}\setminus B_{R}\). Now, let us prepare for the proof of Proposition 2.3; we will need the following two lemmas. The first one is obtained by a minor variation on the proof of [30, Lemma 4.2]: **Lemma B.7**.: _Let \(n\geq 3\), \(\delta<0\) and \(\omega\in\Lambda^{k}(\mathbb{R}^{n}\setminus B_{R})\), \(R>0\), \(k\in\{0,\cdots,n\}\) be such that \(\omega=\mathcal{O}_{\infty}(r^{\delta})\) and \(\Delta_{\mathbb{R}^{n}}\omega=0\). Then,_ 1. \(\omega\in\mathcal{O}_{\infty}(r^{2-n})\)_._ 2. _if_ \(k\in\{1,\cdots,n-1\}\) _and if moreover_ \(d_{\mathbb{R}^{n}}^{*}\omega\in\mathcal{O}_{\infty}(r^{1-n-\epsilon})\) _for some_ \(\epsilon>0\)_, then_ \(\omega\in\mathcal{O}_{\infty}(r^{1-n})\)_._ 3. _if_ \(k\in\{2,\cdots,n-2\}\)_, and if moreover_ \(d\omega\)_,_ \(d_{\mathbb{R}^{n}}^{*}\omega\in\mathcal{O}_{\infty}(r^{-n-\epsilon})\) _for some_ \(\epsilon>0\)_, then_ \(\omega\in\mathcal{O}_{\infty}(r^{-n})\)_._ Sketch of the proof.: Since \(\delta<0\), one has \(k_{-}(\delta)\leq 2-n\), hence by Proposition B.6, \(\omega\in\mathcal{O}_{\infty}(r^{2-n})\), which yields (i). If now \(k\in\{1,\cdots,n-1\}\), then the proof of (ii) and (iii) follows [30, Lemma 4.2]; indeed, the only difference with [30, Lemma 4.2] is that \(\omega\) is not assumed to be closed and co-closed. However, one easily sees that the same proof applies, given the assumed decay rate of \(d\omega,d^{*}\omega\). Details are left to the interested reader. The second lemma, which will be used repeatedly in the proof of Proposition 2.3, is the following: **Lemma B.8**.: _Let \(M\) be ALE to order \(\tau>0\), \(\delta\in\mathbb{R}\) and \(p\in(1,\infty)\). Let \(\omega\in\ker_{\delta}(\Delta)\). Then, at each end \(E_{i}\), there exist \(R>0\), \(\epsilon>0\) and \(\bar{\omega}\in\Lambda^{*}((\mathbb{R}^{n}\setminus B_{R})/\Gamma_{i})\) with \(\Delta_{\mathbb{R}^{n}}\bar{\omega}=0\) on \((\mathbb{R}^{n}\setminus B_{R})/\Gamma_{i}\), such that_ \[\omega=\bar{\omega}+\mathcal{O}_{\infty}(r^{k_{-}(\delta)-\epsilon}).\] _Moreover, \(\bar{\omega}=\mathcal{O}_{\infty}(r^{k_{-}(\delta)})\), and_ \[\omega=\mathcal{O}_{\infty}(r^{k_{-}(\delta)}).\] Proof.: The hypothesis that \(M\) is ALE to order \(\tau>0\) implies by (B.3) that \(\Delta_{\mathbb{R}^{n}}\omega=\mathcal{O}_{\infty}(r^{\delta-2-\tau})\).
Hence, in particular \(\Delta_{\mathbb{R}^{n}}\omega=\mathcal{O}_{\infty}(r^{\delta-2-\epsilon})\) for any \(\epsilon\in(0,\tau]\). Choose \(\epsilon\in[\frac{\tau}{2},\tau]\) such that \(\delta-\epsilon\) is non-exceptional; this is possible since the exceptional set is discrete. Since by [7, Theorem 1.7], \(\Delta_{\mathbb{R}^{n}}:W^{\prime 2,p}_{\delta-\epsilon}\to W^{\prime 0,p}_{\delta-2-\epsilon}\) is an isomorphism, one can find \(\omega_{0}\in W^{2,p}_{\delta-\epsilon}((\mathbb{R}^{n}\setminus B_{R})/\Gamma_{i})\) such that \[\Delta_{\mathbb{R}^{n}}\omega_{0}=\Delta_{\mathbb{R}^{n}}\omega\] in restriction to \((\mathbb{R}^{n}\setminus B_{R})/\Gamma_{i}\). Elliptic regularity (Proposition B.3) and Sobolev embeddings (Proposition B.2) imply that \(\omega_{0}=\mathcal{O}_{\infty}(r^{\delta-\epsilon})\), hence \(\omega_{0}=\mathcal{O}_{\infty}(r^{\delta-\frac{\tau}{2}})\) since \(\epsilon\geq\frac{\tau}{2}\). Define \(\bar{\omega}=\omega-\omega_{0}\), then \(\Delta_{\mathbb{R}^{n}}\bar{\omega}=0\) outside \(B_{R}/\Gamma_{i}\). Clearly, one has \(\bar{\omega}\in\mathcal{O}_{\infty}(r^{\delta})\). According to Proposition B.6, in fact \(\bar{\omega}\in\mathcal{O}_{\infty}(r^{k_{-}(\delta)})\). Thus, starting from the assumption that \(\omega\in\ker_{\delta}(\Delta)\), we have obtained the existence of \(\bar{\omega}\in\Lambda^{*}((\mathbb{R}^{n}\setminus B_{R})/\Gamma_{i})\) with \(\Delta_{\mathbb{R}^{n}}\bar{\omega}=0\) on \((\mathbb{R}^{n}\setminus B_{R})/\Gamma_{i}\), such that \[\omega=\bar{\omega}+\mathcal{O}_{\infty}(r^{\delta-\frac{\tau}{2}}).\] And furthermore, \(\bar{\omega}=\mathcal{O}_{\infty}(r^{k_{-}(\delta)})\). This implies that \[\omega=\mathcal{O}_{\infty}(r^{\lambda}),\quad\lambda=\max(k_{-}(\delta),\delta-\frac{\tau}{2}).\] One can now iterate this argument, starting with the new decay rate for \(\omega\): for any \(\ell\in\mathbb{N}\), this gives a decomposition \[\omega=\bar{\omega}+\mathcal{O}_{\infty}(r^{\lambda-\frac{\tau}{2}}),\quad\lambda=\max(k_{-}(\delta),\delta-(\ell-1)\frac{\tau}{2}),\] where \(\bar{\omega}=\mathcal{O}_{\infty}(r^{k_{-}(\delta)})\) is harmonic for the Euclidean Laplacian outside a compact set. Now choose \(\ell\) large enough, so that \[\delta-\ell\frac{\tau}{2}<k_{-}(\delta)\leq\delta-(\ell-1)\frac{\tau}{2},\] then we get for \(\epsilon=k_{-}(\delta)-\delta+\ell\frac{\tau}{2}>0\) that \[\omega=\bar{\omega}+\mathcal{O}_{\infty}(r^{k_{-}(\delta)-\epsilon}),\] and this concludes the proof of the lemma. Now we are ready to give the proof of Proposition 2.3. Proof of Proposition 2.3.: Let \(\omega\in\ker_{L^{p}}(\Delta_{k})\). Note that \(L^{p}=L^{p}_{\delta}\) with \(\delta=-\frac{n}{p}\), so \(\omega\in\ker_{\delta}(\Delta)\). According to Lemma B.8, we get \[\omega=\mathcal{O}_{\infty}(r^{k_{-}(\delta)}).\] Now, \(\delta<0\), so \(k_{-}(\delta)\leq 2-n\), hence \[\omega=\mathcal{O}_{\infty}(r^{2-n}).\] This implies that \(d\omega,d^{*}\omega=\mathcal{O}_{\infty}(r^{1-n})\). Consider, for \(R>0\) large enough, relatively compact open sets \(B_{R}\) in \(M\), whose boundary identifies, using the coordinate systems at infinity in each end, with the spheres \(\partial B_{\mathbb{R}^{n}}(0,R)\) in \(\mathbb{R}^{n}\) quotiented by the action of \(\Gamma_{i}\).
Integration by parts implies that \[0 = \left(\Delta\omega,\omega\right)_{L^{2}}\] \[= \lim_{R\to\infty}\left\{\int_{B_{R}}[|d\omega|^{2}+|d^{*}\omega|^{2}]\,\mathrm{d}v+\int_{\partial B_{R}}[(d^{*}\omega,\iota_{\nu}\omega)+(\omega,\iota_{\nu}d\omega)]\,\mathrm{d}S\right\}.\] Given the asymptotics of \(\omega,d^{*}\omega,d\omega\), the boundary integral is \(O(R^{2-n})\), hence tends to zero as \(R\to\infty\). Therefore, we conclude that \[0=\lim_{R\to\infty}\int_{B_{R}}[|d\omega|^{2}+|d^{*}\omega|^{2}]\,\mathrm{d}v,\] so \(d\omega=0\) and \(d^{*}\omega=0\). We have thus proved that \(\omega\) is closed and co-closed, hence point (a) of the proposition follows. Now, let us apply once more Lemma B.8: we obtain a form \(\bar{\omega}\) such that at each end \(E_{i}\), \(\Delta_{\mathbb{R}^{n}}\bar{\omega}=0\) on \((\mathbb{R}^{n}\setminus B_{R})/\Gamma_{i}\), \(\bar{\omega}=\mathcal{O}_{\infty}(r^{2-n})\), and for some \(\epsilon>0\), \[\omega=\bar{\omega}+\mathcal{O}_{\infty}(r^{2-n-\epsilon}).\] (B.11) Since \(\omega\) is closed, the above equation implies that \[d\bar{\omega}=\mathcal{O}_{\infty}(r^{1-n-\epsilon}).\] Moreover, since \(\omega\) is co-closed, one has \(d^{*}\bar{\omega}=\mathcal{O}_{\infty}(r^{1-n-\epsilon})\). Since \(M\) is ALE of order \(\tau>0\), equation (B.6) entails that, up to lowering the value of \(\epsilon>0\), \[d^{*}_{\mathbb{R}^{n}}\bar{\omega}=\mathcal{O}_{\infty}(r^{1-n-\epsilon}).\] All in all, we have obtained that \[d\bar{\omega},\,d^{*}_{\mathbb{R}^{n}}\bar{\omega}=\mathcal{O}_{\infty}(r^{1-n-\epsilon}).\] According to Lemma B.7, we then obtain that \[\bar{\omega}\in\mathcal{O}_{\infty}(r^{1-n}).\] Coming back to (B.11), and lowering the value of \(\epsilon>0\) if necessary, we obtain that \[\omega=\mathcal{O}_{\infty}(r^{2-n-\epsilon}).\] Applying Lemma B.8 with this new decay rate, we obtain \[\omega=\mathcal{O}_{\infty}(r^{k_{-}(2-n-\epsilon)}).\] But \(k_{-}(2-n-\epsilon)\leq 1-n\), therefore \[\omega=\mathcal{O}_{\infty}(r^{1-n}).\] This concludes the proof of point (b) of the proposition. We now assume that \(k\in\{2,\cdots,n-2\}\); we play the same game as before: by Lemma B.8, starting with the decay rate \(\omega=\mathcal{O}_{\infty}(r^{1-n})\), one obtains, instead of (B.11), \[\omega=\bar{\omega}+\mathcal{O}_{\infty}(r^{1-n-\epsilon}).\] (B.12) with \(\bar{\omega}=\mathcal{O}_{\infty}(r^{1-n})\) harmonic for the Euclidean Laplacian outside a compact set. One then shows in a similar way as before that \[d\bar{\omega},\,d^{*}_{\mathbb{R}^{n}}\bar{\omega}=\mathcal{O}_{\infty}(r^{-n-\epsilon}).\] Now, Lemma B.7 yields that \[\bar{\omega}\in\mathcal{O}_{\infty}(r^{-n}),\] and coming back to (B.12), we arrive at \[\omega=\mathcal{O}_{\infty}(r^{1-n-\epsilon}).\] Applying once more Lemma B.8 with this new decay rate, we get \[\omega=\mathcal{O}_{\infty}(r^{k_{-}(1-n-\epsilon)}),\] and noticing finally that \(k_{-}(1-n-\epsilon)\leq-n\), we arrive at \[\omega=\mathcal{O}_{\infty}(r^{-n}).\] This concludes the proof of point (c) of the proposition in the case \(k\in\{2,\cdots,n-2\}\). Finally, one assumes that \(k\in\{1,\cdots,n-1\}\) and \(p\leq\frac{n}{n-1}\). In this case, \(\delta\leq 1-n\).
According to Corollary B.4, one has \[\omega\in o_{\infty}(r^{1-n}).\] Therefore, applying Lemma B.8, one finds a form \(\bar{\omega}\) such that \(\Delta_{\mathbb{R}^{n}}\bar{\omega}=0\) on \(\mathbb{R}^{n}\setminus B_{R}\), and \[\omega=\bar{\omega}+\mathcal{O}_{\infty}(r^{1-n-\epsilon}).\] (B.13) Moreover, the decay of \(\omega\) implies that \[\bar{\omega}=o_{\infty}(r^{1-n}).\] A variation on Proposition B.6, which is left to the reader, implies that \[\bar{\omega}=\mathcal{O}_{\infty}(r^{-n})\] (every harmonic function on \((\mathbb{R}^{n}\setminus B_{R})/\Gamma_{i}\) which decays faster than \(r^{1-n}\) has to decay at least to order \(r^{-n}\)). The end of the proof is now the same as in the case \(k\in\{2,\cdots,n-2\}\). The proof of point (c) of the proposition is now complete. Let us now prove point (d). Notice that if \(p>\frac{n}{n-1}\), then \[\ker_{1-n}(\Delta_{k})\subset\ker_{L^{p}}(\Delta_{k}).\] So, using (b), \[\ker_{L^{p}}(\Delta_{k})\subset\ker_{1-n}(\Delta_{k})\subset\ker_{L^{p}}(\Delta_{k}).\] Hence, \[\ker_{L^{p}}(\Delta_{k})=\ker_{1-n}(\Delta_{k}).\] Finally, concerning point (e), since \(\ker_{-n}(\Delta_{k})\subset\ker_{1-n}(\Delta_{k})\) it is enough to prove that the latter is finite dimensional. But according to point (d), one has \[\ker_{1-n}(\Delta_{k})=\ker_{L^{2}}(\Delta_{k})=\mathcal{H}^{k}(M),\] and the dimension of the latter space is equal to the \(k\)th reduced \(L^{2}\) Betti number of \(M\), which is known to be finite for ALE manifolds (see for instance [8]). Proof of Lemma 2.5.: Lemma B.8 yields a decomposition \(u=u_{0}+u_{1}\), where \(\Delta_{\mathbb{R}^{n}}u_{0}=0\), \(u_{0}\) is bounded, and \(u_{1}\in\mathcal{O}_{\infty}(r^{-\epsilon})\), \(\epsilon>0\). It is well-known that a bounded function \(v\) which is harmonic outside a compact set of \(\mathbb{R}^{n}\) has a limit at infinity. Let us recall briefly a proof of this fact; let \(w:=v-\Delta^{-1}(\Delta v)\), then \(w\) is harmonic and bounded on \(\mathbb{R}^{n}\). Let us consider \(\tilde{w}:=w+c\), where the constant \(c\) is chosen so that \(\tilde{w}\) is non-negative and the infimum of \(\tilde{w}\) (which is attained at infinity) is zero. We claim that \(\tilde{w}\) is zero everywhere, which implies the following representation formula for \(v\): \[v=-c+\Delta^{-1}(\Delta v),\] and using the expression of the Green operator in \(\mathbb{R}^{n}\) and the fact that \(\Delta v\) has compact support, it follows that \(v\) tends to \(c\) at infinity. Thus, let us come back to \(\tilde{w}\) and prove the claim. Denote \(A_{R}\) the annulus \(B(0,2R)\setminus B(0,R)\). Then, \(A_{R}\) can be covered by a number \(C(n)\) of balls of radius \(R\). The Harnack inequality for these balls implies that the annuli \(A_{R}\) also satisfy a Harnack inequality, with a constant independent of \(R\). Hence, there is a constant \(C>0\) such that for every \(R>0\), \[\sup_{A_{R}}\tilde{w}\leq C\inf_{A_{R}}\tilde{w}.\] But the right-hand side tends to \(0\) as \(R\to\infty\), hence the left-hand side as well. However, thanks to the maximum principle, \[\max_{\mathbb{R}^{n}}\tilde{w}=\lim_{R\to\infty}\max_{A_{R}}\tilde{w},\] and we conclude that the maximum of \(\tilde{w}\) is zero. Since \(\tilde{w}\) is non-negative, it follows that \(\tilde{w}\) is identically zero, which completes the proof of the claim.
Coming back to \(u_{0}\), and applying the result of [30, Lemma 4.1], one thus obtains the existence of constants \(c_{i}\) and \(A_{i}\), \(i=1,\cdots,N\), such that as \(r\to\infty\) in the end \(E_{i}\), \[u_{0}=c_{i}+A_{i}r^{2-n}+\mathcal{O}_{\infty}(r^{1-n}).\] Applying Lemma B.8 iteratively to \(u-c_{i}\) in each end \(E_{i}\), one can in fact obtain a decomposition \(u=u_{0}+u_{1}\) with \(u_{0}\) satisfying the above asymptotics in each end (with constants \(A_{i}\) that may be different), and \(u_{1}\in\mathcal{O}_{\infty}(r^{\lambda})\), \(\lambda=\max(2-n,-\ell\frac{\tau}{2})\), \(\ell\in\mathbb{N}\). Taking \(\ell\) large enough so that \(\ell\frac{\tau}{2}>n-2\), we obtain \(u-c_{i}=\mathcal{O}_{\infty}(r^{2-n})\) in the end \(E_{i}\). Applying one last time Lemma B.8 gives \(u=u_{0}+u_{1}\) with \(u_{0}\) as above, and \(u_{1}=\mathcal{O}_{\infty}(r^{2-n-\frac{\tau}{2}})\). The lemma is thus proved, with the choice \(\epsilon=\frac{\tau}{2}>0\). Proof of Lemma 2.6.: Because \(\omega\in\ker_{-\alpha}(\Delta_{1})\), we have \(\omega\in L^{p}\) for each \(p>\frac{n}{\alpha}\). By Proposition 2.3, \(\omega\in\ker_{1-n}(\Delta_{1})\) and \(\omega\) is closed and co-closed. By Lemma B.8, one finds a form \(\bar{\omega}\in\mathcal{O}_{\infty}(r^{1-n})\) such that at each end \(E_{i}\), we have \(\Delta_{\mathbb{R}^{n}}\bar{\omega}=0\) on \((\mathbb{R}^{n}\setminus B_{R})/\Gamma_{i}\), and \[\omega=\bar{\omega}+\mathcal{O}_{\infty}(r^{1-n-\epsilon}).\] (B.14) Moreover, \(d\bar{\omega}=0\) and according to (B.6), up to lowering the value of \(\epsilon>0\), \(d_{\mathbb{R}^{n}}^{*}\bar{\omega}=\mathcal{O}_{\infty}(r^{-n-\epsilon})\). Because \(\bar{\omega}\) is a one-form, we have, following the proof of [30, Lemma 4.2], that \[\bar{\omega}=A_{i}d(r^{2-n})+\mathcal{O}(r^{-n})=(n-2)A_{i}r^{1-n}dr+\mathcal{O}(r^{-n}),\] for some constant \(A_{i}\in\mathbb{R}\). Letting \(B_{i}=(n-2)A_{i}\), we get that in each end \(E_{i}\), as \(r\to\infty\), \[\omega=B_{i}r^{1-n}dr+\mathcal{O}_{\infty}(r^{1-n-\epsilon}),\] which concludes the proof of the lemma. The rest of this section consists of _new_ material, which for reasons of clarity of the exposition we choose to present here. It is devoted to the proof of Proposition 5.8. We start with the following new result, which is a variation on point (a) of Proposition 2.3: **Lemma B.9**.: _Let \(\delta<1\) and \(\omega\in\ker_{\delta}(\mathcal{D})\); then, \(\omega\) is \(\mathcal{O}_{\infty}(1)\), and is closed and co-closed._ Note that if \(-n<\delta<0\), one can write \(\delta=-\frac{n}{p}\) for some \(p\in(1,+\infty)\); since \(\Delta\omega=\mathcal{D}^{2}\omega=0\), it thus follows that \(\ker_{\delta}(\mathcal{D})\subset\ker_{L^{p}}(\Delta)\), and the result in this case for forms of degree \(\neq 0\) or \(n\) is then a consequence of Proposition 2.3, point (a); the result for any \(\delta<0\) and under the same restriction on the degree follows. The real improvement here is that one can allow \(\delta\) to be (slightly) positive, at the expense of the stronger assumption \(\mathcal{D}\omega=0\) instead of just \(\Delta\omega=0\). Proof.: Let us first prove that \(\omega\in\ker_{0}(\mathcal{D})\), which follows from the iterative argument already used in the proof of Proposition 2.3. First, since \(\mathcal{D}^{2}=\Delta\), we have \(\omega\in\ker_{\delta}(\Delta)\).
Lemma B.8 implies that \[\omega=\mathcal{O}_{\infty}(r^{k_{-}(\delta)}).\] Since \(\delta<1\), one has \(k_{-}(\delta)\leq 0\), therefore \[\omega=\mathcal{O}_{\infty}(1).\] Since \(d\) and \(d^{*}\) commute with \(\Delta\), and \(\Delta\omega=0\), we conclude that \[d\omega,\,d^{*}\omega\in\ker_{-1}(\Delta).\] Write \[d^{*}\omega=\sum_{j=0}^{n-1}\theta_{j},\quad\theta_{j}\in\Lambda^{j}M,\] and \[d\omega=\sum_{i=1}^{n}\eta_{i},\quad\eta_{i}\in\Lambda^{i}M.\] Obviously, since \(\Delta\) preserves the degree of differential forms, we have for any \(i\) and \(j\), \[\eta_{i},\,\theta_{j}\in\ker_{-1}(\Delta).\] By the maximum principle for the scalar Laplacian, a decaying harmonic form of degree \(0\) or \(n\) is necessarily identically zero. Therefore, \(\eta_{n}=0\) and \(\theta_{0}=0\). Thus, since \(\ker_{-1}(\Delta)\subset\ker_{L^{p}}(\Delta)\) for any \(p>n\), Proposition 2.3 point (b) implies that \(d\omega,\,d^{*}\omega\in\ker_{1-n}(\Delta)\). We are going to prove, using some results from Section 4, that actually, for any \(i\neq 1\) and \(j\neq n-1\), \(\eta_{i},\,\theta_{j}\in\ker_{-n}(\Delta)\). First, notice that Proposition 2.3 point (c) implies that any \(\theta_{j},\,\eta_{i}\) with \(i,j\in\{2,\cdots,n-2\}\) belongs to \(\ker_{-n}(\Delta)\). It thus remains to prove that \(\theta_{1}\) and \(\eta_{n-1}\) belong to \(\ker_{-n}(\Delta)\). As will be apparent, the proofs of these two facts are completely similar to one another, so we only give the one for \(\theta_{1}\). Decompose \[\omega=\sum_{k=0}^{n}\omega_{k},\] then it follows from degree considerations that \[\theta_{n-1}=d^{*}\omega_{n}.\] (B.15) According to Corollary 3.8, there exists \(h\in\ker_{0}(\Delta_{0})\) such that \[\theta_{1}=dh+\mathcal{O}_{\infty}(r^{-n}).\] Let \[\bar{\omega}:=\omega-\omega_{n},\] and define \(\bar{\theta}_{j}\), \(\bar{\eta}_{i}\) analogously to \(\theta_{j}\), \(\eta_{i}\) by decomposing \(d^{*}\bar{\omega}\) and \(d\bar{\omega}\) according to the degrees. Then, by (B.15), \[\bar{\theta}_{j}=\theta_{j},\quad j\neq n-1,\] and \[\bar{\theta}_{n-1}=0.\] Therefore, \[d^{*}\bar{\omega}=dh+\mathcal{O}_{\infty}(r^{-n}).\] Taking the Hodge star of this identity, we get \[*d^{*}\bar{\omega}=*dh+\mathcal{O}_{\infty}(r^{-n}).\] (B.16) But using the well-known facts that \(d^{*}=\pm*d*\) and \(*^{2}=\pm id\) in restriction to forms of a given degree (where the signs depend on the particular degree), we see that \(*d^{*}\bar{\omega}\) is a sum of exact forms (of various degrees). Integrate the degree \(n-1\) component of (B.16) over each Euclidean sphere \(S_{i}(R)\) of radius \(R\) and center \(0\) in the quotient space \(\mathbb{R}^{n}/\Gamma_{i}\), at each end \(E_{i}\) of \(M\). By Stokes' formula and the fact that the left-hand side of (B.16) is exact, we obtain for each sphere that \[\int_{S_{i}(R)}*dh=O(\frac{1}{R}),\] as \(R\to\infty\). Furthermore, Lemma 2.5 gives the following expansion of \(h\) in each end \(E_{i}\): \[h=c_{i}+A_{i}r^{2-n}+\mathcal{O}_{\infty}(r^{2-n-\epsilon}),\quad c_{i}\in\mathbb{R},\,\epsilon>0,\] so \[dh=(2-n)A_{i}r^{1-n}dr+\mathcal{O}_{\infty}(r^{1-n-\epsilon}).\] From this expansion, it is easy to see that \[\int_{S_{i}(R)}*dh=(2-n)A_{i}\mathrm{Vol}(S^{n-1}/\Gamma_{i})+o(1),\] as \(R\to\infty\). Therefore, we conclude that for any end \(E_{i}\), one has \(A_{i}=0\). By (ii) in Lemma 3.5, this implies that \(h\) is constant, and so \(dh=0\).
Therefore, since \(\theta_{1}=dh+\mathcal{O}_{\infty}(r^{-n})\), we conclude that \[\theta_{1}\in\ker_{-n}(\Delta).\] As indicated above, the proof that \(\eta_{n-1}\in\ker_{-n}(\Delta)\) is completely similar and will be skipped. Define now \[\tilde{\omega}:=\omega-\omega_{0}-\omega_{n},\] and \(\tilde{\theta}_{j}\), \(\tilde{\eta}_{i}\) the associated forms in the decomposition of \(d^{*}\tilde{\omega}\), \(d\tilde{\omega}\) according to the degrees. We have \[d^{*}\tilde{\omega}=d^{*}\omega-d^{*}\omega_{n},\] and \[d\tilde{\omega}=d\omega-d\omega_{0},\] which implies that \[\tilde{\theta}_{j}=\theta_{j},\quad j\neq n-1,\] \[\tilde{\theta}_{n-1}=0,\] \[\tilde{\eta}_{i}=\eta_{i},\quad i\neq 1,\] \[\tilde{\eta}_{1}=0.\] It follows that \(d\tilde{\omega}\), \(d^{*}\tilde{\omega}\in\ker_{-n}(\Delta)\). Moreover, one also has \(\tilde{\omega}\in\ker_{0}(\Delta)\). Let us see that this implies \(d\tilde{\omega}=0\), \(d^{*}\tilde{\omega}=0\). The argument for this is similar to the one already used in the proof of Proposition 2.3: consider, for \(R>0\) large enough, relatively compact open sets \(B_{R}\) in \(M\), whose boundary identifies, using the coordinate systems at infinity, with the spheres \(\partial B_{\mathbb{R}^{n}}(0,R)\) in \(\mathbb{R}^{n}\) quotiented by the action of \(\Gamma_{i}\). Integration by parts implies that \[0 = (\Delta\tilde{\omega},\tilde{\omega})_{L^{2}}\] \[= \lim_{R\to\infty}\left\{\int_{B_{R}}[|d\tilde{\omega}|^{2}+|d^{*}\tilde{\omega}|^{2}]\,\mathrm{d}v+\int_{\partial B_{R}}[(d^{*}\tilde{\omega},\iota_{\nu}\tilde{\omega})+(\tilde{\omega},\iota_{\nu}d\tilde{\omega})]\,\mathrm{d}S\right\}.\] Given the asymptotics of \(\tilde{\omega},d^{*}\tilde{\omega},d\tilde{\omega}\), the boundary integral is \(O(\frac{1}{R})\), hence tends to zero, as \(R\to\infty\). Therefore, we conclude that \[0=\lim_{R\to\infty}\int_{B_{R}}[|d\tilde{\omega}|^{2}+|d^{*}\tilde{\omega}|^{2}]\,\mathrm{d}v,\] so \(d\tilde{\omega}=0\) and \(d^{*}\tilde{\omega}=0\). Let us now come back to \(\omega\). We obtain \[d\omega=d\omega_{0},\quad d^{*}\omega=d^{*}\omega_{n}.\] However, by assumption one also has \(0=\mathcal{D}\omega=(d+d^{*})\omega=d\omega_{0}+d^{*}\omega_{n}\). But \(d\omega_{0}\) and \(d^{*}\omega_{n}\) are of degree respectively \(1\) and \(n-1\), and since \(n\geq 3\) one has \(1\neq n-1\). Therefore, one concludes that \(d\omega_{0}=0\) and \(d^{*}\omega_{n}=0\). Hence, \(d\omega=0\) and \(d^{*}\omega=0\), and this concludes the proof. We are now ready for the proof of Proposition 5.8. Proof of Proposition 5.8.: Let \(\omega\in\operatorname{im}_{L^{p}}(d_{k-1})+\operatorname{im}_{L^{p}}(d_{k+1}^{*})\), and consider sequences \(\alpha_{i}\in C_{c}^{\infty}(\Lambda^{k-1}M)\), \(\beta_{i}\in C_{c}^{\infty}(\Lambda^{k+1}M)\), \(i\in\mathbb{N}\), such that \(d\alpha_{i}+d^{*}\beta_{i}\to\omega\) in \(L^{p}\). Let \(\eta\in\ker_{L^{q}}(\Delta_{k})\).
By integration by parts and Proposition 2.3, \[(d\alpha_{i},\eta)_{L^{2}}=(\alpha_{i},d^{*}\eta)_{L^{2}}=0,\qquad(d^{*}\beta_{i},\eta)=(\beta_{i},d\eta)=0.\] Therefore, by taking \(i\to\infty\), we have \((\omega,\eta)_{L^{2}}=0\), so \[\operatorname{im}_{L^{p}}(d_{k-1})+\operatorname{im}_{L^{p}}(d_{k+1}^{*})\subset\operatorname{Ann}_{L^{p}}(\ker_{L^{q}}(\Delta_{k})).\] To prove the converse inclusion, we consider the Hodge-Dirac operator \(\mathcal{D}=d+d^{*}:C^{\infty}(\Lambda^{*}M)\to C^{\infty}(\Lambda^{*}M)\) on the whole exterior algebra, and denote \[\operatorname{im}_{L^{p}}(\mathcal{D})=\overline{\mathcal{D}(C_{0}^{\infty}(\Lambda^{*}M))}^{L^{p}}.\] Because the operator \(\mathcal{D}\) is self-adjoint, \[\ker_{L^{q}}(\mathcal{D})=\operatorname{Ann}_{L^{q}}(\operatorname{im}_{L^{p}}(\mathcal{D})),\] so by reflexivity of \(L^{p}\), \[\operatorname{im}_{L^{p}}(\mathcal{D})=\operatorname{Ann}_{L^{p}}(\operatorname{Ann}_{L^{q}}(\operatorname{im}_{L^{p}}(\mathcal{D})))=\operatorname{Ann}_{L^{p}}(\ker_{L^{q}}(\mathcal{D})).\] But according to Proposition 2.3, \[\ker_{L^{q}}(\mathcal{D})=\ker_{L^{q}}(\Delta),\] therefore \[\operatorname{im}_{L^{p}}(\mathcal{D})=\operatorname{Ann}_{L^{p}}(\ker_{L^{q}}(\Delta)).\] We now regard \(\mathcal{D}\) as an operator between weighted Sobolev spaces \[\mathcal{D}:W^{1,p}_{\delta}(\Lambda^{*}M)\to L^{p}_{\delta-1}(\Lambda^{*}M)\] and by Corollary B.5, this operator is Fredholm if \(\delta\) is non-exceptional, that is, if \(\delta\notin\{0,1,2,\ldots\}\cup\{1-n,-n,-n-1,\ldots\}\). Choose now \(\delta-1=-\frac{n}{p}\), so that \[L^{p}_{\delta-1}(\Lambda^{*}M)=L^{p}(\Lambda^{*}M).\] Observe also that \(\delta=1-\frac{n}{p}\) is non-exceptional as long as \(p\in(1,\infty)\), \(p\neq n\). Therefore, \[\mathcal{D}(W^{1,p}_{1-\frac{n}{p}}(\Lambda^{*}M))\subset L^{p}(\Lambda^{*}M)\] is a closed subspace. But by definition, \[\operatorname{im}_{L^{p}}(\mathcal{D})\subset\overline{\mathcal{D}(W^{1,p}_{1-\frac{n}{p}}(\Lambda^{*}M))}^{L^{p}},\] so the closedness of the image implies that we have the inclusion \[\operatorname{im}_{L^{p}}(\mathcal{D})\subset\mathcal{D}(W^{1,p}_{1-\frac{n}{p}}(\Lambda^{*}M)).\] Furthermore, because \[\ker_{W^{1,p}_{1-\frac{n}{p}}}(\mathcal{D})\] is finite-dimensional, one can find a closed subspace \(V\subset W^{1,p}_{1-\frac{n}{p}}(\Lambda^{*}M)\) such that \[W^{1,p}_{1-\frac{n}{p}}(\Lambda^{*}M)=V\oplus\ker_{W^{1,p}_{1-\frac{n}{p}}}(\mathcal{D}),\] and furthermore, \(\mathcal{D}\) restricts to an isomorphism \[\mathcal{D}|_{V}:V\to\mathcal{D}(W^{1,p}_{1-\frac{n}{p}}(\Lambda^{*}M))\subset L^{p}(\Lambda^{*}M).\] (B.17) Now let \[\omega\in\operatorname{Ann}_{L^{p}}(\ker_{L^{q}}(\Delta_{k}))\subset L^{p}(\Lambda^{k}M).\] If we consider \(\omega\) as sitting in the whole exterior algebra, we have \[\omega\in\operatorname{Ann}_{L^{p}}(\ker_{L^{q}}(\mathcal{D}))=\operatorname{im}_{L^{p}}(\mathcal{D}).\] Let \(\{\omega_{i}\}_{i\in\mathbb{N}}\subset C^{\infty}_{0}(\Lambda^{*}M)\) be a sequence such that \(\mathcal{D}\omega_{i}\to\omega\) in \(L^{p}\). Because \[\mathcal{D}\omega_{i}\in\operatorname{im}_{L^{p}}(\mathcal{D})\subset\mathcal{D}(W^{1,p}_{1-\frac{n}{p}}(\Lambda^{*}M)),\] we have by (B.17) unique forms \(\tilde{\omega}_{i}\in V\) such that \(\mathcal{D}\tilde{\omega}_{i}=\mathcal{D}\omega_{i}\). Because \(\tilde{\omega}_{i}-\omega_{i}\) belongs to \(\ker_{\delta}(\mathcal{D})\) and \(\delta=1-\frac{n}{p}<1\), Lemma B.9 implies that it is in fact bounded, and is closed and co-closed.
So, we get \[d\omega_{i}=d\tilde{\omega}_{i},\qquad d^{*}\omega_{i}=d^{*}\tilde{\omega}_{i}.\] Now, because \(\mathcal{D}\omega_{i}=\mathcal{D}\tilde{\omega}_{i}\to\omega\) in \(L^{p}\), \(\mathcal{D}\tilde{\omega}_{i}\) is a Cauchy sequence, hence converges, in \(L^{p}=L^{p}_{-\frac{n}{p}}\). By (B.17), \(\tilde{\omega}_{i}\) is thus a Cauchy sequence in \(W^{1,p}_{1-\frac{n}{p}}\). Denote by \(\omega_{\infty}\) its limit. We claim that the following estimate holds: for every \(\alpha\in C^{\infty}(\Lambda^{*}M)\), \[||\nabla\alpha||_{p}\lesssim||\alpha||_{W^{1,p}_{1-\frac{n}{p}}}.\] (B.18) Indeed, the estimate is trivial locally, and thus it is enough to prove it in charts at infinity. We can assume that \(\alpha\) is a \(k\)-form for some \(k\in\{1,\cdots,n\}\). Note then that the fact that the metric is ALE to order \(\tau\) implies that the Christoffel symbols in a chart at infinity satisfy \[\Gamma^{k}_{ij}=\mathcal{O}_{\infty}(\sigma^{-1-\tau}),\quad i,j,k\in\{1,\cdots,n\}\] (recall that \(\sigma=(1+r^{2})^{1/2}\)). Thus, writing in components \[(\nabla_{\partial_{j}}\alpha)_{i_{1}\cdots i_{k}}=\partial_{j}\alpha_{i_{1}\cdots i_{k}}-\sum_{s=1}^{k}\sum_{\ell=1}^{n}\Gamma^{\ell}_{j\,i_{s}}\,\alpha_{i_{1}\cdots i_{s-1}\,\ell\,i_{s+1}\cdots i_{k}},\] we conclude that \[||\nabla\alpha||_{p}\lesssim\sum_{|\gamma|=1}||\partial^{\gamma}\alpha||_{p}+||\sigma^{-\tau-1}\alpha||_{p}.\] It follows by definition of the weighted Sobolev norm that \[\sum_{|\gamma|=1}||\partial^{\gamma}\alpha||_{p}+||\sigma^{-\tau-1}\alpha||_{p}\leq\sum_{|\gamma|=1}||\partial^{\gamma}\alpha||_{p}+||\sigma^{-1}\alpha||_{p}\equiv||\alpha||_{W^{1,p}_{1-\frac{n}{p}}},\] and (B.18) follows. The estimate (B.18) implies \(\nabla\tilde{\omega}_{i}\to\nabla\omega_{\infty}\) in \(L^{p}\), hence the following sequences converge strongly in \(L^{p}\): \[d\omega_{i}=d\tilde{\omega}_{i}\to d\omega_{\infty}=:\eta,\qquad d^{*}\omega_{i}=d^{*}\tilde{\omega}_{i}\to d^{*}\omega_{\infty}=:\xi,\] in particular \(\eta,\xi\in L^{p}(\Lambda^{*}M)\). Since \(\mathcal{D}\omega_{i}\to\omega\) and \(\mathcal{D}\omega_{i}\to\eta+\xi\) in \(L^{p}\), we get by uniqueness of the limit that \(\eta+\xi=\omega\) a.e. Decompose \(\omega_{i}=\sum_{k=0}^{n}\omega_{i}^{k}\) where \(\omega_{i}^{k}\in L^{p}(\Lambda^{k}M)\). Then we also have convergence for the \(L^{p}\)-sequences \[d_{k}\omega_{i}^{k}\to\eta^{k+1}\in L^{p}(\Lambda^{k+1}M),\qquad d_{k}^{*}\omega_{i}^{k}\to\xi^{k-1}\in L^{p}(\Lambda^{k-1}M),\] and in particular, \[d_{k-1}\omega_{i}^{k-1}+d_{k+1}^{*}\omega_{i}^{k+1}\to\eta^{k}+\xi^{k}=\omega\in L^{p}(\Lambda^{k}M),\] so that \[\omega\in\operatorname{im}_{L^{p}}(d_{k-1})+\operatorname{im}_{L^{p}}(d_{k+1}^{*}).\] This finishes the proof of the proposition. Actually, Proposition 5.8 and its proof imply the following corollary, which is used in Section 7: **Corollary B.10**.: _Assume that \(p\neq n\). Then every \(\omega\in\operatorname{im}_{L^{p}}(d)+\operatorname{im}_{L^{p}}(d^{*})\) can be written uniquely as_ \[\omega=d\alpha+d^{*}\beta,\] _with \(\alpha\in\dot{W}^{1,p}(\Lambda^{k-1}M)\), \(\beta\in\dot{W}^{1,p}(\Lambda^{k+1}M)\), and moreover_ \[||\alpha||_{\dot{W}^{1,p}}\lesssim||\omega||_{p},\quad||\beta||_{\dot{W}^{1,p}}\lesssim||\omega||_{p}.\]
2305.11319
Risk Budgeting Allocation for Dynamic Risk Measures
We define and develop an approach for risk budgeting allocation - a risk diversification portfolio strategy - where risk is measured using a dynamic time-consistent risk measure. For this, we introduce a notion of dynamic risk contributions that generalise the classical Euler contributions and which allow us to obtain dynamic risk contributions in a recursive manner. We prove that, for the class of coherent dynamic distortion risk measures, the risk allocation problem may be recast as a sequence of strictly convex optimisation problems. Moreover, we show that self-financing dynamic risk budgeting strategies with initial wealth of 1 are scaled versions of the solution of the sequence of convex optimisation problems. Furthermore, we develop an actor-critic approach, leveraging the elicitability of dynamic risk measures, to solve for risk budgeting strategies using deep learning.
Sebastian Jaimungal, Silvana M. Pesenti, Yuri F. Saporito, Rodrigo S. Targino
2023-05-18T22:00:32Z
http://arxiv.org/abs/2305.11319v3
# Risk Budgeting Allocation for Dynamic Risk Measures ###### Abstract We define and develop an approach for risk budgeting allocation - a risk diversification portfolio strategy - where risk is measured using a dynamic time-consistent risk measure. For this, we introduce a notion of dynamic risk contributions that generalise the classical Euler contributions and which allow us to obtain dynamic risk contributions in a recursive manner. We prove that, for the class of dynamic coherent distortion risk measures, the risk allocation problem may be recast as a sequence of strictly convex optimisation problems. Moreover, we show that any self-financing dynamic risk budgeting strategy with initial wealth of 1 is a scaled version of the unique solution of the sequence of convex optimisation problems. Furthermore, we develop an actor-critic approach, leveraging the elicitability of dynamic risk measures, to solve for risk budgeting strategies using deep learning. Dynamic Risk Measures, Portfolio Allocation, Risk Parity, Elicitability, Deep Learning ## 1 Introduction The "risk parity" portfolio was pioneered by Bridgewater Associates, which in 1996 launched the _All Weather_ asset allocation strategy - a portfolio strategy withstanding all weathers - although the term risk parity was only coined in 2005 in the white paper by Qian (2005). Risk parity originated from the desire for a diversified portfolio and the realisation that an equally weighted portfolio is diversified in asset allocation but not in the extent to which each asset contributes to the overall portfolio risk (Qian, 2011). Emphasised by the 2008 financial crisis, the call for "maximally" diversifying a portfolio's risk was born, see e.g. Choueifaty and Coignard (2008). Risk parity enjoys widespread popularity in industry as numerous portfolio (performance) comparison studies illustrate, see, e.g., Chaves et al. (2011), Lee (2011), and Asness et al. (2012). An early mathematical formalisation of risk parity strategies can be found in Maillard et al. (2010) and Roncalli (2013). Risk parity strategies and, more broadly, risk budgeting strategies are portfolio allocations where the contribution of each asset to the overall portfolio risk is prespecified; e.g., for risk parity each asset contributes equally to the portfolio risk. Thus, central to risk budgeting is the way the risk of a portfolio is quantified. While most of the extant literature measures risk using the portfolio variance and further restricts to assets that follow multivariate Gaussian distributions, recent works relax these assumptions. Bruder et al. (2016) and Jurczenko and Teiletche (2019) study the Expected Shortfall (ES; also called Conditional Value-at-Risk) risk measure under the assumption that assets are multivariate Gaussian distributed, resulting in explicit formulae for risk contributions. Further works on risk budgeting include Ji and Lejeune (2018) who utilise the downside risk measure, Bellini et al. (2021) who consider expectile risk measures, Anis and Kwon (2022) who incorporate asset selection, and Freitas Paulo da Costa et al. (2022) who propose algorithms based on the cutting planes methodology to calculate risk budgeting strategies for coherent risk measures. Haugh et al. (2017) combine risk budgeting of (overlapping) groups of assets with simultaneously maximising return and minimising risk. Variations of risk budgeting portfolio strategies are considered in Bai et al.
(2016) who propose alternative optimisation problems to solve for risk parity portfolios. Meucci et al. (2015) and Roncalli and Weisang (2016) construct risk factor budgeting portfolios, that is, portfolios where each (uncorrelated) factor, rather than asset, contributes equally to the portfolio variance. Lassance et al. (2022) continue this line of work by including independent component analysis. None of these works, however, address the dynamic nature of investments, i.e., that portfolio strategies are typically holistically considered over a time horizon larger than one period; we henceforth refer to the one-period setting as the "static" setting. In this paper, we develop a dynamic setting in which an investor trades over a finite time horizon using a self-financing risk budgeting strategy. As investment decisions impact the portfolio value over time, we employ a dynamic time-consistent risk measure. While there is a growing literature on dynamic time-consistent risk measures (e.g., see Cheridito et al. (2006), Ruszczynski (2010), Bielecki et al. (2022) and Coache et al. (2022)), the literature on dynamically allocating a portfolio's risk is sparse. In particular, we are interested in how much an asset \(i\) at time \(t\) contributes to the future risk of the strategy. An early work for allocations of dynamic coherent risk measures is Cherny (2009), and for BSDE-based dynamic time-consistent risk measures we refer to Kromer and Overbeck (2014, 2017), and Mastrogiacomo and Rosazza-Gianin (2022). Related but conceptually different is the work of Schilling et al. (2020) who axiomatically study how to decompose a risk dynamically. While they work in a dynamic setting, their risk is the portfolio loss itself and not a dynamic risk measure applied to it. In this work, we consider the class of dynamic time-consistent risk measures that arise from conditional one-step distortion risk measures. A case in point is the ES, whose security level may depend on the investor's wealth or asset price. For this class, we define their dynamic risk contributions via Gateaux derivatives and derive explicit formulae. While most of our results hold for conditional distortion risk measures, we focus on the subset of conditional coherent distortion risk measures, as defining risk allocations for non-coherent risk measures provides an "incentive for infinite fragmentation of portfolios" (Tsanakas 2009). In the static setting, Gateaux derivatives enjoy a long history as risk contributions and also in connection to cooperative game theory. We provide a detailed literature review in Section 3. With this definition of dynamic risk contributions at hand, we define a dynamic risk budgeting portfolio as a strategy whose risk contributions at each point in time are a predefined percentage of the future risk of the strategy. We prove, under mild conditions, that any self-financing dynamic risk budgeting strategy with initial wealth of 1 is a scaled version of the unique solution of a sequence of strictly convex optimisation problems. Finally, we develop an actor-critic approach to solve the sequence of optimisation problems using deep learning techniques and provide examples. This manuscript is organised as follows. Section 2 introduces dynamic time-consistent risk measures and, in Section 2.2, we apply a time-consistent risk measure to a self-financing strategy and derive a recursive representation.
In Section 3 we define dynamic risk contributions via the Gateaux derivative and derive explicit formulae for the class of dynamic distortion risk measures. Section 4 is devoted to dynamic risk budgeting portfolio strategies and we show in Theorem 4 that any self-financing dynamic risk budgeting strategy with initial wealth of 1 is a scaled version of the unique solution to a collection of strictly convex optimisation problems. Section 5 discusses how to solve this family of optimisation problems using neural networks, leveraging the elicitability of conditional risk measures (Subsection 5.1). Illustrations of risk budgeting strategies are provided in Section 6. ## 2 Dynamic Risk Assessment We work on a filtered and completed probability space \((\Omega,\mathcal{F},(\mathcal{F}_{t})_{t\in\overline{\mathcal{T}}},\mathbb{P})\), where \(\overline{\mathcal{T}}:=\{0,1,\ldots,T+1\}\), \(\mathcal{T}:=\{0,1,\ldots,T\}\), and \(T\in\mathds{N}\) is a known and finite time horizon. The information available to the investor is encapsulated in the filtration \((\mathcal{F}_{t})_{t\in\overline{\mathcal{T}}}\). We further denote the spaces of square-integrable random variables (rvs) and sequences by \(\mathcal{Z}:=\{Z\in\mathcal{F}:\mathbb{E}[Z^{2}]<\infty\}\), \(\mathcal{Z}_{t}:=\{Z_{t}\in\mathcal{Z}:Z_{t}\in\mathcal{F}_{t}\}\), and \(\mathcal{Z}_{t:T+1}:=\{(Z_{t},Z_{t+1},\ldots,Z_{T+1})\in\mathcal{Z}_{t}\times\mathcal{Z}_{t+1}\times\cdots\times\mathcal{Z}_{T+1}\}\), for all \(t\in\overline{\mathcal{T}}\). Similarly, we define the spaces of \(n\)-dimensional random vectors and sequences by \(\boldsymbol{\mathcal{Z}}:=\{\boldsymbol{Z}=(Z_{1},\ldots,Z_{n}):Z_{i}\in\mathcal{Z}\,,\,\forall i=1,\ldots,n\}\), \(\mathbf{\mathcal{Z}}_{t}:=\{\mathbf{Z}_{t}\in\mathbf{\mathcal{Z}}:\mathbf{Z}_{t}\in\mathcal{F}_{t}\}\), and \(\mathbf{\mathcal{Z}}_{t:T+1}:=\{(\mathbf{Z}_{t},\mathbf{Z}_{t+1},\ldots,\mathbf{Z}_{T+1})\in\mathbf{\mathcal{Z}}_{t}\times\mathbf{\mathcal{Z}}_{t+1}\times\cdots\times\mathbf{\mathcal{Z}}_{T+1}\}\), for all \(t\in\overline{\mathcal{T}}\). Unless otherwise stated, all (in)equalities of random vectors are to be understood component-wise and in a \(\mathbb{P}\)-a.s. sense. ### Dynamic Risk Measures The agent assesses the risk associated with a trading strategy by a dynamic risk measure. While dynamic risk measures map sequences of rvs to a rv, we first introduce the notion of (conditional) one-step risk measures that map \(\mathcal{F}_{t+1}\)-measurable rvs to \(\mathcal{F}_{t}\)-measurable rvs, for each \(t\in\mathcal{T}\). Under certain assumptions, stated later, dynamic risk measures and one-step risk measures are related to one another, although, a priori, the relationship is not obvious. We adopt the setting of Cheridito et al. (2006) and Ruszczynski (2010) for dynamic risk measures and refer the interested reader to those works and references therein. **Definition 1** (One-step Risk Measures): A one-step (conditional) risk measure on \(\mathcal{T}\) is a family of maps \(\{\rho_{t}\}_{t\in\mathcal{T}}\), where for each \(t\in\mathcal{T}\), \(\rho_{t}\colon\mathcal{Z}_{t+1}\to\mathcal{Z}_{t}\). A one-step risk measure may possess the following properties, if for all \(t\in\mathcal{T}\): 1. **Normalisation:**\(\rho_{t}(0)=0\). 2. **Monotonicity:**\(\rho_{t}(Z)\leq\rho_{t}(Y)\), for all \(Z,Y\in\mathcal{Z}_{t+1}\) with \(Z\leq Y\). 3. **Translation invariance:**\(\rho_{t}(Y+Z)=Y+\rho_{t}(Z)\), for all \(Y\in\mathcal{F}_{t}\) and \(Z\in\mathcal{F}_{t+1}\). 4.
**Convexity:**\(\rho_{t}(\lambda\,Z+(1-\lambda)\,Y)\leq\lambda\,\rho_{t}(Z)+(1-\lambda)\,\rho_{t}(Y)\), for all \(\lambda\in\mathcal{F}_{t}\) with \(0\leq\lambda\leq 1\) and \(Y,Z\in\mathcal{Z}_{t+1}\). 5. **Positive homogeneity:**\(\rho_{t}(\lambda\,Z)=\lambda\,\rho_{t}(Z)\), for all \(\lambda\in\mathcal{F}_{t}\) with \(\lambda\geq 0\) and \(Z\in\mathcal{Z}_{t+1}\). 6. **Coherency:**\(\rho_{t}\) is monotone, translation invariant, convex, and positive homogeneous. Now that one-step risk measures have been established, we next define a dynamic risk measure. **Definition 2** (Dynamic Risk Measures): A dynamic risk measure on \(\overline{\mathcal{T}}\) is a family \(\{\rho_{t,T+1}\}_{t\in\overline{\mathcal{T}}}\), where for each \(t\in\overline{\mathcal{T}}\), \(\rho_{t,T+1}\colon\mathcal{Z}_{t:T+1}\to\mathcal{Z}_{t}\). A dynamic risk measure may possess the following properties, if for all \(t\in\overline{\mathcal{T}}\): 1. **Normalisation:**\(\rho_{t,T+1}(0,\ldots,0)=0\). 2. **Monotonicity:**\(\rho_{t,T+1}(Z_{t:T+1})\leq\rho_{t,T+1}(Y_{t:T+1})\), for all \(Z_{t:T+1},Y_{t:T+1}\in\mathcal{Z}_{t:T+1}\) with \(Z_{t:T+1}\leq Y_{t:T+1}\). 3. **Translation invariance:**\(\rho_{t,T+1}(Z_{t:T+1})=Z_{t}+\rho_{t,T+1}(0,Z_{t+1},\ldots,Z_{T+1})\), for all \(Z_{t:T+1}\in\mathcal{Z}_{t:T+1}\). 4. **Convexity:**\(\rho_{t,T+1}(\lambda\,Z_{t:T+1}+(1-\lambda)\,Y_{t:T+1})\leq\lambda\,\rho_{t,T+1}(Z_{t:T+1})+(1-\lambda)\,\rho_{t,T+1}(Y_{t:T+1})\), for all \(\lambda\in\mathcal{F}_{t}\) with \(0\leq\lambda\leq 1\) and \(Y_{t:T+1},Z_{t:T+1}\in\mathcal{Z}_{t:T+1}\). 5. **Positive homogeneity:**\(\rho_{t,T+1}(\lambda\,Z_{t:T+1})=\lambda\,\rho_{t,T+1}(Z_{t:T+1})\), for all \(\lambda\in\mathcal{F}_{t}\) with \(\lambda\geq 0\) and \(Z_{t:T+1}\in\mathcal{Z}_{t:T+1}\). 6. **Coherency:**\(\rho_{t,T+1}\) is monotone, translation invariant, convex, and positive homogeneous. The mapping \(\rho_{t,T+1}\) thus assesses the risk of the sequence \(Z_{t:T+1}\in\mathcal{Z}_{t:T+1}\) viewed from time \(t\), by mapping it to an \(\mathcal{F}_{t}\)-measurable rv. The investor may view it as the \(\mathcal{F}_{t}\)-measurable quantity they are willing to exchange in place of the sequence of future risks. Next, we define what it means for a dynamic risk measure to be (strongly) time-consistent. Time-consistency is a property that leads to a dynamic programming principle for optimising dynamic risk measures and results in optimal decisions that are consistent when optimised at different points in time. **Definition 3** (Strong time-consistency - Cheridito et al. (2006)): A dynamic risk measure \(\{\rho_{t,T+1}\}_{t\in\overline{\mathcal{T}}}\) is (strongly) time-consistent if for all \(Z_{t:T+1},Y_{t:T+1}\in\mathcal{Z}_{t:T+1}\) that satisfy for some \(s\in\{t,\ldots,T+1\}\) \[Z_{t:s}=Y_{t:s}\quad\text{and}\quad\rho_{s,T+1}(Z_{s:T+1})\leq\rho_{s,T+1}(Y_{s:T+1})\] it holds that \[\rho_{t,T+1}(Z_{t:T+1})\leq\rho_{t,T+1}(Y_{t:T+1})\,,\] and where \(Z_{t:s}:=(Z_{t},\ldots,Z_{s},0,\ldots,0)\) is understood as the projection of \(Z_{t:T+1}\) onto \(\mathcal{Z}_{t}\times\cdots\times\mathcal{Z}_{s}\). We henceforth refer to strong time-consistency as time-consistency. While not apparent at first, the theorem below shows that (strong) time-consistency creates a connection between dynamic risk measures and one-step risk measures. In particular, the theorem shows that a dynamic risk measure induces a one-step risk measure and conversely, a one-step risk measure defines a dynamic risk measure.
The following theorem is due to Cheridito et al. (2006) and Ruszczynski (2010). **Theorem 1** (Recursive Relation).: _Let \(\{\rho_{t,T+1}\}_{t\in\overline{\mathcal{T}}}\) be a dynamic risk measure which is monotone, normalised, and translation invariant. Then \(\{\rho_{t,T+1}\}_{t\in\overline{\mathcal{T}}}\) is time-consistent if and only if there exists a one-step risk measure \(\{\rho_{t}\}_{t\in\mathcal{T}}\) that is monotone, normalised, and translation invariant, such that the following recursive representation holds:_ \[\rho_{t,T+1}(Z_{t},\ldots,Z_{T+1})=Z_{t}+\rho_{t}\Bigg{(}Z_{t+1}+\rho_{t+1}\bigg{(}Z_{t+2}+\cdots+\rho_{T-1}\Big{(}Z_{T}+\rho_{T}\big{(}Z_{T+1}\big{)}\Big{)}\cdots\bigg{)}\Bigg{)}\,. \tag{1}\] By Theorem 1, any family of mappings \(\rho_{t}\colon\mathcal{F}_{t+1}\to\mathcal{F}_{t}\) that are monotone, normalised, and translation invariant, for all \(t\in\mathcal{T}\), gives rise to a dynamic time-consistent risk measure and vice-versa. Thus, without loss of generality, we make a slight abuse of terminology and call \(\{\rho_{t}\}_{t\in\mathcal{T}}\) a dynamic time-consistent risk measure with representation (1). For defining risk budgeting strategies we further require that the dynamic risk measure is homogeneous. Thus, we consider the following assumption throughout. **Assumption 1**.: _We assume that \(\{\rho_{t}\}_{t\in\mathcal{T}}\) is monotone, normalised, translation invariant, and homogeneous._ Next, we provide a class of one-step risk measures that are central to our exposition. Specifically, we consider the class of one-step distortion risk measures, which are a generalisation of the class of distortion risk measures to our dynamic setting. For this we first define for each \(t\in\mathcal{T}\) the (cumulative) distribution function of \(Z\in\mathcal{Z}_{t+1}\) conditional on \(\mathcal{F}_{t}\) as \(F_{Z|\mathcal{F}_{t}}(z):=\mathbb{P}(Z\leq z\ |\ \mathcal{F}_{t})\), and moreover, define \(U_{Z|\mathcal{F}_{t}}:=F_{Z|\mathcal{F}_{t}}(Z)\) which is an \(\mathcal{F}_{t+1}\)-measurable rv. Note that when we condition on \(\mathcal{F}_{t}\), \(U_{Z|\mathcal{F}_{t}}\) is uniform. **Definition 4** (One-step Distortion Risk Measures).: _For each \(t\in\mathcal{T}\), let \(\gamma_{t}:[0,1]\times\Omega\longrightarrow\mathds{R}_{+}\) be a (state dependent) distortion weight function. This means that \(\int_{0}^{1}\gamma_{t}(u,\omega)\,du=1\), for all \(\omega\in\Omega\), and that the rv \(\gamma_{t}(u,\cdot):\Omega\longrightarrow\mathds{R}_{+}\) is \(\mathcal{F}_{t}\)-measurable for every \(u\in[0,1]\) and for all \(t\in\mathcal{T}\). Then, the one-step (conditional) distortion risk measure with weight functions \(\{\gamma_{t}\}_{t\in\mathcal{T}}\) is the family \(\{\rho_{t}\}_{t\in\mathcal{T}}\), where for each \(t\in\mathcal{T}\) and \(Z\in\mathcal{Z}_{t+1}\), \(\rho_{t}\) is defined as_ \[\rho_{t}(Z):=\mathbb{E}\Big{[}\,Z\,\gamma_{t}\left(F_{Z|\mathcal{F}_{t}}(Z)\right)\,\Big{|}\,\mathcal{F}_{t}\,\Big{]}=\mathbb{E}\Big{[}\,Z\,\gamma_{t}\left(U_{Z|\mathcal{F}_{t}}\right)\,\Big{|}\,\mathcal{F}_{t}\Big{]}\,, \tag{2}\] _where, as usual, we suppress the dependence of \(\gamma_{t}\) on its second argument._ This class of risk measures includes, e.g., Expected Shortfall at level \(\alpha_{t}\in\mathcal{F}_{t}\), \(\alpha_{t}\in[0,1)\), (ES\({}_{\alpha_{t}}\)) in which case \(\gamma_{t}(u)=\frac{1}{1-\alpha_{t}}\mathds{1}_{u\geq\alpha_{t}}\).
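To make (2) concrete, the following is a minimal Monte Carlo sketch of the plug-in estimator it suggests; it is illustrative only (not part of the paper's numerical scheme), assumes i.i.d. draws of \(Z\) from its conditional distribution given \(\mathcal{F}_{t}\), uses a fixed (rather than state-dependent) level \(\alpha\), and replaces \(F_{Z|\mathcal{F}_{t}}\) by the empirical distribution function.

```python
import numpy as np

def distortion_risk(z_samples, gamma):
    """Plug-in estimate of rho_t(Z) = E[ Z * gamma(F_{Z|F_t}(Z)) | F_t ] from
    i.i.d. samples of Z drawn from its conditional distribution given F_t."""
    z = np.sort(np.asarray(z_samples, dtype=float))
    m = z.size
    u = (np.arange(1, m + 1) - 0.5) / m      # empirical cdf levels (midpoint convention)
    return float(np.mean(z * gamma(u)))

# Expected Shortfall at level alpha: gamma(u) = 1{u >= alpha} / (1 - alpha)
alpha = 0.9
es_weight = lambda u: (u >= alpha) / (1.0 - alpha)

rng = np.random.default_rng(0)
z = rng.normal(size=100_000)                 # stand-in for a conditional loss sample
print(distortion_risk(z, es_weight))         # approx. 1.75, the ES_0.9 of a standard normal
```

With the ES weight, the estimator reduces to averaging the largest \((1-\alpha)\)-fraction of the sorted sample; other distortion weights are handled by passing a different `gamma`.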
The level \(\alpha_{t}\) may, e.g., decrease as wealth decreases to express the fact that the investor becomes more risk averse if their wealth drops significantly. One-step distortion risk measures are monotone, normalised, translation invariant, and homogeneous, and thus give rise to a dynamic time-consistent risk measure via representation (1). Moreover, if for all \(t\in\mathcal{T}\), the distortion weight functions \(\gamma_{t}(\cdot;\omega)\) are increasing for all \(\omega\in\Omega\), then each \(\rho_{t}\) is also convex, making it coherent. ### Risk-to-go of a Strategy We denote by \(\boldsymbol{X}=\left(\boldsymbol{X}_{t}\right)_{t\in\overline{\mathcal{T}}}\) the \(n\)-dimensional price process of the universe of assets and consider an investor who invokes a long-only self-financing trading strategy and invests in all assets. We also denote by \(\boldsymbol{\theta}=(\boldsymbol{\theta}_{t})_{t\in\mathcal{T}}\) a (not necessarily self-financing) strategy, where \(\boldsymbol{\theta}_{t}=(\theta_{t,1},\ldots,\theta_{t,n})\in\mathcal{F}_{t}\) is an \(n\)-dimensional, almost surely positive random vector representing the number of shares invested in each asset at time \(t\). In the sequel, we often use the "slice notation" \(\boldsymbol{\theta}_{t_{1}:t_{2}}:=(\boldsymbol{\theta}_{t_{1}},\boldsymbol{\theta}_{t_{1}+1},\ldots,\boldsymbol{\theta}_{t_{2}})\) for \(0\leq t_{1}<t_{2}\leq T\). A strategy \(\boldsymbol{\theta}\) induces a self-financing strategy \(\boldsymbol{\vartheta}=(\boldsymbol{\vartheta}_{t})_{t\in\mathcal{T}}\) - referred to as the _induced self-financing strategy_ - as follows \[\boldsymbol{\vartheta}_{0}:=\boldsymbol{\theta}_{0}\quad\text{and}\quad\boldsymbol{\vartheta}_{t}:=\frac{\boldsymbol{\vartheta}_{t-1}^{\intercal}\boldsymbol{X}_{t}}{\boldsymbol{\theta}_{t}^{\intercal}\boldsymbol{X}_{t}}\,\boldsymbol{\theta}_{t}\,,\quad\forall\,t\in\mathcal{T}/\{0\}.\] Recall that the investor invokes a long-only strategy and invests in all assets, thus \(\theta_{t,i}>0\), a.s., for all \(i\in\mathcal{N}:=\{1,\ldots,n\}\) and \(t\in\mathcal{T}\), and hence \(\boldsymbol{\vartheta}_{0:T}\) is well-defined. The strategy \(\boldsymbol{\vartheta}\) is self-financing, i.e. it satisfies \((\boldsymbol{\vartheta}_{t}-\boldsymbol{\vartheta}_{t-1})^{\intercal}\,\boldsymbol{X}_{t}=0\), for all \(t\in\mathcal{T}/\{0\}\). To simplify the notation, we define the weight process \(\boldsymbol{w}^{\boldsymbol{\theta}}=(\boldsymbol{w}_{t}^{\boldsymbol{\theta}})_{t\in\mathcal{T}}\): \[w_{t}^{\boldsymbol{\theta}}:=\frac{\boldsymbol{\theta}_{t}^{\intercal}\boldsymbol{X}_{t+1}}{\boldsymbol{\theta}_{t+1}^{\intercal}\boldsymbol{X}_{t+1}}\,,\quad\forall\,t\in\mathcal{T}\,,\] and notice that \[\boldsymbol{\vartheta}_{t}=\left(\prod_{s=0}^{t-1}w_{s}^{\boldsymbol{\theta}}\right)\,\boldsymbol{\theta}_{t}\,,\quad\forall\,t\in\mathcal{T}/\{0\}.\] We assume throughout that the self-financing strategy \(\boldsymbol{\vartheta}_{0:T}\) belongs to \(\boldsymbol{\mathcal{Z}}_{0:T}\), so that the dynamic risk measures below are well-defined.
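Before turning to the risk of a strategy, the following self-contained sketch illustrates the bookkeeping above; the toy price path and array shapes are illustrative choices and not taken from the paper. It computes the induced self-financing strategy \(\boldsymbol{\vartheta}\) and the weights \(w_{t}^{\boldsymbol{\theta}}\) along one scenario, and checks the identity \(\boldsymbol{\vartheta}_{t}=\big(\prod_{s=0}^{t-1}w_{s}^{\boldsymbol{\theta}}\big)\boldsymbol{\theta}_{t}\) as well as the self-financing condition.

```python
import numpy as np

def induced_self_financing(theta, X):
    """theta: (T+1, n) array of candidate positions theta_t (long-only).
       X:     (T+2, n) array of asset prices X_0, ..., X_{T+1} along one scenario.
       Returns the induced self-financing positions vartheta_t and the weights w_t."""
    T_plus_1, n = theta.shape
    vartheta = np.empty_like(theta, dtype=float)
    vartheta[0] = theta[0]
    for t in range(1, T_plus_1):
        # rebalance at time t: reinvest the wealth vartheta_{t-1}' X_t in the mix theta_t
        vartheta[t] = (vartheta[t - 1] @ X[t]) / (theta[t] @ X[t]) * theta[t]
    # w_t = theta_t' X_{t+1} / theta_{t+1}' X_{t+1}, for t = 0, ..., T-1
    w = np.array([(theta[t] @ X[t + 1]) / (theta[t + 1] @ X[t + 1])
                  for t in range(T_plus_1 - 1)])
    return vartheta, w

# toy scenario
rng = np.random.default_rng(1)
T, n = 4, 3
X = np.cumprod(1 + 0.02 * rng.standard_normal((T + 2, n)), axis=0)   # positive prices
theta = rng.uniform(0.5, 1.5, size=(T + 1, n))                        # long-only positions
vartheta, w = induced_self_financing(theta, X)

t = 3
assert np.allclose(vartheta[t], np.prod(w[:t]) * theta[t])            # vartheta_t = (prod_s w_s) theta_t
assert np.isclose((vartheta[t] - vartheta[t - 1]) @ X[t], 0.0)        # self-financing condition
```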
Setting the (negative) price increment \(\Delta\boldsymbol{X}_{t}:=-(\boldsymbol{X}_{t+1}-\boldsymbol{X}_{t})\), the investor assesses the risk at time \(t=0\) associated with an induced self-financing strategy using a dynamic time-consistent risk measure \(\{\rho_{t}\}_{t\in\mathcal{T}}\) via \[\mathfrak{R}[\boldsymbol{\theta}_{0:T}]= \,\rho_{0}\bigg{(}\boldsymbol{\theta}_{0}^{\intercal}\,\Delta\boldsymbol{X}_{0}+\rho_{1}\bigg{(}w_{0}^{\boldsymbol{\theta}}\,\boldsymbol{\theta}_{1}^{\intercal}\,\Delta\boldsymbol{X}_{1}+\rho_{2}\bigg{(}w_{0}^{\boldsymbol{\theta}}w_{1}^{\boldsymbol{\theta}}\,\boldsymbol{\theta}_{2}^{\intercal}\,\Delta\boldsymbol{X}_{2}+\cdots\] \[\qquad\qquad\qquad\qquad+\,\rho_{T-1}\left(\prod_{s=0}^{T-2}w_{s}^{\boldsymbol{\theta}}\,\boldsymbol{\theta}_{T-1}^{\intercal}\Delta\boldsymbol{X}_{T-1}+\rho_{T}\left(\prod_{s^{\prime}=0}^{T-1}w_{s^{\prime}}^{\boldsymbol{\theta}}\,\boldsymbol{\theta}_{T}^{\intercal}\Delta\boldsymbol{X}_{T}\right)\right)\cdots\bigg{)}\bigg{)}\] \[= \,\rho_{0}\bigg{(}\boldsymbol{\vartheta}_{0}^{\intercal}\,\Delta\boldsymbol{X}_{0}+\rho_{1}\bigg{(}\boldsymbol{\vartheta}_{1}^{\intercal}\,\Delta\boldsymbol{X}_{1}+\cdots+\rho_{T-1}\left(\boldsymbol{\vartheta}_{T-1}^{\intercal}\Delta\boldsymbol{X}_{T-1}+\rho_{T}\left(\boldsymbol{\vartheta}_{T}^{\intercal}\Delta\boldsymbol{X}_{T}\right)\right)\cdots\bigg{)}\bigg{)}\,.\] Hence, \(\mathfrak{R}[\boldsymbol{\theta}_{0:T}]\) is the dynamic risk of the induced self-financing strategy, but parameterised by \(\boldsymbol{\theta}_{0:T}\). We can view the risk recursively by defining the risk-to-go process \((\mathfrak{R}_{t}[\boldsymbol{\theta}_{t:T}])_{t\in\mathcal{T}}\) via \[\mathfrak{R}_{T+1} := 0\quad\text{and} \tag{3a}\] \[\mathfrak{R}_{t}[\boldsymbol{\theta}_{t:T}] := \rho_{t}\left(\boldsymbol{\theta}_{t}^{\intercal}\Delta\boldsymbol{X}_{t}+w_{t}^{\boldsymbol{\theta}}\,\mathfrak{R}_{t+1}[\boldsymbol{\theta}_{t+1:T}]\right)\,,\qquad\forall\,t\in\mathcal{T}\,. \tag{3b}\] At time \(t=0\), it holds that \(\mathfrak{R}_{0}[\boldsymbol{\theta}_{0:T}]=\mathfrak{R}[\boldsymbol{\theta}_{0:T}]\). The risk-to-go of the induced self-financing strategy satisfies a slightly simpler recursion: \[\mathfrak{R}_{t}[\boldsymbol{\vartheta}_{t:T}]=\rho_{t}\left(\boldsymbol{\vartheta}_{t}^{\intercal}\Delta\boldsymbol{X}_{t}+\mathfrak{R}_{t+1}[\boldsymbol{\vartheta}_{t+1:T}]\right),\qquad\forall\,t\in\mathcal{T}.\] The next proposition connects the risk-to-go process of \(\boldsymbol{\theta}_{0:T}\) with the risk-to-go process of its induced self-financing strategy \(\boldsymbol{\vartheta}_{0:T}\). **Proposition 1**: _Let \(\boldsymbol{\theta}_{0:T}\) be a strategy and denote by \(\boldsymbol{\vartheta}_{0:T}\) its induced self-financing strategy. Then the following holds_ \[\mathfrak{R}_{0}[\boldsymbol{\vartheta}_{0:T}] = \mathfrak{R}_{0}[\boldsymbol{\theta}_{0:T}]\quad\text{and}\] \[\mathfrak{R}_{t}[\boldsymbol{\vartheta}_{t:T}] = \left(\prod_{s=0}^{t-1}w_{s}^{\boldsymbol{\theta}}\right)\,\mathfrak{R}_{t}[\boldsymbol{\theta}_{t:T}]\,,\quad\forall\,t\in\mathcal{T}/\{0\}\,. \tag{4}\] _Proof:_ The equation for \(t=0\) follows by definition. To show the equalities for \(t\in\mathcal{T}/\{0\}\), we proceed by induction starting backwards in time.
Clearly, at time \(T\), by homogeneity of the conditional risk measures and since \(w_{t}\geq 0\), for all \(t\in\mathcal{T}\), we have \[\mathfrak{R}_{T}[\boldsymbol{\vartheta}_{T}]=\rho_{T}(\boldsymbol{\vartheta}_ {T}^{\intercal}\Delta\boldsymbol{X}_{T})=\left(\prod_{s=0}^{T-1}w_{s}^{ \boldsymbol{\theta}}\right)\rho_{T}(\boldsymbol{\theta}_{T}^{\intercal}\Delta \boldsymbol{X}_{T})=\left(\prod_{s=0}^{T-1}w_{s}^{\boldsymbol{\theta}}\right) \mathfrak{R}_{T}[\boldsymbol{\theta}_{T}]\,.\] Assume Equation (4) holds for \(s=t+1\) and note that \(w_{s}^{\boldsymbol{\theta}}\) is \(\mathcal{F}_{t}\)-measurable for all \(0\leq s\leq t\). Then at time \(t\), we have \[\mathfrak{R}_{t}[\boldsymbol{\vartheta}_{t:T}] = \rho_{t}\left(\boldsymbol{\vartheta}_{t}^{\intercal}\Delta \boldsymbol{X}_{t}+\mathfrak{R}_{t+1}[\boldsymbol{\vartheta}_{t+1:T}]\right)\] \[= \rho_{t}\left(\left(\prod_{s=0}^{t-1}w_{s}^{\boldsymbol{\theta}} \right)\boldsymbol{\theta}_{t}^{\intercal}\Delta\boldsymbol{X}_{t}+\left( \prod_{s=0}^{t}w_{s}^{\boldsymbol{\theta}}\right)\,\mathfrak{R}_{t+1}[ \boldsymbol{\theta}_{t+1:T}]\right)\] \[= \left(\prod_{s=0}^{t-1}w_{s}^{\boldsymbol{\theta}}\right) \mathfrak{R}_{t}[\boldsymbol{\theta}_{t:T}]\,,\] where the second equality holds from the induction assumption, the third from homogeneity of the conditional risk measures and the last equality follows from Equation (3). Here, as in the static risk budgeting problem, homogeneity of the risk measure plays an important role. Therefore, we next discuss the homogeneity of the risk-to-go process. For this, it is convenient to split the arguments of \(\mathfrak{R}_{t}\) into two parts, specifically we write \(\mathfrak{R}_{t}[\boldsymbol{\theta}_{t:T}]=\mathfrak{R}_{t}[(\boldsymbol{ \theta}_{t},\boldsymbol{\theta}_{t+1:T})]\) to emphasise the difference of the position at \(t\), \(\boldsymbol{\theta}_{t}\), and the remaining ones, \(\boldsymbol{\theta}_{t+1:T}\). **Proposition 2** (Homogeneity of Risk-to-go Process).: _The risk-to-go process is homogeneous viewed as a function of \(\boldsymbol{\theta}_{t}\) and also viewed as a function of \(\boldsymbol{\theta}_{t:T}\), that is for all \(t\in\mathcal{T}\) and for all \(a_{t}\in\mathcal{F}_{t}\), \(a_{t}\geq 0\),_ \[a_{t}\,\mathfrak{R}_{t}\left[\boldsymbol{\theta}_{t:T}\right]=\mathfrak{R}_{t} \left[\left(a_{t}\,\boldsymbol{\theta}_{t},\boldsymbol{\theta}_{t+1:T}\right) \right]=\mathfrak{R}_{t}\left[a_{t}\left(\boldsymbol{\theta}_{t},\boldsymbol{ \theta}_{t+1:T}\right)\right]\,. \tag{5}\] _Proof:_ The first equality, i.e. homogeneity of \(\mathfrak{R}_{t}[\boldsymbol{\theta}_{t:T}]\) in \(\boldsymbol{\theta}_{t}\), follows from representation (3), linearity of \(w_{t}^{\boldsymbol{\theta}}\) in \(\boldsymbol{\theta}_{t}\), noting that \(\mathfrak{R}_{t+1}[\boldsymbol{\theta}_{t+1:T}]\) does not depend on \(\boldsymbol{\theta}_{t}\), and from \(\rho_{t}(\cdot)\) being homogeneous. To see the second equality, we proceed by induction. First, \(\mathfrak{R}_{T}[\boldsymbol{\theta}_{T}]=\rho_{T}(\boldsymbol{\theta}_{T}^{ \intercal}\Delta\boldsymbol{X}_{T})\) is homogeneous in \(\boldsymbol{\theta}_{T}\). Next, as \(w_{t}^{\boldsymbol{\theta}}\) is invariant under scaling of \(\boldsymbol{\theta}_{t}\) and \(\boldsymbol{\theta}_{t+1}\), i.e. 
\(w_{t}^{\boldsymbol{\theta}}=\frac{\boldsymbol{\theta}_{t}^{\intercal}\boldsymbol{X}_{t+1}}{\boldsymbol{\theta}_{t+1}^{\intercal}\boldsymbol{X}_{t+1}}=\frac{a_{t}\,\boldsymbol{\theta}_{t}^{\intercal}\boldsymbol{X}_{t+1}}{a_{t}\,\boldsymbol{\theta}_{t+1}^{\intercal}\boldsymbol{X}_{t+1}}=w_{t}^{\boldsymbol{\theta}^{\prime}}\), where \(\boldsymbol{\theta}^{\prime}\) is s.t. \(\boldsymbol{\theta}_{t}^{\prime}=a_{t}\boldsymbol{\theta}_{t}\), \(\boldsymbol{\theta}_{t+1}^{\prime}=a_{t}\boldsymbol{\theta}_{t+1}\), with all remaining \(\boldsymbol{\theta}_{s}^{\prime}=\boldsymbol{\theta}_{s}\) for \(s\notin\{t,t+1\}\). Moreover, if \(a_{t}\in\mathcal{F}_{t}\), \(a_{t}\geq 0\), then \(\boldsymbol{\theta}_{t}^{\prime}\in\mathcal{F}_{t}\) and \(\boldsymbol{\theta}_{t+1}^{\prime}\in\mathcal{F}_{t+1}\), so that \(\boldsymbol{\theta}^{\prime}\) is an admissible long-only strategy. Now, assume the second equality in (5) holds for \(s=t+1\), then we have
\[\mathfrak{R}_{t}\left[a_{t}(\boldsymbol{\theta}_{t},\boldsymbol{\theta}_{t+1:T})\right]=\rho_{t}\left(a_{t}\,\boldsymbol{\theta}_{t}^{\intercal}\Delta\boldsymbol{X}_{t}+w_{t}^{\boldsymbol{\theta}^{\prime}}\,\mathfrak{R}_{t+1}[a_{t}\,\boldsymbol{\theta}_{t+1:T}]\right)=\rho_{t}\left(a_{t}\,\boldsymbol{\theta}_{t}^{\intercal}\Delta\boldsymbol{X}_{t}+w_{t}^{\boldsymbol{\theta}}\,a_{t}\,\mathfrak{R}_{t+1}[\boldsymbol{\theta}_{t+1:T}]\right)=a_{t}\,\mathfrak{R}_{t}\left[\boldsymbol{\theta}_{t:T}\right]\,,\]
where the first equality follows from (3), the second equality follows from the inductive assumption and that \(w_{t}^{\boldsymbol{\theta}}=w_{t}^{\boldsymbol{\theta}^{\prime}}\), and the last equality follows by homogeneity of \(\rho_{t}(\cdot)\). \(\square\)

## 3 Dynamic Risk Contributions

The literature on risk contribution - also called capital (cost) allocation - in the static setting is extensive. Approaches range from performance measurement (Tasche, 1999), to cooperative game theory, including the Aumann-Shapley allocation (see, e.g., Mirman and Tauman (1982) and Billera and Heath (1982) for early works on cost allocation, and Denault (2001) in a risk management setting) and allocation in the fuzzy core (Tsanakas and Barnett (2003)), to Gateaux derivatives and Euler allocations (Kalkbrener, 2005). Using an axiomatic approach, Kalkbrener (2005) showed that for any positive homogeneous and sub-additive static risk measure, the only linear and diversifying capital allocation rule is the Gateaux derivative. Here, we proceed in line with Kalkbrener (2005) by defining risk contributions as a sub-differential, specifically through the Gateaux derivative. We note that in the case of coherent risk measures, the allocation defined via the Gateaux derivatives is the same as the Aumann-Shapley allocation (Tsanakas, 2009). An advantage of defining risk contributions through Gateaux derivatives of a distortion risk measure is that they satisfy _full allocation_, the property that the sum of the risk contributions adds up to the total risk. In the sequel, we show that our dynamic risk contributions also satisfy a dynamic version of the full allocation property. As we work in a dynamic setting, at each time \(t\in\mathcal{T}\), the investor faces the future risk of the induced self-financing strategy and aims to allocate the risk-to-go to each asset \(i\).
Thus for each \(t\in\mathcal{T}\), we first define the risk contribution of asset \(i\) as the Gateaux derivative of the risk-to-go processes \(\mathfrak{R}_{t}[\boldsymbol{\theta}_{t:T}]\) in direction \(\theta_{t,i}\). This allows to measure the degree to which the risk-to-go is impacted by the investor's position in the \(i^{\text{th}}\) asset at time \(t\). Moreover, as we show in Corollary 1, this approach allows for full allocation. Second, we relate the risk contribution of \(\theta_{t,i}\) to that of \(\vartheta_{t,i}\). We start by recalling the definition of a Gateaux derivative of a functional and then provide the definition of risk contributions. **Definition 5**.: For a functional \(F_{t}:\boldsymbol{\mathcal{Z}}_{t:T}\to\mathcal{Z}_{t}\), \(t\in\mathcal{T}\), we denote by \(\mathcal{D}_{i}^{\zeta}\,F_{t}\) its Gateaux derivative of the \(i^{\text{th}}\) component in direction \(\zeta\in\mathcal{Z}_{t}\). That is, for \(t\in\mathcal{T}\), and \(\boldsymbol{Z}_{t:T}\in\boldsymbol{\mathcal{Z}}_{t:T}\) \[\mathcal{D}_{i}^{\zeta}\,F_{t}[\boldsymbol{Z}_{t:T}]:=\lim_{\varepsilon\to 0 }\,\frac{1}{\varepsilon}\Big{(}F_{t}[\boldsymbol{Z}_{t:T}+\varepsilon\, \boldsymbol{1}_{t,i}\zeta]-F_{t}[\boldsymbol{Z}_{t:T}]\Big{)}\,,\] where \((\boldsymbol{1}_{t,i})_{t\in\mathcal{T}}\) is the stochastic process taking value \(1\) in component \(i\) at time \(t\), and \(0\) otherwise. **Definition 6**.: For each \(t\in\mathcal{T}\), we define the risk contribution of the risk-to-go to the \(i^{\text{th}}\) investment as \[RC_{t,i}[\boldsymbol{\theta}_{t:T}]:=\mathcal{D}_{i}^{\theta_{t,i}}\,\mathfrak{ R}_{t}[\boldsymbol{\theta}_{t:T}]\,.\] Note that the risk contributions \(RC_{t,i}[\boldsymbol{\theta}_{t:T}]\) are \(\mathcal{F}_{t}\)-measurable rvs. The next result is central to prove uniqueness of the dynamic risk budgeting strategy (see Theorem 4), see, e.g., Freitas Paulo da Costa et al. (2022) for the static case. **Proposition 3** (Homogeneity of Risk Contributions).: _The risk contributions of a strategy \(\boldsymbol{\theta}_{0:T}\) to the \(i^{\text{th}}\) investment at time \(t\in\mathcal{T}\) are homogeneous in the following way. For all \(t\in\mathcal{T}\) and for all \(a_{t}\in\mathcal{F}_{t}\), \(a_{t}\geq 0\), we have that_ \[a_{t}\,RC_{t,i}\,[\boldsymbol{\theta}_{t:T}]=RC_{t,i}\,[(a_{t}\,\boldsymbol{ \theta}_{t},\,\boldsymbol{\theta}_{t+1:T})]=RC_{t,i}\,[(a_{t}\,\boldsymbol{ \theta}_{t:T})]\;. \tag{6}\] _Proof:_ This follows as \(RC_{t,i}\) are the Gateaux derivatives of a homogeneous function. For completeness we provide a short proof. 
For any \(t\in\mathcal{T}\) and \(i\in\mathcal{N}\), we obtain from homogeneity of \(\mathfrak{R}_{t}[\boldsymbol{\theta}_{t:T}]\) in \(\boldsymbol{\theta}_{t}\):
\[RC_{t,i}[(a_{t}\,\boldsymbol{\theta}_{t},\boldsymbol{\theta}_{t+1:T})]=\mathcal{D}_{i}^{a_{t}\,\theta_{t,i}}\,\mathfrak{R}_{t}[(a_{t}\boldsymbol{\theta}_{t},\boldsymbol{\theta}_{t+1:T})]=\lim_{\varepsilon\to 0}\frac{1}{\varepsilon}\big(\mathfrak{R}_{t}\left[\left(a_{t}\left(\boldsymbol{\theta}_{t}+\varepsilon\,\boldsymbol{e}_{i}\,\theta_{t,i}\right),\,\boldsymbol{\theta}_{t+1:T}\right)\right]-\mathfrak{R}_{t}\big[\big(a_{t}\,\boldsymbol{\theta}_{t},\boldsymbol{\theta}_{t+1:T}\big)\big]\big)=\lim_{\varepsilon\to 0}a_{t}\,\frac{1}{\varepsilon}\big(\mathfrak{R}_{t}\left[\left(\boldsymbol{\theta}_{t}+\varepsilon\,\boldsymbol{e}_{i}\,\theta_{t,i},\,\boldsymbol{\theta}_{t+1:T}\right)\right]-\mathfrak{R}_{t}\big[\big(\boldsymbol{\theta}_{t},\boldsymbol{\theta}_{t+1:T}\big)\big]\big)=a_{t}\,RC_{t,i}[\boldsymbol{\theta}_{t:T}]\,,\]
where \(\boldsymbol{e}_{i}\) is the unit vector having value \(1\) at position \(i\) (and \(0\) otherwise). The second equality in (6) follows via similar arguments using homogeneity of \(\mathfrak{R}_{t}[\boldsymbol{\theta}_{t:T}]\) in \(\boldsymbol{\theta}_{t:T}\), see Proposition 2.

By homogeneity of the risk-to-go process, we obtain for all \(t\in\mathcal{T}\) an Euler-like theorem, as stated in the next corollary, which guarantees full allocation.

**Corollary 1** (Full Allocation).: _Let \(\{\rho_{t}\}_{t\in\mathcal{T}}\) be a dynamic time-consistent risk measure. Then it holds that_
\[\mathfrak{R}_{t}[\boldsymbol{\theta}_{t:T}]=\sum_{i\in\mathcal{N}}RC_{t,i}[\boldsymbol{\theta}_{t:T}]\,,\quad\mathbb{P}\text{-a.s.}\,,\quad\forall\,t\in\mathcal{T}\,.\]

_Proof:_ By assumption \(\mathfrak{R}_{t}\) is Gateaux differentiable and its Gateaux derivative is equal to its Frechet differential, with the Frechet differential defined as in Kennison (1934). Next, applying Theorem 2 in Kennison (1934), we obtain that
\[\mathfrak{R}_{t}[\boldsymbol{\theta}_{t:T}]=\sum_{i\in\mathcal{N}}D_{F}^{\theta_{t,i}}\mathfrak{R}_{t}[\boldsymbol{\theta}_{t:T}]\,;\]
using the definition of the risk contributions (Definition 6) concludes the statement.

Tsanakas (2004) derives, in the static setting, a closed-form formula for the risk contributions of distortion risk measures (see Definition 4). Our next result extends this to the dynamic setting.

**Theorem 2** (Risk Contributions).: _Let \(\{\rho_{t}\}_{t\in\mathcal{T}}\) be a dynamic time-consistent distortion risk measure with weight functions \(\{\gamma_{t}\}_{t\in\mathcal{T}}\). Then, the risk contribution of a strategy \(\boldsymbol{\theta}_{0:T}\) to the \(i^{\text{th}}\)-investment at time \(t\in\mathcal{T}\) is given by_
\[RC_{t,i}[\boldsymbol{\theta}_{t:T}]=\mathbb{E}\left[\theta_{t,i}\left(\Delta X_{t,i}+\frac{X_{t+1,i}}{\boldsymbol{\theta}_{t+1}^{\intercal}\boldsymbol{X}_{t+1}}\mathfrak{R}_{t+1}[\boldsymbol{\theta}_{t+1:T}]\right)\gamma_{t}\Big(U_{t}[\boldsymbol{\theta}_{t:T}]\Big)\ \Big{|}\ \mathcal{F}_{t}\right]\,,\]
_where \(U_{t}[\boldsymbol{\theta}_{t:T}]\) is a uniform rv comonotonic to \(\boldsymbol{\theta}_{t}^{\intercal}\Delta\boldsymbol{X}_{t}+w_{t}^{\boldsymbol{\theta}}\,\mathfrak{R}_{t+1}[\boldsymbol{\theta}_{t+1:T}]\)._

_Proof:_ By Proposition 3.2 in Pesenti et al. (2021),
it holds for \(\boldsymbol{Y},\boldsymbol{Y}^{\prime}\in\boldsymbol{\mathcal{Z}}_{t+1}\), a differentiable function \(h\colon\mathds{R}^{d}\to\mathds{R}\), and a one-step conditional distortion risk measure \(\rho_{t}\), that
\[\lim_{\varepsilon\to 0}\frac{\rho_{t}\left(h(\boldsymbol{Y}^{\prime}+\varepsilon\,\boldsymbol{e}_{i}\boldsymbol{Y})\right)-\rho_{t}(h(\boldsymbol{Y}^{\prime}))}{\varepsilon}=\mathbb{E}\left[\,Y_{i}\,\tfrac{\partial}{\partial y_{i}}h(\boldsymbol{Y}^{\prime})\,\gamma_{t}\left(U_{h(\boldsymbol{Y}^{\prime})|\mathcal{F}_{t}}\right)\,\Big{|}\,\mathcal{F}_{t}\,\right]\,, \tag{7}\]
where \(U_{h(\boldsymbol{Y}^{\prime})|\mathcal{F}_{t}}\) is a uniform rv that is comonotonic to the rv \(h(\boldsymbol{Y}^{\prime})\) conditional on the information \(\mathcal{F}_{t}\), see also Equation (2). In Appendix A, Proposition 11, we provide an alternative proof of Equation (7). Next, note that the risk contributions are
\[RC_{t,i}[\boldsymbol{\theta}_{t:T}]=\mathcal{D}_{i}^{\theta_{t,i}}\,\mathfrak{R}_{t}[\boldsymbol{\theta}_{t:T}]=\mathcal{D}_{i}^{\theta_{t,i}}\,\rho_{t}\left(\boldsymbol{\theta}_{t}^{\intercal}\Delta\boldsymbol{X}_{t}+w_{t}^{\boldsymbol{\theta}}\,\mathfrak{R}_{t+1}[\boldsymbol{\theta}_{t+1:T}]\right)=\mathcal{D}_{i}^{\theta_{t,i}}\,\rho_{t}\left(\sum_{j\in\mathcal{N}}\theta_{t,j}\left\{\Delta X_{t,j}+\frac{X_{t+1,j}}{\boldsymbol{\theta}_{t+1}^{\intercal}\boldsymbol{X}_{t+1}}\,\mathfrak{R}_{t+1}[\boldsymbol{\theta}_{t+1:T}]\right\}\right)\,.\]
Applying Equation (7) and noting that \(\mathfrak{R}_{t+1}[\boldsymbol{\theta}_{t+1:T}]\) is a function of \(\boldsymbol{\theta}_{t+1:T}\) only, and not a function of \(\theta_{t,i}\), concludes the proof. \(\square\)

The next representation of the risk contribution of a strategy illustrates that a decision at time \(t\), via \(\boldsymbol{\theta}_{t}\), cascades through time and impacts all later times. This is because restricting to self-financing strategies implies that the investor's future wealth, and thus also possible investment decisions, depend on the current choice of \(\boldsymbol{\theta}_{t}\). The proof can be found in Appendix A.1.

**Proposition 4** (Impact of a Decision).: _Let \(\{\rho_{t}\}_{t\in\mathcal{T}}\) be a dynamic time-consistent distortion risk measure with respective weight functions \(\{\gamma_{t}\}_{t\in\mathcal{T}}\). Then, the risk contribution of a strategy \(\boldsymbol{\theta}_{0:T}\) to the \(i^{\text{th}}\)-investment at time \(t\in\mathcal{T}\) may be written as_
\[RC_{t,i}[\boldsymbol{\theta}_{t:T}]=\mathbb{E}\left[\theta_{t,i}\,\Delta X_{t,i}\,\Gamma_{t}^{\boldsymbol{\theta}}\ \Big{|}\ \mathcal{F}_{t}\right]+\mathbb{E}\left[\frac{\theta_{t,i}\,X_{t+1,i}}{\boldsymbol{\theta}_{t+1}^{\intercal}\boldsymbol{X}_{t+1}}\,w_{t+1}^{\boldsymbol{\theta}}\left(\boldsymbol{\theta}_{t+2}^{\intercal}\Delta\boldsymbol{X}_{t+2}\right)\Gamma_{t}^{\boldsymbol{\theta}}\,\Gamma_{t+1}^{\boldsymbol{\theta}}\ \Big{|}\ \mathcal{F}_{t}\right]+\cdots+\mathbb{E}\left[\frac{\theta_{t,i}\,X_{t+1,i}}{\boldsymbol{\theta}_{t+1}^{\intercal}\boldsymbol{X}_{t+1}}\,w_{t+1}^{\boldsymbol{\theta}}\cdots w_{T-1}^{\boldsymbol{\theta}}\left(\boldsymbol{\theta}_{T}^{\intercal}\Delta\boldsymbol{X}_{T}\right)\Gamma_{t}^{\boldsymbol{\theta}}\cdots\Gamma_{T}^{\boldsymbol{\theta}}\ \Big{|}\ \mathcal{F}_{t}\right]\,,\]
_where for all \(s\in\mathcal{T}\)_
\[\Gamma_{s}^{\boldsymbol{\theta}}:=\gamma_{s}\big(U_{s}[\boldsymbol{\theta}_{s:T}]\big)\,,\]
_with \(U_{s}[\boldsymbol{\theta}_{s:T}]\) as defined in Theorem 2._
From the above proposition, we can interpret the first expectation on the right hand side as the time \(t\) impact of \(\theta_{t,i}\), the second expectation as the effect the choice \(\theta_{t,i}\) has at time \(t+1\), and so on.

Note that the risk contribution of investment-\(i\) at time \(t\) of a self-financing strategy \(\boldsymbol{\vartheta}_{0:T}\) is
\[RC_{t,i}[\boldsymbol{\vartheta}_{t:T}]:=\mathcal{D}_{i}^{\theta_{t,i}}\,\mathfrak{R}_{t}[\boldsymbol{\theta}_{t:T}]\Big{|}_{\boldsymbol{\theta}=\boldsymbol{\vartheta}}\,.\]
Next we relate the risk contributions of a strategy with those of its induced self-financing strategy.

**Proposition 5**.: _Let \(\{\rho_{t}\}_{t\in\mathcal{T}}\) be a dynamic time-consistent distortion risk measure with weight functions \(\{\gamma_{t}\}_{t\in\mathcal{T}}\). Let \(\boldsymbol{\theta}_{0:T}\) be a strategy and \(\boldsymbol{\vartheta}_{0:T}\) its induced self-financing strategy. Then the following holds:_
\[RC_{0,i}[\boldsymbol{\vartheta}_{0:T}]=RC_{0,i}[\boldsymbol{\theta}_{0:T}]\quad\text{and}\quad RC_{t,i}[\boldsymbol{\vartheta}_{t:T}]=\left(\prod_{s=0}^{t-1}w_{s}^{\boldsymbol{\theta}}\right)RC_{t,i}[\boldsymbol{\theta}_{t:T}]\,,\qquad\forall\,t\in\mathcal{T}/\{0\}\,.\]

_Proof:_ The case \(t=0\) follows immediately from Proposition 1. For \(t\in\mathcal{T}/\{0\}\), define the scalar \(c_{t}^{\boldsymbol{\theta}}:=\prod_{s=0}^{t-1}w_{s}^{\boldsymbol{\theta}}\) and recall that \(\boldsymbol{\vartheta}_{t}=c_{t}^{\boldsymbol{\theta}}\,\boldsymbol{\theta}_{t}\). Then, we have that
\[RC_{t,i}[\boldsymbol{\vartheta}_{t:T}]=\mathcal{D}_{i}^{\theta_{t,i}}\,\mathfrak{R}_{t}[\boldsymbol{\theta}_{t:T}]\Big{|}_{\boldsymbol{\theta}=\boldsymbol{\vartheta}}=\mathbb{E}\left[\vartheta_{t,i}\left(\Delta X_{t,i}+\frac{X_{t+1,i}}{\boldsymbol{\vartheta}_{t+1}^{\intercal}\boldsymbol{X}_{t+1}}\,\mathfrak{R}_{t+1}[\boldsymbol{\vartheta}_{t+1:T}]\right)\gamma_{t}\big(U_{t}[\boldsymbol{\vartheta}_{t:T}]\big)\ \Big{|}\ \mathcal{F}_{t}\right]\,, \tag{8}\]
where the second equality follows from Theorem 2. By Proposition 1, \(\mathfrak{R}_{t+1}[\boldsymbol{\vartheta}_{t+1:T}]=c_{t+1}^{\boldsymbol{\theta}}\,\mathfrak{R}_{t+1}[\boldsymbol{\theta}_{t+1:T}]\), and since \(\boldsymbol{\vartheta}_{t+1}=c_{t+1}^{\boldsymbol{\theta}}\,\boldsymbol{\theta}_{t+1}\), the factor \(c_{t+1}^{\boldsymbol{\theta}}\) cancels in the second summand, i.e.
\[\frac{X_{t+1,i}}{\boldsymbol{\vartheta}_{t+1}^{\intercal}\boldsymbol{X}_{t+1}}\,\mathfrak{R}_{t+1}[\boldsymbol{\vartheta}_{t+1:T}]=\frac{X_{t+1,i}}{\boldsymbol{\theta}_{t+1}^{\intercal}\boldsymbol{X}_{t+1}}\,\mathfrak{R}_{t+1}[\boldsymbol{\theta}_{t+1:T}]\,.\]
Moreover, as \(c_{t}^{\boldsymbol{\theta}}>0\) is \(\mathcal{F}_{t}\)-measurable and
\[\boldsymbol{\vartheta}_{t}^{\intercal}\Delta\boldsymbol{X}_{t}+w_{t}^{\boldsymbol{\vartheta}}\,\mathfrak{R}_{t+1}[\boldsymbol{\vartheta}_{t+1:T}]=c_{t}^{\boldsymbol{\theta}}\left(\boldsymbol{\theta}_{t}^{\intercal}\Delta\boldsymbol{X}_{t}+w_{t}^{\boldsymbol{\theta}}\,\mathfrak{R}_{t+1}[\boldsymbol{\theta}_{t+1:T}]\right)\,,\]
the comonotonic uniform rvs coincide, that is \(U_{t}[\boldsymbol{\vartheta}_{t:T}]=U_{t}[\boldsymbol{\theta}_{t:T}]\). Substituting \(\vartheta_{t,i}=c_{t}^{\boldsymbol{\theta}}\,\theta_{t,i}\) into (8) therefore yields
\[RC_{t,i}[\boldsymbol{\vartheta}_{t:T}]=c_{t}^{\boldsymbol{\theta}}\,\mathbb{E}\left[\theta_{t,i}\left(\Delta X_{t,i}+\frac{X_{t+1,i}}{\boldsymbol{\theta}_{t+1}^{\intercal}\boldsymbol{X}_{t+1}}\,\mathfrak{R}_{t+1}[\boldsymbol{\theta}_{t+1:T}]\right)\gamma_{t}\big(U_{t}[\boldsymbol{\theta}_{t:T}]\big)\ \Big{|}\ \mathcal{F}_{t}\right]=\left(\prod_{s=0}^{t-1}w_{s}^{\boldsymbol{\theta}}\right)RC_{t,i}[\boldsymbol{\theta}_{t:T}]\,,\]
where the last equality again follows from Theorem 2. \(\square\)
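To illustrate how the formula in Theorem 2 can be evaluated in practice, the sketch below estimates the terminal risk contributions \(RC_{T,i}\) by simulation for the one-step case \(t=T\), where \(\mathfrak{R}_{T+1}=0\). The uniform rv comonotonic to \(\boldsymbol{\theta}_{T}^{\intercal}\Delta\boldsymbol{X}_{T}\) is replaced by the empirical ranks of the simulated portfolio losses, and we use an increasing distortion weight of expected-shortfall/mean type. All numerical inputs, the simulated increments, and the function name are illustrative assumptions rather than the paper's implementation.

```python
import numpy as np

def terminal_risk_contributions(theta_T, dX_T, gamma):
    """Monte Carlo estimate of RC_{T,i} = E[ theta_{T,i} * dX_{T,i} * gamma(U_T) ],
    where U_T is the empirical rank of the portfolio loss theta_T' dX_T.
    theta_T: (n,) positions; dX_T: (m, n) simulated negative price increments."""
    m = dX_T.shape[0]
    loss = dX_T @ theta_T                         # simulated theta_T' dX_T
    ranks = np.argsort(np.argsort(loss))          # 0..m-1, comonotone with the loss
    u = (ranks + 0.5) / m                         # empirical uniform U_T
    weights = gamma(u)                            # distortion weights (mean approx. 1)
    return (theta_T * dX_T * weights[:, None]).mean(axis=0)

# example: increasing distortion weight gamma(u) = p 1_{u >= alpha}/(1-alpha) + (1-p)
alpha, p = 0.75, 0.5
gamma = lambda u: p * (u >= alpha) / (1 - alpha) + (1 - p)

rng = np.random.default_rng(1)
n, m = 5, 100_000
dX_T = rng.multivariate_normal(np.zeros(n), 0.01 * (np.eye(n) + 0.3), size=m)
theta_T = np.ones(n)
rc = terminal_risk_contributions(theta_T, dX_T, gamma)
print(rc, rc.sum())   # full allocation: the sum approximates the risk-to-go at T
```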
## 4 Dynamic Risk Budgeting Portfolios

Using the dynamic risk contributions defined in the last section, we now define a dynamic risk budgeting portfolio.

**Definition 7**: _Let \(\{\rho_{t}\}_{t\in\mathcal{T}}\) be a dynamic time-consistent risk measure. A strategy \(\boldsymbol{\theta}_{0:T}\) is called a dynamic risk budgeting strategy with budget \(B=(b_{t,i})_{t\in\mathcal{T},i\in\mathcal{N}}\) satisfying \(b_{t,i}>0\) and \(\sum_{i\in\mathcal{N}}b_{t,i}=1\), for all \(t\in\mathcal{T}\), if_
\[RC_{t,i}[\boldsymbol{\theta}_{t:T}]=b_{t,i}\,\mathfrak{R}_{t}[\boldsymbol{\theta}_{t:T}]\,,\quad\forall\,t\in\mathcal{T}\quad\text{and}\quad i\in\mathcal{N}\,. \tag{9}\]

A dynamic risk budgeting strategy is therefore a strategy such that at each time \(t\in\mathcal{T}\) the risk contribution of investment \(i\in\mathcal{N}\) is equal to the proportion \(b_{t,i}\) of the risk-to-go at time \(t\). For example, if the risk budget is \(b_{t,i}=\frac{1}{n}\) for all \(i\in\mathcal{N}\) and \(t\in\mathcal{T}\), then we call the risk budgeting strategy _risk parity_, which means equal risk contributions, since it satisfies
\[RC_{t,i}[\boldsymbol{\theta}_{t:T}]=RC_{t,j}[\boldsymbol{\theta}_{t:T}]\,,\quad\forall\,i,j\in\mathcal{N}\text{ and }\forall\,t\in\mathcal{T}\,.\]

**Proposition 6**: _Let \(\{\rho_{t}\}_{t\in\mathcal{T}}\) be a dynamic time-consistent risk measure, \(\boldsymbol{\theta}_{0:T}\) be a strategy and \(\boldsymbol{\vartheta}_{0:T}\) its corresponding induced self-financing strategy. Then \(\boldsymbol{\theta}_{0:T}\) is a risk budgeting strategy with risk budget \(B\) if and only if \(\boldsymbol{\vartheta}_{0:T}\) is a risk budgeting strategy with risk budget \(B\)._

_Proof:_ The case when \(t=0\) is trivial since \(\mathfrak{R}_{0}[\boldsymbol{\theta}_{0:T}]=\mathfrak{R}_{0}[\boldsymbol{\vartheta}_{0:T}]\) and \(RC_{0,i}[\boldsymbol{\theta}_{0:T}]=RC_{0,i}[\boldsymbol{\vartheta}_{0:T}]\). Next, let \(t>0\) and assume that \(\boldsymbol{\theta}_{0:T}\) is a risk budgeting strategy. Then, for each \(t\in\mathcal{T}/\{0\}\) and \(i\in\mathcal{N}\), it holds that
\[RC_{t,i}[\boldsymbol{\vartheta}_{t:T}]=\left(\prod_{s=0}^{t-1}w_{s}^{\boldsymbol{\theta}}\right)RC_{t,i}[\boldsymbol{\theta}_{t:T}]=\left(\prod_{s=0}^{t-1}w_{s}^{\boldsymbol{\theta}}\right)b_{t,i}\,\mathfrak{R}_{t}[\boldsymbol{\theta}_{t:T}]=b_{t,i}\left(\prod_{s=0}^{t-1}w_{s}^{\boldsymbol{\theta}}\right)\mathfrak{R}_{t}[\boldsymbol{\theta}_{t:T}]=b_{t,i}\,\mathfrak{R}_{t}[\boldsymbol{\vartheta}_{t:T}]\,,\]
where we used Proposition 5 in the first equation, then the fact that \(\boldsymbol{\theta}_{0:T}\) is a risk budgeting strategy, and finally Proposition 1.
For the converse direction assume that \(\boldsymbol{\vartheta}_{0:T}\) is a risk budgeting portfolio, then (following along as above)
\[RC_{t,i}[\boldsymbol{\theta}_{t:T}]=\prod_{s=0}^{t-1}\left(w_{s}^{\boldsymbol{\theta}}\right)^{-1}RC_{t,i}[\boldsymbol{\vartheta}_{t:T}]=\prod_{s=0}^{t-1}\left(w_{s}^{\boldsymbol{\theta}}\right)^{-1}b_{t,i}\,\mathfrak{R}_{t}[\boldsymbol{\vartheta}_{t:T}]=b_{t,i}\,\mathfrak{R}_{t}[\boldsymbol{\theta}_{t:T}]\,,\]
which concludes the proof. \(\square\)

The next series of results requires restricting the class of dynamic time-consistent distortion risk measures to those that are convex, i.e., the weight functions \(\{\gamma_{t}\}_{t\in\mathcal{T}}\) are non-decreasing. The first result pertains to the characterisation of self-financing risk budgeting strategies as a unique solution of a series of convex and recursive (backward in time) optimisation problems. The second states that if a self-financing risk budgeting strategy with initial wealth of 1 exists, then it is given by a rescaled version of the solution to the series of convex optimisation problems. For both these results, convexity of the one-step conditional distortion risk measure plays a central role.

**Theorem 3**: _Let \(\{\rho_{t}\}_{t\in\mathcal{T}}\) be a dynamic time-consistent distortion risk measure with non-decreasing weight functions \(\{\gamma_{t}\}_{t\in\mathcal{T}}\) and \(B=(b_{t,i})_{t\in\mathcal{T},i\in\mathcal{N}}\) a risk budget. Consider the recursive optimisation problems_
\[\boldsymbol{\theta}_{t}^{*}:=\operatorname*{arg\,min}_{\boldsymbol{\theta}_{t}\in\mathcal{Z}_{t}}\mathbb{E}\left[\mathfrak{R}_{t}[(\boldsymbol{\theta}_{t},\boldsymbol{\theta}_{t+1:T}^{*})]-\sum_{i\in\mathcal{N}}b_{t,i}\,\log\theta_{t,i}\ \bigg{|}\ \mathcal{F}_{0}\right]\,,\qquad\forall\,t\in\mathcal{T}\,.\] ( \(P\) )
_Then:_

(a) _there exists a unique solution to (\(P\));_

(b) _the self-financing strategy \(\boldsymbol{\vartheta}_{0:T}^{*}\), induced by \(\boldsymbol{\theta}_{0:T}^{*}\), is a self-financing risk budgeting strategy with budget \(B\);_

(c) _the normalised strategy \(\boldsymbol{\vartheta}_{0:T}^{\dagger}=\frac{1}{\boldsymbol{\vartheta}_{0}^{*\,\intercal}\boldsymbol{X}_{0}}\boldsymbol{\vartheta}_{0:T}^{*}\) is a self-financing risk budgeting strategy with the same budget \(B\) and initial wealth \(1\)._

_Proof:_ Let \(t\in\mathcal{T}\), and denote the objective function by
\[L_{t}[\boldsymbol{\theta}_{t}]:=\mathbb{E}\left[\mathfrak{R}_{t}[(\boldsymbol{\theta}_{t},\boldsymbol{\theta}_{t+1:T}^{*})]-\sum_{i\in\mathcal{N}}b_{t,i}\,\log\theta_{t,i}\ \Big{|}\ \mathcal{F}_{0}\right]\,. \tag{10}\]
Taking the Gateaux derivative of \(L_{t}[\boldsymbol{\theta}_{t}]\) in the \(i^{\text{th}}\)-component in direction \(\delta\theta\in\mathcal{Z}_{t}\), we have
\[\lim_{\varepsilon\to 0}\frac{1}{\varepsilon}\Big(L_{t}[\boldsymbol{\theta}_{t}+\varepsilon\,\boldsymbol{e}_{i}\delta\theta]-L_{t}[\boldsymbol{\theta}_{t}]\Big)=\mathbb{E}\left[\,\mathcal{D}_{i}^{\delta\theta}\,\mathfrak{R}_{t}[(\boldsymbol{\theta}_{t},\boldsymbol{\theta}_{t+1:T}^{*})]-b_{t,i}\,\frac{\delta\theta}{\theta_{t,i}}\ \bigg{|}\ \mathcal{F}_{0}\right]\]
\[=\mathbb{E}\left[\,\mathbb{E}\left[\,\delta\theta\,\left(\Delta X_{t,i}+\frac{X_{t+1,i}}{\boldsymbol{\theta}_{t+1}^{*\,\intercal}\boldsymbol{X}_{t+1}}\mathfrak{R}_{t+1}\left[\boldsymbol{\theta}_{t+1:T}^{*}\right]\right)\gamma_{t}\big(U_{t}[(\boldsymbol{\theta}_{t},\boldsymbol{\theta}_{t+1:T}^{*})]\big)\ \Big{|}\ \mathcal{F}_{t}\right]-b_{t,i}\,\frac{\delta\theta}{\theta_{t,i}}\ \bigg{|}\ \mathcal{F}_{0}\right]\]
\[=\mathbb{E}\left[\,\delta\theta\ \mathbb{E}\left[\,\left(\Delta X_{t,i}+\frac{X_{t+1,i}}{\boldsymbol{\theta}_{t+1}^{*\,\intercal}\boldsymbol{X}_{t+1}}\mathfrak{R}_{t+1}\left[\boldsymbol{\theta}_{t+1:T}^{*}\right]\right)\gamma_{t}\big(U_{t}[(\boldsymbol{\theta}_{t},\boldsymbol{\theta}_{t+1:T}^{*})]\big)-\frac{b_{t,i}}{\theta_{t,i}}\ \Big{|}\ \mathcal{F}_{t}\right]\ \bigg{|}\ \mathcal{F}_{0}\right]\,.\]
As the one-step risk measure is convex, and \(-\log\) is strictly convex, the functional \(L_{t}[\boldsymbol{\theta}_{t}]\) is strictly convex and coercive. Hence, the unique optimum is attained where the Gateaux derivative vanishes. To ensure the Gateaux derivative vanishes for all \(\delta\theta\in\mathcal{Z}_{t}\), the \(\mathcal{F}_{t}\)-conditional expectation in the above expression must vanish for all \(i\in\mathcal{N}\); that is, the inner expectation in the last expression above must vanish \(\mathbb{P}\)-almost surely. Thus, multiplying with \(\theta_{t,i}\), the optimal \(\boldsymbol{\theta}_{t}^{*}\) satisfies for all \(i\in\mathcal{N}\),
\[RC_{t,i}[\boldsymbol{\theta}_{t:T}^{*}]=b_{t,i},\qquad\mathbb{P}\text{-a.s.} \tag{11}\]
Next, by Corollary 1, it holds that
\[\mathfrak{R}_{t}[\boldsymbol{\theta}_{t:T}^{*}]=\sum_{i\in\mathcal{N}}RC_{t,i}[\boldsymbol{\theta}_{t:T}^{*}]=\sum_{i\in\mathcal{N}}b_{t,i}=1\quad\text{which implies that}\quad RC_{t,i}[\boldsymbol{\theta}_{t:T}^{*}]=b_{t,i}\,\mathfrak{R}_{t}[\boldsymbol{\theta}_{t:T}^{*}]\,,\]
and thus \(\boldsymbol{\theta}_{t:T}^{*}\) satisfies Equation (9) for all \(t\in\mathcal{T}\). It remains to show that the induced self-financing strategy has the correct properties. By Proposition 6, the self-financing strategy \(\boldsymbol{\vartheta}^{*}_{0:T}\) induced by \(\boldsymbol{\theta}^{*}_{0:T}\) is a risk budgeting strategy with budget \(B\). Next, define \(\boldsymbol{\vartheta}^{\dagger}_{0:T}=\frac{1}{\boldsymbol{\vartheta}^{*\,\intercal}_{0}\boldsymbol{X}_{0}}\boldsymbol{\vartheta}^{*}_{0:T}\); clearly \(\boldsymbol{\vartheta}^{\dagger}_{0:T}\) has initial wealth of \(1\), is self-financing, and we claim that it is a risk budgeting strategy with budget \(B\).
Indeed, we have for all \(t\in\mathcal{T}\) and \(i\in\mathcal{N}\)
\[RC_{t,i}[\boldsymbol{\vartheta}^{\dagger}_{t:T}]=\frac{RC_{t,i}[\boldsymbol{\vartheta}^{*}_{t:T}]}{\boldsymbol{\vartheta}^{*\,\intercal}_{0}\boldsymbol{X}_{0}}=\frac{b_{t,i}}{\boldsymbol{\vartheta}^{*\,\intercal}_{0}\boldsymbol{X}_{0}}\,\mathfrak{R}_{t}[\boldsymbol{\vartheta}^{*}_{t:T}]=b_{t,i}\left(\frac{1}{\boldsymbol{\vartheta}^{*\,\intercal}_{0}\boldsymbol{X}_{0}}\mathfrak{R}_{t}[\boldsymbol{\vartheta}^{*}_{t:T}]\right)=b_{t,i}\,\mathfrak{R}_{t}[\boldsymbol{\vartheta}^{\dagger}_{t:T}]\,,\]
where we applied homogeneity of the risk contributions, see Proposition 3, and the fact that \(\boldsymbol{\vartheta}^{*}_{0:T}\) is a risk-budgeting strategy. Thus, \(\boldsymbol{\vartheta}^{\dagger}_{0:T}\) is a self-financing and risk budgeting strategy with risk budget \(B\) and initial wealth of \(1\). \(\square\)

The above result states that the unique optimiser of (\(P\)) is a risk budgeting strategy. In the next theorem, we show that any self-financing risk budgeting strategy with initial wealth of \(1\) is a rescaled version of the solution to the optimisation problem (\(P\)), and in particular is given by Theorem 3(c).

**Theorem 4** (Uniqueness).: _Let \(\{\rho_{t}\}_{t\in\mathcal{T}}\) be a dynamic time-consistent conditional distortion risk measure with non-decreasing weight functions \(\{\gamma_{t}\}_{t\in\mathcal{T}}\). If a self-financing dynamic risk budgeting strategy with initial wealth 1 and budget \(B\) exists, and the corresponding risk-to-go processes are all non-negative, then the risk budgeting strategy is unique and it is characterised by Theorem 3(c)._

_Proof:_ Let \(\boldsymbol{\varphi}_{0:T}\) denote a self-financing risk budgeting strategy with budget \(B\) and initial wealth of \(1\). We show that \(\boldsymbol{\varphi}_{0:T}=\frac{1}{\boldsymbol{\vartheta}^{*\,\intercal}_{0}\boldsymbol{X}_{0}}\boldsymbol{\vartheta}^{*}_{0:T}\), where \(\boldsymbol{\vartheta}^{*}_{0:T}\) is the induced self-financing strategy of the unique solution to optimisation problem (\(P\)). For this we proceed by contradiction. Assume \(\boldsymbol{\varphi}_{0:T}\neq\frac{1}{\boldsymbol{\vartheta}^{*\,\intercal}_{0}\boldsymbol{X}_{0}}\boldsymbol{\vartheta}^{*}_{0:T}\). Next, define the (not necessarily self-financing) strategy \(\boldsymbol{\psi}_{0:T}\) via
\[\boldsymbol{\psi}_{t}:=\frac{1}{\mathfrak{R}_{t}[\boldsymbol{\varphi}_{t:T}]}\,\boldsymbol{\varphi}_{t}\,,\qquad\forall\,t\in\mathcal{T}\,. \tag{12}\]
We first show that the risk-to-go process of \(\boldsymbol{\psi}_{0:T}\) satisfies
\[\mathfrak{R}_{t}[\boldsymbol{\psi}_{t:T}]=1\,,\qquad\forall\,t\in\mathcal{T}\,, \tag{13}\]
and proceed by induction.
At time \(T\), by homogeneity of \(\mathfrak{R}_{T}\):
\[\mathfrak{R}_{T}[\boldsymbol{\psi}_{T}]=\frac{1}{\mathfrak{R}_{T}[\boldsymbol{\varphi}_{T}]}\,\mathfrak{R}_{T}[\boldsymbol{\varphi}_{T}]=1\,.\]
Assume Equation (13) holds for \(s=t+1\), then
\[\mathfrak{R}_{t}[\boldsymbol{\psi}_{t:T}]=\rho_{t}\left(\boldsymbol{\psi}^{\intercal}_{t}\Delta\boldsymbol{X}_{t}+w^{\boldsymbol{\psi}}_{t}\,\mathfrak{R}_{t+1}[\boldsymbol{\psi}_{t+1:T}]\right)=\rho_{t}\left(\boldsymbol{\psi}^{\intercal}_{t}\Delta\boldsymbol{X}_{t}+\frac{\boldsymbol{\psi}^{\intercal}_{t}\boldsymbol{X}_{t+1}}{\boldsymbol{\psi}^{\intercal}_{t+1}\boldsymbol{X}_{t+1}}\right)=\rho_{t}\left(\frac{1}{\mathfrak{R}_{t}[\boldsymbol{\varphi}_{t:T}]}\boldsymbol{\varphi}^{\intercal}_{t}\Delta\boldsymbol{X}_{t}+\frac{\mathfrak{R}_{t+1}[\boldsymbol{\varphi}_{t+1:T}]}{\mathfrak{R}_{t}[\boldsymbol{\varphi}_{t:T}]}\,\frac{\boldsymbol{\varphi}^{\intercal}_{t}\boldsymbol{X}_{t+1}}{\boldsymbol{\varphi}^{\intercal}_{t+1}\boldsymbol{X}_{t+1}}\right)=\frac{1}{\mathfrak{R}_{t}[\boldsymbol{\varphi}_{t:T}]}\rho_{t}\left(\boldsymbol{\varphi}^{\intercal}_{t}\Delta\boldsymbol{X}_{t}+\mathfrak{R}_{t+1}[\boldsymbol{\varphi}_{t+1:T}]\right)=1\,,\]
where the second equality uses the induction assumption, and the second-to-last equality follows by homogeneity of \(\rho_{t}(\cdot)\) and because \(\boldsymbol{\varphi}\) is self-financing. Thus, Equation (13) holds for all \(t\in\mathcal{T}\).

Next, we show that \(\boldsymbol{\psi}_{0:T}\) is a risk budgeting strategy with budget \(B\). By homogeneity of risk contributions (Proposition 3) we obtain
\[RC_{t,i}[\boldsymbol{\psi}_{t:T}]=\frac{1}{\mathfrak{R}_{t}[\boldsymbol{\varphi}_{t:T}]}RC_{t,i}[\boldsymbol{\varphi}_{t:T}]=\frac{1}{\mathfrak{R}_{t}[\boldsymbol{\varphi}_{t:T}]}b_{t,i}\,\mathfrak{R}_{t}[\boldsymbol{\varphi}_{t:T}]=b_{t,i}\,\mathfrak{R}_{t}[\boldsymbol{\psi}_{t:T}]\,.\]
Thus, \(\boldsymbol{\psi}_{0:T}\) is not only a risk budgeting strategy but also a solution to optimisation problem (\(P\)), that is, for all \(t\in\mathcal{T}\) it satisfies Equation (11). As \(\boldsymbol{\psi}_{0:T}\) is a solution to optimisation problem (\(P\)), it induces a self-financing risk budgeting strategy with initial wealth of \(1\) as given in Theorem 3(c), denoted here by \(\boldsymbol{\vartheta}_{0:T}\). Specifically, we have
\[\boldsymbol{\vartheta}_{0}:=\frac{1}{\boldsymbol{\psi}_{0}^{\intercal}\boldsymbol{X}_{0}}\boldsymbol{\psi}_{0}\,,\quad\text{and}\quad\boldsymbol{\vartheta}_{t}:=\frac{1}{\boldsymbol{\psi}_{0}^{\intercal}\boldsymbol{X}_{0}}\left(\prod_{s=0}^{t-1}w_{s}^{\boldsymbol{\psi}}\right)\,\boldsymbol{\psi}_{t}\,,\quad\forall\,t\in\mathcal{T}/\{0\}\,.\]
Finally, we show that \(\boldsymbol{\vartheta}_{0:T}=\boldsymbol{\varphi}_{0:T}\).
For this recall that \(\boldsymbol{\varphi}_{0:T}\) is a self-financing strategy, thus
\[\boldsymbol{\vartheta}_{0}=\frac{1}{\boldsymbol{\psi}_{0}^{\intercal}\boldsymbol{X}_{0}}\boldsymbol{\psi}_{0}=\frac{\mathfrak{R}_{0}[\boldsymbol{\varphi}_{0:T}]}{\boldsymbol{\varphi}_{0}^{\intercal}\boldsymbol{X}_{0}}\,\frac{\boldsymbol{\varphi}_{0}}{\mathfrak{R}_{0}[\boldsymbol{\varphi}_{0:T}]}=\boldsymbol{\varphi}_{0}\,.\]
For \(t\in\mathcal{T}/\{0\}\), we have
\[\boldsymbol{\vartheta}_{t}=\frac{1}{\boldsymbol{\psi}_{0}^{\intercal}\boldsymbol{X}_{0}}\left(\prod_{s=0}^{t-1}\frac{\boldsymbol{\psi}_{s}^{\intercal}\boldsymbol{X}_{s+1}}{\boldsymbol{\psi}_{s+1}^{\intercal}\boldsymbol{X}_{s+1}}\right)\,\boldsymbol{\psi}_{t}=\frac{\mathfrak{R}_{0}[\boldsymbol{\varphi}_{0:T}]}{\boldsymbol{\varphi}_{0}^{\intercal}\boldsymbol{X}_{0}}\left(\prod_{s=0}^{t-1}\frac{\mathfrak{R}_{s+1}[\boldsymbol{\varphi}_{s+1:T}]}{\mathfrak{R}_{s}[\boldsymbol{\varphi}_{s:T}]}\,\frac{\boldsymbol{\varphi}_{s}^{\intercal}\boldsymbol{X}_{s+1}}{\boldsymbol{\varphi}_{s+1}^{\intercal}\boldsymbol{X}_{s+1}}\right)\frac{\boldsymbol{\varphi}_{t}}{\mathfrak{R}_{t}[\boldsymbol{\varphi}_{t:T}]}=\mathfrak{R}_{0}[\boldsymbol{\varphi}_{0:T}]\left(\prod_{s=0}^{t-1}\frac{\mathfrak{R}_{s+1}[\boldsymbol{\varphi}_{s+1:T}]}{\mathfrak{R}_{s}[\boldsymbol{\varphi}_{s:T}]}\right)\frac{\boldsymbol{\varphi}_{t}}{\mathfrak{R}_{t}[\boldsymbol{\varphi}_{t:T}]}=\boldsymbol{\varphi}_{t}\,,\]
where the third equality uses that \(\boldsymbol{\varphi}_{0:T}\) is self-financing (so that \(\boldsymbol{\varphi}_{s}^{\intercal}\boldsymbol{X}_{s+1}=\boldsymbol{\varphi}_{s+1}^{\intercal}\boldsymbol{X}_{s+1}\)) with initial wealth \(\boldsymbol{\varphi}_{0}^{\intercal}\boldsymbol{X}_{0}=1\), and the last equality follows by telescoping the product. Thus, \(\boldsymbol{\varphi}_{0:T}=\boldsymbol{\vartheta}_{0:T}\) and as \(\boldsymbol{\vartheta}_{0:T}\) is given by Theorem 3(c), we arrive at a contradiction. Uniqueness follows from Theorem 3(a). \(\square\)

## 5 Approximation of Risk Budgeting Strategies

While Theorems 3 and 4 provide a full characterisation of risk budgeting strategies as solutions to a sequence of convex optimisation problems, they do not provide a methodology for finding them. Thus, we develop a deep learning approach that leverages the flexibility of neural networks to approximate high dimensional functions, together with new techniques that have been developed for optimising dynamic time-consistent convex risk measures in Coache and Jaimungal (2023) and Coache et al. (2022). The approach we take, however, uses the analytical results developed here for the class of dynamic time-consistent coherent distortion risk measures, and is distinct from these earlier works.

First, let us assume that the strategy \(\boldsymbol{\theta}_{0:T}\) is parameterised by a set of vectors \((\boldsymbol{\beta}_{t})_{t\in\mathcal{T}}\), where for all \(t\in\mathcal{T}\), we have \(\boldsymbol{\beta}_{t}\in\mathcal{A}\), \(\mathcal{A}\subset\mathds{R}^{m}\). We write, with a slight abuse of notation, \(\boldsymbol{\theta}_{t}^{\boldsymbol{\beta}_{t}}=\boldsymbol{\theta}_{t}(X_{0},\ldots,X_{t};\boldsymbol{\beta}_{t})\), where \(\boldsymbol{\theta}_{t}\colon\mathds{R}^{n\times(t+1)}\times\mathcal{A}\to\mathds{R}^{n}\). We call \(\boldsymbol{\beta}_{0:T}\) a policy, as it parametrises the investor's strategy. Next, for \(t\in\mathcal{T}\) we view the criterion in Equation (10) as a function of \(\boldsymbol{\beta}_{t}\), and aim to minimise it over these parameters.
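Before formulating the loss, the following toy sketch illustrates one possible parametrisation \(\boldsymbol{\theta}_{t}^{\boldsymbol{\beta}_{t}}\): a single affine map from the current price vector to strictly positive positions via a softplus output. It is only meant to fix ideas; the architecture actually used in the experiments (a GRU encoding of the whole price history) is described later in this section, and all names and dimensions here are our own assumptions.

```python
import numpy as np

def softplus(z):
    # numerically stable softplus: log(1 + exp(z))
    return np.log1p(np.exp(-np.abs(z))) + np.maximum(z, 0.0)

def theta_of_beta(X_path, beta):
    """Toy parametrised strategy theta_t^{beta_t}: positive positions obtained by
    passing the latest price vector X_t through an affine map and a softplus.
    beta = (W, b), with W of shape (n, n) and b of shape (n,)."""
    W, b = beta
    X_t = X_path[-1]                       # this toy map only uses the latest price
    return softplus(W @ X_t + b) + 1e-8    # strictly positive, as required for long-only

# example with n = 3 assets at some time t
rng = np.random.default_rng(2)
n = 3
beta_t = (0.01 * rng.standard_normal((n, n)), np.zeros(n))
X_path = [np.array([100.0, 95.0, 110.0])]  # prices X_0, ..., X_t observed so far
print(theta_of_beta(X_path, beta_t))
```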
To this end, we write the time \(t\) loss function as
\[L_{t}[\boldsymbol{\beta}_{t};\,\boldsymbol{\beta}_{t+1:T}^{*}]:=\mathbb{E}\left[\mathfrak{R}_{t}\big[(\boldsymbol{\theta}_{t}^{\boldsymbol{\beta}_{t}},\boldsymbol{\theta}_{t+1:T}^{\boldsymbol{\beta}_{t+1:T}^{*}})\big]-\sum_{i\in\mathcal{N}}b_{t,i}\,\log\theta_{t,i}^{\boldsymbol{\beta}_{t}}\ \Big{|}\ \mathcal{F}_{0}\right]\,,\]
where \(\boldsymbol{\beta}_{t+1:T}^{*}\) denotes the previously optimised policies at the later time points. Proceeding as in the proof of Theorem 3 and applying the chain rule together with Theorem 2, the gradient of the loss with respect to \(\boldsymbol{\beta}_{t}\) is
\[\nabla_{\boldsymbol{\beta}_{t}}L_{t}[\boldsymbol{\beta}_{t};\,\boldsymbol{\beta}_{t+1:T}^{*}]=\mathbb{E}\left[\sum_{i\in\mathcal{N}}\left\{\left(\Delta X_{t,i}+\frac{X_{t+1,i}}{(\boldsymbol{\theta}_{t+1}^{\boldsymbol{\beta}_{t+1}^{*}})^{\intercal}\boldsymbol{X}_{t+1}}\,\mathfrak{R}_{t+1}\big[\boldsymbol{\theta}_{t+1:T}^{\boldsymbol{\beta}_{t+1:T}^{*}}\big]\right)\gamma_{t}\big(U_{t}\big[(\boldsymbol{\theta}_{t}^{\boldsymbol{\beta}_{t}},\boldsymbol{\theta}_{t+1:T}^{\boldsymbol{\beta}_{t+1:T}^{*}})\big]\big)-\frac{b_{t,i}}{\theta_{t,i}^{\boldsymbol{\beta}_{t}}}\right\}\nabla_{\boldsymbol{\beta}_{t}}\theta_{t,i}^{\boldsymbol{\beta}_{t}}\ \bigg{|}\ \mathcal{F}_{0}\right]\,. \tag{14}\]
Evaluating (14) requires estimates of the risk-to-go \(\mathfrak{R}_{t+1}\) and of the uniform rv \(U_{t}\), i.e. of the conditional distribution of the argument of \(\rho_{t}\), both of which we estimate using strictly consistent scoring functions, as described next.

In this work we implement a subclass of dynamic time-consistent distortion risk measures given by the weighted average of the ES and mean, but our approach can be easily generalised using ideas from Coache et al. (2022). The elicitability of the conditional distortion risk measures is invaluable for efficiently solving optimisation problem (\(P\)) using deep learning algorithms. Thus, we first recall the notion of scoring functions and elicitability, adapted to our notation. Let \(t\in\mathcal{T}\) and define by \(\mathcal{M}_{t}:=\{F_{Y|\mathcal{F}_{t}}\mid Y\in\mathcal{F}_{t+1}\}\) the set of distribution functions of \(\mathcal{F}_{t+1}\)-measurable rvs conditional on \(\mathcal{F}_{t}\).

**Definition 8** (Elicitability): A functional \(\mathfrak{T}\colon\mathcal{M}_{t}\to A\), \(A\subset\mathds{R}^{k}\), is called \(k\)-elicitable on \(\widetilde{\mathcal{M}}_{t}\subseteq\mathcal{M}_{t}\), if there exists a measurable function \(S\colon A\times\mathds{R}\to[0,\infty]\) - called a strictly consistent scoring function - such that for all \(F\in\widetilde{\mathcal{M}}_{t}\) and for all \(z\in A\)
\[\int S\big(\mathfrak{T}(F),y\big)\,dF(y)\leq\int S(z,y)\,dF(y)\,, \tag{15}\]
and equality in (15) holds only if \(z=\mathfrak{T}(F)\).

In the numerical examples, we consider the family of dynamic time-consistent risk measures \((\rho_{t})_{t\in\mathcal{T}}\) parametrised by \(p\in[0,1]\)
\[\rho_{t}(Z):=p\,\mathrm{ES}_{\alpha}(Z\,|\,\mathcal{F}_{t})+(1-p)\,\mathbb{E}[\,Z\,|\,\mathcal{F}_{t}]\,, \tag{16}\]
which for each \(t\in\mathcal{T}\) is a conditional distortion risk measure with weight function \(\gamma_{t}(u)=p\,\frac{1}{1-\alpha}\mathds{1}_{u\geq\alpha}+(1-p)\). For each \(p\in[0,1]\), the conditional distortion risk measure is coherent since the distortion weight function \(\gamma_{t}(\cdot)\) is increasing. If \(p=0\), then \(\rho_{t}(\cdot)=\mathbb{E}[\,\cdot\,|\,\mathcal{F}_{t}\,]\) and if \(p=1\), then \(\rho_{t}(\cdot)=\mathrm{ES}_{\alpha}(\,\cdot\,|\,\mathcal{F}_{t}\,)\). While the mean is well-known to be 1-elicitable, the \(\mathrm{ES}_{\alpha}\) is only jointly elicitable together with \(\mathrm{VaR}_{\alpha}\) at the same \(\alpha\)-level. We recall these well-known results.

**Proposition 7** (Mean - Gneiting (2011)).: _Let \(\mathcal{M}_{t}^{\dagger}\subset\mathcal{M}_{t}\) be the class of conditional distributions with finite mean. If \(\phi\) is strictly convex with subgradient \(\phi^{\prime}\), then_
\[S_{\mathbb{E}}(z,y):=\phi^{\prime}(z)(z-y)-\phi(z)+\phi(y)\,,\qquad z,y\in\mathds{R}\,,\]
_is strictly \(\mathcal{M}_{t}^{\dagger}\)-consistent for the mean, if \(\int|\phi(y)|\,\mathrm{d}F(y)<\infty\) for all \(F\in\mathcal{M}_{t}^{\dagger}\)._
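To illustrate Proposition 7, the snippet below uses the common choice \(\phi(z)=z^{2}\), for which the score reduces to \(S_{\mathbb{E}}(z,y)=(z-y)^{2}\), and checks numerically that the expected score is minimised at the sample mean. The data and grid are illustrative assumptions, not part of the paper's experiments.

```python
import numpy as np

def score_mean(z, y, phi=lambda x: x**2, dphi=lambda x: 2 * x):
    """Strictly consistent score for the mean from Proposition 7:
    S(z, y) = phi'(z)(z - y) - phi(z) + phi(y); with phi(x) = x^2 this is (z - y)^2."""
    return dphi(z) * (z - y) - phi(z) + phi(y)

rng = np.random.default_rng(3)
y = rng.normal(1.0, 2.0, size=50_000)           # stand-in for draws from F
grid = np.linspace(0.0, 2.0, 201)
expected_score = np.array([score_mean(z, y).mean() for z in grid])
z_star = grid[expected_score.argmin()]
print(z_star, y.mean())                          # the minimiser is close to the mean
```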
**Proposition 8** ((VaR, ES) - Acerbi and Szekely (2014), Fissler and Ziegel (2016)).: _Let \(\alpha\in(0,1)\), \(A:=\{(z_{1},z_{2})\in\mathds{R}^{2}\,:\,z_{1}\geq z_{2}\}\) and define the scoring functions \(S_{\text{VaR},\text{ES}}:A\times\mathds{R}\to\mathds{R}\) by_
\[S_{\text{VaR},\text{ES}}(z_{1},z_{2},y)=\big(\mathds{1}_{\{y\leq z_{1}\}}-\alpha\big)\big(g(z_{1})-g(y)\big)+\Phi^{\prime}(z_{2})\Big(z_{2}-\tfrac{1}{1-\alpha}S_{\alpha}^{+}(z_{1},y)\Big)-\Phi(z_{2})+\Phi(y)\,,\]
_where \(S_{\alpha}^{+}(z_{1},y)=(\mathds{1}_{\{y\leq z_{1}\}}-\alpha)z_{1}-\mathds{1}_{\{y\leq z_{1}\}}y+y\), \(\Phi\colon\mathds{R}\to\mathds{R}\) is strictly convex with subgradient \(\Phi^{\prime}\) and \(g\colon\mathds{R}\to\mathds{R}\) is such that for all \(z_{2}\in\mathds{R}\)_
\[z_{1}\mapsto g(z_{1})-z_{1}\Phi^{\prime}(z_{2})/(1-\alpha)\]
_is strictly increasing._

_Let \(\mathcal{M}_{t}^{\ddagger}\subset\mathcal{M}_{t}\) be the conditional distribution functions with unique \(\alpha\)-quantile, finite mean, and such that \(\int|g(y)|\,dF(y)<\infty\) and \(\int|\Phi(y)|\,dF(y)<\infty\) for all \(F\in\mathcal{M}_{t}^{\ddagger}\). Then \(S_{\text{VaR},\text{ES}}\) is strictly \(\mathcal{M}_{t}^{\ddagger}\)-consistent for the couple \((\text{VaR}_{\alpha},\text{ES}_{\alpha})\)._

Next, we show that if \(p\in(0,1)\), then \(\rho_{t}\) given in (16) is 3-elicitable, that is jointly elicitable together with \(\mathrm{VaR}_{\alpha}\) and \(\mathrm{ES}_{\alpha}\). The following proposition is different to Corollary 5.4 in Fissler and Ziegel (2016), which states that \(\rho_{t}\) is 4-elicitable, specifically they show that \((\text{VaR}_{\alpha},\text{ES}_{\alpha},\mathbb{E},\rho_{t})\) is jointly elicitable.

**Proposition 9** (Scoring functions): _Let \(\rho\) be given in (16) for \(p\in(0,1)\) and let the assumptions of Propositions 7 and 8 be enforced. Then the function \(S_{\rho}\colon A\times\mathds{R}^{2}\to[0,\infty]\) given by_
\[S_{\rho}(z_{1},z_{2},z_{3},y):=S_{\text{VaR},\text{ES}}(z_{1},z_{2},y)+S_{\mathbb{E}}\left(\frac{z_{3}-p\,z_{2}}{1-p},\,y\right)\,,\]
_is a strictly \(\mathcal{M}_{t}^{\ddagger}\)-consistent scoring function for the triplet \((\text{VaR}_{\alpha},\text{ES}_{\alpha},\rho)\)._

_Proof:_ From Propositions 7 and 8, the functionals \((\text{VaR}_{\alpha},\text{ES}_{\alpha})\) and \(\mathbb{E}\) are elicitable. Using Lemma 2.6 in Fissler and Ziegel (2016), we obtain that \((\text{VaR}_{\alpha},\text{ES}_{\alpha},\mathbb{E})\) is elicitable with consistent scoring function given by
\[S_{\text{VaR},\text{ES},\mathbb{E}}(z_{1},z_{2},z_{3},y)=S_{\text{VaR},\text{ES}}(z_{1},z_{2},y)+S_{\mathbb{E}}(z_{3},y)\,.\]
Next, we apply Osband's _revelation principle_, see e.g., Theorem 4 in Gneiting (2011). First, define the bijective function \(g\colon\mathds{R}^{3}\to\mathds{R}^{3}\) with \(g(z_{1},z_{2},z_{3})=\big(z_{1},\,z_{2},\,pz_{2}+(1-p)z_{3}\big)^{\intercal}\) with inverse \(g^{-1}(a_{1},a_{2},a_{3})=\Big(a_{1},a_{2},\frac{a_{3}-p\,a_{2}}{1-p}\Big)^{\intercal}\).
Next, the revelation principle states that \(g(\text{\tiny VaR},\text{\tiny ES},\mathbb{E})=(\text{\tiny VaR},\text{\tiny ES },\rho)^{\intercal}\) is elicitable with scoring function \[S_{\text{\tiny VaR},\text{\tiny ES},\rho}(z_{1},z_{2},z_{3},y)=S_{\text{\tiny VaR },\text{\tiny ES},\mathbb{E}}\left(g^{-1}(z_{1},z_{2},z_{3})^{\intercal},y \right)=S_{\text{\tiny VaR},\text{\tiny ES}}(z_{1},z_{2},y)+S_{\mathbb{E}} \left(\frac{z_{3}-p\,z_{2}}{1-p},\,y\right)\,.\] Moreover, if the scoring functions \(S_{\text{\tiny VaR},\text{\tiny ES}}(z_{1},z_{2},y)\) and \(S_{\mathbb{E}}(z_{3},y)\) are strictly consistent for \((\text{\tiny VaR},\text{\tiny ES})\) and \(\mathbb{E}\), respectively, then \(S_{\text{\tiny VaR},\text{\tiny ES},\rho}(z_{1},z_{2},z_{3},y)\) is strictly consistent for \((\text{\tiny VaR},\text{\tiny ES},\rho)\). \(\square\) Finally, we need to elicit the conditional distribution function \(F_{Y|\boldsymbol{X}}\colon\mathds{R}\to[0,1]\), defined by \(F_{Y|\boldsymbol{X}}(y):=\mathbb{P}(Y\leq y\,|\,\boldsymbol{X}=\boldsymbol{x})\), for any \(Y\in\mathcal{Z}_{t+1}\), \(\boldsymbol{X}\in\boldsymbol{\mathcal{Z}}_{t}\), and \(\boldsymbol{x}\in\mathds{R}^{n}\). Distribution functions are known to be elicitable with the _continuous ranked probability score_, see e.g. Equation (20) in Gneiting and Raftery (2007). Here we provide the key result. Proposition 10 (Distribution Function - Gneiting and Raftery (2007)): _Let \(Y,\,\mathbf{X}\,\) be random vectors in \(\mathds{R}\) and \(\mathds{R}^{d}\) respectively. Denote by \(\mathfrak{F}\) the set of all functions \(F\colon\mathds{R}\times\mathds{R}^{d}\to[0,1]\) such that \(F(\cdot,\mathbf{x})\) is a distribution function, for all \(\mathbf{x}\in\mathds{R}^{d}\). Then the function \(S\colon\mathfrak{F}\times\mathds{R}\times\mathds{R}^{d}\to[0,\infty]\) given by_ \[S(F,y,\mathbf{x})=\int\left(F(z,\mathbf{x})-\mathds{1}_{z\geq y}\right)^{2}\,dz\] _is a strictly consistent scoring function for \(F_{Y|\mathbf{X}}\). In particular, it holds that_ \[\operatorname*{arg\,min}_{F\in\mathfrak{F}}\mathbb{E}[\,S(F,Y,\mathbf{X})\,],\] _is attained by the conditional distribution function \(F_{Y|\mathbf{X}}\colon\mathds{R}\times\mathds{R}^{d}\to[0,1]\)._ ### Neural Network Approximators This section focuses on application of neural network (NN) function approximations for the strategy \(\mathbf{\theta}_{0:T}\), the risk-to-go process \(\mathfrak{R}_{0:T}\), and the uniform rvs \(U_{0:T}\). In machine learning (ML), the strategy \(\mathbf{\theta}_{0:T}\) is referred to as the actor/policy while the risk-to-go \(\mathfrak{R}_{0:T}\) is referred to as the critic. As it is not immediately clear whether the optimal strategy or the risk-to-go process are Markovian in asset prices, we proceed using non-Markovian parameterisations. In the context of NN approximations, recurrent neural networks (RNNs) can be used to accomplish this goal. Our implementation employs gated recurrent units (GRUs) to encode non-Markovian features, though long short-term memory (LSTM) networks or attention networks are also viable alternatives. Below we describe the architecture for the actor critic approach in detail. First, the actor (strategy) network (visualised in Figure 1) consists of a five layered GRU, with each layer consisting of hidden states of dimension \(n\) (recall that \(n\) is the asset dimension). The input features into the GRU are time, the wealth process of the induced self-financing strategy, and asset prices. 
We denote them by \(y_{t}=(t,\,\mathbf{\vartheta}_{t-1}^{\intercal}\mathbf{X}_{t},\,\mathbf{X}_{t})\in \mathds{R}^{n+2}\), \(t\in\mathcal{T}\), and call them the state. At each time \(t\in\mathcal{T}\), the output from all hidden layers from the previous time step, denoted by Figure 1: Directed graph representation for encodings and parameterisation of \(\mathbf{\theta}_{0:T}\) functions. \(h_{t-1}\), and the state from the current time step, \(y_{t}\), are concatenated and passed through a five layer feed forward neural network (FFN) to produce an \(n\)-dimensional output corresponding to \(\boldsymbol{\theta}_{t}\). The internal layers of the FFN have sigmoid linear units (SiLU) activation functions, while, to ensure the strategy is long only, the last layer has a softplus activation function. Next, the (critic) risk-to-go network (visualised in Figure 2) has the same GRU and FFN structure as the strategy network, however, the final output of the FFN is three dimensional corresponding to the conditional VaR (\(\psi\) in Figure 2), the difference between the conditional ES and the conditional VaR (\(\chi\) in Figure 2), and the conditional risk measure (\(\mathfrak{R}\) in Figure 2). There is no activation function in the final layer for VaR and the risk measure, while we have a softplus activation for the difference of ES and VaR to ensure ES is always larger or equal to VaR. Finally, to compute the gradient in (14) we require the conditional distribution function of \(g_{t}\!:=\boldsymbol{\theta}_{t}^{\intercal}\Delta\boldsymbol{X}_{t}+w_{t} \,\mathfrak{R}_{t+1}\) denoted \(F_{t}(z)\!:=\!\mathbb{P}(g_{t}\!\leq\!z|\mathcal{F}_{t})\). The neural network architecture for approximating \(F_{t}(z)\) is provided in Figure 3 and is similar to that of the risk-to-go network. Two important differences are (i) we concatenate not only the hidden layers from the previous time step and the state, but also the value \(z\) corresponding to \(F_{t}(z)\), and (ii) the output activation function is a sigmoid to ensure that \(F_{t}(z)\!\in\!(0,1)\). Figure 3: Directed graph representation for encodings and parameterisation of \(F_{t}(z)\). Figure 2: Directed graph representation for encodings and parameterisation of \(\mathfrak{R}_{t}\), \(\psi_{t}\), and \(\chi_{t}\). To train the various networks, we perform actor-critic update steps by sequentially minimising the losses associated with (i) the risk-to-go, then (ii) the conditional distribution function, then (iii) the policy network, and iterate steps (i) to (iii). For the risk-to-go and the conditional distribution function, we minimise the expected value of the scoring function given in Propositions 9 and 10, respectively. For (iii) we use the gradient formula provided in (14). ## 6 Numerical Illustrations In this section, we explore the results for a stochastic volatility market model and where the investor is searching for a risk parity allocation - i.e., where \(b_{t,i}=\frac{1}{n}\) for all \(t\) and \(i\), for the time-consistent dynamic risk measure given in (16). We consider \(n=5\) assets and a time horizon of \(T=2\) (one time-step is one month). Thus the investor aims to find the self-financing risk budgeting strategy \(\boldsymbol{\vartheta}_{0:2}\). First, we describe the market model and then the optimal portfolio allocation. ### Market Model We use a discrete time version of a Heston inspired market model, where asset returns have a student-t copula dependence. 
Specifically, using a Milstein discretization, \[\log\frac{X_{t+1,i}}{X_{t,i}} = \left(\mu_{i}-0.5(v_{t,i})_{+}^{2}\right)\Delta t+\sqrt{(v_{t,i}) _{+}}\,\Delta W_{t,i}^{X}\,,\] \[v_{t+1,i} = \theta_{i}+\left((v_{t,i})_{+}-\theta_{i}\right)e^{-\kappa_{i} \Delta t}+\eta_{i}\,\sqrt{(v_{t,i})_{+}}\,\Delta W_{t,i}^{v}+\tfrac{1}{4}\eta _{i}^{2}\big{(}(\Delta W_{t,i}^{v})^{2}-\Delta t\big{)}\,.\] Here, \((\cdot)_{+}:=\max(\cdot,0)\), \((\Delta W_{t,i}^{X},\Delta W_{t,i}^{v})_{i\in\mathcal{N}}\) are independent across \(t\) but not \(i\) rvs. They are marginally normal with mean zero and variance \(\Delta t\). For \(i\neq j\) and \(t\in\mathcal{T}\), we have that (a) \(\Delta W_{t,i}^{X}\) and \(\Delta W_{t,j}^{v}\) are independent, and (b) \(\Delta W_{t,i}^{v}\) and \(\Delta W_{t,j}^{v}\) are independent. Moreover, \((\Delta W_{t,i}^{X},\Delta W_{t,j}^{X})_{i,j\in\mathcal{N}}\) have a student-t copula with \(d=4\) degrees of freedom and \((\Delta W_{t,i}^{X},\Delta W_{t,i}^{v})_{i\in\mathcal{N}}\) follow a Gaussian copula. The corresponding correlation matrix for the dependence structure and the market model parameters and are provided in Appendix B. Figure 4 shows the distribution of the terminal log return while Table 1 provides basic statistics of the asset's total return. As can be seen in Figure 4, the distributions are all left skewed and volatility and expected return increases with the asset index label \(i\). Figure 4: Distribution of log returns of various assets. ### Results In right panel of Figure 5 we show the risk-to-go of \(\boldsymbol{\theta}\) for the three time steps \(t=0,1,2\) and up to \(5,000\) iterations of the algorithm for the case when \(p=0.5\) and \(\alpha=0.75\), and an equal risk budget case: \(\boldsymbol{b}_{t}=(\frac{1}{5},\frac{1}{5},\frac{1}{5},\frac{1}{5},\frac{1}{5})\). The shaded region shows a measure of the confidence in the estimator, that is the standard deviation of the last 100 estimates (resulting from the learnt neural network approximation of the risk-to-go), while the solid lines show the moving average using the last 100 estimates. As the figure shows, all converge to the value of 1, a result of Theorem 3 and in particular Equation (11) and Euler's Theorem. The left panel of Figure 5 shows the sum of the expected value of the risk contributions. At each iteration, they are estimated using 500 sample paths and computed via the empirical mean of the expression for the risk contributions provided in Theorem 2. As before, the shaded region shows a measure of the confidence in the estimator, that is the standard deviation of the last 100 estimates, while the solid lines show the moving average using the last 100 estimates. In this case, as each estimate of the risk contributions are estimated always with 500 simulations there is no narrowing of the confidence band as we increase iterations. As Figure 5 shows, the sums of the risk contributions converge rather quickly to their theoretical value of 1. Figure 6 shows the analogous plots for the individual risk contributions. To gain a deeper understanding of the learnt risk budgeting strategy, we present histograms in Figure 7 showing the percentage held in each asset across the three time steps. Each column in the figure represents a fixed choice of \(p\) of the risk measure, while the rows correspond to different assets. 
The different strategies were developed using transfer learning techniques, where we first learn the optimal strategy, risk-to-go, and conditional distribution function for \(p\!=\!50\%\). Then, we learn the strategy, risk-to-go, and conditional distribution function for the \(p\!=\!60\%\), by initialising the neural networks with the values from \(p\!=\!50\%\) and continuing to learn the new strategies. We repeated this process for \(p\!=\!70\%\) and beyond. In Figure 7, we observe a general trend, whereby the investment in asset-\(i\) decreases as \(i\) increases. This trend is consistent with the fact that assets become increasingly volatile as the index-\(i\) increases, making it reasonable to allocate less capital to the riskier assets to generate an equal risk budgeting portfolio. For the less risky assets \(i\!=\!1,2,3\), as time increases, investments become more disperse. Contrastingly, for the more risky assets \(i\!=\!4,5\), as time increases, investments become more concentrated. This is sensible, as the investor aims to have a risk parity portfolio at all point in times and hence needs to deleverage the more risky assets. It is more challenging to provide a full description of how the distributions vary with \(p\), as there are a number of competing factors that are difficult to disentangle. If we fix a row, e.g., the fifth row, i.e. \(i\!=\!5\), we see that as \(p\) increases, corresponding to the investor putting more weight on the ES, the investment becomes more left skewed meaning that they invest less and less in the most risky asset at the last time step. If we fix, e.g., the fourth row, i.e. \(i\!=\!4\), we observe that as \(p\) increases, the distribution of the Figure 6: Individual risk contributions versus iterations for the optimal strategy \(\theta\) when \(p\!=\!50\%\) and \(\alpha\!=\!75\%\). \(RC_{t,i}[\mathbf{\theta}_{t:T}]\) are estimated using \(500\) simulations at each iteration. percentage of wealth at time 2 becomes more variable, but shifts to the left; once again indicating a deleveraging. Figure 8 shows the percentage held in each asset across the three time steps, when the risk budget is unequal: \(\boldsymbol{b}_{t}=(\frac{1}{15},\frac{2}{15},\frac{3}{15},\frac{4}{15},\frac{5 }{15})\). In this case, the investor now generally increases the percentage of wealth invested in asset-\(i\) as \(i\) increases, as they are willing to take on more risk in the assets with a higher index. Figure 7: Percentage of wealth invested in each asset for each point in time as we vary \(p\) and for \(\alpha\) fixed at \(75\%\) - with \(b_{t}=(\frac{1}{5},\frac{1}{5},\frac{1}{5},\frac{1}{5},\frac{1}{5})\). Figure 8: Percentage of wealth invested in each asset for each point in time as we vary \(p\) and for \(\alpha\) fixed at \(75\%\) – with \(b_{i}=(\frac{1}{15},\frac{2}{15},\frac{3}{15},\frac{4}{15},\frac{5}{15})\). ## 7 Conclusion In this work, we show how an investor can allocate investments in risky assets to attain a predefined risk budget in a dynamic setting. To do so, we first propose a notion of risk contributions for dynamic time-consistent risk measures and demonstrate that they satisfy the full allocation property. For the class of time-consistent coherent distortion risk measures, we derive explicit formulae for risk contributions and prove that strategies that attain a particular risk budget are uniquely specified by the solution to a collection of convex optimisation problems. 
Leveraging the elicitability of dynamic time-consistent coherent distortion risk measures, we further provide a deep learning approach for solving those optimisation problems. Finally, we demonstrate the stability of the numerical scheme through several examples using a stochastic volatility market model. ## Appendix A Auxiliary Results **Proposition 11**: _Let \(\{\rho_{t}\}_{t\in\mathcal{T}}\) be a dynamic time-consistent distortion risk measure with respective square-integrable weight functions \(\{\gamma_{t}\}_{t\in\mathcal{T}}\). For \(t\in\mathcal{T}\) and \(Y,W\in\mathcal{Z}_{t}\), where, we assume that \((Y,W)\) has a joint density, though the proof can be generalised to include point masses. Then it holds that_ \[\lim_{\varepsilon\to 0}\frac{\rho_{t}\left(Y+\varepsilon\,W\right)-\rho_{t}(Y)}{ \varepsilon}=\mathbb{E}\left[\,W\,\gamma_{t}\left(U_{Y|\mathcal{F}_{t}}\, \right)|\mathcal{F}_{t}\,\right], \tag{18}\] _Proof:_ First we define the conditional distribution functions \(F(y):=\mathbb{P}(Y\leq y\,|\,\mathcal{F}_{t})\) and \(F(y,\varepsilon):=\mathbb{P}(Y+\varepsilon\,W\leq y\,|\,\mathcal{F}_{t})\) and their corresponding densities by \(f(y)\) and \(f(y,\varepsilon)\). We further write \(F^{-1}(u)\) and \(F^{-1}(u,\varepsilon)\) for the quantile functions of \(F(\cdot)\) and \(F(\cdot,\varepsilon)\), respectively. Next note that \(\rho_{t}\left(Y+\varepsilon\,W\right)=\mathbb{E}\left[F^{-1}(U,\varepsilon)\, \gamma_{t}(U)\,|\,\mathcal{F}_{t}\right]\), for a uniform rv \(U\in\mathcal{F}_{t}\), therefore \[\lim_{\varepsilon\to 0}\frac{\rho_{t}\left(Y+\varepsilon\,W\right)-\rho_{t}(Y^{ \prime})}{\varepsilon}=\mathbb{E}\left[\partial_{\varepsilon}F^{-1}(U, \varepsilon)\,\gamma_{t}(U)\,|\,\mathcal{F}_{t}\right]\Big{|}_{\varepsilon=0}\,. \tag{19}\] By taking a derivative with respect to \(\varepsilon\) of the equation \(F(F^{-1}(u,\varepsilon),\varepsilon)=u\), we obtain for all \(u\in(0,1)\), \[\partial_{\varepsilon}F^{-1}(u,\varepsilon)=-\frac{\partial_{ \varepsilon}F(y,\varepsilon)}{f(y,\varepsilon)}\Big{|}_{y=F^{-1}(u, \varepsilon)} \tag{20}\] Next, we calculate the derivative \(\partial_{\varepsilon}F(y,\varepsilon)\). For this note that \[\partial_{\varepsilon}F(y,\varepsilon)= \,\partial_{\varepsilon}\mathbb{E}[\mathds{1}_{Y+\varepsilon W \leq y}\,|\,\mathcal{F}_{t}]=\lim_{\varepsilon\to 0}\frac{1}{ \varepsilon}\,\mathbb{E}\left[\mathds{1}_{Y+\varepsilon W\leq y}-\mathds{1}_ {Y\leq y}\,|\,\mathcal{F}_{t}\right]\] \[= \lim_{\varepsilon\to 0}\frac{1}{\varepsilon}\,\mathbb{E}\left[ \,\mathbb{E}\left[\mathds{1}_{Y\in(y-\varepsilon W,y]}\,|\,W\right]\,|\, \mathcal{F}_{t}\right]=\lim_{\varepsilon\to 0}\frac{1}{\varepsilon}\,\mathbb{E}\left[\, \int_{y-\varepsilon W}^{y}dF_{Y|W}(y^{\prime})\,\Bigg{|}\,\,\mathcal{F}_{t}\right]\] \[= -\,\mathbb{E}\big{[}W\,f_{Y|W}(y)\,|\,\mathcal{F}_{t}\big{]}\,, \tag{21}\] where, \(F_{Y|W}\) and \(f_{Y|W}\) are the distribution and density, respectively, of \(Y\) conditional on \(W\). 
Plugging (20) and (21) into Equation (19), we obtain \[\lim_{\varepsilon\to 0}\frac{\rho_{t}\left(Y+\varepsilon\,W\right)-\rho_{t}(Y)}{\varepsilon}= \,\mathbb{E}\left[\,\frac{\mathbb{E}\left[W\,f_{Y|W}(y)\right]}{f(y)}\Big{|}_{y=F^{-1}(U)}\,\gamma_{t}(U)\,\Bigg{|}\,\,\mathcal{F}_{t}\right]\] \[= \,\mathbb{E}\left[\,\mathbb{E}\left[W\,|\,Y=y\right]\Big{|}_{y=F^{-1}(U)}\,\gamma_{t}(U)\,\Bigg{|}\,\,\mathcal{F}_{t}\right]\] \[= \,\mathbb{E}\left[\,\mathbb{E}\left[W\,|\,F(Y)=U\right]\,\gamma_{t}(U)\,|\,\,\mathcal{F}_{t}\right]\] \[= \,\mathbb{E}\left[W\,\gamma_{t}(U_{Y|\mathcal{F}_{t}})\,|\,\mathcal{F}_{t}\right]\,.\]

### Proof of Proposition 4:

First we generalise the Gateaux derivative as follows. For a functional \(F_{t}:\boldsymbol{\mathcal{Z}}_{t:T}\to\mathcal{Z}_{t}\), \(t\in\mathcal{T}\) and \(s\geq t\), we denote by \(\mathcal{D}^{\zeta}_{s,i}\,F_{t}\) its Gateaux derivative of the \(i^{\text{th}}\) component at time \(s\) in direction \(\zeta\in\mathcal{Z}_{s}\). That is, for \(s,t\in\mathcal{T}\), \(s\geq t\), and \(\boldsymbol{Z}_{t:T}\in\boldsymbol{\mathcal{Z}}\) \[\mathcal{D}^{\zeta}_{s,i}\,F_{t}[\boldsymbol{Z}_{t:T}]:=\lim_{\varepsilon\to 0}\frac{1}{\varepsilon}\Big{(}F_{t}[\boldsymbol{Z}_{t:T}+\varepsilon\,\boldsymbol{1}_{s,i}\zeta]-F_{t}[\boldsymbol{Z}_{t:T}]\Big{)}\,.\] Next, note that \[RC_{t,i}[\boldsymbol{\theta}_{t:T}]=\mathcal{D}^{\theta_{t,i}}_{i}\,\rho_{t}\,\big{(}\boldsymbol{\theta}^{\intercal}_{t}\Delta\boldsymbol{X}_{t}+\,\mathfrak{R}_{t+1}[w^{\boldsymbol{\theta}}_{t}\,\boldsymbol{\theta}_{t+1:T}]\big{)}=\lim_{\varepsilon\to 0}\frac{1}{\varepsilon}\Big{\{}\rho_{t}\Big{(}\varepsilon\theta_{t,i}\Delta X_{t,i}+\varepsilon\mathcal{D}^{\theta_{t,i}}_{t,i}\mathfrak{R}_{t+1}[w^{\boldsymbol{\theta}}_{t}\,\boldsymbol{\theta}_{t+1:T}]+\boldsymbol{\theta}^{\intercal}_{t}\Delta\boldsymbol{X}_{t}+\,\mathfrak{R}_{t+1}[w^{\boldsymbol{\theta}}_{t}\,\boldsymbol{\theta}_{t+1:T}]\Big{)}-\mathfrak{R}_{t}[\boldsymbol{\theta}_{t:T}]\Big{\}}\,,\] where we use the induction argument in the fourth equation and that \(\Gamma_{s}^{\theta}\in\mathcal{F}_{s}\) in the last equality. This concludes the proof of Equation (23). Finally combining (23) with (22), noticing that \(\Gamma_{t}^{\theta}\in\mathcal{F}_{t}\), and using the law of iterated expectations concludes the proof.

### Parameters used in Market Model Simulation.

This section contains further details on the simulated market model. In particular, Table 2 specifies the market model parameters and Table 3 gives the correlation matrix of the dependence structure.

### Acknowledgments

SJ and SP acknowledge support from the Natural Sciences and Engineering Research Council of Canada (grants RGPIN-2018-05705, RGPAS-2018-522715, and DGECR-2020-00333, RGPIN-2020-04289). RT acknowledges the support from CNPq (200293/2022-2) and FAPERJ (E-26/201.350, E-26/211.426, E-26/211.578). YS acknowledges the support from CNPq (306695/2021-9) and FAPERJ (E-26/201.375/2022 27260).

\begin{table} \begin{tabular}{c r r r r r} \hline \hline & \(i=1\) & \(i=2\) & \(i=3\) & \(i=4\) & \(i=5\) \\ \hline \(\kappa_{i}\) & 4 & 4.5 & 5 & 5.5 & 6 \\ \(\theta_{i}\) & 0.01 & 0.0225 & 0.04 & 0.0625 & 0.09 \\ \(\eta_{i}\) & 0.5 & 0.875 & 1.25 & 1.625 & 2 \\ \(\mu_{i}\) & 0.05 & 0.0575 & 0.065 & 0.0725 & 0.08 \\ \hline \hline \end{tabular} \end{table} Table 2: Market model parameters.

\begin{table} \begin{tabular}{c r r r r r r r r r r} \hline \hline & \(X_{1}\) & \(X_{2}\) & \(X_{3}\) & \(X_{4}\) & \(X_{5}\) & \(v_{1}\) & \(v_{2}\) & \(v_{3}\) & \(v_{4}\) & \(v_{5}\) \\ \hline \(X_{1}\) & 1.0 & 0.3 & 0.3 & 0.3 & 0.3 & -0.5 & & & & \\ \(X_{2}\) & 0.3 & 1.0 & 0.3 & 0.3 & 0.3 & & -0.5 & & & \\ \(X_{3}\) & 0.3 & 0.3 & 1.0 & 0.3 & 0.3 & & & -0.5 & & \\ \(X_{4}\) & 0.3 & 0.3 & 0.3 & 1.0 & 0.3 & & & & -0.5 & \\ \(X_{5}\) & 0.3 & 0.3 & 0.3 & 0.3 & 1.0 & & & & & -0.5 \\ \(v_{1}\) & -0.5 & & & & & 1.0 & & & & \\ \(v_{2}\) & & -0.5 & & & & & 1.0 & & & \\ \(v_{3}\) & & & -0.5 & & & & & 1.0 & & \\ \(v_{4}\) & & & & -0.5 & & & & & 1.0 & \\ \(v_{5}\) & & & & & -0.5 & & & & & 1.0 \\ \hline \hline \end{tabular} \end{table} Table 3: Correlation matrix of dependence structure for the market model. Only non-zero entries are shown.
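For readers who want to generate scenarios of this kind, a compact NumPy sketch of the market model with the Table 2 parameters is given below. For brevity it draws all shocks from a single Gaussian copula with the Table 3 correlations rather than the student-t copula used for the return shocks; the unit initial prices and the fixed seed are likewise our own choices, not the paper's.

```python
import numpy as np

# Parameters from Table 2 (per asset i = 1..5).
kappa = np.array([4.0, 4.5, 5.0, 5.5, 6.0])
theta = np.array([0.01, 0.0225, 0.04, 0.0625, 0.09])
eta   = np.array([0.5, 0.875, 1.25, 1.625, 2.0])
mu    = np.array([0.05, 0.0575, 0.065, 0.0725, 0.08])

def simulate_paths(n_paths, n_steps, dt=1.0 / 12, rho_xx=0.3, rho_xv=-0.5, seed=0):
    """Simulate the Heston-inspired model with the Milstein-type update above.

    Simplification (ours): all shocks come from one multivariate normal with
    the Table 3 correlation matrix, i.e. a Gaussian copula throughout.
    """
    rng = np.random.default_rng(seed)
    n = len(kappa)
    # Build the 10x10 correlation matrix of (dW^X_1..5, dW^v_1..5).
    corr = np.eye(2 * n)
    corr[:n, :n] = rho_xx + (1 - rho_xx) * np.eye(n)
    for i in range(n):
        corr[i, n + i] = corr[n + i, i] = rho_xv
    chol = np.linalg.cholesky(corr)

    X = np.ones((n_paths, n_steps + 1, n))          # prices, started at 1
    v = np.tile(theta, (n_paths, 1))                # variance, started at theta
    for t in range(n_steps):
        z = rng.standard_normal((n_paths, 2 * n)) @ chol.T * np.sqrt(dt)
        dWx, dWv = z[:, :n], z[:, n:]
        vp = np.maximum(v, 0.0)
        # Drift and diffusion follow the displayed discretisation verbatim.
        X[:, t + 1] = X[:, t] * np.exp((mu - 0.5 * vp**2) * dt + np.sqrt(vp) * dWx)
        v = theta + (vp - theta) * np.exp(-kappa * dt) \
            + eta * np.sqrt(vp) * dWv + 0.25 * eta**2 * (dWv**2 - dt)
    return X

paths = simulate_paths(n_paths=500, n_steps=2)       # 500 scenarios over T = 2 steps
```

Paths generated this way can then be fed to actor and critic networks of the kind described in Section 5 to estimate risk-to-go values and risk contributions.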
2305.09556
Adapting Sentence Transformers for the Aviation Domain
Learning effective sentence representations is crucial for many Natural Language Processing (NLP) tasks, including semantic search, semantic textual similarity (STS), and clustering. While multiple transformer models have been developed for sentence embedding learning, these models may not perform optimally when dealing with specialized domains like aviation, which has unique characteristics such as technical jargon, abbreviations, and unconventional grammar. Furthermore, the absence of labeled datasets makes it difficult to train models specifically for the aviation domain. To address these challenges, we propose a novel approach for adapting sentence transformers for the aviation domain. Our method is a two-stage process consisting of pre-training followed by fine-tuning. During pre-training, we use Transformers and Sequential Denoising AutoEncoder (TSDAE) with aviation text data as input to improve the initial model performance. Subsequently, we fine-tune our models using a Natural Language Inference (NLI) dataset in the Sentence Bidirectional Encoder Representations from Transformers (SBERT) architecture to mitigate overfitting issues. Experimental results on several downstream tasks show that our adapted sentence transformers significantly outperform general-purpose transformers, demonstrating the effectiveness of our approach in capturing the nuances of the aviation domain. Overall, our work highlights the importance of domain-specific adaptation in developing high-quality NLP solutions for specialized industries like aviation.
Liya Wang, Jason Chou, Dave Rouck, Alex Tien, Diane M Baumgartner
2023-05-16T15:53:24Z
http://arxiv.org/abs/2305.09556v2
# Adapting Sentence Transformers for the Aviation Domain ###### Abstract Learning effective sentence representations is crucial for many Natural Language Processing (NLP) tasks, including semantic search, semantic textual similarity (STS), and clustering. While multiple transformer models have been developed for sentence embedding learning, these models may not perform optimally when dealing with specialized domains like aviation, which has unique characteristics such as technical jargon, abbreviations, and unconventional grammar. Furthermore, the absence of labeled datasets makes it difficult to train models specifically for the aviation domain. To address these challenges, we propose a novel approach for adapting sentence transformers for the aviation domain. Our method is a two-stage process consisting of pre-training followed by fine-tuning. During pre-training, we use Transformers and Sequential Denoising AutoEncoder (TSDAE) with aviation text data as input to improve the initial model performance. Subsequently, we fine-tune our models using a Natural Language Inference (NLI) dataset in the Sentence Bidirectional Encoder Representations from Transformers (SBERT) architecture to mitigate overfitting issues. Experimental results on several downstream tasks show that our adapted sentence transformers significantly outperform general-purpose transformers, demonstrating the effectiveness of our approach in capturing the nuances of the aviation domain. Overall, our work highlights the importance of domain-specific adaptation in developing high-quality NLP solutions for specialized industries like aviation. ## 1 Introduction In recent years, deep learning has revolutionized the field of Natural Language Processing (NLP) with the development of powerful sentence representation techniques like sentence embeddings. These techniques enable NLP models to capture contextual information about words and their relationships within sentences, making them useful for various artificial intelligence (AI) applications such as semantic search, semantic textual similarity (STS), sentiment analysis, and machine translation. Two popular approaches for learning sentence embeddings are supervised and unsupervised learning. Supervised learning methods exploit labels for sentence pairs which provide the information about the relation between the sentences, while unsupervised methods rely on large amounts of unannotated data to learn sentence representations without explicit guidance. Supervised methods include the well-known Sentence Bidirectional Encoder Representations from Transformers (SBERT) [1], which uses Siamese [2] and triplet network structures to derive semantically meaningful sentence embeddings. High-quality sentence embeddings can be derived via supervised training; however, the labeling cost is a major concern in practice, especially for specialized domains. In contrast, unsupervised methods do not need data labels, and have been dominant in sentence embedding learning. There are several types of unsupervised methods, including flow-based, contrastive learning, denoise autoencoder, and prompt-based methods. Flow-based methods include BERT-flow [3] and BERT-whitening [4]. BERT-flow transforms the BERT [5] sentence embedding distribution into a smooth and isotropic Gaussian distribution through normalizing flow [6]. BERT-whitening [4] uses a whitening post-processing method to transform the BERT-based sentence to a standard orthogonal basis while reducing its size. 
Contrastive learning methods are popular in sentence embedding learning. The Contrastive Framework for Self-Supervised SEntence Representation Transfer (ConSERT) adopts contrastive learning to fine-tune BERT in an unsupervised way. ConSERT solves the collapse issue [7] of BERT-derived sentence representations to make them more applicable for downstream tasks. Contrastive Tension (CT) [8] treats identical and different sentences as positive and negative pairs and constructs the training objective as a noise-contrastive task between the final layer representations of two independent models, in turn forcing the final layer representations suitable for feature extraction. The Simple Contrastive Learning of Sentence Embeddings (SimCSE) [9] uses contrastive learning to learn sentence embedding from either unlabeled or labeled datasets. SimCSE uses dropout to create identical sentence pairs. Enhanced SimCSE (ESimCSE) [10] further improves the unsupervised learning capability of SimCSE by carefully crafting positive and negative pairs. Difference-based Contrastive Learning for Sentence Embeddings (DiffCSE) [11] learns sentence embeddings from the difference between an original and edited sentence, where the edited sentence is created by stochastically masking out the original sentence and then sampling from a masked language model. Information-aggregated Contrastive learning of Sentence Embeddings (InfoCSE) [12] also derives the sentence embeddings with an additional masked language model task and a well-designed network. Contrastive learning for unsupervised Sentence Embedding with Soft Negative samples (SNCSE) [13], takes the negation of original sentences as soft negative samples and adds Bidirectional Margin Loss (BML) into the traditional contrastive learning framework. The Entity-Aware Contrastive Learning of Sentence Embedding (EASE) [14] learns sentence embeddings via contrastive learning between sentences and their related entities. Contrastive learning method with Prompt-derived Virtual semantic Prototypes (ConPVP) [15] constructs virtual semantic prototypes for each instance, and derives negative prototypes by using the negative form of the prompts. ConPVP uses a prototypical contrastive loss to drive the anchor sentence embedding closer to its corresponding semantic prototypes, and further away from the negative prototypes and the prototypes of other sentences. Denoise autoencoder and prompt are also be used for unsupervised sentence representation learning. For example, Transformers and Sequential Denoising AutoEncoder (TSDAE) [16], was designed to encode corrupted sentences into fixed-sized embedding vectors and then let the decoder reconstruct the original sentences from this sentence embedding in an unsupervised way. PromptBERT [17] uses prompts to improve BERT sentence embeddings. It should be mentioned that the models mentioned above were trained on general corpora without considering specific domains, resulting in poor performance when applied directly to domains like aviation. This work seeks to resolve this issue by tailoring pretrained sentence transformers for the aviation domain. Aviation text data are characterized by numerous intricacies like technical jargon, unconventional grammar, and inconsistent abbreviations. In addition, aviation text data have no labels. With those limitations in mind, we designed a two-stage approach comprising pre-training and fine-tuning. 
We leverage TSDAE during pre-training to enhance the base model's capabilities before refining it further via fine-tuning on the Natural Language Inference (NLI) dataset. By doing so, we ensure better performance than general-purpose pre-trained sentence transformers while minimizing overfitting concerns. Our experiments demonstrate the efficacy of our technique, paving the way for more sophisticated NLP solutions for the aviation sector. We hope that our findings foster further investigation in this promising direction. The remainder of this paper is organized as follows: Section II gives a short introduction to our input data sources used in the research. Section III provides details of our adaptation modeling process. The results are shown in Section IV. Finally, we conclude in Section V. ## II Data Sources and Pre-processing In aviation, various types of text data are accumulated to support safety and daily operations as depicted in Fig. 1. For example, Federal Aviation Administration (FAA) has a Comprehensive Electronic Data Analysis and Reporting (CEDAR) database [18], which provides access to several principal aviation safety data and information sources. The Electronic Occurrence Report (EOR) [19] provides an alert identified by an automated system such as Traffic Analysis and Review Program (TARP) or Operational Error Detection Patch (OEDP) that automatically uploads into the CEDAR tool. The Mandatory Occurrence Report (MOR) [19] reports an occurrence involving air traffic services for which collecting associated safety-related data and conditions is mandatory. Notices to Air Men (NOTAM) [20] are electronic communications to alert aircraft pilots of potential hazards along a flight route or at a location that could affect the safety of the flight. METeorological Aerodrome Report (METAR) [21] reports hourly airport surface weather observations. These datasets can generally be classified into two main categories based on their linguistic characteristics: domain-specific and everyday language (see Fig. 1). The first group consists of texts written in specialized language often containing technical terms, abbreviations, and acronyms commonly used within the aviation industry, as shown in Table 1. In contrast, the second category encompasses texts that adhere to standard writing conventions without excessive use of jargon or unusual abbreviations. Our study focuses on analyzing domain specific texts. Given this focus, we chose the Digital Automatic Terminal Information Service (DATIS) as our primary training data source because it consists exclusively of abbreviated texts from the aviation domain. Since DATIS lacks labels, making supervised fine-tuning impossible, we decided to supplement it with a Natural Language Inference (NLI) dataset. The NLI dataset serves as input during the fine-tuning process, helping us overcome potential overfitting issues. In the subsequent sections, we will describe both datasets in more detail. 
\begin{table} \begin{tabular}{|l|} \hline ANPDAXA \\.CHIXCXA 170000 \\ FF KANPXAAT \\ 170000 EDDMZTZW \\ -ATIS EDDM W SPECI 162359 \\ -EXPECT VECTORS FOR INDEPENDENT PARALLEL ILS APPROACH \\ -RWY 26R 26L \\ -NEW ATC SYSTEM IN OPERATION, EXPECT POSSIBLE DELAY \\ -TRL 60 \\ -RMVY 26 LFFT CLSD FM 2100 TILL 0400 UTC,, \\ -19001KT \\ -9999 4000 \\ -RVR RWY26R TDZ P2000 MID 1400 MID P2000 END 1700 \\ -BR BKN024 \\ -T02 DP02 \\ -QNH1020 \\ - \\ - \\ -COMMENTS: TG:00 \\ \hline \end{tabular} \end{table} Table 1: Abbreviated aviation text data example Figure 1: Aviation domain text data sources. ## Appendix A Digital Automatic Terminal Information Service (DATIS) Dataset DATIS systems are widely utilized in busy airports to disseminate information quickly and efficiently [22]. Supported by ARINC [23], DATIS digitally transmits essential Air Traffic Information System (ATIS) notifications, presenting them in an easily comprehensible, written form to flight crews. By doing so, DATIS supports safe and efficient aircraft operation in challenging aeronautical environments. DATIS communications primarily relay important airport circumstances, such as available landing and departing runways, current meteorological updates, runway closures, taxiway closures, malfunctioning equipment, surface conditions like ice, and other relevant alerts about birds, construction cranes, drones, lasers, etc. This information is combined into a centralized dataset with associated metadata, including timestamps, originating sources, and event dates. This integrated view enables researchers to explore patterns in DATIS usage and assess its effectiveness for various purposes. Data residing within the MITRE DATIS archive come directly from the Federal Aviation Administration (FAA) via ARINC. Hourly updates take place around the clock, with a one-hour time lag relative to live events. MITRE's database maintains files containing 300 to 400 entries per hour. Table 2 shows examples extracted directly from the logs. This information is a crucial resource for subsequent analysis and investigations related to the use of DATIS information within complex aeronautic contexts. To gain an in-depth understanding of the DATIS dataset, we performed exploratory data analysis (EDA) for the year 2022. This allowed us to assess the characteristics of the data, identify patterns and trends, and determine any potential issues that might affect our analysis and interpretation. Through this process, we were able to obtain valuable insights into the properties of the data and develop informed hypotheses about its structure. Our findings from this EDA will serve as a foundation for further analysis and modeling efforts. As shown in Fig. 2, the EDA analysis entailed examining 208 airports featured in the 2022 DATIS dataset. Notably, we observed variations in reporting frequency among the airports, with some updating every 20-30 minutes and others updating their messages irregularly. For example, Hong Kong International Airport did not generate any additional datasets after February 2022. Additionally, there are three primary categories of DATIS messages: combined, arrival, and departure. Smaller airports frequently integrate both arrival and departure details into a single consolidated message, while larger airports, like the Hartsfield-Jackson Atlanta International Airport (ATL), generate separate messages for arrival and departure information. 
As raw DATIS messages are manually entered by air traffic controllers, they can often contain transcription mistakes. Such errors may result from misspellings, inconsistent abbreviation (e.g., interchangeable use of RY, RWY, or RUNWAY), formatting irregularities (e.g., RWY32L, 18 L, or NOSIG=), improper grammar, extraneous spaces, or omissions. To ensure successful model training using these messages as input, one must thoroughly scrub and cleanse the data prior to analysis. We developed a set of error correction rules summarized in the green section of Fig. 3. These rules use Python's _re_ module [24] to locate specific patterns and make corrections where appropriate. As shown in Table 3, the preprocessing steps lead to cleaner and better organized data, resulting in a significant improvement over the raw messages presented in Table 2. The enhanced quality of the data allows for more accurate and efficient processing and analysis, ultimately leading to better outcomes. These improvements highlight the importance of effective preprocessing techniques when working with text data. After that, we employed the spaCy library [25] to segment DATIS messages into individual sentences, allowing us to gather a corpus consisting of roughly 2,624,012 distinct sentences drawn from the 2022 data files. These sentences constitute our training dataset for future machine learning initiatives. ## Appendix B Natural Language Inference (NLI) Dataset Natural Language Inference (NLI) involves assessing the truth value of hypotheses based on provided premises. Specifically, NLI categorizes each hypothesis as true (entailment), false (contradiction), or neutral (undetermined). For this study, we obtained the NLI dataset from [https://sbert.net/datasets/AllNLItsv.gz](https://sbert.net/datasets/AllNLItsv.gz). This collection contains unions of Stanford Natural Language Inference (SNLI) [26] and MultiNLI [27], resulting in a comprehensive resource with 961,725 records. Having readied the necessary datasets, we proceeded to the next step of model training, detailed in the next section. ## Appendix C Modeling Method DATIS text data have no labels. With that limitation in mind, we followed the paradigm model training process: pre-training plus fine-tuning (see Fig. 4). During pre-training, we used TSDAE to enhance the base model's capabilities on our aviation dataset. We choose TSDAE because of its relatively better performance reported in [16]. For fine-tuning, we used SBERT to tune sentence transformers with the NLI dataset. This ensures that we achieve better performance than general-purpose pre-trained sentence transformers while minimizing overfitting problems. ## Appendix A Tsdae TSDAE is an unsupervised sentence embedding method; it uses a denoise autoencoder [28] as the architecture (see Stage 1 of Fig. 4). During training, TSDAE adds noise to the original sentence, and then feeds it to an encoder which transforms the corrupted sentence into a fixed-sized sentence embedding vector (indicated by yellow in Stage 1 of Fig. 4). Then, the decoder reconstructs the original sentence from this sentence embedding. A good reconstruction denotes that the sentence embedding from the encoder captures the sentence's semantics well. During inference, the encoder is only used for creating sentence embeddings. TSDAE has modified the conventional encoder-decoder transformer [29]. In TSDAE, the key and value of the cross-attention are both confined to the sentence embedding. 
Formally, the formulation of the modified cross-attention is: \[H^{(k)}=Attention(H^{(k-1)},[S^{T}],[S^{T}])\] \[Attention(Q,K,V)=softmax(\frac{QK^{\tau}}{\sqrt{d}})V\] where \(H^{(k)}\in\mathbb{R}^{t\times d}\) represents the decoder hidden state at time step \(t\) at the \(k\)-th layer; \(d\) is the dimension size of sentence embedding vector; \([S^{T}]\in\mathbb{R}^{1\times d}\) is sentence embedding vector; and \(Q,K,V\) are query, key, and value, respectively. TSDAE determined an effective approach for training based on three components: (1) using deletion with a deletion ratio of 0.6 as the input noise; (2) employing the output from the [CLS] token as a fixed-size sentence representation; and (3) tying encoder and decoder weights during training. This combination has proven to be highly successful in promoting learning. ## Appendix B Sentence-Bert (SBERT) The Sentence-BERT (SBERT) [1] model was developed by modifying the pre-trained BERT network [30]. S-BERT involves training the model on a labeled dataset like NLI to generate sentence embeddings that are more accurate and efficient than those produced by standard BERT or RoBERTa [31] models. Specifically, SBERT uses a combination of Siamese and triplet network architecture to create semantically meaningful sentence representations, as shown in Fig. 5. Using SBERT can significantly decrease inference time from approximately 65 hours with BERT or RoBERTa to just 5 seconds without sacrificing accuracy. We fine-tuned the sentence transformers with the labeled NLI dataset to overcome potential overfitting problems resulting from stage 1 of pre-training. Figure 4: **Aviation sentence transformer training pipeline.** ## IV Results In this section, we present the results of our experiments in applying the aviation sentence transformer to several tasks including STS, clustering, semantic search, and paraphrase mining. ### Pretrained Sentence Transformers STS Evaluation We tested the suitability of pre-trained general-purpose sentence transformer models from the Hugging Face website ([https://huggingface.co/sentence-transformers](https://huggingface.co/sentence-transformers)) for use on our selected aviation domain text data. We sought to find the best performing model based on its ability to discern differences between sets of similar or dissimilar sentences. For evaluation purposes, we constructed four test cases in the aviation domain, and computed the cosine similarity score for each sentence pair. We compiled the resulting scores into Table 4. The bert-base-cased model did not effectively differentiate between sentences in the aviation corpus. As such, it did not meet our requirements, so we excluded it from further consideration. The bert-base-nli-mean-tokens model also fell short of expectations due to its tendency to treat disparate sentences (the Index 2 row in Table 4) with a high cosine similarity score. Conversely, the Index 3 row in Table 4 had highly comparable phrasing, thus providing an ideal test case to measure the capability of the remaining models to generate analogous output. All-MiniLM-L6-v2, all-distilroberta-v1, and all-mpnet-base-v2 also underperformed in this case and were therefore eliminated. Therefore, all-MiniLM-L12-v2 is the final candidate for aviation domain adaptation. The following sections contain additional details about the adaptation experiments and their corresponding results. ### Experiment Settings In this section, we describe the training environment used for our model. 
Table 5 lists our hardware equipment setup. We cloned the entire sentence transformers development package from [https://github.com/UKPLab/sentence_transformers](https://github.com/UKPLab/sentence_transformers). These resources enabled us to effectively train our model and achieve the desired results. Prior to beginning the training process, we prepared the DATIS training data by formatting each sentence onto a separate line, as needed by the software package being used. We used <sentence-transformers/examples/unsupervised_learning/TSDAE/train_tsdae_from_file.py> as our training script, and we adjusted the training parameters according to those presented in the second column of Table 6. With this configuration, we began the stage 1 training phase. After completing stage 1, we proceeded to stage 2 of fine-tuning using NLI dataset, using the script <sentence-transformers/examples/training/nli/training_nli_v2.py> and the parameters listed in the third column of Table 6. The script uses the Multiple Negative Ranking Loss strategy [32] where entailment pairs are considered positive while contradictions are treated as hard negatives. Every 10% of the training process, we evaluated the performance of the model on the STS benchmark dataset. When stage 2 was complete, the model was ready to be applied to practical tasks. ### Adapted Sentence Transformer STS Evaluation After completing the two-part training process, we applied the aviation variant of the sentence transformer, named aviation-all-MiniLM-L12-v2, to the same set of text data used in Table 4. The results, listed in Table 7, demonstrate that the adapted aviation-all-MiniLM-L12-v2 model outperforms the general-purpose all-MiniLM-L12-v2. This shows that the adaptation process effectively tailored the model for the domain-specific language patterns prevalent in aviation text. ### Clustering Results We next used aviation-all-MiniLM-L12-v2 model to perform clustering on the DATIS sentences about NOTAM reports from January 1, 2022 to January 9, 2022. The resulting clusters are detailed in Table 8 and visualized using a t-Distributed Stochastic Neighbor Embedding (t-SNE) [26] plot in **Error! Reference source not found.**, which fully demonstrates our adapted sentence transformer was able to identify meaningful patterns in the data. For instance, \begin{table} \begin{tabular}{|l|l|l|} \hline **Parameters** & **Pre-training parameter values** & **Fine-turning parameter values** \\ \hline epochs & 1 & 1 \\ \hline weight decay & 1e-5 & 1e-6 \\ \hline scheduler & constant & constant \\ \hline learning rate & 1e-4 & 1e-5 \\ \hline evaluation steps & 500 & 500 \\ \hline save best model & True & True \\ \hline show progress bar & True & True \\ \hline use amp & False & False \\ \hline batch size & 128 & 128 \\ \hline \end{tabular} \end{table} Table 6: Stage 1 training parameter settings \begin{table} \begin{tabular}{|l|l|l|} \hline **Index** & **Sentence1** & **Sentence2** & **all-MiniLM-M-** \\ \hline 0 & NOTAMS. & NOTICE TO AIR MISSIONS. & 0.207 \\ \hline 1 & TDWR OTS. & RWY 2R GS OTS. & 0.580 \\ \hline 2 & HAZDUS WX INFO FOR PHX AREA & CLEARANCE FREQUENCY IS 121.9. & 0.311 \\ & AVBL ON FSS FREOS. & & \\ \hline 3 & BIRD ACTIVITY INVOF ARPT. 
& WARNING, BIRD ACTIVITY IN VCY & 0.756 \\ & & OF ARPT & \\ \hline \end{tabular} \end{table} Table 7: Adapted model performance comparison \begin{table} \begin{tabular}{|l|l|} \hline **Operation System** & **Linux** \\ \hline CPU & 2xAMD EPYC 7262 8-Core Processor \\ \hline Memory & 250 GB \\ \hline Framework & PyTorch 2.0 \\ \hline GPUs & 4xA100 \\ \hline \end{tabular} \end{table} Table 5: Experimental hardware environment cluster 0 focuses on runway surface conditions (RSC), while cluster 2 highlights bird activities. Cluster 3 deals with equipment being out of service (OTS), cluster 4 pertains to tower operations, cluster 5 discusses closed taxiways, cluster 6 centers around runway closures, cluster 7 concerns hazardous weather situations, cluster 8 alerts pilots that the tower must call for release from other facilities before allowing them to depart, cluster 9 warns about possible threats from lasers shining into aircraft windows, and cluster 10 provides information on snow. Cluster 1 is a miscellaneous category, containing a broad range of uncommon messages. ## Appendix E Semantic Search In addition to clustering, we used our newly adapted aviation-all-MiniLM-L12-v2 model to perform semantic searches. By providing a query sentence such as "BIRD ACTIVITY IN THE VICINITY OF THE AIRPORT," the model rapidly identified the ten most similar sentences within the dataset based on their cosine similarity scores; the count column in Table 9 represents how many of the same sentences are included in the searched dataset. Notably, the use of our adapted model allowed for more precise and accurate retrieval of relevant sentences, reflecting its enhanced comprehension of domain-specific language patterns. Furthermore, it underscores the variety of language expressions in the aviation domain. \begin{table} \begin{tabular}{|l|l|l|l|} \hline **Query** & **Sentence** & **Score** & **Count** \\ \hline BIRD ACTIVITY IN VCY OF ARPT & BIRD ACTIVITY IN THE VCNT OF THE ARPT & 0.9744 & 2 \\ \hline BIRD ACTIVITY IN VCY OF ARPT & BIRD ACTIVITY RPDT IN THE VC OF THE ARPT & 0.9661 & 1 \\ \hline BIRD ACTIVITY IN VCY OF ARPT & BIRD ACTIVITY VC OF ARPT & 0.9604 & 2 \\ \hline BIRD ACTIVITY IN VCY OF ARPT & BIRD ACTIVITY VCNTY ARPT & 0.9596 & 2 \\ \hline BIRD ACTIVITY IN VCY OF ARPT & BIRD ACTIVITY VICINITY ARPT & 0.9067 & 2 \\ \hline BIRD ACTIVITY IN VCY OF ARPT & BIRD ACTIVITY VICINITY OF ARPT & 0.8973 & 1 \\ \hline BIRD ACTIVITY IN VCY OF ARPT & BIRD ACTIVITY INVOF ARPT & 0.8806 & 1 \\ \hline BIRD ACTIVITY IN VCY OF ARPT & BIRD ACTIVITY & 0.8518 & 2 \\ \hline BIRD ACTIVITY IN VCY OF ARPT & BIRD ACTIVITY YICINITY DAL ARPT & 0.8353 & 1 \\ \hline BIRD ACTIVITY IN VCY OF ARPT & BIRD ACTIVITY VICINITY ALB ARPT & 0.8196 & 2 \\ \hline \end{tabular} \end{table} Table 9: Semantic search example Figure 6: t-SNE plot of sentence embedding. ## Appendix F Paraphrase Mining To perform paraphrase mining of DATIS messages, we again turned to our aviation-all-MiniLM-L12-v2 model. Unlike previous methods involving brute force comparison, our approach uses the power of the sentence transformer package to quickly and accurately identify duplicate content across larger datasets. Our implementation is guided by the principles introduced in [33]. Table 10 demonstrates the efficacy of this approach, where the scores represent cosine similarity values. When score equals to 1, it means that two messages are identical. 
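As a rough illustration of how such mining can be run with the sentence-transformers utilities, a sketch is given below; the model path, the toy message list, and the `top_k` value are placeholders rather than the exact setup used here.

```python
from sentence_transformers import SentenceTransformer, util

# Hypothetical local path to the adapted model; any SentenceTransformer works here.
model = SentenceTransformer("path/to/aviation-all-MiniLM-L12-v2")

# A few cleaned DATIS-style sentences (toy examples, not the full corpus).
messages = [
    "BIRD ACTIVITY IN VCY OF ARPT",
    "BIRD ACTIVITY IN THE VCNT OF THE ARPT",
    "RWY 26L 26R CLSD",
    "HAZDUS WX INFO AVBL ON FSS FREQS",
]

# paraphrase_mining embeds every message once and compares them in chunks,
# returning the best-scoring pairs as (cosine_score, index_i, index_j).
pairs = util.paraphrase_mining(model, messages, top_k=5)
for score, i, j in pairs:
    print(f"{score:.4f}\t{messages[i]}\t{messages[j]}")
```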
This refinement process enables us to streamline the detection of repetitive information while accounting for industry-specific jargon and nuances.

## V Conclusion

This study describes our novel two-stage training approach utilizing TSDAE and SBERT models to adapt sentence transformers for use on aviation domain text datasets. Experimental evaluation demonstrates significant improvements in various NLP tasks such as STS, clustering, semantic search, and paraphrase mining over methods using general-purpose sentence transformers. Specifically, the adapted model effectively parses DATIS messages, enabling updates regarding weather conditions and other critical landing and departure information to be processed more efficiently. Our experiment results confirm that the adapted model performs well in extracting comprehensible information from text that is dense with abbreviations and domain-specific jargon. Our ongoing research is focused on using the adapted model to support applications that can continuously check for spatial and temporal patterns in reported events to enhance situational awareness and enable proactive mitigation strategies for potential threats to aviation safety. Our proposed adaptation methodology could also be applied to other areas that use a lot of domain-specific language.

## Acknowledgments

The authors thank Dr. Jonathan Hoffman, Dennis Sawyer, Dr. Craig Wanke, Dave Hamrick, Dr. Tom Becher, Mike Robinson, Dr. Lixia Song, Erik Vargo, Matt Yankey, Mahesh Balakrishna, Huang Tang, Shuo Chen, Tao Yu, Michele Ricciardi, and Anahita Imanian of the MITRE Corporation for their support, valuable discussions, and insights.

\begin{table} \begin{tabular}{|c|c|l|l|l|} \hline **Idx1** & **Idx2** & **Message1** & **Message2** & **Score** \\ \hline 35971 & 35972 & QU ANPDAXA, CHIXCXA 050345, FF & QU ANPDAXA,, CHIXCXA 050345, FF KANYXAAD, & 1.0000 \\ & KANYXAAD, 050345 YMENAITS, ATIS YMEN & 050345 YMENAITS, ATIS YMEN K 050345. WIND: & \\ & K 050345. WIND: 09015 MAX XW 15 KTS & 090/15 MAX XW 15 KTS MAX TW 3 KTS VIS: GT & \\ & MAX TW 3 KTS VIS: GT 10KM CLD: FEW030 & 10KM CLD: FEW030 & 10KM CLD: FEW030 & 1007.27 QNII: 1007. \\ & SCT042 TMP: 27 QNII: 1007. RWY: 17 & & RWY: 17. \\ \hline 54515 & 52842 & QU ANPDAXA, YQMATXA 070527, TIS, AD & QU ANPDAXA, YQMATXA 070106, TIS, AD CYQM & 0.9635 \\ & CYQM OS CZC2512, CYQM ATS INFO 270502 & OS CVC106, CYQM ATS INFO V 01002. 33007KT & \\ & 070068T 158M BKN025 BRNO 4000 & 158M BKN025 BRNO 4000 & 1007.42982. \\ & MO2/M05 0A2991. APPROACH MENW ZULU & APPROACH HRANY ZULU & & \\ & RWY 29. INFORM MONCTON CENTER ON & MONCTON CENTER ON FEQUENCY 124.4 OF & \\ & FRQUENCY 124.4 OF REQUESTED & REQUESTED APPROACH ON INITIAL CONTACT. & \\ & APPROACH ON INITIAL CONTACT. & ARRIVING AND DEPARTING RWY 29.
RSC & 06, SRC 6 6 10\% ICE, 100% DCF, 100% DCF, VALID \\ & ARRIVING AND DEPARTING RWY 29. RSC & 06, SRC 6 6 10\% ICE, 100% DCF, 100% ICE, VALID & \\ & RWY 06, RSC 6 6 10\% ICE, 100% DCF, 100% DCF, 100% DCF, 100% & AT 2329Z. RSC RNVY 29, RSC 6 6 6 100\% DRY, 100% ICE, VALID AT 2335Z. INFORM CYQM & \\ & ICE, VALID AT 2329Z. RSC RNVY 29, RSC 6 6 & DRY, 100\% ICE, VALID AT & \\ & 100\% DRY, 100% DRY, 10% ICE, VALID AT AT ATIS V. & & AT ATTS V. \\ \hline 65 & 92 & QU ANPDAXA, BKKATAX 010010, TIS, AD & QU ANPDAXA,, LASATXA 010018, TIS, AD LAS OS & 0.3117 \\ & VTSS OS CA0000, VPSS ARR ATIS A 0012Z. & CY2356, LAS ATIS INFO Y 2356Z. 24009KT 10SM & \\ & 00002WIND 1004KT VIS 8000M FBL RA CLD & FEW060 130/A02951 (TWO NINNER FIVE ONE), ILS & \\ & FEW 1800FT SCT 2000FT BSN 2500FT T23 & APPROACH RNV 26L, VISAL APPROACH in & \\ & DP23 QNII 1012HAR TRENN DSOSIG. RNP 08 & USE. ARRIVING RWY 26L and 19R. DEPARTING & \\ & 12312335 08 5 5 5 100/100/100 NR/NR/NR & RWYS 26R, 19R AND 19L. SIMUL APPROACH TO & \\ & WET/WET/WET. ADZ CONTROLLER WHEN & CROSSING AND PARALLEL RWYS IN USE, & \\ & INITIAL CONTACT YOU HAVE INFO A. & CONVERING RWY OPERATIONS IN EFFECT. & \\ & & NOTAMS. TWY DELTA BETWEEN SIERA AND & \\ & & MIKE IS RESTRICTED TO MAX WINGSPAN 1 3 5 & FEET, HAZD WX INFO AVAILABLE ON HIWAS, & \\ & & FSS FREQ. GC COMBINED ON 121.1, HELICOPTOR & \\ & CONTROL OPEN ON 118.75. ADVS YOU HAVE & \\ & & INFO Y. & \\ \hline \end{tabular} \end{table} Table 10: DATIS message paraphrase mining examples ## NOTICE This work was sponsored by MITRE's Independent Research and Development Program. The contents of this document reflect the views of the authors and do not necessarily reflect the views of the Federal Aviation Administration (FAA) or the Department of Transportation (DOT). Neither the FAA nor the DOT makes any warranty or guarantee, expressed or implied, concerning the content or accuracy of these views.
2306.10557
The chow weight structure for geometric motives of quotient stacks
We construct the Chow weight structure on the derived category of geometric motives with arbitrary coefficients for X a finite type scheme over a field of characteristic 0 and G an affine algebraic group. In particular, we also show that the heart of this weight structure recovers the category of Chow motives on [X/G].
Dhyan Aranha, Chirantan Chowdhury
2023-06-18T13:41:22Z
http://arxiv.org/abs/2306.10557v3
# The Chow weight structure for geometric motives of quotient stacks ###### Abstract. We construct the Chow weight structure on the derived category of geometric motives \(\operatorname{DM}_{\operatorname{gm}}([X/G],\Lambda)\) for \(X\) a quasi-projective scheme over a field characteristic \(0\), \(G\) an affine algebraic group and \(\Lambda\) an arbitrary commutative ring. In particular we also show that the heart of this weight structure recovers the category of Chow motives on \([X/G]\). ###### Contents * 1 Introduction * 1.1 Acknowledgements * 1.2 Notation * 2 DM for algebraic stacks * 3 Descent results * 4 Geometric motives * 4.1 The six operations and geometric motives * 4.2 Generation results for the derived category of geometric motives * 5 Mapping spectra and Chow groups * 6 Weight Structures * 7 Equivariant Motives ## 1. Introduction The notion of weight structure on a triangulated category was introduced by Bondarko in [1] and independently by Pauksztello [11] (under the name of "co-t-structures"). In [1] and [12] Chow weight structures for the derived category of Beilinson motives where constructed and in [1] Chow weight structures for \(\operatorname{DM}_{\operatorname{cdh}}(-,\Lambda)\) were constructed where \(\Lambda\) is a general ring such that the characteristic of the base is invertible. The motivation for this note comes from the works of [13]Rem. II.4.15] and [14][Rem. 4.8], where it is asked if there is a general way to put a Chow weight structure on derived category of (geometric) motives for quotient stacks. In this article we propose a definition for \(\operatorname{DM}_{\operatorname{gm}}([X/G],\Lambda)\), the derived category of geometric motives over a stack \([X/G]\) where \(X\) is assumed to be quasi-projective (Definition 4.4). Roughly speaking, \(\operatorname{DM}_{\operatorname{gm}}([X/G],\Lambda)\) is the thick subcategory of \(\operatorname{DM}([X/G],\Lambda)\) generated by (Tate twists of) motives of stacks which are smooth and quasi-projective over \([X/G]\). Our justification for this definition is that it is equivalent to the usual definition [1][Def. 2.3] when \(G\) is trivial (Lemma 4.2). Our first main theorem is **Theorem 1**.: Suppose that \(\mathcal{X}=[X/G]\) where \(X\) is a quasi-projective scheme over a field \(k\) of characteristic \(0\) and \(G\) is an affine algebraic group acting on \(X\). Let \(\Lambda\) be any commutative ring. Then the \(\infty\)-category of geometric motives \(\operatorname{DM}_{\operatorname{gm}}(\mathcal{X},\Lambda)\) admits a Chow weight structure \(w_{\operatorname{Chow}}\). The reason we call the weight structure constructed in Theorem 1, the _Chow_ weight structure is because of our second main theorem **Theorem 2**.: Suppose we are in the setup of Theorem 1. Then there is an equivalence \[\operatorname{h}\operatorname{DM}_{\operatorname{gm}}(\mathcal{X},\Lambda)^{ \heartsuit_{\infty}}\simeq\operatorname{CHM}(\mathcal{X},\Lambda),\] where \(\operatorname{CHM}(\mathcal{X},\Lambda)\) denotes the category of classical Chow motives over \(\mathcal{X}\). (see Definition 7.1). Theorem 1 appears as Theorem 6.3 in the main text. The assumption that the field \(k\) is of characteristic zero in Theorem 1 is needed in two places in this work: Firstly, in order to prove the existence of proper \(*\)-pushforwards for \(\operatorname{DM}_{\operatorname{gm}}(-,\Lambda)\) we rely on the results of [1]. Secondly, because we need to use \(G\)-equivariant resolution of singularities. 
In particular to get this result in characteristic \(p\), different arguments are needed and we think this is an interesting question. In fact in both [20] and [14] weight structures were constructed on various subcategories of \(\operatorname{DM}([X/G],\Lambda)\) under various assumptions on the action of \(G\) on \(X\). We believe that with an appropriate version of Chow's lemma for stacks one can show that our category \(\operatorname{DM}_{\operatorname{gm}}([X/G],\Lambda)\) is equivalent to the category \(\operatorname{DM}_{G}^{\operatorname{Spr}}(X,\Lambda)\) of [14] when the base field \(k\) is of characteristic \(0\). We now give a general outline of the article. In Section 2 we introduce the \(\infty\)-category \(\operatorname{DM}(\mathfrak{X},\Lambda)\) and its six-functor formalism for a general class of stacks (so called Nis-loc stacks [13]). We also explain the equivalence \[\operatorname{DM}(\mathfrak{X},\Lambda)\simeq\operatorname{Mod}_{H\Lambda}( \operatorname{SH}(\mathfrak{X}))\] when \(\mathfrak{X}\) is Nis-loc, which is presumably well known to experts, but we could not find a reference for this in the literature. In Section 3 we record various descent results which will be used to construct the Chow weight structure. Most important, will be the fact that \(\operatorname{DM}(-,\Lambda)\) on the category of Nis-loc stacks has \(cdh\)-descent (see Proposition 3.5). This will be used together with the existence of equivariant resolutions of singularities in characteristic \(0\) to show that the \(\infty\)-category of Chow motives (see Definition 4.19) generates the derived category of geometric motives for a quotient stack in a suitable sense. In Section 4 we introduce the category of geometric motives, \(\operatorname{DM}_{\operatorname{gm}}(\mathfrak{X},\Lambda)\), and consider how various operations in the six functor formalism on \(\operatorname{DM}(\mathfrak{X},\Lambda)\) restrict to \(\operatorname{DM}_{\operatorname{gm}}(\mathfrak{X},\Lambda)\). One of the main results in this section is that \(\operatorname{DM}_{\operatorname{gm}}(\mathfrak{X},\Lambda)\) is stable under projective \(*\)-pushforwards. This result relies crucially on the results of [1]. The other important result in this section is that we show that the \(\infty\)-category of Chow motives \(\operatorname{\mathbf{Chow}}_{\infty}(\mathfrak{X},\Lambda)\) generates \(\operatorname{DM}_{\operatorname{gm}}(\mathfrak{X},\Lambda)\): Theorem 4.23. In Section 5 we explain the connectivity of the mapping spectrum between any two objects of the \(\infty\)-category of Chow motives of a quotient stack. Along the way we will also explain the equivalence \[\pi_{0}\operatorname{map}_{\operatorname{DM}(\mathfrak{X},\Lambda)}(1_{ \mathfrak{X}}(s)[2s+t],f^{\dagger}1_{B})\simeq\operatorname{CH}_{*}(\mathfrak{ X},t)_{\Lambda}.\] for a quotient stack \(\mathfrak{X}=[X/G]\) which is well known in the case that \(X\) is smooth [13][12][14]. In Section 6 we remind the reader about the definition of weight structures and prove the existence of the Chow weight structure for quotient stacks: Theorem 6.3. Finally in Section 7 we explain the identification of the homotopy category of the weight heart of the Chow weight structure on \(\operatorname{DM}_{\operatorname{gm}}([X/G],\Lambda)\) with the classically defined category of Chow motives. 
As expected in [20][Rem. II.4.15], when \(\mathfrak{X}=BG\) for \(G\) a linear algebraic group over a field \(k\) and \(\Lambda=\mathbf{Q}\), Theorem 2 gives an identification of the weight heart of our weight structure with Laterveer's category of equivariant motives [10] (see Corollary 7.8).

### Acknowledgements

We would first like to take the opportunity to thank Alessandro D'Angelo for many conversations about the material in this note and for pointing out an important mistake in an earlier incarnation of this work. We would also like to thank Marc Levine for patiently answering several asinine questions about motivic homotopy theory and Jochen Heinloth for many conversations about stacks.

### Notation

We will denote stacks by the letters \(\mathfrak{X},\mathfrak{Y},\mathfrak{Z}\), etc. and denote schemes/algebraic spaces by the letters \(X,Y,Z\), etc. All of our geometric objects will live over a base scheme \(B=\operatorname{Spec}(k)\) where \(k\) is an algebraically closed field. We will denote by \(\Lambda\) an arbitrary commutative ring; if \(\operatorname{char}(k)=p>0\) we assume that \(p\) is invertible in \(\Lambda\). We will assume all of our stacks have affine diagonal and are of finite type over \(B\). In particular by [12][Thm. 1.2] all our stacks are Nis-loc (see Definition 2.4). We will often still refer to the Nis-loc hypothesis in many statements to reassure the reader.

## 2. DM for algebraic stacks

We will begin by recalling the construction of the category \(\operatorname{DM}(\mathcal{X},\Lambda)\) and its six functor formalism. Given a finite type \(B\)-scheme \(S\), and a commutative ring \(\Lambda\), it follows from the work of [10] and [11] that there is a well defined motivic Eilenberg-MacLane spectrum \(H\Lambda_{S}\). We make the following definition of the derived category of motives for finite type schemes over \(B\).

**Definition 2.1**.: Let \(X\) be a finite type scheme over \(B\), and \(\Lambda\) an arbitrary commutative ring. We define the derived category of motives with coefficients in \(\Lambda\) to be

\[\operatorname{DM}(X,\Lambda):=\operatorname{Mod}_{H\Lambda_{X}}(\operatorname{SH}(X)).\]

The category \(\operatorname{DM}(X,\Lambda)\) is equivalent to the category \(\operatorname{DM}_{\operatorname{cdh}}(X,\Lambda)\) by [13][Thm. 5.1]. In particular it has a six functor formalism which we will recall shortly.

**Remark 2.2**.: In the case that \(\Lambda=\mathbb{Q}\) it follows from [13] that \(\operatorname{DM}(X,\Lambda)\) is equivalent to the category of Beilinson motives.

We now summarize the \(6\)-functor formalism on schemes for \(\operatorname{DM}(-,\Lambda)\), which follows from [10] and [13]. There is a functor

\[\operatorname{DM}^{*}:(\operatorname{Sch}_{B})^{\operatorname{op}}\to\operatorname{CAlg}(\operatorname{Pr}^{\operatorname{L}}_{\operatorname{stb},\Lambda})\]

satisfying the usual properties of a six functor formalism; among them we single out the following:

9. (Localization) For \(i:Z\hookrightarrow X\) a closed immersion with open complement \(j:U\hookrightarrow X\) we have the following cofiber sequences \[i_{!}i^{!}\to\operatorname{id}\to j_{*}j^{*}\] \[j_{!}j^{!}\to\operatorname{id}\to i_{*}i^{*}.\] 10.
(Absolute purity) For any closed immersion \(i:Z\hookrightarrow X\) between regular schemes of codimension \(c\) there is an isomorphism \[i^{!}1_{X}\simeq 1_{Z}(-c)[-2c].\]

**Notation 2.3**.: In order to avoid overloading notation we will simply write \(\operatorname{DM}(-,\Lambda)\) for \(\operatorname{DM}^{*}(-,\Lambda)\).

Our goal now is to describe an extension of the functor \(\operatorname{DM}(-,\Lambda)\) to a certain class of stacks introduced by Chowdhury [10] called Nis-loc stacks. We will first recall the definition.

**Definition 2.4**.: We say that an algebraic stack \(\mathcal{X}\) admits Nisnevich-local sections if there exists a morphism \(x:X\to\mathcal{X}\) such that \(X\) is a scheme and for any morphism \(y:Y\to\mathcal{X}\) with \(Y\) a scheme, the induced map \(x^{\prime}:X\times_{\mathcal{X}}Y\to Y\) admits Nisnevich-local sections. We say that an algebraic stack \(\mathcal{X}\) is _Nis-loc_ if there exists a smooth cover which admits Nisnevich-local sections. We will denote the \(\infty\)-category of Nis-loc stacks by \(\operatorname{Nis-locSt}\).

**Example 2.5**.: The following example is from [10][Cor. 2.3.6]. Let \(X\) be a finite type scheme over \(B\) and \(G\) an affine algebraic group. Then \([X/G]\) is a Nis-loc stack.

**Example 2.6**.: By [11][Thm. 1.2] any quasi-separated, finite type algebraic stack over \(B\) with separated diagonal is Nis-loc.

One can construct an extension of \(\operatorname{DM}(-,\Lambda)\) to all locally finite type algebraic stacks over \(B\) by considering the so called _lisse-extension_ as introduced in [17][Constr. 12.1]. However, the Cech nerve of an arbitrary smooth cover will not be cofinal in general and so we cannot compute this extension along arbitrary smooth covers. The reason for introducing the notion of Nis-loc stack is that they provide a convenient class of stacks where we can compute \(\operatorname{DM}(-,\Lambda)\) along Cech nerves of Nis-loc covers.

**Theorem 2.7**.: _The functor \(\operatorname{DM}(-,\Lambda)\) extends to an \(\infty\)-sheaf_

\[\operatorname{DM}_{\operatorname{ext}}(-,\Lambda):\operatorname{Nis-locSt}^{\operatorname{op}}\to\operatorname{CAlg}(\operatorname{Pr}^{\operatorname{L}}_{\operatorname{stb},\Lambda}).\]

_Moreover, for any \(\mathcal{X}\in\operatorname{Nis-locSt}\) with a schematic Nis-loc atlas \(\pi:X\to\mathcal{X}\) we can compute \(\operatorname{DM}_{\operatorname{ext}}(\mathcal{X},\Lambda)\) on the Cech nerve of \(\pi\). That is_

\[\operatorname{DM}_{\operatorname{ext}}(\mathcal{X},\Lambda)\simeq\varprojlim\Big{(}\operatorname{DM}(X,\Lambda)\rightrightarrows\operatorname{DM}(X\times_{\mathcal{X}}X,\Lambda)\Rrightarrow\cdots\Big{)}.\]

Proof.: The proof is the same as [10][Cor. 2.5.1] with SH replaced by \(\operatorname{DM}(-,\Lambda)\). Note that as SH is a Nisnevich sheaf, by Proposition 2.12 we see that \(\operatorname{DM}(-,\Lambda)\) is a Nisnevich sheaf, so we can apply [10, Thm. 2.4.1] in mimicking the proof of [10, Cor. 2.5.1]. We emphasize that the key result [10][Thm. 2.4.1] is proved in the generality of an arbitrary \(\infty\)-sheaf.

**Definition 2.8**.: Let \(\mathcal{X}\) be a Nis-loc stack.
We define the derived category of motives over \(\mathcal{X}\) with coefficients in an arbitrary commutative ring \(\Lambda\) to be

\[\operatorname{DM}(\mathcal{X},\Lambda):=\operatorname{DM}_{\operatorname{ext}}(\mathcal{X},\Lambda).\]

**Remark 2.9**.: We would like to take the opportunity to point out that the six functor formalism for \(\operatorname{DM}\) of algebraic stacks has been considered in the literature in various places. In the case of quotient stacks there is [12][Ch. I]. For general stacks over \(\mathbf{Q}\) there is [13], and for general coefficients [17][Sect. 12.1]. Also there is the work [10] in the setting of derivators. The next couple of remarks record how the category defined in Definition 2.8 compares with other constructions in the literature.

**Remark 2.10**.: As alluded to above, by [17] one could just as well, for an arbitrary locally finite type stack \(\mathcal{X}\) over \(B\), define

\[\operatorname{DM}_{\triangleleft}(\mathcal{X},\Lambda):=\varprojlim_{(T,t)}\operatorname{DM}(T,\Lambda)\]

where the limit is taken over the \(\infty\)-category \(\operatorname{Lis}_{\mathcal{X}}\) of pairs \((T,t)\) where \(T\) is a scheme and \(t:T\to\mathcal{X}\) is a smooth morphism. The same arguments as used in [10] and [12] show that when \(\mathcal{X}\) is Nis-loc the categories \(\operatorname{DM}_{\triangleleft}(\mathcal{X},\Lambda)\) and \(\operatorname{DM}(\mathcal{X},\Lambda)\) are equivalent.

**Remark 2.11**.: We can also define for an arbitrary locally finite type stack \(\mathcal{X}\) over \(B\) the category

\[\operatorname{DM}^{!}(\mathcal{X},\Lambda):=\varprojlim_{\operatorname{Lis}_{\mathcal{X}}}\operatorname{DM}^{!}(T,\Lambda)\]

in \(\operatorname{Pr}^{\operatorname{R}}_{\operatorname{stb},\Lambda}\). When \(\mathcal{X}\) is Nis-loc, the purity isomorphism implies that

\[\operatorname{DM}(\mathcal{X},\Lambda)\simeq\operatorname{DM}^{!}(\mathcal{X},\Lambda).\]

In particular when \(\Lambda=\mathbf{Q}\) and \(\mathcal{X}\) is Nis-loc then Definition 2.8 agrees with the derived category of motives constructed in [10].

Next we would like to give a more global description of \(\operatorname{DM}(\mathcal{X},\Lambda)\). That is, we would like to describe \(\operatorname{DM}(\mathcal{X},\Lambda)\) as a category of modules in \(\operatorname{SH}(\mathcal{X})\) over some motivic \(\mathbf{E}_{\infty}\)-ring spectrum. To do this we will first start with a purely categorical statement.

**Proposition 2.12**.: _Suppose that \(\mathcal{C}^{\otimes}\in\operatorname{CAlg}(\mathit{Cat}^{\otimes}_{\infty})\) is the limit of a diagram \(q:I\to\operatorname{CAlg}(\mathit{Cat}^{\otimes}_{\infty})\). Let \(\operatorname{Mod}(\mathcal{C})\) be as in [12], then we have a canonical equivalence_

\[\operatorname{Mod}(\mathcal{C})\simeq\varprojlim_{i\in I}\operatorname{Mod}(\mathcal{C}_{i})\]

_where \(q(i):=\mathcal{C}^{\otimes}_{i}\)._

Proof.: The \(\infty\)-category of modules associated to a symmetric monoidal \(\infty\)-category \(\mathcal{C}\) is equivalent to the \(\infty\)-category \(\operatorname{Alg}_{\mathbf{Pf}}(\mathcal{C})\) of algebra objects associated to the \(\infty\)-operad \(\mathbf{Pf}^{\otimes}\) ([11, Section 9.4.1.2]).
Thus we can realize \(\operatorname{Mod}(\mathcal{C})\) as a full subcategory of the functor category \(\operatorname{Fun}(\mathbf{Pf}^{\otimes},\mathcal{C}^{\otimes})\) spanned by objects \(p:\mathbf{Pf}^{\otimes}\to\mathcal{C}^{\otimes}\) which commute with the usual projection maps to \(N(\operatorname{Fin}_{*})\). Firstly, we see that we have the following chain of equivalences:

\[\operatorname{Fun}(\mathbf{Pf}^{\otimes},\mathcal{C}^{\otimes})\simeq\operatorname{Fun}(\mathbf{Pf}^{\otimes},\varprojlim_{i\in I}\mathcal{C}^{\otimes}_{i})\simeq\varprojlim_{i\in I}\operatorname{Fun}(\mathbf{Pf}^{\otimes},\mathcal{C}^{\otimes}_{i}). \tag{0.1}\]

In order to get the equivalence on the level of module categories, we are reduced to checking that if \(\{p_{i}:\mathbf{Pf}^{\otimes}\to\mathcal{C}^{\otimes}_{i}\}_{i\in I}\) is a compatible family of morphisms commuting with the projections to \(N(\operatorname{Fin}_{*})\), then the limit morphism \(p:\mathbf{Pf}^{\otimes}\to\mathcal{C}^{\otimes}\) commutes with the projection to \(N(\operatorname{Fin}_{*})\). This is because \(\operatorname{CAlg}(\operatorname{Cat}^{\otimes}_{\infty})\) admits limits ([12, Proposition 3.2.2.1]).

**Lemma 2.13**.: _Let \(\mathcal{X}\) be a Nis-loc stack and \(\Lambda\) an arbitrary commutative ring. Then there is a canonically defined object \(H\Lambda_{\mathcal{X}}\in\operatorname{CAlg}(\operatorname{SH}(\mathcal{X}))\) whose restriction along any morphism \(f:U\to\mathcal{X}\) from a scheme \(U\) is canonically equivalent to \(H\Lambda_{U}\in\operatorname{CAlg}(\operatorname{SH}(U))\)._

Proof.: Consider the ring spectrum \(H\Lambda_{B}\in\operatorname{CAlg}(\operatorname{SH}(B))\) constructed in [13] and [11]. We define \(H\Lambda_{\mathcal{X}}\) to be \(f^{*}H\Lambda_{B}\), where \(f:\mathcal{X}\to B\) is the structure morphism. It follows directly from the definition of \(\operatorname{SH}(\mathcal{X})\) [10] that \(f^{*}\) is a symmetric monoidal functor, and thus \(H\Lambda_{\mathcal{X}}\) is contained in \(\operatorname{CAlg}(\operatorname{SH}(\mathcal{X}))\). Then via [11] it follows that \(H\Lambda_{\mathcal{X}}\) has the desired property.

**Theorem 2.14**.: _Let \(\mathcal{X}\) be a Nis-loc stack over \(B\) and \(\Lambda\) an arbitrary commutative ring.
Then we have the following canonical equivalence_

\[\operatorname{Mod}_{H\Lambda_{\mathcal{X}}}(\operatorname{SH}(\mathcal{X}))\simeq\operatorname{DM}(\mathcal{X},\Lambda).\]

Proof.: As \(\mathcal{X}\) is Nis-loc, for an atlas \(x:X\to\mathcal{X}\) admitting Nisnevich-local sections, we have the following equivalence:

\[\operatorname{SH}(\mathcal{X})\simeq\varprojlim\Big{(}\operatorname{SH}(X)\rightrightarrows\operatorname{SH}(X\times_{\mathcal{X}}X)\Rrightarrow\cdots\Big{)}.\]

Applying Proposition 2.12 to \(\mathcal{C}=\operatorname{SH}(\mathcal{X}),\ I=N(\Delta),\ \mathcal{C}_{i}=\operatorname{SH}(X^{i}_{\mathcal{X}})\), where \(X^{i}_{\mathcal{X}}:=X\times_{\mathcal{X}}\cdots\times_{\mathcal{X}}X\) is the \((i+1)\)-fold fiber product, we get the equivalence

\[\operatorname{Mod}(\operatorname{SH}(\mathcal{X}))\simeq\varprojlim_{i\in\Delta}\operatorname{Mod}(\operatorname{SH}(X^{i}_{\mathcal{X}})).\]

Taking the fiber of the equivalence over the canonical Eilenberg-MacLane spectrum \(H\Lambda_{\mathscr{X}}\simeq\varprojlim_{i\in\Delta}H\Lambda_{X_{\mathscr{X}}^{i}}\) (Lemma 2.13), we get that

\[\operatorname{Mod}_{H\Lambda_{\mathscr{X}}}(\operatorname{SH}(\mathscr{X}))\simeq\varprojlim_{i\in\Delta}\operatorname{Mod}_{H\Lambda_{X_{\mathscr{X}}^{i}}}(\operatorname{SH}(X_{\mathscr{X}}^{i})).\]

By definition of \(\operatorname{DM}\) on the level of schemes along with Theorem 2.7, we get that

\[\operatorname{Mod}_{H\Lambda_{\mathscr{X}}}(\operatorname{SH}(\mathscr{X}))\simeq\operatorname{DM}(\mathscr{X},\Lambda)\]

completing the proof.

**Remark 2.15**.: One could envision another construction of \(\operatorname{DM}(\mathscr{X},\Lambda)\) more along the lines of [1] and [2]. That is, one could consider the category of stable motivic complexes on a given Nis-loc stack. It would be interesting to compare this with Definition 2.8.

We will now explain how the six functor formalism for \(\operatorname{DM}(-,\Lambda)\) on schemes generalizes to \(\operatorname{Nis-locSt}\).

**Proposition 2.16**.: _(4-functors) The functor_

\[\operatorname{DM}(-,\Lambda):\operatorname{Nis-locSt}^{\operatorname{op}}\to\operatorname{CAlg}(\operatorname{Pr}^{\operatorname{L}}_{\operatorname{stb},\Lambda})\]

_has the following 4-functor formalism:_ 1. _For every morphism_ \(f:\mathscr{X}\to\mathscr{Y}\) _in_ \(\operatorname{Nis-locSt}\) _we have a pair of adjoints_ \((f^{*},f_{*})\)_, such that_ \(f^{*}\) _is symmetric monoidal._ 2. _For every_ \(\mathscr{X}\) _in_ \(\operatorname{Nis-locSt}\) _there are functors_ \[-\otimes-:\operatorname{DM}(\mathscr{X},\Lambda)\times\operatorname{DM}(\mathscr{X},\Lambda)\to\operatorname{DM}(\mathscr{X},\Lambda)\] \[\underline{\operatorname{map}}(-,-):\operatorname{DM}(\mathscr{X},\Lambda)^{\operatorname{op}}\times\operatorname{DM}(\mathscr{X},\Lambda)\to\operatorname{DM}(\mathscr{X},\Lambda)\] _which form an adjoint pair_ \((\otimes,\underline{\operatorname{map}})\)_, i.e._ \(\operatorname{DM}(\mathscr{X},\Lambda)\) _is a closed symmetric monoidal_ \(\infty\)_-category._

Proof.: The existence of \(f^{*}\) and the fact that it is symmetric monoidal follows directly from Definition 2.8. Moreover, since \(f^{*}\) is colimit preserving, by Lurie's adjoint functor theorem [15] there exists a right adjoint which we denote by \(f_{*}\).
To see that \(\operatorname{DM}(\mathscr{X},\Lambda)\) is a closed symmetric monoidal \(\infty\)-category one can simply use the proof of the corresponding statement for \(\operatorname{SH}\) of Nis-loc stacks.

**Proposition 2.17**.: _(Smooth morphisms) Let \(f:\mathscr{X}\to\mathscr{Y}\) be a smooth representable morphism in \(\operatorname{Nis-locSt}\). Then \(f^{*}\) admits a left adjoint \(f_{\#}\). Moreover \(f_{\#}\) satisfies smooth base change and the smooth projection formula._

**Remark 2.18**.: For any Nis-loc stack \(\mathscr{X}\) the Tate twist \(1_{\mathscr{X}}(1)\) is invertible. In particular we have Tate twists \(1_{\mathscr{X}}(n)\) for all \(n\in\mathbf{Z}\).

We now record the existence of exceptional functors, base change, projection formulas and proper pushforward formulas.

**Proposition 2.19**.: _For any locally of finite type morphism \(f:\mathcal{X}\to\mathcal{Y}\) of Nis-loc stacks there exist functors_

\[f_{!}:\operatorname{DM}(\mathcal{X},\Lambda)\to\operatorname{DM}(\mathcal{Y},\Lambda)\]
\[f^{!}:\operatorname{DM}(\mathcal{Y},\Lambda)\to\operatorname{DM}(\mathcal{X},\Lambda)\]

_which form an adjoint pair \((f_{!},f^{!})\) and satisfy:_ 1.
_(Projection formula) Let_ \(\mathcal{E},\mathcal{E}^{\prime}\in\operatorname{DM}(\mathcal{Y},\Lambda)\) _and_ \(\mathcal{F}\in\operatorname{DM}(\mathcal{X},\Lambda)\)_; we have the following equivalences_ \[f_{!}(\mathcal{F}\otimes f^{*}(\mathcal{E}))\simeq f_{!}(\mathcal{F})\otimes\mathcal{E}\] \[f^{!}\underline{\operatorname{map}}_{\operatorname{DM}(\mathcal{Y},\Lambda)}(\mathcal{E},\mathcal{E}^{\prime})\simeq\underline{\operatorname{map}}_{\operatorname{DM}(\mathcal{X},\Lambda)}(f^{*}\mathcal{E},f^{!}\mathcal{E}^{\prime}).\] 2. _(Base change) If_ \[\begin{CD}\mathcal{X}^{\prime}@>{f^{\prime}}>{}>\mathcal{Y}^{\prime}\\ @V{g^{\prime}}VV@VV{g}V\\ \mathcal{X}@>{f}>{}>\mathcal{Y}\end{CD}\] _is a cartesian square in_ \(\operatorname{Nis-locSt}\) _of locally finite type morphisms, then we have the following equivalences_ \[f^{*}g_{!}\simeq g^{\prime}_{!}f^{\prime*}\] \[g^{!}f_{*}\simeq f^{\prime}_{*}g^{\prime!}\] 3. _(Proper pushforward) If_ \(f:\mathcal{X}\to\mathcal{Y}\) _is proper and representable, then there exists a natural isomorphism_ \[\alpha_{f}:f_{!}\simeq f_{*}.\]

Proof.: The proof of this will be contained in forthcoming work [3].

**Proposition 2.20**.: _(Purity) The Nisnevich sheaf \(\operatorname{DM}(-,\Lambda)\) on \(\operatorname{Nis-locSt}\) is oriented. Moreover we have_ 1. _For any smooth representable morphism_ \(f\) _of relative dimension_ \(d\)_, there is a natural isomorphism_ \[f^{!}\simeq f^{*}(d)[2d].\] 2. _For a closed immersion_ \(i:\mathcal{Z}\hookrightarrow\mathcal{X}\) _between regular Nis-loc stacks of codimension_ \(c\) _we have_ \[i^{!}1_{\mathcal{X}}\simeq 1_{\mathcal{Z}}(-c)[-2c].\]

Proof.: For the claim about the orientation we refer the reader to [1][Rem. 1.7]. The proof of [1] works in this case with SH replaced by DM. We note that the separated hypothesis is not needed in loc. cit. because on the level of \(B\)-schemes of finite type we already have the existence of the exceptional functors. We note that (2) is a direct consequence of (1).

**Proposition 2.21**.: _(Localization) Let \(\mathcal{X}\) be a Nis-loc stack. Suppose \(j:\mathcal{U}\hookrightarrow\mathcal{X}\) is an open immersion with closed complement \(i:\mathcal{Z}\hookrightarrow\mathcal{X}\); then we have the following cofiber sequences_ \[i_{!}i^{!}\to\operatorname{id}\to j_{*}j^{*}\] \[j_{!}j^{!}\to\operatorname{id}\to i_{*}i^{*}.\]

Proof.: The same proof as [1][Prop. 4.2.1] works with SH replaced by \(\operatorname{DM}(-,\Lambda)\). We note that in this case the inclusions \(i\) and \(j\) are representable, thus the proof of [1][Thm. 3.1.1] applied to \(\operatorname{DM}(-,\Lambda)\) will also construct the exceptional functors.

**Proposition 2.22**.: _(Homotopy invariance) For any Nis-loc stack \(\mathcal{X}\), the projection \(\pi:\mathbf{A}^{1}_{\mathcal{X}}\to\mathcal{X}\) induces a fully-faithful functor \(\pi^{*}:\mathrm{DM}(\mathcal{X},\Lambda)\to\mathrm{DM}(\mathbf{A}^{1}_{\mathcal{X}},\Lambda)\)._

Proof.: The same proof as [10][Prop. 4.2.2] works with SH replaced by DM.

**Corollary 2.23**.: _(Recollement) Suppose we have a diagram in \(\mathrm{Nis}\)-\(\mathrm{locSt}\)_

\[\mathcal{U}\xrightarrow{j}\mathcal{X}\xleftarrow{i}\mathcal{Z}:=\mathcal{X}-\mathcal{U}\]

_where \(j\) is an open immersion and \(i\) is a closed immersion. Then for \(\mathrm{DM}(-,\Lambda)\) the following conditions are satisfied:_ 1. _The functor_ \(i_{*}\cong i_{!}\) _admits a left adjoint_ \(i^{*}\) _and a right adjoint_ \(i^{!}\)_._ 2.
_The functor_ \(j^{*}\simeq j^{!}\) _admits a right adjoint_ \(j_{*}\) _and a left adjoint_ \(j_{!}\)_._ 3. _There is an equivalence_ \(j^{*}i_{*}\simeq 0\)_._ 4. _We have the following localization triangles_ \[i_{!}i^{!}\to\mathrm{id}\to j_{*}j^{*}\] \[j_{!}j^{!}\to\mathrm{id}\to i_{*}i^{*}.\] 5. _The functors_ \(i_{*},j_{*}\) _and_ \(j_{!}\) _are all full embeddings._

Proof.: Both (1) and (2) follow from Proposition 2.19, Proposition 2.16 and Proposition 2.20. Claims (3) and (5) follow from the base change equivalence in Proposition 2.19. Finally (4) is Proposition 2.21.

## 3. Descent results

In this section we discuss \(cdh\)-descent and Nisnevich descent for \(\operatorname{DM}(-,\Lambda)\). Khan in [11] has shown that \(\operatorname{DM}(-,\Lambda)\) when restricted to algebraic spaces satisfies \(cdh\)-descent. What we say in this section follows easily from Khan's work [11], but we will review the arguments here for completeness. For our purposes it will not be necessary to consider \(cdh\)-squares and Nisnevich squares for arbitrary morphisms of stacks; we will only need to consider representable \(cdh\) and Nisnevich squares.

Recall from [10] that the _constructible topology_ on \(\mathrm{AlgSp}_{B}\) is the coarsest topology such that 1. The empty sieve covers the empty algebraic space. 2. If \(Z\hookrightarrow X\) is a closed immersion with open complement \(U\hookrightarrow X\), \(\{U\hookrightarrow X,Z\hookrightarrow X\}\) generates a covering sieve.

**Lemma 3.1**.: _Let \(\{f_{i}:U_{i}\to S\}\) be a constructible cover of \(S\) in \(\mathrm{AlgSp}_{B}\). Then the family of functors \(\{f^{*}_{i}:\mathrm{DM}(S,\Lambda)\to\mathrm{DM}(U_{i},\Lambda)\}\) is conservative._

Proof.: This follows directly from Proposition 2.21.

The following is [11][Thm. 2.51].

**Proposition 3.2**.: _Suppose that the cartesian square_ \[\begin{CD}T@>{k}>{}>Y\\ @V{g}VV@VV{f}V\\ Z@>{i}>{}>X\end{CD}\] _in \(\mathrm{AlgSp}_{B}\) is a \(cdh\)-square (resp. Nisnevich square). Then we have a canonical equivalence_

\[\mathrm{DM}(X,\Lambda)\simeq\mathrm{DM}(Z,\Lambda)\times_{\mathrm{DM}(T,\Lambda)}\mathrm{DM}(Y,\Lambda)\]

_(resp._

\[\mathrm{DM}(X,\Lambda)\simeq\mathrm{DM}(U,\Lambda)\times_{\mathrm{DM}(T,\Lambda)}\mathrm{DM}(Y,\Lambda))\]

_of \(\infty\)-categories._

Proof.: The proof is the same as the proof of [10][Prop. 6.24]. We prove the \(cdh\)-statement; the Nisnevich result is analogous. That is, by [12] it is enough to show 1. The pair \((i^{*},f^{*})\) is conservative.
2. Given \(\mathcal{F}_{Z}\in\operatorname{DM}(Z,\Lambda),\mathcal{F}_{Y}\in\operatorname{DM}(Y,\Lambda),\mathcal{F}_{T}\in\operatorname{DM}(T,\Lambda)\) and \(g^{*}(\mathcal{F}_{Z})\simeq\mathcal{F}_{T}\simeq k^{*}(\mathcal{F}_{Y})\), if \[\mathcal{F}_{X}=i_{*}\mathcal{F}_{Z}\times_{(fk)_{*}\mathcal{F}_{T}}f_{*}\mathcal{F}_{Y},\] then the maps \[i^{*}\mathcal{F}_{X}\to\mathcal{F}_{Z}\text{ and }f^{*}\mathcal{F}_{X}\to\mathcal{F}_{Y}\] induced by the canonical projections are equivalences.

Part (1) follows directly from Lemma 3.1. Part (2) follows from first noting that proper base change (Proposition 2.19) and the fact that \(i_{*}\) is fully faithful imply that \(i^{*}\mathcal{F}_{X}\to\mathcal{F}_{Z}\) is an equivalence. The latter equivalence follows via smooth base change (Proposition 2.17).

The next definitions are natural generalizations of the notions of \(cdh\)-square and Nisnevich square to Nis-locSt.

**Definition 3.3**.: A cartesian diagram \[\begin{CD}\mathcal{T}@>{k}>{}>\mathcal{Y}\\ @V{g}VV@VV{f}V\\ \mathcal{Z}@>{i}>{}>\mathcal{X}\end{CD}\] of algebraic stacks in Nis-locSt is called a representable \(cdh\)-square if: 1. The morphism \(f\) is representable, proper and surjective. 2. The morphism \(i\) is a closed immersion. 3. The restriction of \(f\) to \(\mathcal{X}-\mathcal{Z}\) is an isomorphism.

**Definition 3.4**.: We say that a cartesian diagram \[\begin{CD}\mathcal{T}@>{k}>{}>\mathcal{Y}\\ @V{g}VV@VV{f}V\\ \mathcal{U}@>{j}>{}>\mathcal{X}\end{CD}\] of algebraic stacks in Nis-locSt is a representable Nisnevich square if: 1. The morphism \(f\) is a representable étale morphism. 2. The morphism \(j\) is an open immersion. 3. The restriction of \(f\) to \((\mathcal{X}-\mathcal{U})_{\operatorname{red}}\) is an isomorphism.

**Proposition 3.5**.: _Given a \(cdh\)-square (resp. Nisnevich square) in \(\operatorname{Nis-locSt}\) as above, there is a canonical equivalence_

\[\operatorname{DM}(\mathcal{X},\Lambda)\simeq\operatorname{DM}(\mathcal{Z},\Lambda)\times_{\operatorname{DM}(\mathcal{T},\Lambda)}\operatorname{DM}(\mathcal{Y},\Lambda)\]

_(resp._

\[\operatorname{DM}(\mathcal{X},\Lambda)\simeq\operatorname{DM}(\mathcal{U},\Lambda)\times_{\operatorname{DM}(\mathcal{T},\Lambda)}\operatorname{DM}(\mathcal{Y},\Lambda))\]

_of \(\infty\)-categories._

Proof.: We will only prove the \(cdh\)-case; the Nisnevich case is entirely analogous. Let \(\pi:X\to\mathcal{X}\) be a Nis-loc atlas of \(\mathcal{X}\). Then by Theorem 2.7 there is an equivalence

\[\operatorname{DM}(\mathcal{X},\Lambda)\simeq\varprojlim_{n\in\Delta}\operatorname{DM}(X_{n},\Lambda).\]

But now for each \(n\in\Delta\) the induced square \[\begin{CD}T_{n}@>{k_{n}}>{}>Y_{n}\\ @V{g_{n}}VV@VV{f_{n}}V\\ Z_{n}@>{i_{n}}>{}>X_{n}\end{CD}\] is a \(cdh\)-square of algebraic spaces (resp. Nisnevich square). Thus by Proposition 3.2 there is a canonical equivalence

\[\operatorname{DM}(X_{n},\Lambda)\simeq\operatorname{DM}(Z_{n},\Lambda)\times_{\operatorname{DM}(T_{n},\Lambda)}\operatorname{DM}(Y_{n},\Lambda).\]

We can then rewrite \(\operatorname{DM}(\mathfrak{X},\Lambda)\) as

\[\operatorname{DM}(\mathfrak{X},\Lambda)\simeq\varprojlim_{n\in\Delta}\operatorname{DM}(Z_{n},\Lambda)\times_{\operatorname{DM}(T_{n},\Lambda)}\operatorname{DM}(Y_{n},\Lambda).\]

But now since limits commute with fiber products we are done.

Given a \(cdh\)-square as in Definition 3.3 and setting \(a\simeq ig\simeq fk\), for any \(\mathcal{M}\in\operatorname{DM}(\mathfrak{X},\Lambda)\) we can form a commutative square (0.1) by considering the various unit and counit natural transformations associated to the adjunctions involved.

**Corollary 3.6**.: _The square in Equation (0.1) is cartesian._

We will also be interested in the case of a Nisnevich square as in Definition 3.4. That is, again setting \(a\simeq jg\simeq fk\), for any \(\mathcal{M}\in\operatorname{DM}(\mathfrak{X},\Lambda)\) we can form the commutative square (0.2). Applying the functor \(\underline{\operatorname{map}}_{\operatorname{DM}(\mathfrak{X},\Lambda)}(-,1)\) we also get a square (0.3).

**Corollary 3.7**.: _The square in Equation (0.2) is cartesian and hence so is Equation (0.3)._

Corollary 3.7 can be used to show that for geometric motives we get a Mayer-Vietoris cofiber sequence.

**Corollary 3.8**.: _Suppose that we have a Nisnevich square as in Definition 3.4, where all vertices are smooth and representable over some base Nis-loc stack \(\mathcal{S}\). Then we have the following cofiber sequence_

\[\mathcal{M}_{\mathcal{S}}(\mathcal{T})\to\mathcal{M}_{\mathcal{S}}(\mathcal{U})\oplus\mathcal{M}_{\mathcal{S}}(\mathcal{Y})\to\mathcal{M}_{\mathcal{S}}(\mathcal{X})\xrightarrow{[1]}\]

_induced by the cartesian square Equation (0.3)._

Proof.: One takes \(\mathcal{M}:=\mathcal{M}_{\mathcal{S}}(\mathcal{X})\) in Equation (0.3). The result then follows by smooth base change (Proposition 2.17).
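As a minimal illustration of Corollary 3.8 (in its Zariski, hence Nisnevich, special case and with trivial group), consider \(\mathcal{S}=B\) and \(\mathcal{X}=\mathbf{P}^{1}\), covered by the two standard charts \(\mathcal{U}\simeq\mathcal{Y}\simeq\mathbf{A}^{1}\) with intersection \(\mathcal{T}\simeq\mathbf{G}_{m}\); writing \(M(-):=\mathcal{M}_{B}(-)\) for brevity, the cofiber sequence reads

\[M(\mathbf{G}_{m})\to M(\mathbf{A}^{1})\oplus M(\mathbf{A}^{1})\to M(\mathbf{P}^{1})\xrightarrow{[1]}.\]

Using \(\mathbf{A}^{1}\)-homotopy invariance \(M(\mathbf{A}^{1})\simeq 1\) and the standard splitting \(M(\mathbf{G}_{m})\simeq 1\oplus 1(1)[1]\), the first map is split injective on the summand \(1\), and one recovers the classical computation

\[M(\mathbf{P}^{1})\simeq 1\oplus 1(1)[2].\]

We stress that this computation is standard and is recorded here only to illustrate the shape of the Mayer-Vietoris sequence; it plays no role in what follows.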
We now explain a descent result for \(G\)-equivariant resolutions of singularities. The following result is a direct corollary of [1][Thm. 8.1.2] of Abramovich, Temkin and Włodarczyk and will be critical for the main results of this paper. See also [13][10][11].

**Theorem 3.9**.: _Suppose \(k\) is of characteristic \(0\) and let \(G\) be a smooth group scheme acting on a reduced quasi-projective scheme \(X\). Then there exists a \(G\)-equivariant surjective projective birational morphism \(\tilde{X}\to X\) such that \(\tilde{X}\) is regular over \(k\)._

Proof.: Consider the action map \(a:G\times X\to X\) and let \(\tilde{X}\) be the resolution of \(X\). Then by smooth functoriality of the resolution in [1][Thm. 8.1.2] we see that we have an identification

\[\widetilde{G\times X}\simeq G\times_{X}\tilde{X}.\]

But we also have the projection map \(p_{X}:G\times X\to X\) which is again smooth. Using [1][Thm. 8.1.2] we have the identification

\[\widetilde{G\times X}\simeq G\times\tilde{X}.\]

Putting both identifications together defines a group action \(\tilde{a}:G\times\tilde{X}\to\tilde{X}\) such that the evident square with horizontal arrows \(\tilde{a}\), \(a\) and vertical arrows induced by \(\tilde{X}\to X\) is Cartesian. The claim follows.

In particular Theorem 3.9 implies that for a reduced algebraic stack \(\mathscr{X}=[X/G]\) of finite type over a field \(k\) of characteristic \(0\) there is a regular stack \(\tilde{\mathscr{X}}=[\tilde{X}/G]\) over \(k\), together with a projective birational morphism \(\tilde{\mathscr{X}}\to\mathscr{X}\). Moreover we have the following:

**Corollary 3.10**.: _Given a finite type reduced algebraic stack \(\mathscr{X}=[X/G]\) over a field \(k\) of characteristic \(0\), there is a \(cdh\)-square with \(\tilde{\mathscr{X}}=[\tilde{X}/G]\to\mathscr{X}\) as above. In particular for any \(\mathcal{M}\in\operatorname{DM}_{\operatorname{gm}}(\mathscr{X})\) the associated square Equation (0.1) is cartesian._

## 4. Geometric motives

For this section we will insist on the assumption that the base \(B=\operatorname{Spec}(k)\) has _characteristic_ \(0\). Let \(X\) be a finite type scheme over \(k\). We have the following definition of the category of geometric motives.

**Definition 4.1**.: Let \(\operatorname{DM}_{\operatorname{gm}}(X,\Lambda)\) be the smallest full stable \(\infty\)-subcategory of \(\operatorname{DM}(X,\Lambda)\) which is closed under retracts, and generated by objects

\[\{f_{\#}(1_{Z})(q)\ |\ f:Z\to X\ \text{smooth},\ q\in\mathbf{Z}\}.\]

We call \(\operatorname{DM}_{\operatorname{gm}}(X,\Lambda)\) _the category of geometric motives over \(X\)_.

Our first observation is that we need not take all smooth morphisms over \(X\) when \(X\) is quasi-projective in Definition 4.1. In fact it is enough to take smooth morphisms which are _quasi-projective_.

**Lemma 4.2**.: _The category of geometric motives over a quasi-projective scheme \(X\) can be equivalently described as the smallest full stable \(\infty\)-subcategory of \(\operatorname{DM}(X,\Lambda)\) which is closed under retracts, and generated by objects_

\[\{f_{\#}(1_{Z})(q)\ |\ f:Z\to X\ \text{smooth and quasi-projective},\ q\in\mathbf{Z}\}.\]

Proof.: First we note that in Definition 4.1 it is enough to consider smooth morphisms \(f:Z\to X\) such that \(Z\) is quasi-projective over \(B\). Indeed, we may cover \(Z\) by affines and then inductively use the Mayer-Vietoris sequence. Secondly, if \(X\) itself is quasi-projective then the claim follows because any smooth morphism between two quasi-projective schemes is itself quasi-projective.

**Remark 4.3**.: Lemma 4.2 also holds more generally for finite type schemes \(X\).
Indeed, for a general (reduced) finite type scheme \(X\) we may stratify it by quasi-projective schemes, and an induction argument together with the localization triangle suffices to show this.

For our main application we will be interested in stacks \([X/G]\) which are quasi-projective over \(BG\); this, together with Lemma 4.2, motivates our definition of geometric motives over a stack.

**Definition 4.4**.: Let \(\mathcal{X}\) be in \(\operatorname{Nis-locSt}\); we define \(\operatorname{DM}_{\operatorname{gm}}(\mathcal{X},\Lambda)\) to be the smallest full stable subcategory of \(\operatorname{DM}(\mathcal{X},\Lambda)\) which is closed under retracts, and generated by objects

\[\{M_{\mathcal{X}}(\mathcal{Z})(q)\ |\ \mathcal{Z}\text{ smooth, representable and quasi-projective over }\mathcal{X},\ q\in\mathbf{Z}\}\]

where \(M_{\mathcal{X}}(\mathcal{Z}):=f_{\#}(1_{\mathcal{Z}})\) for \(f:\mathcal{Z}\to\mathcal{X}\).

**Remark 4.5**.: The perhaps more natural variant of Definition 4.4 where the quasi-projective condition is omitted could also be considered. We think it would be interesting to compare these two notions. Ultimately, we chose to use Definition 4.4 because it was easier to show that it is well-behaved under \(*\)-pushforwards by closed immersions.

In this section we will first establish various results about when the \(\infty\)-subcategory \(\operatorname{DM}_{\operatorname{gm}}(\mathcal{X},\Lambda)\) of geometric motives in \(\operatorname{DM}(\mathcal{X},\Lambda)\) is preserved under some of the six operations. Once we have addressed this, we will use these results in the second part of the section to show that the category of geometric motives can be generated by a smaller subcategory whose mapping spectra have good connectedness properties. In particular we will see that in case \(\mathcal{X}\) is a quotient of a quasi-projective scheme by \(G\) over a field of characteristic \(0\), the category of geometric motives can be generated by the subcategory of so called Chow motives.

Before we begin we would also like to point out that while it is very natural to wonder if the objects of \(\operatorname{DM}_{\operatorname{gm}}(\mathcal{X},\Lambda)\) are compact in \(\operatorname{DM}(\mathcal{X},\Lambda)\), this is in general not the case. In fact, the example of [11] in the setting of constructible sheaves works in our setting as well:

**Example 4.6**.: Let \(\mathcal{X}=B\mathbf{G}_{m}\); we claim that \(1_{B\mathbf{G}_{m}}\) is not compact in \(\operatorname{DM}(B\mathbf{G}_{m},\Lambda)\). First, by [10] we have \(\pi_{0}\operatorname{map}_{\operatorname{DM}(B\mathbf{G}_{m},\Lambda)}(1,1(*)[2*])\simeq\operatorname{CH}^{*}(B\mathbf{G}_{m})_{\Lambda}\simeq\Lambda[x]\) where \(x\) is in degree \(1\). In particular \(x\) lifts to a map

\[x:1_{B\mathbf{G}_{m}}\to 1_{B\mathbf{G}_{m}}(1)[2]\]

in \(\operatorname{DM}(B\mathbf{G}_{m},\Lambda)\). We note that if we pull back \(x\) along the covering morphism \(\pi:\operatorname{pt}\to B\mathbf{G}_{m}\), we have that \(\pi^{*}x\simeq 0\), because \(x\) corresponds to \(c_{1}(\mathcal{O}(1))\). Thus if we pull back the filtered colimit

\[\mathcal{M}:=\varinjlim(1_{B\mathbf{G}_{m}}\overset{x}{\to}1_{B\mathbf{G}_{m}}(1)[2]\overset{x}{\to}1_{B\mathbf{G}_{m}}(2)[4]\overset{x}{\to}\dots)\]

along \(\pi\) we see that

\[\pi^{*}\mathcal{M}\simeq 0,\]

and since \(\pi\) is a smooth cover, it follows that \(\pi^{*}\) is conservative, hence \(\mathcal{M}\simeq 0\).
It follows that

\[\operatorname{map}_{\operatorname{DM}(B\mathbf{G}_{m},\Lambda)}(1,\mathcal{M})\simeq 0.\]

On the other hand

\[\varinjlim(\operatorname{map}_{\operatorname{DM}(B\mathbf{G}_{m},\Lambda)}(1,1)\to\operatorname{map}_{\operatorname{DM}(B\mathbf{G}_{m},\Lambda)}(1,1(1)[2])\to\dots)\]

can be seen to be not equivalent to \(0\) by looking at, for instance, \(\pi_{0}\), which is \(\Lambda[x,x^{-1}]\).

The upshot of Example 4.6 is that it shows that geometric motives are in general not compact objects in \(\operatorname{DM}(\mathcal{X},\Lambda)\).

### The six operations and geometric motives

**Lemma 4.7**.: _For any \(f:\mathscr{X}\to\mathscr{Y}\) of stacks, the functor \(f^{*}\) restricts to a functor_

\[f^{*}:\operatorname{DM}_{\operatorname{gm}}(\mathscr{Y},\Lambda)\to\operatorname{DM}_{\operatorname{gm}}(\mathscr{X},\Lambda)\]

Proof.: To prove the claim it is enough to check on generators \(\mathcal{M}_{\mathscr{Y}}(\mathscr{Z})(q)\) of \(\operatorname{DM}_{\operatorname{gm}}(\mathscr{Y},\Lambda)\). Thus, considering the cartesian square obtained by pulling back \(p:\mathscr{Z}\to\mathscr{Y}\) along \(f\) (with \(q\) and \(g\) the induced projections), where \(\mathcal{M}_{\mathscr{Y}}(\mathscr{Z})(q)=p_{\#}1_{\mathscr{Z}}(q)\), from Proposition 2.17 it follows that

\[f^{*}(\mathcal{M}_{\mathscr{Y}}(\mathscr{Z})(q))\simeq f^{*}p_{\#}1_{\mathscr{Z}}(q)\simeq q_{\#}g^{*}1_{\mathscr{Z}}(q).\]

**Proposition 4.8**.: _If \(f:\mathscr{X}\to\mathscr{Y}\) is smooth and representable then the functor \(f_{\#}\) restricts to a functor_

\[f_{\#}:\operatorname{DM}_{\operatorname{gm}}(\mathscr{X},\Lambda)\to\operatorname{DM}_{\operatorname{gm}}(\mathscr{Y},\Lambda).\]

Proof.: This follows from the fact that for a smooth representable \(g:\mathscr{Z}\to\mathscr{X}\), we have that \(f_{\#}\circ g_{\#}\simeq(f\circ g)_{\#}\).

**Lemma 4.9**.: _If \(\mathcal{M},\mathcal{N}\in\operatorname{DM}_{\operatorname{gm}}(\mathscr{X},\Lambda)\) then so is \(\mathcal{M}\otimes\mathcal{N}\)._

Proof.: The proof is the same as in [2][4.2.3].

**Lemma 4.10**.: _Suppose \(\mathscr{X}\) is a finite type Nis-loc stack and that there exists a Zariski cover \(\mathscr{X}=\bigcup_{i}\mathcal{U}_{i}\). Then an object \(\mathcal{M}\in\operatorname{DM}(\mathscr{X},\Lambda)\) is in \(\operatorname{DM}_{\operatorname{gm}}(\mathscr{X},\Lambda)\) if and only if \(\mathcal{M}|_{\mathcal{U}_{i}}\) is in \(\operatorname{DM}_{\operatorname{gm}}(\mathcal{U}_{i},\Lambda)\)._

Proof.: By arguing inductively it is enough to consider the case that \(\mathscr{X}=\mathcal{U}\cup\mathscr{V}\). Via the Nisnevich square associated to this cover, for each \(\mathcal{M}\in\operatorname{DM}(\mathscr{X},\Lambda)\) write \(\mathcal{M}_{\mathscr{W}}:=j_{\mathscr{W}\#}j_{\mathscr{W}}^{*}\mathcal{M}\) for \(\mathscr{W}=\mathcal{U}\times_{\mathscr{X}}\mathcal{V},\mathcal{U},\mathcal{V}\); by Corollary 3.7 we get a triangle of motives

\[\mathcal{M}_{\mathcal{U}\times_{\mathscr{X}}\mathcal{V}}\to\mathcal{M}_{\mathcal{U}}\oplus\mathcal{M}_{\mathscr{V}}\to\mathcal{M}\stackrel{{[1]}}{{\to}}\]

Then since \(\mathcal{M}_{\mathcal{U}\times_{\mathscr{X}}\mathcal{V}}\) and \(\mathcal{M}_{\mathcal{U}}\oplus\mathcal{M}_{\mathscr{V}}\) are contained in \(\operatorname{DM}_{\operatorname{gm}}(\mathscr{X},\Lambda)\), it follows that \(\mathcal{M}\in\operatorname{DM}_{\operatorname{gm}}(\mathscr{X},\Lambda)\).
**Lemma 4.11**.: _For any stack \(\mathscr{X}\) and vector bundle \(\mathscr{E}\) over \(\mathscr{X}\), tensoring by \(Th(\mathscr{E})\) and \(Th(-\mathscr{E})\) preserves \(\operatorname{DM}_{\operatorname{gm}}(\mathscr{X},\Lambda)\)._

Proof.: This follows from the fact that \(\operatorname{DM}(\mathscr{X},\Lambda)\) is oriented, i.e. \(-\otimes Th(\mathscr{E})\simeq(-)(n)[2n]\) where \(n\) is the rank of \(\mathscr{E}\).

**Corollary 4.12**.: _Let \(f:\mathscr{X}\to\mathscr{Y}\) be a smooth and proper representable morphism in \(\operatorname{Nis-locSt}\). Then the functor \(f_{*}\) restricts to a functor_

\[f_{*}:\operatorname{DM}_{\operatorname{gm}}(\mathscr{X},\Lambda)\to\operatorname{DM}_{\operatorname{gm}}(\mathscr{Y},\Lambda).\]

Proof.: The corollary follows immediately from Proposition 4.8, the equivalence \(\alpha_{f}:f_{!}\simeq f_{*}\) of Proposition 2.19 (3) and purity (Proposition 2.20) together with Lemma 4.11.

Our goal now is to show that for projective morphisms the lower-\(*\) functor preserves geometric objects. We will do this in two steps: first we will show that closed immersions have this property, and then use the fact that a general projective morphism factors as a closed immersion followed by the projection from a projective bundle.

**Lemma 4.13**.: _Let \(i:\mathcal{X}\hookrightarrow\mathcal{Y}\) be a closed immersion in \(\mathrm{Nis\text{-}locSt}\) and suppose that \(\mathcal{X}\) has the resolution property. Then the functor \(i_{*}\) restricts to a functor_

\[i_{*}:\mathrm{DM}_{\mathrm{gm}}(\mathcal{X},\Lambda)\to\mathrm{DM}_{\mathrm{gm}}(\mathcal{Y},\Lambda).\]

Proof.: Let \(f_{0}:\mathcal{Z}_{0}\to\mathcal{X}\) be a smooth quasi-projective representable morphism over \(\mathcal{X}\). First we assume that \(\mathcal{Z}_{0}\) is linearly fundamental in the sense of [1].
Now we apply the results of [1] to conclude in this case.

**Proposition 4.14**.: _Let \(\iota:\mathcal{X}\hookrightarrow\mathcal{Y}\) be a closed immersion in \(\operatorname{Nis-locSt}\). Then the functor \(\iota_{*}\) restricts to a functor_

\[\iota_{*}:\operatorname{DM}_{\operatorname{gm}}(\mathcal{X},\Lambda)\to\operatorname{DM}_{\operatorname{gm}}(\mathcal{Y},\Lambda).\]

Proof.: By [11] we may choose a stratification \(\emptyset=\mathcal{Y}_{0}\hookrightarrow\cdots\hookrightarrow\mathcal{Y}_{n}=\mathcal{Y}\) by stacks with the resolution property and argue by induction on its length \(n\), the base case being Lemma 4.13. Let \(j:\mathcal{U}:=\mathcal{Y}-\mathcal{Y}_{n-1}\hookrightarrow\mathcal{Y}\) and \(i:\mathcal{Y}_{n-1}\hookrightarrow\mathcal{Y}\) denote the inclusions, let \(j^{\prime},i^{\prime}\) be their base changes to \(\mathcal{X}\), and let \(\iota_{\mathcal{U}},\iota_{n-1}\) denote the corresponding restrictions of \(\iota\). For \(\mathcal{M}\in\operatorname{DM}_{\operatorname{gm}}(\mathcal{X},\Lambda)\) we have a cofiber sequence \[j_{\#}\iota_{\mathcal{U},*}j^{\prime*}(\mathcal{M})\to\iota_{*}(\mathcal{M})\to i_{*}\iota_{n-1,*}i^{\prime*}(\mathcal{M})
\xrightarrow{[1]}.\]

We now observe that \(j_{\#}\iota_{\mathcal{U},*}j^{\prime*}(\mathcal{M})\in\operatorname{DM}_{\operatorname{gm}}(\mathcal{Y},\Lambda)\) by Lemma 4.7, Lemma 4.13 and Proposition 4.8, and \(i_{*}\iota_{n-1,*}i^{\prime*}(\mathcal{M})\in\operatorname{DM}_{\operatorname{gm}}(\mathcal{Y},\Lambda)\) by Lemma 4.7, induction and Lemma 4.13. Thus \(\iota_{*}\mathcal{M}\in\operatorname{DM}_{\operatorname{gm}}(\mathcal{Y},\Lambda)\).

**Lemma 4.15**.: _Let \(f:\mathcal{X}\to\mathcal{Y}\) be a projective morphism in \(\operatorname{Nis-locSt}\). Suppose that \(\mathcal{Y}\) has the resolution property. Then the functor \(f_{*}\) restricts to a functor_

\[f_{*}:\operatorname{DM}_{\operatorname{gm}}(\mathcal{X},\Lambda)\to\operatorname{DM}_{\operatorname{gm}}(\mathcal{Y},\Lambda).\]

Proof.: Since \(\mathcal{Y}\) has the resolution property we may factor \(f:\mathcal{X}\to\mathcal{Y}\) as

\[\mathcal{X}\xrightarrow{\iota}\mathcal{P}\xrightarrow{p}\mathcal{Y}\]

where \(\iota\) is a closed immersion and \(p\) is a smooth representable proper morphism. The claim now follows from Proposition 4.14 and Corollary 4.12.

**Proposition 4.16**.: _Let \(f:\mathcal{X}\to\mathcal{Y}\) be a projective morphism in \(\operatorname{Nis-locSt}\). Then the functor \(f_{*}\) restricts to a functor_

\[f_{*}:\operatorname{DM}_{\operatorname{gm}}(\mathcal{X},\Lambda)\to\operatorname{DM}_{\operatorname{gm}}(\mathcal{Y},\Lambda).\]

Proof.: By reduced invariance we may assume that \(\mathcal{Y}\) is reduced. Thus, by [11] there is a stratification of \(\mathcal{Y}\) by stacks with the resolution property. We proceed by induction on the length of the stratification. In the case that \(\mathcal{Y}\) has the resolution property we are done by Lemma 4.15. In the general case assume that we have a stratification of length \(n\):

\[\emptyset=\mathcal{Y}_{0}\hookrightarrow\mathcal{Y}_{1}\hookrightarrow\cdots\hookrightarrow\mathcal{Y}_{n}=\mathcal{Y}\]

Let \(\mathcal{U}:=\mathcal{Y}-\mathcal{Y}_{n-1}\) and consider the associated localization square. For \(\mathcal{M}\in\operatorname{DM}_{\operatorname{gm}}(\mathcal{X},\Lambda)\) we have a cofiber sequence

\[j_{\#}f_{\mathcal{U},*}j^{\prime*}(\mathcal{M})\to f_{*}(\mathcal{M})\to i_{*}f_{n-1,*}i^{\prime*}(\mathcal{M})\xrightarrow{[1]}.\]

Now, \(j_{\#}f_{\mathcal{U},*}j^{\prime*}(\mathcal{M})\in\operatorname{DM}_{\operatorname{gm}}(\mathcal{Y},\Lambda)\) by Lemma 4.15 and \(i_{*}f_{n-1,*}i^{\prime*}(\mathcal{M})\in\operatorname{DM}_{\operatorname{gm}}(\mathcal{Y},\Lambda)\) by induction. Thus we see that \(f_{*}(\mathcal{M})\in\operatorname{DM}_{\operatorname{gm}}(\mathcal{Y},\Lambda)\), which is what we wanted to show.

**Remark 4.17**.: With an appropriate form of Chow's lemma for stacks, one can extend Proposition 4.16 to proper representable morphisms.

**Corollary 4.18**.: _Suppose \(f:\mathcal{X}\to\mathcal{Y}\) is a quasi-projective morphism in \(\operatorname{Nis-locSt}\). Then the functor \(f_{!}\) restricts to a functor_

\[f_{!}:\operatorname{DM}_{\operatorname{gm}}(\mathcal{X},\Lambda)\to\operatorname{DM}_{\operatorname{gm}}(\mathcal{Y},\Lambda).\]

Proof.: Since \(f:\mathcal{X}\to\mathcal{Y}\) is quasi-projective, we may factor \(f\) as

\[\mathcal{X}\xrightarrow{j}\mathcal{P}\xrightarrow{p}\mathcal{Y}\]

where \(j\) is an open immersion and \(p\) is projective. The result now follows from Proposition 4.8 and Proposition 4.16.
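To illustrate Corollary 4.12 and Proposition 4.16 in the simplest possible case, let us record (as a sanity check only, not needed later) what the pushforward of the unit along the projection \(p:\mathbf{P}^{n}_{\mathcal{X}}\to\mathcal{X}\) looks like. The projective bundle formula, which in this setting can be deduced from the schematic case by descent along a Nis-loc atlas since \(\operatorname{DM}(-,\Lambda)\) is oriented (Proposition 2.20), gives

\[p_{\#}1_{\mathbf{P}^{n}_{\mathcal{X}}}\simeq\bigoplus_{i=0}^{n}1_{\mathcal{X}}(i)[2i],\]

and combining \(p_{!}\simeq p_{*}\) (Proposition 2.19) with the relation \(p_{\#}\simeq p_{!}(n)[2n]\) coming from purity, one obtains

\[p_{*}1_{\mathbf{P}^{n}_{\mathcal{X}}}\simeq\bigoplus_{i=0}^{n}1_{\mathcal{X}}(-i)[-2i].\]

In particular both pushforwards of the unit are sums of Tate twists and hence geometric, as predicted by Corollary 4.12 and Proposition 4.16.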
### Generation results for the derived category of geometric motives

**Definition 4.19**.: For an \(\mathcal{X}\in\operatorname{Nis-locSt}\) we define the additive \(\infty\)-category of Chow motives

\[\operatorname{\mathbf{Chow}}_{\infty}(\mathcal{X},\Lambda)\subseteq\operatorname{DM}(\mathcal{X},\Lambda)\]

to be the smallest additive \(\infty\)-category generated by

\[\{f_{!}1_{\mathcal{Z}}(q)[2q]:\ \mathcal{Z}\ \text{smooth over}\ B,\ f:\mathcal{Z}\to\mathcal{X}\ \text{projective},\ q\in\mathbf{Z}\}\]

and retracts thereof.

In this section we wish to prove that \(\operatorname{\mathbf{Chow}}_{\infty}(\mathcal{X})\) generates \(\operatorname{DM}_{\operatorname{gm}}(\mathcal{X},\Lambda)\) under finite limits, colimits and retracts, i.e. that the smallest thick subcategory of \(\operatorname{DM}_{\operatorname{gm}}(\mathcal{X},\Lambda)\) containing \(\operatorname{\mathbf{Chow}}_{\infty}(\mathcal{X})\) is \(\operatorname{DM}_{\operatorname{gm}}(\mathcal{X},\Lambda)\). For readers familiar with [10] we remark that our strategy was inspired by and will follow closely that of [10][4.4.3]. We start first with an elementary lemma about stacks, which will be useful for induction arguments.

**Lemma 4.20**.: _Let \(\mathcal{X}\) be a finite type algebraic stack over \(B\). Suppose that \(\mathcal{U}\subseteq\mathcal{X}\) is a dense open substack of \(\mathcal{X}\). Let \(\mathcal{Z}\) denote the complement of \(\mathcal{U}\) in \(\mathcal{X}\). Then \(\dim(\mathcal{Z})<\dim(\mathcal{X})\)._

Proof.: Consider the diagram of cartesian squares obtained by pulling back \(\mathcal{U}\) and \(\mathcal{Z}\) along a smooth cover \(\pi:X\to\mathcal{X}\). Since the map \(\pi\) is continuous and surjective on underlying topological spaces, it follows that \(U\) is a dense open subscheme of \(X\) with complement \(Z\). We will be finished if we show for each \(z\in|\mathcal{Z}|\) that

\[\dim_{z}(\mathcal{Z})<\dim_{z}(\mathcal{X}).\]

We are free to pick any lift of the point \(z\) to \(Z\). In particular there exists a lift \(\tilde{z}\) such that \(\dim_{\tilde{z}}(Z)=\dim(Z)\) and we have that

\[\dim_{z}(\mathcal{Z})=\dim(Z)-\dim(R_{\mathcal{Z},z}).\]

Similarly we may pick a lift of \(z\) in \(X\) such that

\[\dim_{z}(\mathcal{X})=\dim(X)-\dim(R_{\mathcal{X},z}).\]

For each \(z\in|\mathcal{Z}|\) we have a canonical equivalence \(R_{\mathcal{Z},z}\simeq R_{\mathcal{X},z}\), where \(R_{\mathcal{X}}:=X\times_{\mathcal{X}}X\) and \(R_{\mathcal{Z}}\) is its restriction to \(\mathcal{Z}\). But since \(U\) is dense in \(X\) with complement \(Z\) we have that \(\dim(Z)<\dim(X)\) and hence

\[\dim_{z}(\mathcal{Z})<\dim_{z}(\mathcal{X}),\]

which is what we wanted to show.

**Proposition 4.21**.: _Suppose that \(f:\mathcal{X}\to\mathcal{Y}\) is representable and separated and \(\mathcal{Y}\) has the resolution property. Then \(f_{!}\) restricts to a functor_

\[f_{!}:\operatorname{DM}_{\operatorname{gm}}(\mathcal{X},\Lambda)\to\operatorname{DM}_{\operatorname{gm}}(\mathcal{Y},\Lambda)\]

Proof.: Since \(\mathcal{Y}\) has the resolution property it is of the form \([Y/GL_{n}]\) where \(Y\) is quasi-affine. Since \(f:\mathcal{X}\to\mathcal{Y}\) is representable it follows that \(\mathcal{X}\simeq[X/GL_{n}]\) for \(X\) an algebraic space. By reduced invariance we may assume that \(\mathcal{X}\) is reduced. The stack \(\mathcal{X}\) has affine stabilizers, is of finite type and quasi-separated, and by [17][Prop. 2.6] there exists a stratification of \(\mathcal{X}\) by global quotient stacks which are quasi-projective over \(BGL_{n}\). We will use induction on the length of the stratification.
In the trivial case, when \(\mathcal{X}\) is quasi-projective over \(BGL_{n}\), the morphism \(f:\mathcal{X}\to\mathcal{Y}\) is quasi-projective and we are done by Corollary 4.18. For a stratification of length \(n\)

\[\emptyset=\mathcal{X}_{0}\hookrightarrow\mathcal{X}_{1}\hookrightarrow\cdots\hookrightarrow\mathcal{X}_{n}=\mathcal{X}\]

we consider the diagram

\[\mathcal{X}_{n-1}\stackrel{{i}}{{\hookrightarrow}}\mathcal{X}\stackrel{{j}}{{\hookleftarrow}}\mathcal{U}\]

of stacks over \(\mathcal{Y}\). Since \(\mathcal{U}\) and \(\mathcal{Y}\) are quasi-projective over \(BGL_{n}\), it follows that the induced map \(f|_{\mathcal{U}}:\mathcal{U}\to\mathcal{Y}\) is quasi-projective. Let \(\mathcal{M}\in\operatorname{DM}_{\operatorname{gm}}(\mathcal{X},\Lambda)\) and consider the localization triangle induced by Proposition 2.21

\[j_{!}j^{!}(\mathcal{M})\to\mathcal{M}\to i_{*}i^{*}\mathcal{M}\stackrel{{[1]}}{{\longrightarrow}}.\]

Since \(f_{!}\) is an exact functor between stable \(\infty\)-categories we get a cofiber sequence

\[f_{\mathcal{U},!}j^{*}(\mathcal{M})\to f_{!}\mathcal{M}\to f_{n-1,!}i^{*}\mathcal{M}\stackrel{{[1]}}{{\longrightarrow}}.\]

Now, Corollary 4.18 implies that \(f_{\mathcal{U},!}j^{*}(\mathcal{M})\in\operatorname{DM}_{\operatorname{gm}}(\mathcal{Y},\Lambda)\) and induction implies that \(f_{n-1,!}i^{*}\mathcal{M}\in\operatorname{DM}_{\operatorname{gm}}(\mathcal{Y},\Lambda)\). Thus \(f_{!}(\mathcal{M})\in\operatorname{DM}_{\operatorname{gm}}(\mathcal{Y},\Lambda)\), finishing the argument.

Recall that a full subcategory \(\mathscr{D}\) of a stable \(\infty\)-category \(\mathscr{C}\) is called _thick_ if it is closed under taking retracts (see [10][4.4.5] for a discussion on retracts and idempotents in the setting of \(\infty\)-categories).

**Theorem 4.22**.: _Let \(\mathcal{X}\) be a finite type Nis-loc stack with affine stabilizers. The category \(\operatorname{DM}_{\operatorname{gm}}(\mathcal{X})\) is the smallest thick stable \(\infty\)-subcategory of \(\operatorname{DM}(\mathcal{X},\Lambda)\) generated by the collection of objects_

\[\mathscr{P}(\mathcal{X}):=\{f_{!}(1_{\mathcal{X}^{\prime}}(n))\ |\ f:\mathcal{X}^{\prime}\to\mathcal{X}\text{ is projective and }n\in\mathbf{Z}\}.\]

Proof.: Let \(\operatorname{DM}_{\operatorname{proj}}(\mathcal{X})\) be the smallest thick subcategory generated by \(\mathscr{P}(\mathcal{X})\). By Proposition 4.16 it follows that \(\operatorname{DM}_{\operatorname{proj}}(\mathcal{X})\subset\operatorname{DM}_{\operatorname{gm}}(\mathcal{X})\). So we prove the reverse inclusion. For any quasi-projective smooth morphism \(f:\mathcal{X}^{\prime}\to\mathcal{X}\) it follows from purity that \(f_{\#}\) agrees with \(f_{!}\) up to a Tate twist. Thus it is enough to prove that \(f_{!}1_{\mathcal{X}^{\prime}}\) for any such \(f\) is contained in \(\operatorname{DM}_{\operatorname{proj}}(\mathcal{X})\). When \(\mathcal{X}\) has the resolution property we are finished by Proposition 4.21. In the general case, we can argue by induction on the length of the stratification of \(\mathcal{X}\) by stacks with the resolution property. Note that we may assume that \(\mathcal{X}\) is reduced by reduced invariance. That is, for a length \(n\) stratification

\[\emptyset=\mathcal{X}_{0}\hookrightarrow\mathcal{X}_{1}\hookrightarrow\cdots\hookrightarrow\mathcal{X}_{n}=\mathcal{X}\]

we consider the diagram of cartesian squares where \(i\) is a closed immersion and \(j\) is an open immersion.
By considering the localization triangle from Proposition 2.21

\[j_{!}j^{!}f_{!}(1_{\mathcal{X}^{\prime}})\to f_{!}(1_{\mathcal{X}^{\prime}})\to i_{*}i^{*}f_{!}(1_{\mathcal{X}^{\prime}})\]

it follows from the base change isomorphisms of Proposition 2.19, Corollary 4.18, and Proposition 4.16 that both \(j_{!}j^{!}f_{!}(1_{\mathcal{X}^{\prime}})\) and \(i_{*}i^{*}f_{!}(1_{\mathcal{X}^{\prime}})\) are contained in \(\operatorname{DM}_{\operatorname{gm}}(\mathcal{X},\Lambda)\). Hence we see that \(f_{!}(1_{\mathcal{X}^{\prime}})\) is contained in \(\operatorname{DM}_{\operatorname{gm}}(\mathcal{X},\Lambda)\).

The next result says that when \(\mathcal{X}\) is a Nis-loc stack which is quasi-projective over \(BG\) then \(\operatorname{DM}_{\operatorname{gm}}(\mathcal{X},\Lambda)\) is the thick closure of \(\operatorname{\mathbf{Chow}}_{\infty}(\mathcal{X},\Lambda)\).

**Theorem 4.23**.: _Suppose that \(\mathcal{X}=[X/G]\) where \(X\) is a quasi-projective scheme over \(B\) and \(G\) is an affine algebraic group. Then the category \(\operatorname{\mathbf{Chow}}_{\infty}(\mathcal{X},\Lambda)\) generates \(\operatorname{DM}_{\operatorname{gm}}(\mathcal{X},\Lambda)\) under finite limits, colimits and retracts._

Proof.: Let \(\operatorname{DM}_{\mathbf{Chow}_{\infty}}(\mathcal{X},\Lambda)\) be the smallest thick subcategory of \(\operatorname{DM}(\mathcal{X},\Lambda)\) which contains the category \(\mathbf{Chow}_{\infty}(\mathcal{X},\Lambda)\). We must show that \(\operatorname{DM}_{\mathbf{Chow}_{\infty}}(\mathcal{X},\Lambda)\) is precisely all of \(\operatorname{DM}_{\operatorname{gm}}(\mathcal{X},\Lambda)\). To this end, by Theorem 4.22 it will be enough to show that \(\mathscr{P}(\mathcal{X})\) is contained in \(\operatorname{DM}_{\mathbf{Chow}_{\infty}}(\mathcal{X},\Lambda)\). Consider a projective morphism \(f:\mathcal{X}^{\prime}\to\mathcal{X}\). From our hypothesis on \(\mathcal{X}\) we may take \(\mathcal{X}^{\prime}\simeq[X^{\prime}/G]\) where \(X^{\prime}\) is projective over \(X\); moreover, without loss of generality we may assume that \(\mathcal{X}^{\prime}\) is reduced. We now proceed by induction on the relative dimension of \(\mathcal{X}^{\prime}\to BG\). The claim is clear when the relative dimension is \(0\), so we assume it holds in relative dimension at most \(n\) and consider \(\mathcal{X}^{\prime}\to BG\) of relative dimension \(n+1\). We may apply equivariant resolution of singularities over \(k\), Theorem 3.9, to \(X^{\prime}\). Thus after taking stack quotients by \(G\) we arrive at a projective birational morphism \([\tilde{X}^{\prime}/G]\to[X^{\prime}/G]\). The stack \([\tilde{X}^{\prime}/G]\) is smooth over \(k\). Now we consider the cartesian squares where \(\mathcal{U}\) is a dense open substack and the right hand square is a \(cdh\)-square. Since the relative dimension of \(\mathcal{Z}\to BG\) is strictly less than the relative dimension of \(\mathcal{X}^{\prime}\) over \(BG\) (Lemma 4.20), we may apply the induction hypothesis together with the cofiber sequence induced by Corollary 3.10.

## 5. Mapping spectra and Chow groups

In this section we will study the mapping spectra in \(\operatorname{DM}(\mathcal{X},\Lambda)\). Our main goal will be to show that the mapping spectra of \(\mathbf{Chow}_{\infty}(\mathcal{X},\Lambda)\) are connective, but along the way we will also identify the Borel-Moore homology of a quotient stack with the equivariant higher Chow groups.
In this section we will take \(k\) to be a field of arbitrary characteristic, also in many proofs we will suppress the notation for the coefficient ring and often write \(\operatorname{DM}(\mathcal{X})\) with the hope of making things easier to read. Let \(X\) be a quasi-projective scheme over \(B\) of dimension \(n\) equipped with an action of an affine algebraic group \(G\) and integers \(s,t\in\mathbf{Z}\). We fix a Totaro gadget \((U\subset V)\) where \(V\) is a \(G\)-representation and \(j:U\subset V\) an open subscheme on which \(G\) acts freely and such that the reduced complement \(\iota:Z\hookrightarrow V\) satisfies \(c:=\operatorname{codim}_{V}Z>n-s\) and such that the quotient \((U\times X)/G\) exists as a scheme. Let \(l:=\dim V\) and \(g:=\dim G\). Then one can define the equivariant higher Chow groups as \[\operatorname{CH}_{s}^{G}(X,t):=\operatorname{CH}_{s+l-g}((U\times X)/G,t).\] For the stack associated stack \(\mathcal{X}:=[X/G]\) we define the (higher) Chow groups of \(\mathcal{X}\) as \[\operatorname{CH}_{s}(\mathcal{X},t):=\operatorname{CH}_{s+g}^{G}(X,t)= \operatorname{CH}_{s+l}((U\times X)/G,t).\] Note that from this definition of we automatically have \(\operatorname{CH}_{s}(\mathcal{X},t)=0\) for \(s>\dim(\mathcal{X})=n-g\). One checks this definition is well defined in the same way checks the definition of equivariant Chow groups of Edidin-Graham is well defined [10]. The next proposition is probably well known and its proof follows a standard way of arguing [11][12.4] and [12][21]. We include it here because it will serve as a warm up for Theorem5.2. **Proposition 5.1**.: _Suppose \(\mathcal{X}=[X/G]\) and the integers \(s,t\in\mathbf{Z}\) as in the discussion above. Let \(f:\mathcal{X}\to B\) be the structure map. We have the following equivalence_ \[\pi_{0}\operatorname{map}_{\operatorname{DM}(\mathcal{X},\Lambda)}(1_{ \mathcal{X}}(s)[2s+t],f^{\dagger}1_{B})\simeq\operatorname{CH}_{s}(\mathcal{X },t)_{\Lambda}.\] Proof.: First we choose an embedding \(G\hookrightarrow GL_{r}\). Fix a Totaro gadget \((U,V)\) with \[\operatorname{codim}_{V}(Z)>n-s+r^{2}-g.\] Let \(p:\mathcal{V}:=[V/G]\to BG\) the induced vector bundle over \(BG\) and \(p_{\mathcal{X}}\) is base change to \(\mathcal{X}\). Then by homotopy invariance we have that \(p^{*}\) is fully-faithful. Thus we have an equivalence \[\operatorname{map}_{\operatorname{DM}(\mathcal{X})}(1(s)[2s+t],f^{\dagger}1_{B })\simeq\operatorname{map}_{\operatorname{DM}(\mathcal{V}\times_{BG}\mathcal{ X})}(1(s)[2s+t],p_{\mathcal{X}}^{*}f^{\dagger}1_{B}). \tag{0.1}\] Let \(\bar{j}:\mathcal{U}:=[U/G]\to\mathcal{V}\) be the map induced by the open immersion \(j:U\to V\) and \(\bar{j}\chi\) its base change to \(\mathfrak{X}\). We claim that induced morphism \[\operatorname{map}_{\operatorname{DM}(\mathcal{V}\times_{BG}\mathfrak{X})}(1(s) [2s+l],p_{\mathfrak{X}}^{\bullet}f^{!}1_{B})\stackrel{{\bar{j} \chi}}{{\to}}\operatorname{map}_{\operatorname{DM}(\mathfrak{U}\times_{BG} \mathfrak{X})}(1(s)[2s+t],\bar{j}_{\mathfrak{X}}^{\bullet}p_{\mathfrak{X}}^{ \bullet}f^{!}1_{B}) \tag{0.2}\] is an equivalence. 
Since \(p_{\mathfrak{X}}\) is smooth, separated and representable, we map apply the purity isomorphism \(p_{\mathfrak{X}}^{!}\simeq p^{\bullet}(l)[2l]\) which gives \[\operatorname{map}_{\operatorname{DM}(\mathcal{V}\times_{BG}\mathfrak{X})}( 1(s+l)[2s+2l+t],p_{\mathfrak{X}}^{!}f^{!}1_{B})\stackrel{{\bar{j} \chi}}{{\to}}\operatorname{map}_{\operatorname{DM}(\mathfrak{U}\times_{BG} \mathfrak{X})}(1(s+l)[2s+2l+t],\bar{j}_{\mathfrak{X}}^{!}p_{\mathfrak{X}}^{!}f^{!}1_{B}).\] Writing \(\pi:\mathcal{V}\times_{BG}\mathfrak{X}\to B\) and \(\sigma:\mathcal{U}\times_{BG}\mathfrak{X}\to B\) for the structure maps we can rewrite this is as \[\operatorname{map}_{\operatorname{DM}(\mathcal{V}\times_{BG}\mathfrak{X})}( 1(s+l)[2s+2l+t],\pi^{!}1_{B})\stackrel{{\bar{j}\chi}}{{\to}} \operatorname{map}_{\operatorname{DM}(\mathfrak{U}\times_{BG}\mathfrak{X})}( 1(s+l)[2s+2l+t],\sigma^{!}1_{B}). \tag{0.3}\] To see that 0.3 is an equivalence via the localization triangle \[i_{\bullet}i^{!}\to\operatorname{id}\to j_{\bullet}j^{!},\] we are reduced to showing that \[\pi_{0}\operatorname{map}_{\operatorname{DM}(\mathfrak{Z}\times_{BG} \mathfrak{X})}(1(s+l)[r],\bar{l}^{!}\pi^{!}1_{B})=0\] for all \(r\in\mathbf{Z}\). As in [1][Rem. 2.3.7] we may find a Nis-loc atlas: \[W\to(X\times Z)\times^{G}GL_{r}\to\mathfrak{Z}\times_{BG}\mathfrak{X}\] where \(W\) is a scheme and the first arrow is an etale surjection. Since we can compute \(\operatorname{DM}(\mathfrak{Z}\times_{BG}\mathfrak{X})\) along Cech nerves of Nis-loc atlases it will be enough to show that \(\pi_{0}\operatorname{map}_{\operatorname{DM}(\mathfrak{Z}\times_{BG} \mathfrak{X})}(1(s+l)[r],\bar{l}^{!}\pi^{!}1_{B})\) vanishes on the \(!\)-restriction to each term \[W^{a}:=\underbrace{W\times_{\mathbb{Z}\times_{BG}\mathfrak{X}}\cdots\times_{ \mathbb{Z}\times_{BG}\mathfrak{X}}W}_{a}\] for \(a\geq 0\) in the Cech nerve of \(W\to\mathfrak{Z}\times_{BG}\mathfrak{X}\). Writing \(\eta_{a}^{!}:\operatorname{DM}(\mathfrak{Z}\times_{BG}\mathfrak{X})\to \operatorname{DM}(W^{a})\) for the \(!\)-restriction in the Cech nerve, the purity isomorphism gives \[\eta_{a}^{!}\simeq\eta_{a}^{\bullet}(a\gamma)[2a\gamma]\] where \(\gamma\) is the relative dimension of the Nis-loc atlas \(W\to\mathfrak{Z}\times_{BG}\mathfrak{X}\). We also have \(\eta_{a}^{!}\bar{l}^{!}\pi^{!}\simeq h_{a}^{!}\) where \(h_{a}:W^{a}\to B\) the structure map. We are reduced to showing \[\pi_{0}\operatorname{map}_{\operatorname{DM}(W^{a})}(1(s+l+a\gamma)[r+2a \gamma],h_{a}^{!}1_{B})=0\] for all \(a\geq 0\). But now we are in the realm of finite type schemes over a field and we know that \[\pi_{0}\operatorname{map}_{\operatorname{DM}(W^{a})}(1(s+l+a\gamma)[r+2a \gamma],h_{a}^{!}1_{B})=\operatorname{CH}_{s+l+a\gamma}(W^{a},r-s-l)\] and by our choice of Totaro gadget we have that \[l+s>n+\dim(Z)+r^{2}-g\] and thus \(l+s+a\gamma>n+\dim(Z)+a\gamma\). In particular because the Chow groups vanish, we conclude that \[\pi_{0}\operatorname{map}_{\operatorname{DM}(W^{a})}(1(s+l+a\gamma)[r+2a \gamma],h_{a}^{!}1_{B})=0\] which is what we wanted to show. The next theorem will be important in establishing the weight structure on \(\operatorname{DM}(\mathfrak{X},\Lambda)\), we will use the symbol Map to refer to the _mapping spectra_ as opposed to the symbol map which denotes the _mapping space_. **Theorem 5.2**.: _Suppose that \(\mathcal{S}=[S/G]\) where \(S\) is a finite type scheme over \(B\) and \(G\) is an affine algebraic group. 
_Let \(\mathcal{X}\) and \(\mathcal{Y}\) be smooth stacks over \(B\) which are projective over \(\mathcal{S}\), with structure maps \(f:\mathcal{X}\to\mathcal{S}\) and \(g:\mathcal{Y}\to\mathcal{S}\), let \(j,m,n\in\mathbf{Z}\), and let \(d_{\mathcal{Y}}\) be the dimension of \(\mathcal{Y}\) over \(B\). Then_

\[\pi_{j}\operatorname{Map}_{\operatorname{DM}(\mathcal{S},\Lambda)}(f_{!}1_{\mathcal{X}}(m)[2m],g_{!}1_{\mathcal{Y}}(n)[2n])\simeq\operatorname{CH}_{d_{\mathcal{Y}}-n+m}(\mathcal{X}\times_{\mathcal{S}}\mathcal{Y},j),\]

_in particular the mapping spectrum_

\[\operatorname{Map}_{\operatorname{DM}(\mathcal{S},\Lambda)}(f_{!}1_{\mathcal{X}}(m)[2m],g_{!}1_{\mathcal{Y}}(n)[2n])\]

_is connective._

Proof.: First we fix an embedding \(G\hookrightarrow GL_{r}\) and a Totaro gadget \((U,V)\) for \(G\) so that

\[c:=\operatorname{codim}_{V}(Z)>\dim(X)-\dim(S)+n-m+r^{2}-g.\]

Consider the cartesian squares obtained by base changing \(f\) and \(g\) along the projection \(p:\mathcal{V}\times_{BG}\mathcal{S}\to\mathcal{S}\) induced by the vector bundle \(\mathcal{V}=[V/G]\to BG\). Combined with base change these give the following equivalences

\[p^{*}f_{!}1_{\mathcal{X}}\simeq f_{\mathcal{V}!}p_{\mathcal{X}}^{*}1_{\mathcal{X}}\simeq f_{\mathcal{V}!}1_{\mathcal{V}\times_{BG}\mathcal{X}}\qquad p^{*}g_{!}1_{\mathcal{Y}}\simeq g_{\mathcal{V}!}p_{\mathcal{Y}}^{*}1_{\mathcal{Y}}\simeq g_{\mathcal{V}!}1_{\mathcal{V}\times_{BG}\mathcal{Y}}. \tag{0.4}\]

Since \(p\) is a vector bundle over \(\mathcal{S}\) it follows by homotopy invariance that \(p^{*}\) is fully faithful, which when combined with (0.4) gives an equivalence

\[\operatorname{map}_{\operatorname{DM}(\mathcal{S})}(f_{!}1_{\mathcal{X}}(m)[2m+j],g_{!}1_{\mathcal{Y}}(n)[2n])\stackrel{{ p^{*}}}{{\simeq}}\operatorname{map}_{\operatorname{DM}(\mathcal{V}\times_{BG}\mathcal{S})}(f_{\mathcal{V}!}1(m)[2m+j],g_{\mathcal{V}!}1(n)[2n]). \tag{0.5}\]

Let \(\bar{j}:\mathcal{U}\times_{BG}\mathcal{S}\to\mathcal{V}\times_{BG}\mathcal{S}\) be the open immersion induced by \(U\subset V\). Via the cartesian diagrams we get the equivalences

\[\bar{j}^{*}f_{!}1_{\mathcal{X}}\simeq f_{\mathcal{U}!}\bar{j}^{*}_{\mathcal{X}}1_{\mathcal{X}}\simeq f_{\mathcal{U}!}1_{\mathcal{U}\times_{BG}\mathcal{X}}\qquad\bar{j}^{*}g_{!}1_{\mathcal{Y}}\simeq g_{\mathcal{U}!}\bar{j}^{*}_{\mathcal{Y}}1_{\mathcal{Y}}\simeq g_{\mathcal{U}!}1_{\mathcal{U}\times_{BG}\mathcal{Y}}. \tag{0.6}\]

Composing \(\bar{j}^{*}\) with \(p^{*}\) gives a map

\[\operatorname{map}_{\operatorname{DM}(\mathcal{S})}(f_{!}1_{\mathcal{X}}(m)[2m+j],g_{!}1_{\mathcal{Y}}(n)[2n])\stackrel{{\bar{j}^{*}}}{{\to}}\operatorname{map}_{\operatorname{DM}(\mathcal{U}\times_{BG}\mathcal{S})}(f_{\mathcal{U}!}1(m)[2m+j],g_{\mathcal{U}!}1(n)[2n]). \tag{0.7}\]

We claim that the map (0.7) is an equivalence. Considering the localization triangle

\[\bar{\iota}_{*}\bar{\iota}^{!}\to\operatorname{id}\to\bar{j}_{*}\bar{j}^{!},\]

we simply need to show that

\[\operatorname{map}_{\operatorname{DM}(\mathcal{V}\times_{BG}\mathcal{S})}(f_{\mathcal{V}!}1(m)[2m+j],\bar{\iota}_{*}\bar{\iota}^{!}g_{\mathcal{V}!}1(n)[2n])\simeq 0.\]

Thus it will be enough to prove the following:

**Claim 5.3**.: 

(0.8) \[\pi_{0}\operatorname{map}_{\operatorname{DM}(\mathcal{Z}\times_{BG}\mathcal{S})}(f_{\mathcal{Z}!}1,\bar{\iota}^{!}g_{\mathcal{V}!}1(n-m)[r])\simeq 0\]

_for all \(r\in\mathbf{Z}\)._

Via standard arguments we can reduce to the situation where \(Z\) is regular, in which case we have by absolute purity \(\iota^{*}\simeq\iota^{!}(c)[2c]\). 
Via the cartesian square in which \(\bar{\iota}_{\mathcal{Y}}:\mathcal{Z}\times_{BG}\mathcal{Y}\to\mathcal{V}\times_{BG}\mathcal{Y}\) lies over \(\bar{\iota}:\mathcal{Z}\times_{BG}\mathcal{S}\to\mathcal{V}\times_{BG}\mathcal{S}\), with vertical maps \(g_{\mathcal{Z}}\) and \(g_{\mathcal{V}}\), we have the base change equivalence

\[\bar{\iota}^{!}g_{\mathcal{V}!}\simeq g_{\mathcal{Z}!}\bar{\iota}_{\mathcal{Y}}^{!},\]

which when combined with absolute purity for \(\iota\) allows us to rewrite (0.8) as

\[\pi_{0}\operatorname{map}_{\operatorname{DM}(\mathcal{Z}\times_{BG}\mathcal{S})}(f_{\mathcal{Z}!}1,g_{\mathcal{Z}!}1(n-m-c)[r-2c])\simeq 0. \tag{0.9}\]

Since \(\mathcal{Z}\times_{BG}\mathcal{S}\simeq[(Z\times S)/G]\), we have, as in the proof of Proposition 5.1, a Nis-loc atlas \(W\to\mathcal{Z}\times_{BG}\mathcal{S}\) [1][Rem. 2.3.7]. Hence we can compute \(\operatorname{DM}(\mathcal{Z}\times_{BG}\mathcal{S})\) via the Cech nerve

\[\cdots\xrightarrow{\pi}W^{2}:=W\times_{\mathcal{Z}\times_{BG}\mathcal{S}}W\xrightarrow{\pi}W\to\mathcal{Z}\times_{BG}\mathcal{S}.\]

Thus it is enough to show (0.9) on the restriction to each \(\operatorname{DM}(W^{q})\). We write \(\pi_{q}^{*}\) for the restriction \(\operatorname{DM}(\mathcal{Z}\times_{BG}\mathcal{S})\to\operatorname{DM}(W^{q})\). Then we can write the mapping space

\[\operatorname{map}_{\operatorname{DM}(W^{q})}(\pi_{q}^{*}f_{\mathcal{Z}!}1,\pi_{q}^{*}g_{\mathcal{Z}!}1(n-m-c)[r-2c]),\]

as

\[\operatorname{map}_{\operatorname{DM}(W^{q})}(f_{\mathcal{Z},q!}1,g_{\mathcal{Z},q!}1(n-m-c)[r-2c]),\]

where \(f_{\mathcal{Z},q}:W^{q}_{\mathcal{X}}\to W^{q}\) (resp. \(g_{\mathcal{Z},q}:W^{q}_{\mathcal{Y}}\to W^{q}\)) is the base change of \(f_{\mathcal{Z}}\) (resp. \(g_{\mathcal{Z}}\)) along the map \(\pi_{q}:W^{q}\to\mathcal{Z}\times_{BG}\mathcal{S}\). We must show

\[\pi_{0}\operatorname{map}_{\operatorname{DM}(W^{q})}(f_{\mathcal{Z},q!}1,g_{\mathcal{Z},q!}1(n-m-c)[r-2c])=0.\]

Via [11][Lem. 2.37] we have the equivalence

\[\pi_{0}\operatorname{map}_{\operatorname{DM}(W^{q})}(f_{\mathcal{Z},q!}1,g_{\mathcal{Z},q!}1(n-m-c)[r-2c])\simeq H^{BM}_{2d_{q}-r+2c,d_{q}-n+m+c}(W^{q}_{\mathcal{X}}\times_{W^{q}}W^{q}_{\mathcal{Y}})\]

where \(d_{q}:=\dim(Y)+\dim(Z)+r^{2}-g+q\gamma\). Now comparing with the Chow groups

\[H^{BM}_{2d_{q}-r+2c,d_{q}-n+m+c}(W^{q}_{\mathcal{X}}\times_{W^{q}}W^{q}_{\mathcal{Y}})\simeq\operatorname{CH}_{d_{q}-n+m+c}(W^{q}_{\mathcal{X}}\times_{W^{q}}W^{q}_{\mathcal{Y}},2n-2m-r)\]

we see that

\[d_{q}-n+m+c>\dim(W^{q}_{\mathcal{X}}\times_{W^{q}}W^{q}_{\mathcal{Y}}),\]

so these groups vanish, proving Claim 5.3. To see how the main result follows from Claim 5.3, note that its consequence is the equivalence

\[\operatorname{map}_{\operatorname{DM}(\mathcal{S})}(f_{!}1_{\mathcal{X}}(m)[2m+j],g_{!}1_{\mathcal{Y}}(n)[2n])\xrightarrow{\bar{j}^{*}p^{*}}\operatorname{map}_{\operatorname{DM}(\mathcal{U}\times_{BG}\mathcal{S})}(f_{\mathcal{U}!}1(m)[2m+j],g_{\mathcal{U}!}1(n)[2n]). \tag{0.10}\]

Now we note that the stack \(\mathcal{U}\times_{BG}\mathcal{X}\times_{\mathcal{S}}\mathcal{Y}\) is equivalent to \((\mathcal{U}\times_{BG}\mathcal{X})\times_{\mathcal{U}\times_{BG}\mathcal{S}}(\mathcal{U}\times_{BG}\mathcal{Y})\), which means that it is a scheme. Following the arguments of [11] we can identify the right hand side of (0.10) with

\[\pi_{0}\operatorname{map}_{\operatorname{DM}(\mathcal{U}\times_{BG}\mathcal{X}\times_{\mathcal{S}}\mathcal{Y})}(1(l+\dim(\mathcal{Y})+m-n)[2m-2n+2l+2\dim(\mathcal{Y})+j],a^{!}1_{B})\]

via base change and purity, where \(a:\mathcal{U}\times_{BG}\mathcal{X}\times_{\mathcal{S}}\mathcal{Y}\to B\) is the structure morphism. 
We now see by Proposition 5.1 that this is just

\[\operatorname{CH}_{l+d_{\mathcal{Y}}-n+m}(\mathcal{U}\times_{BG}\mathcal{X}\times_{\mathcal{S}}\mathcal{Y},j)\simeq\operatorname{CH}_{d_{\mathcal{Y}}-n+m}(\mathcal{X}\times_{\mathcal{S}}\mathcal{Y},j)\]

which is what we wanted to show.

## 6. Weight Structures

We first remind the reader of the definition of a weight structure on a stable \(\infty\)-category.

**Definition 6.1**.: A weight structure on a stable \(\infty\)-category \(\mathscr{C}\) is the data of two retract-closed subcategories \((\mathscr{C}_{w\geqslant 0},\mathscr{C}_{w\leqslant 0})\) such that:

1. \(\Sigma\mathscr{C}_{w\geqslant 0}\subset\mathscr{C}_{w\geqslant 0},\ \Omega\mathscr{C}_{w\leqslant 0}\subset\mathscr{C}_{w\leqslant 0}\). We write \[\mathscr{C}_{w\geqslant n}:=\Sigma^{n}\mathscr{C}_{w\geqslant 0},\quad\mathscr{C}_{w\leqslant n}:=\Omega^{n}\mathscr{C}_{w\leqslant 0}.\]
2. If \(x\in\mathscr{C}_{w\leqslant 0}\) and \(y\in\mathscr{C}_{w\geqslant 1}\) then \[\pi_{0}\operatorname{map}(x,y)\simeq 0.\]
3. For any \(x\in\mathscr{C}\) we have a cofiber sequence \[x_{\leqslant 0}\to x\to x_{\geqslant 1}\] with \(x_{\leqslant 0}\in\mathscr{C}_{w\leqslant 0}\) and \(x_{\geqslant 1}\in\mathscr{C}_{w\geqslant 1}\), called the weight truncations of \(x\).

We say that a weight structure is bounded if

\[\mathscr{C}=\bigcup_{n\in\mathbf{Z}}(\mathscr{C}_{w\geqslant-n}\cap\mathscr{C}_{w\leqslant n}).\]

We also define the weight heart of a weight structure to be

\[\mathscr{C}^{\heartsuit_{w}}:=\mathscr{C}_{w\geqslant 0}\cap\mathscr{C}_{w\leqslant 0}.\]

Next we state a theorem due to Bondarko [10][4.3.2.II], but see also Hebert [2][Thm 1.9] (see also [11][Rem. 2.2.6] for the \(\infty\)-categorical version, which we state here).

**Theorem 6.2**.: _(Bondarko) Let \(\mathscr{C}\) be a stable \(\infty\)-category. Assume we are given a full subcategory \(\mathscr{B}\subset\mathscr{C}\) such that_

1. \(\mathscr{B}\) _generates_ \(\mathscr{C}\) _under finite limits, finite colimits and retracts._
2. \(\mathscr{B}\) _has connective mapping spectra._

_Then we may define the following subcategories_

\[\mathscr{C}_{w\geqslant 0}=\{\text{retracts of finite colimits of objects of }\mathscr{B}\}\]

_and_

\[\mathscr{C}_{w\leqslant 0}=\{\text{retracts of finite limits of objects of }\mathscr{B}\}.\]

_These subcategories give a bounded weight structure on \(\mathscr{C}\) whose heart is the minimal retract-closed additive subcategory containing \(\mathscr{B}\)._

**Theorem 6.3**.: _Let \(\mathcal{X}=[X/G]\) where \(X\) is a quasi-projective scheme over a field \(k\) of characteristic \(0\) and \(G\) is an affine algebraic group._

1. _The_ \(\infty\)_-category_ \(\operatorname{DM}_{\operatorname{gm}}(\mathcal{X},\Lambda)\) _admits a bounded weight structure, with_ \[\operatorname{DM}_{\operatorname{gm}}(\mathcal{X},\Lambda)^{\heartsuit_{w}}\simeq\textbf{Chow}_{\infty}(\mathcal{X},\Lambda).\]
2. _The_ \(\infty\)_-category_ \(\operatorname{Ind}\operatorname{DM}_{\operatorname{gm}}(\mathcal{X},\Lambda)\) _admits a weight structure which restricts to the weight structure on_ \(\operatorname{DM}_{\operatorname{gm}}(\mathcal{X},\Lambda)\) _constructed in (_1_)._

Proof.: For the first claim we simply have to verify the conditions of Theorem 6.2. In the notation of that theorem we take

\[\mathscr{C}:=\operatorname{DM}_{\operatorname{gm}}(\mathcal{X},\Lambda)\qquad\mathscr{B}:=\textbf{Chow}_{\infty}(\mathcal{X},\Lambda).\]

Condition (1) follows from Theorem 4.23 and condition (2) follows from Theorem 5.2. For the second claim we can use [1][Prop. 
1.4.2 (9)] to finish the argument. ## 7. Equivariant Motives In this final section we identify the homotopy category of Chow motives \(\operatorname{h}\textbf{Chow}_{\infty}(\mathcal{S},\Lambda)\) with the natural generalization of both Laterveer's category of \(G\)-equivariant Chow motives [12] as well as Corti and Hanamura's category of Chow motives over a general base [1] when \(\mathcal{S}\) is a global quotient stack. That is to say when \(\mathcal{S}\) is \(BG\) our identification will show that \(\operatorname{h}\textbf{Chow}_{\infty}(\mathcal{S},\textbf{Q})\) is equivalent to Laterveer's original category. Let \(\mathcal{S}=[S/G]\) where \(S\) is quasi-projective over \(B:=\operatorname{Spec}(k)\) and \(G\) is an affine algebraic group over \(B\). Suppose that \(\mathcal{X}\), \(\mathcal{Y}\) are smooth over \(B\) and projective over \(\mathcal{S}\). Then following [1] we define the set of correspondences of degree \(r\) between \(\mathcal{X}\) and \(\mathcal{Y}\) as follows: Let \(\mathcal{Y}=\coprod_{i}\mathcal{Y}_{i}\) with \(\mathcal{Y}_{i}\) irreducible components then \[\operatorname{Corr}_{r}(\mathcal{X},\mathcal{Y}):=\bigoplus_{i}\operatorname{ CH}_{\dim\mathcal{Y}_{i}+r}(\mathcal{X}\times_{\mathcal{S}}\mathcal{Y}_{i})_{ \Lambda}.\] We can construct a composition of correspondences \[\circ:\operatorname{Corr}_{r}(\mathcal{X},\mathcal{Y})\otimes\operatorname{ Corr}_{s}(\mathcal{Y},\mathcal{Z})\to\operatorname{Corr}_{r+s}(\mathcal{X}, \mathcal{Z})\] by considering the diagram which allows us to define \[\alpha\circ\beta:=p_{\mathcal{X}\times_{\mathcal{S}}}(\delta^{!}(\alpha\times \beta)),\] where \(\delta:\mathcal{Y}\to\mathcal{Y}\times\mathcal{Y}\) and \(p_{\mathcal{X},\mathcal{Z}}:\mathcal{X}\times_{\mathcal{S}}\mathcal{Y}\times_{ \mathcal{S}}\mathcal{Z}\to\mathcal{X}\times_{\mathcal{S}}\mathcal{Z}\) the projection. We note that identity \(\operatorname{id}\in\operatorname{Corr}_{0}(\mathcal{X},\mathcal{X})\) is given by the scheme theoretic image of \(\Delta:\mathcal{X}\to\mathcal{X}\times_{\mathcal{S}}\mathcal{X}\). **Definition 7.1**.: Let \(\operatorname{CHM}(\mathcal{S},\Lambda)\) denote the classical category Chow motives. The objects of this category are triples \[(\mathcal{X},p,m)\] where \(\mathcal{X}\) is smooth and projective over \(\mathcal{S}\), \(p\) is an idempotent in \(\operatorname{Corr}_{0}(\mathcal{X},\mathcal{X})\) and \(m\in\mathbf{Z}\). The morphism sets are defined as \[\operatorname{Hom}((\mathcal{X},p,m),(\mathcal{Y},q,n)):=q\circ\operatorname{ Corr}_{m-n}(\mathcal{X},\mathcal{Y})\circ p\subseteq\operatorname{Corr}_{m-n}( \mathcal{X},\mathcal{Y}).\] **Example 7.2**.: When \(\mathcal{S}=BG\), the category \(\operatorname{CHM}(\mathcal{S},\mathbf{Q})\) is equivalent to Laterveer's category of \(G\)-equivariant motives [10]. In particular, to get the equivalence one must re-index because in loc. cit. Chow cohomology is used and in our situation since we are working over a not necessarily smooth stack \(\mathcal{S}\) we must use Chow homology. 
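To fix ideas, here is the most classical instance of Definition 7.1 (this example is not discussed in the text, and the identification of the two summands with the unit and its Tate twist depends on the chosen indexing conventions). Take \(\mathcal{S}=B=\operatorname{Spec}(k)\) with trivial group and \(\mathcal{X}=\mathbf{P}^{1}\). For a rational point \(a\in\mathbf{P}^{1}\) the classes

\[p_{0}:=[\mathbf{P}^{1}\times\{a\}],\qquad p_{1}:=[\{a\}\times\mathbf{P}^{1}]\in\operatorname{Corr}_{0}(\mathbf{P}^{1},\mathbf{P}^{1})=\operatorname{CH}_{1}(\mathbf{P}^{1}\times\mathbf{P}^{1})_{\Lambda}\]

are orthogonal idempotents with \(p_{0}+p_{1}=[\Delta_{\mathbf{P}^{1}}]=\operatorname{id}\), so that

\[(\mathbf{P}^{1},\operatorname{id},0)\simeq(\mathbf{P}^{1},p_{0},0)\oplus(\mathbf{P}^{1},p_{1},0)\]

in \(\operatorname{CHM}(\operatorname{Spec}(k),\Lambda)\), recovering the familiar decomposition of the motive of \(\mathbf{P}^{1}\) into a unit motive and a Tate-twisted unit motive.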
**Lemma 7.3**.: _The category \(\operatorname{CHM}(\mathcal{S},\Lambda)\) is additive, idempotent complete and symmetric monoidal, where the tensor product is defined as_

\[(\mathcal{X},p,m)\otimes(\mathcal{Y},q,n):=(\mathcal{X}\times_{\mathcal{S}}\mathcal{Y},p\times q,m+n).\]

Proof.: The proof follows by combining the arguments of [10] and [1].

Let \(\operatorname{CHM}(\mathcal{S},\Lambda)^{\prime}\subset\operatorname{CHM}(\mathcal{S},\Lambda)\) denote the full subcategory of objects of the form \((\mathcal{X},\operatorname{id},m)\), so that \(\operatorname{CHM}(\mathcal{S},\Lambda)\) is its idempotent completion. Theorem 5.2 provides an assignment

\[F^{\prime}:\operatorname{CHM}(\mathcal{S},\Lambda)^{\prime}\to\operatorname{h}\boldsymbol{Chow}_{\infty}(\mathcal{S},\Lambda),\qquad(\mathcal{X},\operatorname{id},m)\mapsto f_{!}1_{\mathcal{X}}(m)[2m], \tag{0.1}\]

which on morphisms is given by the identification of correspondences with homotopy classes of maps provided by Theorem 5.2. Conversely, for a morphism \(\alpha\) between such objects we write \(\epsilon_{\mathcal{X},\mathcal{Y}}(\alpha)\) for the correspondence associated to it by this identification.

**Proposition 7.4**.: _Let \(\mathcal{X}\), \(\mathcal{Y}\) and \(\mathcal{Z}\) be smooth over \(B\) and projective over \(\mathcal{S}\), and let \(\alpha\) and \(\beta\) be composable morphisms between the corresponding objects of \(\operatorname{h}\boldsymbol{Chow}_{\infty}(\mathcal{S},\Lambda)\). Then we have the following equality_

\[\epsilon_{\mathcal{X},\mathcal{Z}}(\beta\circ\alpha)=\pi_{*}\delta^{!}(\epsilon_{\mathcal{X},\mathcal{Y}}(\alpha)\circ\epsilon_{\mathcal{Y},\mathcal{Z}}(\beta))\]

_where \(\pi:\mathcal{X}\times_{\mathcal{S}}\mathcal{Y}\times_{\mathcal{S}}\mathcal{Z}\to\mathcal{X}\times_{\mathcal{S}}\mathcal{Z}\) is the projection._

Proof.: We fix a Totaro gadget \(U\subset V\) and let \(q:U/G\to B\). 
Then as in the proof of Theorem 5.2 the functor \(q^{*}=\bar{j}^{*}p^{*}\) induces natural isomorphisms

\[\pi_{0}\operatorname{map}_{\operatorname{DM}(\mathcal{S},\Lambda)}(f_{!}1,g_{!}1)\xrightarrow{q^{*}}\pi_{0}\operatorname{map}_{\operatorname{DM}(\mathcal{U}\times_{BG}\mathcal{S},\Lambda)}(f_{\mathcal{U}!}1,g_{\mathcal{U}!}1)\]

\[\pi_{0}\operatorname{map}_{\operatorname{DM}(\mathcal{S},\Lambda)}(g_{!}1,h_{!}1)\xrightarrow{q^{*}}\pi_{0}\operatorname{map}_{\operatorname{DM}(\mathcal{U}\times_{BG}\mathcal{S},\Lambda)}(g_{\mathcal{U}!}1,h_{\mathcal{U}!}1).\]

The result now follows from [11][Prop. 2.39], applied to

\[f_{\mathcal{U}}:\mathcal{U}\times_{BG}\mathcal{X}\to\mathcal{U}\times_{BG}\mathcal{S},\]
\[g_{\mathcal{U}}:\mathcal{U}\times_{BG}\mathcal{Y}\to\mathcal{U}\times_{BG}\mathcal{S},\]
\[h_{\mathcal{U}}:\mathcal{U}\times_{BG}\mathcal{Z}\to\mathcal{U}\times_{BG}\mathcal{S}.\]

**Corollary 7.5**.: _The map \(F^{\prime}:\operatorname{CHM}(\mathcal{S},\Lambda)^{\prime}\to\operatorname{h}\,\boldsymbol{Chow}_{\infty}(\mathcal{S},\Lambda)\) is a functor._

Proof.: This follows directly from Proposition 7.4 together with [11][Props. 3.11, 3.15, 3.16].

Now by the universal property of idempotent completion we get a well defined functor

\[F:\operatorname{CHM}(\mathcal{S},\Lambda)\to\operatorname{h}\,\boldsymbol{Chow}_{\infty}(\mathcal{S},\Lambda). \tag{0.2}\]

**Theorem 7.6**.: _The functor \(F\) of (0.2) is an equivalence of categories_

\[F:\operatorname{CHM}(\mathcal{S},\Lambda)\stackrel{{\simeq}}{{\to}}\operatorname{h}\,\boldsymbol{Chow}_{\infty}(\mathcal{S},\Lambda).\]

Proof.: Full faithfulness is clear. To see that \(F\) is essentially surjective, simply note that \(F\) is an additive functor and every generator of \(\operatorname{h}\,\boldsymbol{Chow}_{\infty}(\mathcal{S},\Lambda)\) is contained in its essential image.

**Corollary 7.7**.: _Let \(\mathcal{X}\) be a Nis-loc stack over a field of characteristic \(0\). Then the homotopy category of the heart of the weight structure constructed in Theorem 6.3 for \(\operatorname{DM}_{\operatorname{gm}}(\mathcal{X},\Lambda)\) can be identified with \(\operatorname{CHM}(\mathcal{X},\Lambda)\). That is_

\[\operatorname{CHM}(\mathcal{X},\Lambda)\simeq\operatorname{h}\operatorname{DM}_{\operatorname{gm}}(\mathcal{X},\Lambda)^{\heartsuit_{w}}.\]

**Corollary 7.8**.: _For an algebraic group \(G\), the category \(\operatorname{h}\,\boldsymbol{Chow}_{\infty}(BG,\mathbf{Q})\) is equivalent to the category of \(G\)-equivariant Chow motives of Laterveer._

Proof.: This is just Example 7.2 combined with Theorem 7.6.
2308.06841
Averages of products of characteristic polynomials and the law of real eigenvalues for the real Ginibre ensemble
An elementary derivation of the Borodin-Sinclair-Forrester-Nagao Pfaffian point process, which characterises the law of real eigenvalues for the real Ginibre ensemble in the large matrix size limit, uses the averages of products of characteristic polynomials. This derivation reveals a number of interesting structures associated with the real Ginibre ensemble such as the hidden symplectic symmetry of the statistics of real eigenvalues and an integral representation for the $K$-point correlation function for any $K\in \mathbb{N}$ in terms of an asymptotically exact integral over the symmetric space $U(2K)/USp(2K)$.
Roger Tribe, Oleg Zaboronski
2023-08-13T19:59:59Z
http://arxiv.org/abs/2308.06841v1
Averages of products of characteristic polynomials and the law of real eigenvalues for the real Ginibre ensemble. ###### Abstract An elementary derivation of the Borodin-Sinclair-Forrester-Nagao Pfaffian point process, which characterises the law of real eigenvalues for the real Ginibre ensemble in the large matrix size limit, uses the averages of products of characteristic polynomials. This derivation reveals a number of interesting structures associated with the real Ginibre ensemble such as the hidden symplectic symmetry of the statistics of real eigenvalues and an integral representation for the \(K\)-point correlation function for any \(K\in\mathbb{N}\) in terms of an asymptotically exact integral over the symmetric space \(U(2K)/USp(2K)\). ## 1 Introduction and main results ### The real Ginibre ensemble of random matrices. The real Ginibre ensemble, denoted by \(\mathrm{GinOE}(N)\), is one of the classical random matrix ensembles defined as the following probability measure on the space of real \(N\times N\) matrices: \[d\mu^{(N)}(M)=(\pi)^{-\frac{N^{2}}{2}}e^{-TrMM^{T}}\prod_{i,j=1}^{N}dM_{ij}, \tag{1}\] where \(\prod_{i,j=1}^{N}dM_{ij}\) is the Lebesgue measure on \(\mathbb{R}^{N\times N}\). This model was introduced in [16] in 1965, but, unlike its'self-adjoint' counterparts (the models defined on spaces of symmetric, Hermitian and quaternionic self-dual matrices), the calculation of the correlation functions for the real Ginibre model took much longer. For example, the joint probability density of eigenvalues was derived by Lehman and Sommers in [25] in 1991 only, and it took another decade for the calculation of the correlation functions to be carried out by by Forrester, Nagao [14] and Borodin, Sinclair [5]. They independently discovered that the law of \(\mathrm{GinOE}(N)\) eigenvalues is a Pfaffian point process and determined its kernel. The statistics of real eigenvalues for the real Ginibre ensemble turns out to be particularly interesting. It has been known since the work by Edelman, Kostlan and Shub [11] that a large random Ginibre matrix has \(O(\sqrt{N})\) eigenvalues. One of the results of [14], [5] is that the marginal law of real eigenvalues is also a Pfaffian point process. It turns out, that the large-\(N\) limit of this point process coincides (up to a Brownian rescaling) with the fixed-time law of annihilating Brownian motions on the real line [38]. It is important to stress that, unlike the link between the statistics of GUE and Dyson Brownian motions, this does not extend to multi-time statistics, see [36]. However, it does suggest that the Pfaffian point process at hand maybe universal, where the corresponding universality class contains both non-equilibrium interacting particle systems and the non-symmetric ensembles of random matrices. We recall that the well-known results of Borodin, Sinclair, Forrester and Nagao concerning the behaviour of real eigenvalues for the real Ginibre ensemble can be obtained without a reference to Lehmann-Sommers distribution. Instead, a duality relation between \(\mathrm{GinOE}(N)\) and \(\mathrm{GinOE}(N-K)\) allows one to express a \(K\) point correlation function for \(\mathrm{GinOE}(N)\) in terms of the expectation of the product of characteristic polynomials for \(\mathrm{GinOE}(N-K)\)[36]. Here \(N,K\in\mathbb{N}\), \(K<N\). The latter is easy to calculate using the Berezin calculus of anti-commuting variables. 
Overall, the computation turns out to be surprisingly similar to the derivation of the fixed time law for the annihilating Brownian motions carried out in [26], [38] which relied on Markov duality between a finite and infinite systems of annihilating Brownian motions. Roughly, the method is to "linearise" the model by finding sufficiently many expectation values which (i) determine the law of real eigenvalues; (ii) can be characterised as solutions to a linear initial value problem, which are easy to write down explicitly. Our re-derivation of the Borodin-Sinclair-Forrester-Nagao Pfaffian point process reveals a couple of interesting mathematical structures associated with the real Ginibre ensemble in the large-\(N\) limit: firstly, the \(K\)-point correlation function of the real eigenvalues is given by the density of the eigenvalue distribution for the Mehta-Pandey model interpolating between \(\mathrm{GUE}(K)\) and \(\mathrm{GSE}(K)\) at the anti-self-dual point [29], [30]. Secondly, the Mehta-Pandey integral representing the \(K\)-point correlation function turns out to be asymptotically exact, in the sense that the leading order term of its stationary phase approximation coincides with an exact answer. It belongs to a novel family of asymptotically exact integrals over symmetric spaces generalising the celebrated Itzykson-Zuber integrals. The primary aim of the paper is to study the integral formulae for the multi-point correlation functions for the real Ginibre ensemble using the heat kernel method and the closely related proof of the asymptotic exactness of the integrals. To make the presentation self-contained, we review the results of [36] concerning the derivation of the law of the real eigenvalues for the bulk scaling limit of \(\mathrm{GinOE}(N)\). The rest of the paper is organised as follows. In the rest of the introduction, we will recall the definition of the random function counting the parity of the number of real eigenvalues in a semi-infinite interval, which we refer to as'spin variables' (subsection 1.2). The expectations of the spin variables can be related to the expectations of products of characteristic polynomials using the Householder transformation [20]. We will then state and discuss the main results (subsection 1.3). The proofs are presented in section 2. ### The linearising'spin' variables for the real Ginibre ensemble. The spin variable associated with a real valued matrix \(M\) is the function \(s(M):\mathbb{R}\to\{\pm 1\}\): \[s_{x}(M)=(-1)^{\Lambda^{M}(-\infty,x)},\quad x\in\mathbb{R}, \tag{2}\] where \(\Lambda^{M}(a,b)\) is the number of _real_ eigenvalues of \(M\) in the interval \((a,b)\subset\mathbb{R}\). Note an analogy between the spin variables (2) and spins in a one-dimensional spin chain with real eigenvalues playing the role of domain walls. Spin variables are crucial in linearising the moment equations for annihilating random walks and/or Brownian motions, see e.g. [17], [26]. We believe they will be useful for any random matrix model with either purely real spectrum or such that the complex eigenvalues appear in conjugate pairs. Examples include both the Hermitian matrix models such as \(\mbox{GUE}(N)\) or \(\mbox{GOE}(N)\), and the non-Hermitian ones such as \(\mbox{GinOE}(N)\). The following elementary remark provides a tool for computing product moments of spin variables: the spectrum of a real \(N\times N\) matrix \(M\) consists of real eigenvalues and pairs of conjugated complex eigenvalues. 
Therefore, \[s_{x}(M)=\left(-1\right)^{\#}\{\mbox{ All eigenvalues of $M$ with real parts in $(-\infty,x)$}\}. \tag{3}\] Now, let us assume that \(M\sim\mbox{GinOE}(N)\). As a pair of complex conjugated eigenvalues corresponds to a positive factor in the characteristic polynomial, (3) implies that \[s_{x}(M)=\mbox{sgn}\left(\det\left(M-xI\right)\right)=\frac{\det\left(M-xI \right)}{\left|\det\left(M-xI\right)\right|}\mbox{ a.s.} \tag{4}\] Here we used that under the law of the real Ginibre ensemble the probability that \(x\) is an eigenvalue of \(M\) is zero. We will recall that when computed with the help of Householder transformations [20], the product moments of spin variables are expressed in terms of the product moments of characteristic polynomials for the real Ginibre ensemble of a smaller size. In the large-\(N\) limit, these correlation functions will be shown to satisfy a linear parabolic partial differential equation on the Weyl chamber, thus confirming the claimed linearisation of the real Ginibre ensemble in terms of spin variables. Of course, the statistics of characteristic polynomials can be studied directly using the Lehmann-Sommers distribution, see for example [1], [32]. All multi-point (Lebesgue) densities for real eigenvalues can be restored from the moments of spin variables. Namely we have the following relation: \[\rho^{(N)}(x_{1},x_{2},\ldots,x_{K}) \tag{5}\] \[= \left.\left(-\frac{1}{2}\right)^{K}\left(\prod_{k=1}^{K}\frac{ \partial}{\partial y_{k}}\right)\mathbb{E}_{N}\left[\prod_{m=1}^{K}s_{x_{m}} \left(M\right)s_{x_{m}+y_{m}}\left(M\right)\right]\right|_{y_{m}=0+,\,m=1,2 \ldots,K}\] where \(\mathbb{E}_{N}\) is the expectation with respect to \(\mbox{GinOE}(N)\) and \(\rho^{(N)}(x_{1},x_{2},\ldots,x_{K})\) is the correlation function of order \(K\) (the factorial density of order \(K\)) for the point process corresponding to the law of real eigenvalues of \(\mbox{GinOE}(N)\), see [38] for details of the rigorous derivation of (5). From a purely technical point of view, it is also useful to consider derivatives of moments of product spins leading us to the so-called modified correlation functions defined as follows: \[\tilde{\rho}^{(N)}(x_{1},x_{2},\ldots,x_{K}):=\left(-\frac{1}{2}\right)^{K}\left( \prod_{m=1}^{K}\partial_{x_{m}}\right)\mathbb{E}_{N}\left[\prod_{k=1}^{K}s_{x_{ k}}(M)\right]. \tag{6}\] Equivalently, in terms of the counting measure \(\Lambda^{M}\), \[\tilde{\rho}^{(N)}(x_{1},x_{2},\ldots,x_{K})\prod_{k=1}^{K}dx_{k}=\mathbb{E}_ {N}\left[\prod_{k=1}^{K}s_{x_{k}}(M)\Lambda^{M}(dx_{k})\right]. \tag{7}\] The above formula is an equality between measures acting on direct products of disjoint intervals (with \(dx_{k}\) on the left hand side being a standard abbreviation for the Lebesgue measure on \(\mathbb{R}\)). The product moments of spin variables can be iteratively restored from the modified densities by a \(K\)-dimensional integration: \[\mathbb{E}_{N}\left[\prod_{k=1}^{K}\left(s_{x_{k}}(M)-1\right)\right]=(-2)^{ K}\left(\prod_{k=1}^{K}\int_{-\infty}^{x_{k}}dy_{k}\right)\tilde{\rho}^{(N)}(y_{1 },y_{2},\ldots,y_{k}). \tag{8}\] We are ready to state the main results of the paper. We use the convention that \(C\) denotes a constant, whose dependence will be indicated (for example \(C_{K}\)) but whose exact value is unimportant and may change form line to line. For constant whose value we wish to record we use subscripts for future reference (for example \(c_{1}(K,N)\)). 
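As a concrete illustration of the definitions (2)-(4), the following minimal numerical sketch (not part of the text) samples \(\mathrm{GinOE}(N)\) using the convention, implied by (1), that the entries of \(M\) are i.i.d. \(N(0,1/2)\); it checks the identity \(s_{x}(M)=\mathrm{sgn}\det(M-xI)\) and compares the average number of real eigenvalues with the Edelman-Kostlan-Shub value \(\sqrt{2N/\pi}\) from [11], which is quoted above only as \(O(\sqrt{N})\).

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_ginoe(N):
    # Density proportional to exp(-Tr M M^T): i.i.d. real entries of variance 1/2.
    return rng.normal(scale=np.sqrt(0.5), size=(N, N))

def real_eigenvalues(M, tol=1e-9):
    ev = np.linalg.eigvals(M)
    return ev[np.abs(ev.imag) < tol].real

def spin(M, x):
    # s_x(M) = (-1)^{Lambda^M(-infty, x)}, as in (2).
    return (-1) ** int(np.sum(real_eigenvalues(M) < x))

N, trials = 100, 200
counts = []
for _ in range(trials):
    M = sample_ginoe(N)
    counts.append(len(real_eigenvalues(M)))
    for x in (-1.0, 0.0, 2.0):
        # (4): the spin equals the sign of the characteristic polynomial at x.
        sign_det, _ = np.linalg.slogdet(M - x * np.eye(N))
        assert spin(M, x) == int(sign_det)

print("mean number of real eigenvalues:", np.mean(counts))
print("sqrt(2N/pi)                    :", np.sqrt(2 * N / np.pi))
```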
### Results We first recall is a simple relation between the modified density expressed in terms of the product moment of spin variables and moments of characteristic polynomials, which is valid for real Ginibre matrices of any size \(N\leq\infty\). It can be succinctly expressed by the following duality formula: **Lemma 1** ([36]): _Choose \(-\infty<x_{1}<x_{2}<\ldots<x_{2K-1}<x_{2K}<\infty\). Then_ \[\left(\prod_{k=1}^{2K}\partial_{x_{k}}\right)\mathbb{E}_{N}\left[ \prod_{l=1}^{2K}\frac{det(M-x_{l})}{|det(M-x_{l})|}\right] \tag{9}\] \[= c_{1}(K,N)e^{-\sum_{k=1}^{K}x_{k}^{2}}\textit{V}(x)\mathbb{E}_{ N-2K}\left[\prod_{l=1}^{2K}det(M-x_{l})\right],\ 2K<N\in\mathbb{N},\] _where \(c_{1}(K,N)>0\) is an explicit constant and \(\mbox{V}(x):=\prod_{i<j}^{2K}(x_{j}-x_{i})\) is the Vandermonde determinant._ In other words, the derivatives with respect to the argument s of characteristic polynomials appear to 'cancel' the denominators in the product of ratios of characteristic polynomials in the left hand side leaving one with the expected value of the product of characteristic polynomials on the right hand side. The latter are easy to evaluate in the large-\(N\) limit using the supersymmetric formalism, see [23] for a review. Thus the formula (9) leads to an integral representation for the modified \(K\)-point density of real eigenvalues in the large \(N\) limit \[\tilde{\rho}(x_{1},x_{2},\ldots,x_{K}):=\lim_{N\to\infty}\tilde{\rho}^{(N)}(x_ {1},x_{2},\ldots,x_{K}). \tag{10}\] **Theorem 1** ([36]): _(Ginibre ensemble and anti-self dual Gaussian symplectic ensembles.) Let \(K\) be an even natural number. Let \(X=\mbox{Diag}(x)\) be a diagonal \(K\times K\) matrix with the diagonal entries \(x\in\mathbb{R}^{K}\) satisfying \(x_{1}<x_{2}<\ldots<x_{K}\in\mathbb{R}\). Then the limit (10) exists and is given by_ \[\tilde{\rho}(x_{1},x_{2},\ldots,x_{K})=C_{K}\mbox{V}(x)\int_{U(K)}\mu_{H}(dU)e ^{-\frac{1}{2}Tr\left(H-H^{R}\right)^{2}} \tag{11}\] _where \(C_{K}\) is a positive constant, \(H=UXU^{\dagger}\) is a Hermitian matrix, \(\mu_{H}\) is Haar measure on the unitary group \(U(K)\),and \(H^{R}=JH^{T}J\) is a symplectic involution of matrix \(H\) using \(J\) the canonical symplectic matrix._ The integral on the right hand side of (11) is a particular case of the elliptic Gaussian matrix model which interpolates between the classical GUE and GSE ensembles. This model was introduced and solved by Mehta and Pandey in [29], [30], see [28] for a review. It is remarkable that it appears in the \(N=\infty\) limit of the correlation function of characteristic polynomials for the real Ginibre ensemble, which does not have any apparent symplectic symmetry. **Remark 1**: _Let \(\mathbb{E}_{MP(K)}\) be the expectation with respect to the anti-self-dual instance of \(K\times K\) Mehta-Pandey model, a matrix model defined on the space of \(K\times K\) matrices by the measure \(\exp\left[-\frac{1}{2}Tr\left(H-H^{R}\right)^{2}\right]dH\), where \(dH\) is the Lebesgue measure. Then the statement of Theorem 1 can be re-written _as a statement of matrix model duality as defined in e.g. [15]:_ \[\lim_{N\to\infty}\left(\prod_{k=1}^{K}\partial_{k}\right)\mathbb{E} _{N}\left[\prod_{\ell=1}^{K}\frac{det(M-x_{\ell}I)}{|det(M-x_{\ell}I)|}\right]= C_{K}\mathbb{E}_{MP(K)}\left[\delta(\sigma_{H}-x)\right]\,\mbox{V}(x), \tag{12}\] _where \(\sigma_{H}\) is the spectrum of the self-adjoint matrix \(H\)._ Mehta and Pandey use a Hubbard-Stratonovich transformation to reduce the integral to the Itzykson-Zuber case. 
In this paper we show that the integral in the right hand side of (11) can be evaluated using the heat kernel method, the advantage of which is the possibility of a generalisation to an arbitrary symmetric space of a compact Lie group \(G\) with an involution. Namely, we have the following statement proved in Section 2.3. **Proposition 1**: _Define_ \[I_{t}(X):=\int_{U(K)}\mu_{H}(dU)e^{-\frac{1}{2t}Tr\left(H-H^{R} \right)^{2}}, \tag{13}\] _where \(H=UXU^{\dagger}\) and \(X=\mbox{Diag}(x)\), and \(x\in\mathbb{R}^{K}\) for even \(K>0\). Let \(\tilde{\rho}_{t}(x)=C_{K}t^{-\frac{K(K+1)}{4}}\,\mbox{V}(x)I_{t}(x)\) be a deformation of \(\tilde{\rho}\), which coincides with \(\tilde{\rho}\) for \(t=1\) and \(x\in W^{(K)}:=\{x:x_{1}<x_{2}<\ldots<x_{K}\}\). Then \((\tilde{\rho}_{t}:t>0)\) is the unique distributional solution to the heat equation_ \[\left\{\begin{array}{l}(\partial_{t}-\frac{1}{8}\Delta)\tilde{ \rho}_{t}(x)=0,t>0,\quad x\in\mathbb{R}^{K}\\ \tilde{\rho}_{0+}(x)=C_{K}\prod_{k=1}^{K/2}\delta^{\prime}(x_{2k}-x_{2k-1}), \quad x\in\mathbb{R}^{K}.\end{array}\right. \tag{14}\] The equation (14) is easy to solve leading to a direct proof, avoiding marginalisation, of the following foundational result. **Theorem 2** (Borodin-Sinclair [5], Forrester-Nagao [14]): _The bulk scaling limit of the law of real eigenvalues for \(\mbox{GinOE}(N)\) is a Pfaffian point process:_ \[\lim_{N\to\infty}\rho^{(N)}(x_{1},x_{2},\ldots,x_{K})=\underset{ 1\leq i,j\leq K}{\mbox{Pfaff}}H(x_{j}-x_{i}),\ K\geq 1, \tag{15}\] _where_ \[H(x)=\left(\begin{array}{cc}-F^{\prime\prime}(x)&-F^{\prime}( x)\\ F^{\prime}(x)&sgn(x)F(|x|)\end{array}\right), \tag{16}\] _and \(F(x)=\pi^{-1/2}\int_{x}^{\infty}e^{-z^{2}}\,dz\)._ **Remark 2**: _This is Corollary 9 of [5] and can also be easily restored from the results of [14]._ As a by-product of our proof of Theorem 2 one gets the following answer for the integral \(I_{t}\): for all disjoint \((x_{i})\) \[I_{t}(x) := \int_{U(K)}\mu_{H}(dU)e^{-\frac{1}{2t}Tr\left(H-H^{R}\right)^{2}} \tag{17}\] \[= C_{K}\frac{\mbox{Pfaff}_{1\leq i,j\leq K}\left[\frac{(x_{i}-x_{j })}{\sqrt{t}}e^{-\frac{(x_{i}-x_{j})^{2}}{t}}\right]}{\mbox{V}\left(\frac{ \mathbf{x}}{\sqrt{\mathbf{t}}}\right)}.\] Our final result concerns the asymptotic exactness of the integral \(I_{it}(x)\) for \(t\in\mathbb{R}\), a result conjectured by Yan Fyodorov during an after-seminar discussion. **Theorem 3**: _The integral, for even \(K>0\),_ \[I_{it}(x):=\int_{U(K)}\mu_{H}(dU)e^{\frac{i}{2t}Tr\left(H-H^{R}\right)^{2}} \tag{18}\] _is asymptotically exact. In other words the exact expression for \(I_{it}\) obtained from (17) by replacing \(t\) with it coincides with the leading term of the stationary phase expansion, for small \(t\), of the integral in (18)._ **Remark 3**: _At the moment, the precise reason for this localisation is unclear to us. In particular, the Duistermaat-Heckmann Theorem [4] which is responsible for the exact localisation of the Itzykson-Zuber-Harish-Chandra integral is not directly applicable to our case. Due to symplectic invariance of the integrand, the integral in (17) is taken over the symmetric space \(U(K)/USp(K)\), where \(USp(K)\) is the symplectic subgroup of \(U(K)\). But \(dimU(K)/USp(K)=K(K-1)/2\), which is even only if \(K\) is divisible by \(4\). 
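The kernel (16) is easy to evaluate numerically. The following sketch (not from the text) computes the one- and two-point correlation functions of Theorem 2 directly from the Pfaffian formula (15): the \(K=1\) case gives the constant bulk density \(\rho(x)=-F^{\prime}(0)=1/\sqrt{\pi}\), while the \(K=2\) case exhibits the vanishing of \(\rho(x_{1},x_{2})\) as the two points merge and its factorisation into \(\rho(x_{1})\rho(x_{2})\) at large separation.

```python
import numpy as np
from math import erfc, exp, sqrt, pi, copysign

def F(x):   return 0.5 * erfc(x)                     # F(x) = pi^{-1/2} int_x^infty e^{-z^2} dz
def dF(x):  return -exp(-x * x) / sqrt(pi)           # F'(x)
def d2F(x): return 2.0 * x * exp(-x * x) / sqrt(pi)  # F''(x)

def H(x):
    # 2x2 block of the kernel, equation (16); sgn(0) is taken to be 0.
    sgn = 0.0 if x == 0 else copysign(1.0, x)
    return np.array([[-d2F(x), -dF(x)],
                     [dF(x), sgn * F(abs(x))]])

def kernel_matrix(xs):
    K = len(xs)
    A = np.zeros((2 * K, 2 * K))
    for i in range(K):
        for j in range(K):
            A[2 * i:2 * i + 2, 2 * j:2 * j + 2] = H(xs[j] - xs[i])
    assert np.allclose(A, -A.T)  # the assembled matrix is antisymmetric
    return A

def pf(A):
    # Pfaffian of a 2x2 or 4x4 antisymmetric matrix.
    if A.shape[0] == 2:
        return A[0, 1]
    return A[0, 1] * A[2, 3] - A[0, 2] * A[1, 3] + A[0, 3] * A[1, 2]

rho1 = pf(kernel_matrix([0.0]))
print("rho(x) =", rho1, "  1/sqrt(pi) =", 1 / sqrt(pi))
for s in (0.05, 0.5, 5.0):
    rho2 = pf(kernel_matrix([0.0, s]))
    print(f"rho(0,{s}) = {rho2:.6f},   rho(0,{s})/rho(x)^2 = {rho2 / rho1**2:.6f}")
```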
So in general, \(U(K)/USp(K)\) is not even symplectic and the Duistermaat-Heckmann Theorem cannot apply._ The proof of Lemma 1 can be found in Section 2.1; Theorem 1 is proved in Section 2.2; Proposition 1 is proved in Section 2.3; Theorem 2 is proved in Section 2.4; Theorem 3 is proved in Section 2.5. ## 2 Proofs. ### Lemma 1 Following [36], let us fix \((x_{1},x_{2},\ldots,x_{2K})\in W^{(2K)}\). We will assume \(N\) to be even, which helps us avoid tracking various \(\pm\) signs. The proof for odd \(N\) is similar. It follows from (7) and (8) that we need to evaluate \[\tilde{\rho}^{(N)}(x_{1},x_{2},\ldots,x_{2K})\,\prod_{k=1}^{2K}dx_{k}=\frac{1} {4^{K}}\int_{R^{N^{2}}}dM\frac{e^{-TrMM^{T}}}{\pi^{N^{2}/2}}\prod_{k=1}^{2K}s_ {x_{k}}(M)\Lambda^{M}(dx_{k}). \tag{19}\] The integral in the right hand side can be computed recursively using the Householder transform [20]. The calculation is a direct generalisation of Edeleman's calculation of the eigenvalue density for the real Ginibre ensemble [11]. Let \(M\) be a real \(N\times N\) matrix with a real eigenvalue \(x\) and the corresponding eigenvector \(v\in S^{+}_{N-1}\), the upper half of the \(N-1\) dimensional unit sphere in \(\mathbb{R}^{N}\). Consider the following change of variables: \[M=P_{v}M^{e}P_{v} \tag{20}\] where \(P_{v}\) is the Householder transformation [20] that reflects in the hyperplane at right angles to the vector \(v-e_{N}\) (where \(e_{N}\) is the unit vector \((0,\ldots,0,1)\)), and \(M^{e}\) is a block matrix \[M^{e}=\left(\begin{array}{cc}M_{0}^{e}&0\\ w^{T}&x\end{array}\right) \tag{21}\] with \(M_{0}^{e}\) an \((N-1)\times(N-1)\) real matrix, \(w\in R^{N-1}\) and \(x\in\mathbb{R}\). The Jacobian of the Edelman's transformation (20) is \(|\det(M_{0}^{e}-xI)|\), see [11]. Let us perform the change of variables \(M\rightarrow(M^{e},v_{2K},x_{2K})\) in the integral (19), where \(x_{2K}\) is the eigenvalue of \(M\) lying in \(dx_{2K}\) and \(v_{2K}\) is the corresponding eigenvector. Integrating over the half sphere \(S^{+}_{N-1}\) and noticing the cancellation between the denominator \(|\det(M-x_{2K}I)|\) of the spin variable \(S_{M}(x_{2K})\) and the Jacobian \(|\det(M_{0}^{e}-x_{2K}I)|\) of (20), we obtain \[4^{K}\tilde{\rho}^{(N)}(x_{1},x_{2},\ldots,x_{2K})\,\prod_{k=1}^ {2K-1}dx_{k}\] \[= \frac{1}{2}|S_{N-1}|\pi^{-\frac{N-1}{2}}e^{-x_{2K}^{2}}\mathbb{E} _{N-1}\left[\det\left(M-x_{2K}I\right)\prod_{k=1}^{2K-1}s_{x_{k}}(M)\Lambda^{ M}(dx_{k})\right].\] A subsequent Edelman transform about the eigenvalue lying in \(dx_{2K-1}\) yields \[4^{K}\tilde{\rho}^{(N)}(x_{1},x_{2},\ldots,x_{2K})\,dx_{1}\ldots dx _{K-2}\] \[= \frac{1}{4}|S_{N-1}||S_{N-2}|\pi^{-\frac{N-1}{2}-\frac{N-2}{2}}e^{ -x_{2K}^{2}-x_{2K-1}^{2}}(x_{2K-1}-x_{2K})\] \[\qquad\mathbb{E}_{N-2}\left[\det\left(M-x_{2K}I\right)\det\left(M- x_{2K-1}I\right)\prod_{k=1}^{2K-2}s_{x_{k}}(M)\Lambda^{M}(dx_{k})\right].\] An application of further \((2K-2)\) Edelman transforms leads to the desired expression for the modified density: \[\tilde{\rho}^{(N)}(x_{1},x_{2},\ldots,x_{2K}) \tag{24}\] \[= \frac{\mathrm{V}(\mathbf{x})}{16^{K}}\prod_{k=1}^{2K}\left(|S_{N- k}|\pi^{-\frac{N-k}{2}}e^{-x_{k}^{2}}\right)\mathbb{E}_{N-K}\left[\prod_{m=1}^{2K }\det\left(M-x_{m}I\right)\right].\] Lemma 1 is proved with \[c_{1}(N,K)=\prod_{k=1}^{2K}\left(\frac{|S_{N-k}|\pi^{-\frac{N-k}{2}}}{4}\right).\] ### Theorem 1 The full proof of this theorem can be found in the appendix of [36], the main topic of which is the study of Brownian motion taking values in real matrices. 
Here we sketch the main steps of the proof. The integral representation for the expectation value of a product of characteristic polynomials can be derived for any Gaussian random matrix ensemble following [7], see [33] for the specific case of the real Ginibre ensemble. As a first step, the determinants are represented as Gaussian integrals over anti-commuting (Grassmann) variables. The integral with respect to the random matrix measure becomes Gaussian as well and can then be computed exactly. This leads to an integral representation for the expectation of a product of \(K\) characteristic polynomials as a Berezin integral with respect to \(O(KN)\) variables. The integrand is the exponential of a polynomial of the fourth degree in anti-commuting variables. Finally, the Berezin integral can be re-written as a bosonic integral over \(K(K-1)/2\) complex variables using a Hubbard Stratonovich transformation and also computed exactly. The answer is \[\mathbb{E}_{N}\left[\prod_{m=1}^{K}\det\left(M-x_{m}I\right)\right] \tag{25}\] \[= \prod_{1\leq p<q\leq K}\left[\int_{\mathbb{R}^{2}}\frac{dz_{pq}d \overline{z}_{pq}}{\pi}e^{-|z_{pq}|^{2}}\right]Pf\left(\begin{array}{cc} \frac{1}{\sqrt{2}}Z&X\\ -X&\frac{1}{\sqrt{2}}Z^{\dagger}\end{array}\right)^{N}.\] Here each \(dz_{pq}d\overline{z}_{pq}\) is shorthand for Lebesgue measure on \(\mathbb{R}^{2}\) and arises from repeated use of the Hubbard-Stratonovich transform; \(X=\mathrm{Diag}(x)\) with \(x\in\mathbb{R}^{K}\); and \(Z\) is a skew symmetric complex \(K\times K\) matrix. The right hand side of expression (25) can be re-written as a matrix integral: \[\pi^{-\frac{K(K-1)}{2}}\int_{Q^{(K)}}\lambda(dZ,dZ^{\dagger})e^{-\frac{1}{2} TrZZ^{\dagger}}Pf\left(\begin{array}{cc}\frac{1}{\sqrt{2}}Z&X\\ -X&\frac{1}{\sqrt{2}}Z^{\dagger}\end{array}\right)^{N}, \tag{26}\] where \(Q^{(K)}=\{Z\in\mathbf{C}^{K\times K}\mid Z^{T}=-Z\}\) is the space of skew-symmetric complex matrices and \(\lambda(dZ,dZ^{\dagger})\) is the Lebesgue measure on \(Q^{(K)}\) as described above. Note that the dimension of the integral in the right hand side of (26) is \(N\)-independent. The size of the original matrix only enters the integral as the power of the Pfaffian in the integrand. This allows one to calculate the large \(N\)-limit of (26) using the Laplace method. To facilitate the application of asymptotic methods, one rescales the integration variables \((Z,Z^{\dagger})\rightarrow\sqrt{N}(Z,Z^{\dagger})\) to arrive at \[\mathbb{E}_{N}\left[\prod_{m=1}^{K}\det\left(M-x_{m}\right)\right]=\pi^{- \frac{K(K-1)}{2}}2^{-\frac{NK}{2}}N^{\frac{NK}{2}}N^{\frac{K(K-1)}{2}}J_{N} \tag{27}\] where \[J_{N}=\int_{Q^{(K)}}\lambda(dZ,dZ^{\dagger})e^{-\frac{N}{2}TrZZ^{\dagger}}Pf \left(\begin{array}{cc}Z&\sqrt{\frac{2}{N}}X\\ -\sqrt{\frac{2}{N}}X&Z^{\dagger}\end{array}\right)^{N}\!\!\!. \tag{28}\] The integrand in \(J_{N}\) is now of the form \(\exp(NF_{N}(Z))\), where \(F_{N}\) is a slow function of \(N\) in the sense that \(F_{N}\) and its derivatives converge in the limit \(N\to\infty\). The main contribution to (28) for \(N\to\infty\) comes from the neighborhood of the points of global minimum of the function \[F_{\infty}(Z)=TrZZ^{\dagger}-\ln\det\left(ZZ^{\dagger}\right),\ Z\in Q^{(K)}. \tag{29}\] The global minimum value of \(F_{\infty}\) is \(K\) and it is attained on the set \[aU(K)=\{W\in Q^{(K)}\mid WW^{\dagger}=I\}. \tag{30}\] of the skew-symmetric unitary \(K\times K\) matrices. The set \(aU(K)\) is a smooth sub-manifold \(Q^{(K)}\). 
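As an aside, the finite-\(N\) identity (25) can be tested numerically before any asymptotic analysis. The sketch below (not from the text) compares the two sides by Monte Carlo for \(K=2\) and \(N=2\), writing the single Hubbard-Stratonovich variable as \(Z_{12}=z\); the test points \(x_{1},x_{2}\) are arbitrary choices, and for even \(N\) the overall sign convention of the Pfaffian drops out (for odd \(N\) it would have to be fixed consistently with (25)).

```python
import numpy as np

rng = np.random.default_rng(1)

def pf4(A):
    # Pfaffian of a 4x4 antisymmetric matrix.
    return A[0, 1] * A[2, 3] - A[0, 2] * A[1, 3] + A[0, 3] * A[1, 2]

def rhs_sample(x1, x2, N):
    # One sample of the integrand in (25) for K=2, with z drawn from the density e^{-|z|^2}/pi.
    z = (rng.normal() + 1j * rng.normal()) / np.sqrt(2.0)
    Z = np.array([[0.0, z], [-z, 0.0]])
    X = np.diag([x1, x2]).astype(complex)
    A = np.block([[Z / np.sqrt(2.0), X], [-X, Z.conj().T / np.sqrt(2.0)]])
    return (pf4(A) ** N).real

def lhs_sample(x1, x2, N):
    # det(M - x1) det(M - x2) for M ~ GinOE(N), entries N(0, 1/2) as dictated by (1).
    M = rng.normal(scale=np.sqrt(0.5), size=(N, N))
    return np.linalg.det(M - x1 * np.eye(N)) * np.linalg.det(M - x2 * np.eye(N))

x1, x2, N, n_mc = 0.3, -0.7, 2, 200_000
lhs = np.mean([lhs_sample(x1, x2, N) for _ in range(n_mc)])
rhs = np.mean([rhs_sample(x1, x2, N) for _ in range(n_mc)])
print("E_N[det(M-x1)det(M-x2)] (Monte Carlo):", lhs)
print("right hand side of (25) (Monte Carlo):", rhs)
```

We now return to the Laplace analysis of \(J_{N}\) around the critical set \(aU(K)\).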
It is also a non-degenerate critical set meaning that the Hessian of \(F_{\infty}\) has the maximal possible rank at every point of \(aU(K)\). Therefore we can use the standard multi-dimensional Laplace Theorem [12] to calculate the asymptotic expansion of \(J_{N}\). The final answer is \[J_{N}=e^{-\frac{NK}{2}}(2\pi N)^{-\frac{K(K-1)}{4}}\int_{aU(K)}\mu(dW)e^{Tr \left(W^{\dagger}XWX\right)}(1+O(N^{-1})), \tag{31}\] where \(\mu(dW)\) is the Haar measure on \(aU(K)\) which can be defined as the unique probability measure invariant with respect to the following transitive action of \(U(K)\) on \(aU(K)\): \[U(K)\times aU(K) \longrightarrow aU(K) \tag{32}\] \[(U,W) \mapsto UWU^{T}.\] Collecting together (24), (27) and (31) we find \[\tilde{\rho}^{(N)}(x_{1},x_{2},\ldots,x_{K})=c_{2}(N,K)\mathrm{V}(\mathbf{x}) \prod_{k=1}^{K}e^{-x_{k}^{2}}\int_{aU(K)}\mu(dW)e^{Tr\left(W^{\dagger}XWX \right)}\left(1+o(1)\right), \tag{33}\] where \[c_{2}(N,K) = C_{K}\prod_{k=1}^{K}\left(|S_{N-k}|\pi^{-\frac{N-k}{2}}\right) \pi^{-\frac{K(K-1)}{2}}2^{-\frac{(N-K)K}{2}}\] \[\qquad(N-K)^{\frac{(N-K)K}{2}}(N-K)^{\frac{K(K-1)}{2}}e^{-\frac{( N-K)K}{2}}(2\pi(N-K))^{-\frac{K(K-1)}{4}}\] and \(C_{K}>0\) denotes a \(K\)-dependent constant. It is lengthy but straightforward to check that \(c_{2}(N,K)\to c_{3}(K)>0\) as \(N\to\infty\) and hence that the limiting modified density \(\tilde{\rho}(x_{1},x_{2},\ldots,x_{K}):=\lim_{N\to\infty}\tilde{\rho}^{(N)}(x_{1},x_{2},\ldots,x_{K})\) exists and is given by \[\tilde{\rho}(x_{1},x_{2},\ldots,x_{K})=c_{3}(K)\mathrm{V}(\mathbf{x})\prod_{k=1 }^{K}e^{-x_{k}^{2}}\int_{aU(K)}\mu(dW)e^{Tr\left(W^{\dagger}XWX\right)}. \tag{35}\] By the spectral theorem for skew-symmetric unitary matrices, \(W\in aU(K)\) iff there is \(U\in U(K)\) such that \(W=UJU^{T}\), [27]. The pullback of the Haar measure on \(aU(K)\)is a Haar measure on \(U(K)\) under the map \(U\to W\). Under this map the integral (35) coincides with (1). Theorem 1 is proved. ### Proposition 1 #### 2.3.1 Equation Let us start with a general observation concerning solutions to the heat equation on linear spaces. Consider the canonical heat equation on a real \(n\)-dimensional vector space \(\mathbb{V}\), equipped with an inner product \(\langle\cdot,\cdot\rangle\): \[\left(\frac{\partial}{\partial t}-\frac{1}{2}\mathrm{div\ grad}\right)\phi_{t} (x)=0,\ \lim_{x\to\infty}\phi_{t}(x)=0. \tag{36}\] Using explicit coordinates on \(\mathbb{V}\), \(\mathrm{div\ grad}=\sum_{i,j=1}^{n}g^{ij}\frac{\partial}{\partial x_{i}}\frac{ \partial}{\partial x_{j}}\), where \((g^{ij})_{1\leq i,j\leq n}\) is the inverse of the matrix defining the inner product. The fundamental solution is \[\Phi_{t}(x)=\frac{1}{(2\pi t)^{n/2}}e^{-\frac{\langle x,x\rangle}{2t}}. \tag{37}\] Let \(P:\mathbb{V}\to\mathbb{V}\) be an orthogonal projection operator, that is \(P^{2}=P\), \(P=P^{T}\). Then it is straightforward to check that \[\Phi_{t}(x\mid P)=\frac{1}{(2\pi t)^{rank(P)/2}}e^{-\frac{\langle x,Px\rangle} {2t}}, \tag{38}\] also solves (36). This solution can be regarded as fundamental in the space of initial conditions constant on \(Ker(P)\). Let \(\{P_{g}\}_{g\in\mathbf{G}}\) be a (continuous) family of projection operators parameterized by points of a compact measure space \(\mathbf{G}\) with a finite measure \(\mu\). Then, by (38) and the linearity of the heat equation, the function \[\Phi_{t}(x\mid P,\mu)=\int_{\mathbf{G}}\frac{\mu(dg)}{(2\pi t)^{rank(P_{g})/2 }}e^{-\frac{\langle x,P_{g}x\rangle}{2t}}, \tag{39}\] also solves (36). 
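For completeness, the check that (38) solves (36) amounts to the following short computation, which uses only \(P^{2}=P\), \(P^{T}=P\) and \(\operatorname{Tr}P=\operatorname{rank}(P)\):

\[\partial_{t}\Phi_{t}(x\mid P)=\Big(-\frac{\operatorname{rank}(P)}{2t}+\frac{\langle x,Px\rangle}{2t^{2}}\Big)\Phi_{t}(x\mid P),\qquad\mathrm{grad}\,\Phi_{t}(x\mid P)=-\frac{1}{t}\,(Px)\,\Phi_{t}(x\mid P),\]

\[\frac{1}{2}\,\mathrm{div}\,\mathrm{grad}\,\Phi_{t}(x\mid P)=\frac{1}{2}\Big(-\frac{\operatorname{Tr}P}{t}+\frac{\langle Px,Px\rangle}{t^{2}}\Big)\Phi_{t}(x\mid P)=\partial_{t}\Phi_{t}(x\mid P),\]

since \(\langle Px,Px\rangle=\langle x,P^{2}x\rangle=\langle x,Px\rangle\). Integrating over \(g\in\mathbf{G}\) then shows that (39) solves (36) as well.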
Now consider an integral representation for the modified density \[\tilde{\rho}_{t}(X):=C_{K}\left(\sqrt{t}\right)^{-\frac{K(K+1)}{2}} \mathrm{V}(X)I_{t}(X), \tag{40}\] where \(X=\mathrm{Diag}(x)\) for \(x\in\mathbb{R}^{K}\) and \[I_{t}(X)=\prod_{k=1}^{K}e^{-x_{k}^{2}/t}\int_{aU(K)}\mu(dW)e^{ \frac{1}{t}Tr\left(W^{\dagger}XWX\right)}. \tag{41}\] The total power of \(\sqrt{t}\) corresponds to the diffusive rescaling of \(\tilde{\rho}(x)\prod_{k=1}^{K}dx_{k}\) from (11). By changing variables in the above integral, \(W\to UWU^{T}\), where \(U\) is a fixed unitary matrix, \[I_{t}(X)=\int_{aU(K)}\mu(dW)e^{-\frac{1}{t}Tr\left(-W^{\dagger} HWH^{T}+H^{2}\right)}, \tag{42}\] where \(H=UXU^{\dagger}\) is a Hermitian matrix with eigenvalues given by \(X\). The integral (42) can be further re-written as \[I_{t}(H)=\int_{aU(K)}\mu(dW)e^{-\frac{1}{t}Tr(HP_{W}(H))}, \tag{43}\] where \[P_{W}=I+W\otimes W^{\dagger}\circ\hat{T} \tag{44}\] is a linear operator acting in the space of \(K\times K\) Hermitian matrices, \(\hat{T}\) is the operator of transposition; the action of the linear operator \(A\otimes B\) is defined according to the formula \[A\otimes B(C)=ACB^{T}\] The operator \(P_{W}\) is proportional to a projector operator: for any Hermitian matrix \(H\), \[P_{W}^{2}(H)=H+2W\otimes W^{\dagger}\circ\hat{T}(H)+W(WH^{T}W^{ \dagger})^{T}W^{\dagger}=2P_{W}(H), \tag{45}\] where we used \(WW^{\dagger}=I\) and \(W^{T}=-W\). So \(\frac{P_{W}}{2}\) is a projection operator, which is also self-adjoint, and we can apply the theory outlined in the beginning of the subsection by identifying \(\mathbb{V}\) with the real vector space of \(K\times K\) Hermitian matrices with the inner product given by \((A,B)\mapsto TrAB\). As \[rank\left(\frac{P_{W}}{2}\right)=\frac{1}{2}TrP_{W}=\frac{1}{2}(K^{2}+K),\] we can conclude that \[\left(\frac{\partial}{\partial t}-\frac{1}{8}\Delta_{H}\right)t^{-\frac{K(K+1 )}{4}}I_{t}(H)=0 \tag{46}\] Recall that \(I_{t}(H)=I_{t}(UXU^{\dagger})=I_{t}(X)\) due to the \(U(K)\)-invariance. Therefore, the above equation reduces to the 'radial' form well used in random matrix theory, \[\left(\frac{\partial}{\partial t}-\frac{1}{8}\sum_{k=1}^{K}\frac{1}{\mathrm{V} ^{2}(x)}\frac{\partial}{\partial x_{k}}\mathrm{V}^{2}(x)\frac{\partial}{ \partial x_{k}}\right)t^{-\frac{K(K+1)}{4}}I_{t}(X)=0. \tag{47}\] The same equation is satisfied by the Itzykson-Zuber integral albeit for a different choice of initial conditions. Using this equation for \(I_{t}\) in (40) we find the modified density \(\tilde{\rho}\) satisfies the canonical 'flat' heat equation: \[\left(\frac{\partial}{\partial t}-\frac{1}{8}\sum_{k=1}^{K}\frac{\partial^{2 }}{\partial x_{k}^{2}}\right)\tilde{\rho}_{t}(X)=0,\ t>0,x\in\mathbb{R}^{K}. \tag{48}\] #### 2.3.2 Initial conditions Without loss of generality, due to the permutation symmetry of \(\tilde{\rho}_{t}\), we can assume that \(x\in W^{(K)}\). The initial condition for (48) follows from the asymptotic analysis of the integral (41) in the limit \(t\downarrow 0\). Our aim is to prove that \[\tilde{\rho}_{0+}(x)=C_{K}\prod_{k=1}^{K/2}\delta^{\prime}(x_{2k}-x_{2k-1}), \tag{49}\] where \(\delta^{\prime}\) is the derivative of the Dirac's delta function understood in the distributional sense. The proof should be read after the proof of Theorem 3 in Section 2.5, where all the relevant notions used in the current section are defined. The set of the critical points for the integral in (41) is given by (67), which is valid for any \(t\in\mathbb{C}\setminus\{0\}\). 
But, unlike the imaginary case, the asymptotics for \(t>0\) is determined by the global maximum of the function \(F_{X}:aU(K)\rightarrow\mathbb{R}\) given by \(F_{X}(W)=Tr(W^{\dagger}XWX)\). Note that \(F_{X}\) is constant on each torus \(\mathbb{T}_{\sigma}\), that is, it does not depend on the phase \(\Phi\). We claim that the global maximum of \(F_{X}\) is achieved on \(\mathbb{T}_{\pi_{0}}\) for the matching \(\pi_{0}=(1,2)(3,4)\ldots(K-1,K)\). Let us prove this claim by contradiction. Assume that the global maximum is reached at a matching \(\pi_{1}\neq\pi_{0}\), \[\pi_{1}=(i_{1},j_{1})(i_{2},j_{2})\ldots(i_{K/2},j_{K/2}),\] where \(i_{k}<i_{l}\) for \(k<l\) and \(i_{k}<j_{k}\) for any \(k,l\) between \(1\) and \(K/2\). Notice that using the \((i,j)\) notations, the matching \(\pi_{0}\) can be uniquely characterised by the following property: \(j_{k}<i_{l}\) for any \(k<l\). As \(\pi_{1}\neq\pi_{0}\), there exists \(k<l\) such that \(j_{k}>i_{l}\); see Fig. 1 for the illustration of the corresponding matchings. Then \[F_{X}(P_{\pi_{1}})=F_{0}+2(x_{i_{l}}x_{j_{l}}+x_{i_{k}}x_{j_{k}}),\] where \(F_{0}\) is the part of \(F_{X}\) which does not depend on \(x_{i_{l}},x_{j_{l}},x_{i_{k}},x_{j_{k}}\). Let \(\tilde{\pi}_{1}\) be a matching obtained from \(\pi_{1}\) by rematching the indices \(i_{k},j_{k},i_{l},j_{l}\) as follows: \((i_{k},j_{k})(i_{l},j_{l})\rightarrow(i_{l},i_{k})(j_{k},j_{l})\). Then \[F_{X}(P_{\tilde{\pi}_{1}})=F_{0}+2(x_{i_{l}}x_{i_{k}}+x_{j_{l}}x_{j_{k}}),\] and \[F_{X}(P_{\tilde{\pi}_{1}})-F_{X}(P_{\pi_{1}})=2(x_{i_{l}}-x_{j_{k}})(x_{i_{k}}-x_{j_{l}})>0 \tag{50}\] due to the ordering of the \(x\)'s, \(x_{m}>x_{n}\) for \(m>n\). Here we used the assumed inequality \(j_{k}>i_{l}\) and the observation \(i_{k}<i_{l}<j_{l}\). The inequality (50) contradicts our assumption that the global maximum of \(F_{X}\) is achieved at \(\pi_{1}\). We conclude that the global maximum of \(F_{X}\) is achieved on the critical manifold \(\mathbb{T}_{\pi_{0}}\) where it is constant with the value \[F_{max}=2\sum_{k=1}^{K/2}x_{2k-1}x_{2k}. \tag{51}\] The principal asymptotics of the integral (41) is given by the integral of the exponential of the second order expansion of the function \(F\) in the vicinity of \(\mathbb{T}_{\pi_{0}}\) over the total space of the normal bundle \(N(\mathbb{T}_{\pi_{0}})\). The fiber of the normal bundle at \(W\in\mathbb{T}_{\pi_{0}}\) is defined by the orthogonal decomposition \(T_{W}(aU(K))=T_{W}(\mathbb{T}_{\pi_{0}})\oplus N_{W}(\mathbb{T}_{\pi_{0}})\) using the Hermitian inner product \(tr(A^{\dagger}B)\) inherited from the space of all skew-symmetric complex matrices. This part of the proof is completely analogous to the computation carried out in Section 2.5. Expanding the exponent \(F\) near a point of the global maximum \(W_{c}=P_{\pi_{0}}e^{i\Phi}\), we find \[F_{X}(P_{\pi_{0}}+\xi)=F_{max}+\frac{1}{2}\sum_{m,n=1}^{K}|\xi_{mn}|^{2}(X_{m}-X_{\pi_{0}(n)})(X_{n}-X_{\pi_{0}(m)})+O(\xi^{3}). \tag{52}\]

Figure 1: Two possible matchings with \(j_{k}>i_{l}\) for \(k<l\): (i) single crossing; (ii) double crossing.

The above quadratic form is degenerate due to the degeneracy of the critical manifold \(\mathbb{T}_{\pi_{0}}\).
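The maximisation claim proved above can also be confirmed by brute force for small \(K\). The following enumeration over all perfect matchings is ours (not part of the paper); indices are \(0\)-based, so the expected maximiser is the nearest-neighbour matching \((0,1)(2,3)\ldots\)

```python
import numpy as np

# Brute-force check (ours) that the nearest-neighbour matching maximises
# F_X over matchings when x_1 < x_2 < ... < x_K  (K even).
def matchings(items):
    if not items:
        yield []
        return
    first, rest = items[0], items[1:]
    for k, partner in enumerate(rest):
        for m in matchings(rest[:k] + rest[k + 1:]):
            yield [(first, partner)] + m

K = 8
rng = np.random.default_rng(1)
x = np.sort(rng.normal(size=K))                 # ordered sample points

def F(match):                                   # value of F_X on the torus of a matching
    return 2 * sum(x[i] * x[j] for i, j in match)

best = max(matchings(list(range(K))), key=F)
print(sorted(best))                             # [(0, 1), (2, 3), (4, 5), (6, 7)]
```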
It follows from the characterisation (66) of the tangent space at \(W_{c}\) that for \(\pi_{0}=(1,2)(3,4)\ldots(K-1,K)\), the normal space \(N_{W_{c}}(\mathbb{T}_{\pi_{0}})\) is parameterised by the complex co-ordinates \[\left(\xi_{2k-1,2l-1},\xi_{2k-1,2l}\right)_{1\leq k<l\leq K/2}.\] In terms of these co-ordinates, the restriction of the quadratic form (69) to \(N_{W_{c}}(\mathbb{T}_{\pi_{0}})\) is negative definite: \[F_{X}(P_{\pi_{0}}+\xi)=F_{max}+2\sum_{1\leq k<l\leq K/2}|\xi_{2k-1,2l-1}|^{2}(X_{2k-1}-X_{2l})(X_{2l-1}-X_{2k})\] \[+2\sum_{1\leq k<l\leq K/2}|\xi_{2k-1,2l}|^{2}(X_{2k-1}-X_{2l-1})(X_{2l}-X_{2k})+O(\xi^{3}). \tag{53}\] In the basis described above, the matrix of second derivatives of \(F\) evaluated at the critical point is diagonal. The square root of its determinant is equal to \[\mathrm{V}(X)\left(\prod_{k=1}^{K/2}(x_{2k}-x_{2k-1})\right)^{-1}.\] We conclude that \[I_{t}(X)\stackrel{{ t\to 0}}{{\sim}}C_{K}\frac{\prod_{k=1}^{K/2}(x_{2k}-x_{2k-1})}{\mathrm{V}(X)}e^{+\frac{2}{t}\sum_{k=1}^{K/2}x_{2k-1}x_{2k}}e^{-\frac{1}{t}\sum_{k=1}^{K}x_{k}^{2}}, \tag{54}\] where \(C_{K}\) is a non-zero constant. Substituting (54) into (40) one finds that the \(t\downarrow 0\) limit of \(\tilde{\rho}_{t}\) exists in the distributional sense and is equal to \[\tilde{\rho}_{0+}(x)=C_{K}\prod_{k=1}^{K/2}\delta^{\prime}(x_{2k}-x_{2k-1}). \tag{55}\]

### Theorem 2

Using the one-dimensional heat kernel \[g_{t}(x)=\frac{1}{\sqrt{\pi t/2}}e^{-\frac{2}{t}x^{2}}, \tag{56}\] the equation (14) has the unique solution \[\tilde{\rho}_{t}(y)=C_{K}\int_{\mathbb{R}^{K}}dx_{1}dx_{2}\ldots dx_{K}\prod_{k=1}^{K}g_{t}(y_{k}-x_{k})\prod_{k=1}^{K/2}\delta^{\prime}(x_{2k}-x_{2k-1}). \tag{57}\] For any permutation \(\sigma\in\Sigma_{K}\) we have, due to the Vandermonde \(V(y)\) in its definition, \[\tilde{\rho}_{t}(y_{\sigma_{1}},\ldots,y_{\sigma_{K}})=\mbox{sign}(\sigma)\tilde{\rho}_{t}(y). \tag{58}\] Thus \[\tilde{\rho}_{t}(y) = \frac{C_{K}}{K!}\sum_{\sigma\in\Sigma_{K}}\mbox{sign}(\sigma)\int_{\mathbb{R}^{K}}dx_{1}dx_{2}\ldots dx_{K}\prod_{k=1}^{K}g_{t}(y_{\sigma_{k}}-x_{k})\prod_{k=1}^{K/2}\delta^{\prime}(x_{2k}-x_{2k-1})\] \[= \frac{C_{K}}{K!}\sum_{\sigma\in\Sigma_{K}}\mbox{sign}(\sigma)\prod_{k=1}^{K/2}\int_{\mathbb{R}^{2}}g_{t}(y_{\sigma_{2k}}-x)g_{t}(y_{\sigma_{2k-1}}-x^{\prime})\delta^{\prime}(x-x^{\prime})dxdx^{\prime}\] \[= \frac{C_{K}}{K!}\mathop{\mbox{Pfaff}}_{1\leq i,j\leq K}\left[\frac{\partial}{\partial y_{i}}g_{2t}(y_{i}-y_{j})\right].\] The derivation of the Pfaffian here is analogous to that in the de Bruijn formula [8]. Specializing to \(t=1\), we conclude for \(x_{1}<\ldots<x_{K}\) that \[\tilde{\rho}(x_{1},x_{2},\ldots,x_{K})=C_{K}\,\mbox{Pfaff}\left[(x_{i}-x_{j})e^{-(x_{i}-x_{j})^{2}}:1\leq i<j\leq K\right]. \tag{59}\] The correlation functions of spins can be formally computed by integrating \(\tilde{\rho}\) with respect to the space variables: for \(x_{1}<\ldots<x_{K}\) \[\mathbb{E}\left[\prod_{k=1}^{K}s_{x_{k}}(M)\right]=(-2)^{K}\left(\prod_{k=1}^{K}\int_{-\infty}^{x_{k}}dy_{k}\right)\tilde{\rho}(y_{1},y_{2},\ldots,y_{K}). \tag{60}\] This leads to the spin-spin correlation function: for \(x_{1}<\ldots<x_{K}\) \[\mathbb{E}\left[\prod_{k=1}^{K}s_{x_{k}}(M)\right]=C_{K}\,\mbox{Pfaff}\left[\int_{x_{j}-x_{i}}^{\infty}e^{-z^{2}}dz:1\leq i<j\leq K\right], \tag{61}\] which coincides with the correlation function of spins in the continuous limit of the kinetic Glauber spin chain; this justifies our choice of terminology.
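The two-point building block of this computation can be checked numerically. The script below is ours (not part of the paper): it confirms that convolving two heat kernels against \(\delta^{\prime}(x-x^{\prime})\) reproduces the derivative of \(g_{2t}\), i.e. the entries of the Pfaffian in the display above, up to the sign convention chosen for \(\delta^{\prime}\).

```python
import numpy as np

# Numerical check (ours) of the two-point identity behind the Pfaffian:
#   ∫∫ g_t(y1-x) g_t(y2-x') δ'(x-x') dx dx'  =  ∂/∂y1 g_{2t}(y1-y2)
# up to the sign convention for δ', with g_t(x) = (π t/2)^{-1/2} exp(-2x²/t).
def g(t, x):
    return np.exp(-2.0 * x**2 / t) / np.sqrt(np.pi * t / 2.0)

t, y1, y2 = 1.0, 0.3, -0.7
x = np.linspace(-15.0, 15.0, 300001)
dx = x[1] - x[0]

# δ'(x-x') collapses the double integral to a single integral with d/dx inside
lhs = np.sum(g(t, y1 - x) * np.gradient(g(t, y2 - x), dx)) * dx

h = 1e-6
rhs = (g(2*t, y1 + h - y2) - g(2*t, y1 - h - y2)) / (2*h)
print(lhs, rhs)        # the two numbers agree up to discretisation error
```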
The constants \(C_{K}\) can be found inductively in \(K\) by allowing \(x_{2k}\downarrow x_{2k-1}\), and noting that \(\mathbb{E}[s_{x_{1}}(M)s_{x_{1}}(M)]=1\). This yields \(C_{K}=(4/\pi)^{K/4}\). Finally, substituting (61) into formula (5) and performing the differentiation explicitly, we find \[\rho(x)=\underset{1\leq i,j\leq K}{\text{Pfaff}}\,\big{[}H(x_{j}-x_{i})\big{]}, \tag{62}\] where \[H(z)=\left(\begin{array}{cc}-F^{\prime\prime}(z)&-F^{\prime}(z)\\ F^{\prime}(z)&\text{sgn}(z)F(|z|)\end{array}\right) \tag{63}\] and \[F(z)=\pi^{-1/2}\int_{z}^{\infty}e^{-x^{2}}dx. \tag{64}\] Here \(\text{sgn}(z)=1\) for \(z>0\), \(\text{sgn}(z)=-1\) for \(z<0\) and \(\text{sgn}(0)=0\).

### Theorem 3

Our aim is to calculate the leading term in the small-\(t\) asymptotic of \[I_{it}(X)=\prod_{k=1}^{2K}e^{-\frac{1}{it}x_{k}^{2}}\int_{aU(2K)}\mu(dW)e^{+\frac{1}{it}F_{X}(W)}, \tag{65}\] where \(F_{X}(W)=Tr(W^{\dagger}XWX)\) and \(X=\text{Diag}(x)\) with \(x\in\mathbb{R}^{2K}\) and \(x_{i}\neq x_{j}\) for \(i\neq j\). Due to the permutation invariance of \(F_{X}\), we may assume that \(x_{i}<x_{j}\) for \(i<j\). The small-\(t\) asymptotics of the integral (65) can be found by adapting the standard multi-dimensional stationary phase method [13] as follows. The main contribution to the integral for \(t\to 0\) comes from the vicinity of the critical points of the function \(F_{X}:aU(2K)\rightarrow\mathbb{R}\). Its calculation requires the knowledge of the set of the critical points and the expansion of \(F_{X}\) around this set. To derive the critical point equation we need a parameterisation of the tangent space \(T_{W}(aU(2K))\) at \(W\in aU(2K)\): \[T_{W}(aU(2K))=\{\xi\in\mathbb{C}^{4K^{2}}\mid\xi^{T}=-\xi;\;W\xi^{\dagger}W=-\xi\}. \tag{66}\] In other words, the tangent space is spanned by skew-symmetric, \(W\)-anti-self-dual \(2K\times 2K\) complex matrices, where the \(W\)-dual of a complex matrix \(\alpha\) is \(\alpha^{R}:=W\alpha^{\dagger}W\). The critical point condition is the vanishing of the directional derivative of \(F_{X}\), \(D_{\xi}F_{X}(W)=0\). Explicitly this leads to \[[X,WXW^{\dagger}]=0.\] As \(X\) is diagonal with distinct diagonal entries, the vanishing of the commutator \([X,WXW^{\dagger}]\) means that the matrix \(WXW^{\dagger}\) is diagonal, \[WXW^{\dagger}=D.\] As \(D\) is similar to \(X\), its diagonal entries are a permutation of the diagonal entries of \(X\). Therefore, \(W_{0}\) is a critical point of \(F_{X}\) if \[W_{0}=P_{\sigma}e^{i\Phi},\] where \(P_{\sigma}\) is the permutation matrix corresponding to the permutation \(\sigma\), an element of the permutation group \(S(2K)\), and \(\Phi\) is a diagonal real matrix. As it is easy to check, the skew-symmetry condition \(W=-W^{T}\) implies that:

1. The permutation \(\sigma\) is a product of two-cycles, meaning that \((P_{\sigma})_{ii}=0\) for any \(i=1,2,\ldots,2K\) and \(P_{\sigma}=P_{\sigma}^{T}\); the elements of the set \(M(2K)\subset S(2K)\) satisfying these conditions are called matchings.

2. \(\Phi_{\sigma(i)\sigma(i)}=\Phi_{ii}+\pi\) for any \(i=1,2,\ldots,2K\).

We conclude that the set of the critical points of the function \(F_{X}\) is \[{\bf C}=\cup_{\sigma\in M(2K)}{\mathbb{T}}_{\sigma}, \tag{67}\] where \[{\mathbb{T}}_{\sigma}=\{P_{\sigma}e^{i\Phi};\;e^{i\Phi}\in U(1)^{2K}:\Phi_{\sigma(i)\sigma(i)}=\Phi_{ii}+\pi\}\subset aU(2K).\] In other words, \({\bf C}\) is a union of non-intersecting \(K\)-dimensional tori.
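This parameterisation is easy to probe numerically. The following check is ours (not part of the paper): it builds \(W=P_{\sigma}e^{i\Phi}\) for a matching \(\sigma\) of six points with phases obeying the constraint above, and confirms that \(W\) is skew-symmetric and unitary, that \(WXW^{\dagger}\) is diagonal (so the critical point equation holds), and that \(F_{X}(W)=\sum_{i}x_{\sigma(i)}x_{i}\) independently of the free phases.

```python
import numpy as np

# Check (ours) of the critical-point parameterisation W = P_sigma e^{i Phi},
# with Phi_{sigma(i)sigma(i)} = Phi_{ii} + pi, for the matching (1,4)(2,5)(3,6)
# of {1,...,6} (written 0-based below).  Here 2K = 6.
twoK = 6
pairs = [(0, 3), (1, 4), (2, 5)]
sigma = {}
for i, j in pairs:
    sigma[i], sigma[j] = j, i

rng = np.random.default_rng(0)
x = np.sort(rng.normal(size=twoK))           # distinct, ordered sample points
X = np.diag(x)

P = np.zeros((twoK, twoK))
for c in range(twoK):
    P[sigma[c], c] = 1.0                     # permutation matrix of sigma

def W_of(phases):                            # one free phase per 2-cycle
    phi = np.zeros(twoK)
    for (i, j), a in zip(pairs, phases):
        phi[i], phi[j] = a, a + np.pi
    return P @ np.diag(np.exp(1j * phi))

def F(M):
    return np.trace(M.conj().T @ X @ M @ X).real

W = W_of(rng.uniform(0, 2 * np.pi, 3))
D = W @ X @ W.conj().T
print(np.allclose(W.T, -W))                              # skew-symmetric
print(np.allclose(W @ W.conj().T, np.eye(twoK)))         # unitary
print(np.allclose(X @ D - D @ X, 0))                     # [X, W X W†] = 0

vals = [F(W_of(rng.uniform(0, 2 * np.pi, 3))) for _ in range(5)]
print(np.allclose(vals, vals[0]),                        # constant on the torus
      np.isclose(vals[0], sum(x[sigma[i]] * x[i] for i in range(twoK))))
```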
The restriction of the exponent \(F_{X}\) to the critical manifold \({\mathbb{T}}_{\sigma}\) is a constant equal to \[F_{X}(\sigma)=\sum_{i=1}^{2K}x_{\sigma(i)}x_{i}. \tag{68}\] The second order Taylor expansion of \(F_{X}\) in the vicinity of the critical manifold is given, for \(W(\sigma,\Phi)+\xi\in aU(2K)\), by \[F_{X}(W(\sigma,\Phi)+\xi)=F_{X}(\sigma)+\frac{1}{2}\sum_{m,n=1}^{2K}|\xi_{mn}|^{2}(x_{m}-x_{\sigma(n)})(x_{n}-x_{\sigma(m)})+O(\xi^{3}). \tag{69}\] The quadratic form defining the second order term is constant on \(\mathbb{T}_{\sigma}\). It is also degenerate as the set of the critical points \(\mathbb{T}_{\sigma}\) is not isolated. A calculation shows that \(F_{X}(W(\sigma,\Phi)+\xi)=F_{X}(\sigma)+O(\xi^{3})\) if \(\xi\in T_{W}(\mathbb{T}_{\sigma})\subset T_{W}(aU(2K))\) for any \(W\in\mathbb{T}_{\sigma}\). The critical manifold is non-degenerate in the sense that the quadratic form entering (69) has the maximal rank, see below. The small-\(t\) asymptotic of the integral \(I_{it}(X)\) is given by the sum of contributions from each of the tori \(\mathbb{T}_{\sigma}\). Each contribution is equal to the volume of \(\mathbb{T}_{\sigma}\) multiplied by a Gaussian integral over the normal space to \(\mathbb{T}_{\sigma}\) at any point on the torus, say \(P_{\sigma}\). The normal space \(N_{P_{\sigma}}(\mathbb{T}_{\sigma})\) is defined by the orthogonal decomposition \(T_{P_{\sigma}}(aU(2K))=T_{P_{\sigma}}(\mathbb{T}_{\sigma})\oplus N_{P_{\sigma}}(\mathbb{T}_{\sigma})\) obtained using the Hermitian inner product \(Tr(A^{\dagger}B)\) inherited from the space of all skew-symmetric complex matrices. The volumes of different tori are equal due to the \(U(2K)\)-invariance. The geometry of the integration space is illustrated in Fig. 2. As the problem is reduced to integrals over linear spaces, standard stationary phase formulae apply, see for example [13], Chapter III. So, using the results collected above, we conclude that \[I_{it}=C_{K}t^{K(K-1)}\prod_{k=1}^{2K}e^{-\frac{1}{it}x_{k}^{2}}\sum_{\sigma\in M(2K)}\frac{e^{+\frac{1}{it}\sum_{i=1}^{2K}x_{\sigma(i)}x_{i}+\frac{i\pi}{4}\mbox{sig}[\partial\otimes\partial F_{X}(\sigma)]}}{\sqrt{|\det[\partial\otimes\partial F_{X}(\sigma)]|}}(1+O(t^{\frac{1}{2}})), \tag{70}\] where \(\partial\otimes\partial F_{X}(\sigma)\) is the restriction of the Hessian of \(F_{X}\) at \(\sigma\) to the normal space, and \(\mbox{sig}\) is its signature, both of which we will compute using (69). The power of \(\sqrt{t}\) is equal to the dimension of the normal space, which is in turn equal to \(\dim aU(2K)-\dim\mathbb{T}_{\sigma}=K(2K-1)-K=2K(K-1)\). The final step is the calculation of the determinant and the signature of the Hessian. The expression for \(\sigma\in M(2K)\) in cycle notation is \[\sigma=(i_{1}j_{1})(i_{2}j_{2})\ldots(i_{K}j_{K}),\] where \(i_{k}<j_{k}\), \(1\leq k\leq K\) and \(i_{k}<i_{l}\) for \(1\leq k<l\leq K\). Using the characterization (66) of the tangent space at \(P_{\sigma}\), the normal space \(N_{P_{\sigma}}(\mathbb{T}_{\sigma})\) can be parameterised by the complex co-ordinates \[\left(\xi_{i_{k},i_{l}},\xi_{i_{k},j_{l}}\right)_{1\leq k<l\leq K}.\] The restriction of the quadratic form (69) to \(N_{P_{\sigma}}(\mathbb{T}_{\sigma})\) takes the form \[F_{X}(P_{\sigma}+\xi)-F_{X}(\sigma)=2\sum_{1\leq k<l\leq K}|\xi_{i_{k},i_{l}}|^{2}(x_{i_{k}}-x_{j_{l}})(x_{i_{l}}-x_{j_{k}})\] \[+2\sum_{1\leq k<l\leq K}|\xi_{i_{k},j_{l}}|^{2}(x_{i_{k}}-x_{i_{l}})(x_{j_{l}}-x_{j_{k}})+O(\xi^{3}). \tag{71}\]
In the basis described above, the Hessian \(\partial\otimes\partial F_{X}(\sigma)\) is diagonal. The square root of the modulus of its determinant is equal to \[2^{K(K-1)}\big{|}\prod_{k<l}(x_{j_{l}}-x_{i_{k}})(x_{i_{l}}-x_{j_{k}})(x_{i_{l}}-x_{i_{k}})(x_{j_{l}}-x_{j_{k}})\big{|}=\frac{2^{2K(K-1)}{\rm V}(X)}{\prod_{k=1}^{K}(x_{j_{k}}-x_{i_{k}})}, \tag{72}\]

Figure 2: The sketch of the integration space for the asymptotic evaluation of the integral (65).

where the ordering \(x_{1}<x_{2}<\ldots<x_{2K}\) can be used to show that the r.h.s. is positive. To calculate the signature of the Hessian, notice that each summand in (71) corresponds to an eigenvalue with multiplicity two, because the \(\xi\)'s are complex. The eigenvalue \((x_{i_{k}}-x_{j_{l}})(x_{i_{l}}-x_{j_{k}})\), where \(k<l\), is positive if \(j_{k}>i_{l}\). The eigenvalue \((x_{i_{k}}-x_{i_{l}})(x_{j_{l}}-x_{j_{k}})\), where \(k<l\), is positive if \(j_{k}>j_{l}\). In each of these cases the eigenvalue is positive precisely when the corresponding pair of indices forms an inversion of the permutation \(\pi(\sigma)\) which maps \((1,2,\ldots,2K)\) to \((i_{1},j_{1},i_{2},j_{2},\ldots,i_{K},j_{K})\). Let us stress that here we are treating \(\sigma\) as a matching, not as a permutation. We conclude that the total number of positive eigenvalues of the Hessian is equal to twice the total number of inversions in the permutation \(\pi(\sigma)\), which we denote by \(\mbox{inv}(\pi(\sigma))\). Then \[\mbox{sig}[\partial\otimes\partial F_{X}(\sigma)] = \#\mbox{pos. eigenvalues}-\#\mbox{neg. eigenvalues} \tag{73}\] \[= 2\ \#\mbox{pos. eigenvalues}-2K(K-1)\] \[= 4\,\mbox{inv}(\pi(\sigma))-2K(K-1),\] where \(2K(K-1)\) is the dimension of the normal space. Finally, let us notice that \(\exp(i\pi\,\mbox{inv}(\pi(\sigma)))=\mbox{sign}(\pi(\sigma))\), the sign of the permutation \(\pi(\sigma)\). Using this observation and substituting (72), (73) into (70) one finds \[I_{it} = C_{K}t^{K(K-1)}\prod_{k=1}^{2K}e^{-\frac{1}{it}x_{k}^{2}}\sum_{\sigma\in M(2K)}\mbox{sign}(\pi(\sigma))\frac{\prod_{k=1}^{K}(x_{j_{k}}-x_{i_{k}})e^{+\frac{2}{it}x_{i_{k}}x_{j_{k}}}}{\mbox{V}(X)}(1+O(t^{\frac{1}{2}})) \tag{74}\] \[= C_{K}\frac{\mbox{Pfaff}_{1\leq i<j\leq 2K}[\frac{(x_{j}-x_{i})}{\sqrt{t}}\exp[-\frac{1}{it}(x_{i}-x_{j})^{2}]]}{\mbox{V}(X/\sqrt{t})}(1+O(\sqrt{t})),\] where the definition of the Pfaffian was used in the last step. Comparing this answer with the exact result (17), we conclude that the constant \(C_{K}\), which we have not tracked explicitly, must agree with the exact answer for all \(K\geq 1\), and that the error term \(O(\sqrt{t})\) in this stationary phase expansion is in fact identically zero. The asymptotic exactness is proved.

**Acknowledgements.** Our research was partially supported by EPSRC and the Leverhulme Trust. We are grateful to Yan Fyodorov and Gernot Akemann for many inspiring discussions.
2301.05843
Leveraging Large Language Models to Power Chatbots for Collecting User Self-Reported Data
Large language models (LLMs) provide a new way to build chatbots by accepting natural language prompts. Yet, it is unclear how to design prompts to power chatbots to carry on naturalistic conversations while pursuing a given goal, such as collecting self-report data from users. We explore what design factors of prompts can help steer chatbots to talk naturally and collect data reliably. To this aim, we formulated four prompt designs with different structures and personas. Through an online study (N = 48) where participants conversed with chatbots driven by different designs of prompts, we assessed how prompt designs and conversation topics affected the conversation flows and users' perceptions of chatbots. Our chatbots covered 79% of the desired information slots during conversations, and the designs of prompts and topics significantly influenced the conversation flows and the data collection performance. We discuss the opportunities and challenges of building chatbots with LLMs.
Jing Wei, Sungdong Kim, Hyunhoon Jung, Young-Ho Kim
2023-01-14T07:29:36Z
http://arxiv.org/abs/2301.05843v2
# Leveraging Large Language Models to Power Chatbots for Collecting User Self-Reported Data

###### Abstract.

Large language models (LLMs) provide a new way to build chatbots by accepting natural language prompts. Yet, it is unclear how to design prompts to power chatbots to carry on naturalistic conversations while pursuing a given goal such as collecting self-report data from users. We explore what design factors of prompts can help steer chatbots to talk naturally and collect data reliably. To this aim, we formulated four prompt designs with different structures and personas. Through an online study (\(N=48\)) where participants conversed with chatbots driven by different designs of prompts, we assessed how _prompt designs_ and _conversation topics_ affected the conversation flows and users' perceptions of chatbots. Our chatbots covered 79% of the desired information slots during conversations, and the designs of prompts and topics significantly influenced the conversation flows and the data collection performance. We discuss the opportunities and challenges of building chatbots with LLMs.

conversational agents, chatbots, large language models, dialogues

+ Footnote †: Jing Wei conducted this work as a research intern at NAVER AI Lab.
## 1. Introduction

to help people develop long-term adoptions for health monitoring [23, 26, 72]. However, existing commercial chatbot frameworks, such as Dialogflow [29] and Amazon Alexa [2], predominantly only support building rule-based and scripted chatbots [64, 96]. Lacking flexible flows, these chatbots usually appear robotic and unnatural [66] and may cause boredom in long-term deployments. On the other hand, implementing chatbots that can have more diverse and dynamic conversations requires large and specific domain datasets [72]. For example, the open-domain chatbot Meena was trained on 341GB of dialogue sessions [1]. Creating such large datasets is costly, so the datasets are often proprietary and inaccessible to the public. Furthermore, most demonstrations of open-ended chatbots focus on performing free-form conversations on general topics and do not support end-user customizations. Little research has been done to explore low-effort bootstrapping ways to build chatbots that can effectively perform pre-defined tasks, such as inquiring people about their health information, and carry on naturalistic conversations at the same time. Recent Large Language Models (LLMs; _e.g._, GPT-3 [14], PaLM [19], OPT [107], HyperCLOVA [37]), with billions of parameters pre-trained on a large amount of language corpora, provide a new way to build chatbots that can not only effectively ask people for specified information but also converse naturally and flexibly. LLMs work differently from traditional task-specific models as they accept _prompts_ written in natural language as input to perform various tasks [14, 60].
Taking prompts that include the conversation history, LLMs can generate the following responses accordingly without any training data, thereby functioning as a chatbot. Compared to other frameworks, LLMs show great potential in scaffolding chatbots that are aware of context and can even respond to off-topic user messages [94]. Further, since LLMs operate on natural language inputs, people can have the opportunity to personalize or even build their own chatbots. This feature can be especially useful for people without programming experience, such as medical practitioners [23]. Despite these potentials, it is not yet fully understood how LLMs _read_ the prompt and _use_ pretrained knowledge [14, 59]; the development of prompts is usually conducted through iterative trial and error [61]. While the HCI community has actively explored the use of LLMs in various domains (_e.g._, [20, 49, 104]), research that leverages LLMs for powering chatbots, particularly task-oriented ones [9, 67, 94], is still sparse. Due to the inherent characteristics of LLMs, LLM-driven chatbots may be error-prone [44] or digress from their tasks [94]. Designing robust prompts is crucial for "restricting" chatbots to conduct desired tasks. In this study, we investigate how LLMs can power chatbots to collect user self-reports while carrying on naturalistic conversations. Towards this aim, we built a set of chatbots (Figure 1) that run on GPT-3 [14] and converse to collect self-report data in four health-related topics: sleep, food intake, work and productivity, and exercise. We chose GPT-3 as an underlying LLM because it is one of the mainstream LLMs that are publicly available via commercial APIs. We formulate the model prompt to include the information slots (_i.e._, information properties of a topic) that we intend the chatbot to collect and the job identities (e.g., sleep expert for the topic sleep) to help drive the conversations. We investigate how two design factors in prompts--information specification format and personality modifier--impact the slot filling ability and the conversation style of chatbots. In total, we created 16 chatbots (4 topics \(\times\) 2 formats \(\times\) 2 personality modifiers) with different prompts. To evaluate the performance of different prompts in steering chatbots, we conducted an online study (\(N=48\)) with our chatbots on a web interface. All participants talked to chatbots of all four topics, but each of them experienced chatbots running on the same prompt design. To the best of our knowledge, our work is the first to explore the usability of LLMs for building chatbots for collecting self-report data. We found that our zero-shot prompts, without either example dialogues or fine-tuning, covered 79% of the desired information slots among all dialogues. Through conversation analysis, we found that the information specification format as well as the use of the personality modifier can impact the chatbots' slot filling ability and conversation styles. Also, the chatbots generally reacted to participants' self-reported answers in an empathetic way, appreciating their accomplishments as well as sympathizing with participants over negative outcomes. Consequently, some participants perceived these chatbots to be understanding and to take their messages into account when responding, and others indicated that they were surprised to find the chatbots' responses accurate and detailed. Research contributions from this work are threefold: 1.
Empirical results from an online study (\(N=48\)), demonstrating the _feasibility_ of chatbots powered by LLMs in not only carrying on conversations to collect specified information but also exhibiting abilities in maintaining context, state-tracking, and providing off-topic suggestions. 2. Examination on how different prompt designs and other factors impact the chatbots' behaviors, providing insights for future researchers to easily scaffold chatbots for data collection with LLMs. 3. Implications on how future LLMs-based chatbot platforms can improve the conversation quality, drawn on the analysis of the dialogue errors. ## 2. Related Work In this section, we cover related work in the areas of (1) self-report data collection through chatbots, (2) design considerations for chatbots, (3) chatbot platforms, and (4) designing LLM prompts for chatbots. ### Self-Report Data Collection through Chatbots Personal informatics systems have commonly incorporated data collection techniques to track personal health and activity [17, 53]. While various physiological or physical activity data--such as step count, heart rate, and sleep duration--can be captured automatically by sensors and wearable devices [46], various types of personal data still demand _self-reporting_ by the person who self-tracks [17]. For example, food intake (_e.g._, [21]) or work tasks (_e.g._, [39]) are not reliably captured by sensors and thus require manual inputs. In addition, reflective questions (_e.g._, Why did you eat this food? [63]) and subjective measurements (_e.g._, Sleep quality) inherently require to be captured manually. A majority of digital self-tracking tools that involved manual data capture inherited the traditional concepts of self-monitoring or journaling and provide form-based GUNs such as a list of checkboxes and text fields (Krizhevsky et al., 2017; Krizhevsky et al., 2018). However, repeated manual input on a computer or smartphone screen is burdensome and may gradually disengage people from tracking (Krizhevsky et al., 2018; Krizhevsky et al., 2018). As an input modality to lower the capture burden and enhance the richness of the captured information, natural language has recently gained interest (Krizhevsky et al., 2018; Krizhevsky et al., 2018). Prior research found that when people are allowed to insert data in free-form natural language, they tend to provide detailed answers with surrounding contexts (Krizhevsky et al., 2018; Krizhevsky et al., 2018). Going further, conversational interaction, where a system and a user communicate in natural languages, has become one emerging interface for collecting self-reports. Chatbots are considered easier to use and more accessible than GUIs as they minimize the use of graphical widgets employ the intuitive conversational interaction. Regarding data collection, a plethora of research has explored the use of chatbots in place of traditional form-based surveys (_e.g._, (Krizhevsky et al., 2018; Krizhevsky et al., 2018; Krizhevsky et al., 2018)). For example, studies with surveys with close-ended questions found that chatbots can collect the same quality, if not higher, user responses as GUIs (Krizhevsky et al., 2018; Krizhevsky et al., 2018). Xiao et al. (Xiao et al., 2018) built a chatbot to conduct interviews with open-ended questions. Compared to the traditional web survey, their participants showed higher engagement and provided higher-quality responses when talking to the chatbot. 
Further, incorporating more humanized traits, such as casual conversation styles (Xiao et al., 2018), self-introduction, and echo (Xiao et al., 2018), led to not only a higher level of user engagement and satisfaction but also more self-disclosure in responses. With more focus on self-reported data, prior studies leveraged chatbots to collect self-reports such as emotion (_e.g._, (Krizhevsky et al., 2018)), pain level (_e.g._, (Krizhevsky et al., 2018)), and food intake (_e.g._, (Krizhevsky et al., 2018)). For example, Bermann et al. (Krizhevsky et al., 2018) combined a chatbot with the experience sampling method (ESM, (Bermann et al., 2018)) and found that personalized chatbots have the potential to collect data on sensitive or personal topics. Mitchell et al. (Mitchell et al., 2018) compared fully-scripted, rule-based, and retrieval-based chatbots for collecting food nutrition. They found the better fulfillment of data collection is not necessarily associated with the higher perceived quality of the chatbot as a diet coach, suggesting the importance of conversational content in user experience. This work expands the line of research on chatbots that collect self-reports. In contrast to prior studies that involved predefined conversation logic or retrieval model training on domain-specific datasets, we explore the potential of LLMs in bootstrapping chatbots that can collect self-reports through conversations on four health topics--sleep, food, work, and exercise. ### Design Considerations for Chatbots Prior works in HCI explored user behaviors with chatbots and proposed suggestions to improve user experience with them. For example, Luger and Sellen (Luger and Sellen, 2018) found that people restricted their language uses when interacting with CAs. Jain et al. (Jain et al., 2019) revealed that many first-time chatbot users had disappointment and frustration with the selected chatbots: most chatbots lacked the ability to fully comprehend user messages or intentions. Since conversation breakdowns are still common (Bermann et al., 2018), several studies have explored repair strategies, such as apologies, compensation, and providing option (Xiao et al., 2018). Ashktorab et al. (Bermann et al., 2018) also evaluated other strategies, such as confirmation, repeat, keywords highlight & explanation, and recommended that chatbots should acknowledge misunderstanding in simple terms, explain model limitation in natural ways, and adapt individualized strategies. Although existing chatbot frameworks also have error recovery features, their features are not only limited but often cannot allow quick repairs (Krizhevsky et al., 2018; Krizhevsky et al., 2018; Krizhevsky et al., 2018). Another key to improving the user experience is to make chatbots more playful and human-like (Xiao et al., 2018). The level of empathy (Krizhevsky et al., 2018; Krizhevsky et al., 2018) and the repetitive rate (Xiao et al., 2018) are two commonly used metrics of human-likeness. For example, the playful interactions (_e.g._, telling jokes) or humorous responses enabled many people to start using CAs (Luger and Sellen, 2018) and it is crucial for chatbots to support sustainable playfulness (Xiao et al., 2018). Also, human-like features and fun personalities are found to make chatbots more enjoyable to interact (Jain et al., 2019). Even for work-related chatbots, some people still preferred chatbots that were human-like (Krizhevsky et al., 2018), and Liao et al. 
(Liao et al., 2018) envisioned that a reusable conversational module including common chit-chats and social attributes could be developed. In other words, future chatbot platforms should allow developers to easily build personalized chatbots with different personalities (Xiao et al., 2018) and conversation styles (Krizhevsky et al., 2018; Krizhevsky et al., 2018). Lastly, developers should aim to improve chatbots' ability to maintain contexts to support smoother and natural conversations (Bermann et al., 2018). In this work, we investigate whether LLMs can steer chatbots that have social attributes and can resolve conversation breakdowns. ### Chatbot Platforms Building chatbots is challenging and time-consuming, and many design suggestions discussed above are difficult to implement. Many open-domain chatbots that engage and entertain people socially are all dependent on large datasets (Bermann et al., 2018; Krizhevsky et al., 2018). In the HCI community, rule-based dialogue systems are widely used. Celino and Calegari (Celino and Calegari, 2018) built their survey chatbot with pre-defined conversation flows as they intended to avoid disappointments caused by the chatbot's inability to understand certain utterances (Luger and Sellen, 2018). Although rule-based chatbots are unlikely to cause breakdowns, the resulted rigid conversations can make people lose interest in the long term. On the other hand, Xiao et al. (Xiao et al., 2018) built their survey chatbot using Juji (Juji, 2019), which automatically equip chatbots with rich existing conversational skills. Using the Juji GUI to add questions is relatively simple, but it is unclear whether developers can modify the chatbot's expressed personality. Lastly, other commercial chatbot frameworks, such as Dialogflow (Jiang et al., 2019) and IBM Watson (Watson, 2019), also allow developers to build rule-based chatbots with GUIs (Krizhevsky et al., 2018). However, creating more dynamic conversations usually requires programming skills. Even for professional developers, it is challenging to create well-designed conversational flows and pre-define user intents and chatbot messages (Krizhevsky et al., 2018). Using LLMs to power chatbots is a new way to build chatbots (Krizhevsky et al., 2018; Krizhevsky et al., 2018). LLMs accept natural language prompts so that people without any knowledge of programming but are interested in building chatbots for data collection can create prompts (Krizhevsky et al., 2018). Hence, compared to prior methods that approach personalization by building data-driven user profiles (Krizhevsky et al., 2018; Krizhevsky et al., 2018), customizing prompts in natural languages essentially hands off the control to each individual who builds chatbots. As such, it becomes more straightforward to scaffold personalized chatbots (_e.g._, assigning a preferred personality) by revising prompts accordingly. Nevertheless, it is unclear how to design prompts for LLMs to steer chatbots that can effectively ask questions around desired information and have different conversational styles. ### Designing LLM Prompts for Chatbots Prompts are natural language texts to LLMs to produce desired outputs. With proper prompt inputs, GPT-3 can be used to translate texts, answer questions, write essays, and generate dialogues without any fine-tuning (Levevic et al., 2017). 
While the mechanism enabling such few-shot abilities behind LLMs is still veiled (Levic et al., 2017; Levic et al., 2017), some prompting techniques are found to improve the model performance. One technique that surprisingly improves the generation quality is by conditioning the prompt with an identity. For example, by inserting the statement "You are an expert Python programmer" into prompts, models can generate higher quality codes (B generic one despite still had occasional digressions. Hence, we experimented with another prompt format inspired by the state-tracking technique in task-oriented dialogues (Wang et al., 2018). Instead of being described literally, the slots are structured into a form (_e.g._, "Meals and snacks from yesterday: Breakfast -> [placeholder] Lunch -> [placeholder]..."; we used an empty string as a placeholder; see Figure 2, left). Both designs performed similarly in our limited internal testings, hence we aimed to investigate the performance of two formats in the user study. In terms of manipulating conversational styles, we introduced the use of a modifier in prompts-"who always shows empathy and engages my customer in conversations," to the prompt (See Figure 2, right). We hypothesize that with this modifier, the chatbot is more likely to express empathy in conversations and have a higher level of interactivity-_i.e._, use more emphatic expressions and be more responsive to user responses. Conversely, without the modifier (See Figure 2, left), we expect the chatbot to be more neutral, formally exchanging messages with users and appear less empathetic. Lastly, during our trials, we found that GPT-3 had the tendency to ask multiple questions in one turn. To restrict this behavior, we added "I only ask one question at a time." to the prompt. #### 3.1.2. Model and Parameters To power chatbots with above prompts, we used davinci-text-002, the largest and most capable model of GPT-3 as of June 2022, publicly accessible via OpenAI's API (Zhu et al., 2020). This model accepts 4,000 byte-pair encoding tokens at maximum in a prompt per request. Our prompt templates in the initial state were encoded into around only 120 tokens (3% of the limit) and allowed sufficient room for the appended conversation history. For all chatbots, we uniformly applied the same generative parameters: temperature as 0.9, the presence penalty as 0.6, and the frequency penalty as 0.5. We kept the temperature and the presence penalty unchanged based on OpenAI's suggestions and increased the frequency penalty to reduce the re-use of words. #### 3.1.3. The Web Chat Interface We implemented a web interface to host our LLM-driven chatbots, following a typical chat interface design (See Appendix A.1). The webpage was written in TypeScript (Zhu et al., 2020) on React (Zhu et al., 2020) and runs on the Node.js (Zhu et al., 2020) server. The server communicates with GPT-3 leveraging OpenAI's API (Zhu et al., 2020). To simplify the conversation flow, we disabled people to submit multiple utterances in a row. Correspondingly, the chatbots also delivered one utterance at a time. When a user submitted an utterance, the server appended the current dialog history at the end of the prompt template and fed it to GPT-3 to generate the following response. 
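For concreteness, the serving loop described above can be sketched as follows. This is our illustration, not the paper's implementation (which is written in TypeScript and available at the URL below): it assumes the legacy OpenAI Python client, uses the generation parameters reported above, and the prompt wording and speaker labels are only approximations of the prompts shown in Figure 2.

```python
import os
import openai  # legacy (0.x) OpenAI Python client; the paper's server runs on Node.js/TypeScript

openai.api_key = os.environ.get("OPENAI_API_KEY")

# Illustrative prompt in the structured format with the personality modifier
# (topic: food intake).  The exact wording used in the study differs.
PROMPT_TEMPLATE = (
    "I am a dietitian who always shows empathy and engages my customer in "
    "conversations. I only ask one question at a time. I want to fill in the "
    "following information by talking to my customer:\n"
    "Meals and snacks from yesterday: Breakfast -> Lunch -> Dinner -> Snacks -> \n"
    "Feelings after eating -> \n\n"
)

def next_bot_turn(history):
    """history: list of (speaker, utterance) pairs; speaker labels are ours."""
    dialog = "".join(f"{speaker}: {text}\n" for speaker, text in history)
    response = openai.Completion.create(
        model="text-davinci-002",      # "davinci-text-002" in the paper's wording
        prompt=PROMPT_TEMPLATE + dialog + "AI:",
        temperature=0.9,               # generation parameters reported above
        presence_penalty=0.6,
        frequency_penalty=0.5,
        max_tokens=128,
        stop=["Customer:"],            # do not let the model write the user's turn
    )
    return response["choices"][0]["text"].strip()

history = [("AI", "Hi! What did you have for breakfast yesterday?"),
           ("Customer", "Just a coffee, I was in a rush.")]
print(next_bot_turn(history))
```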
\begin{table} \begin{tabular}{|l|l|l|l|l|} \hline **Topic** & **Sleep** & **Food intake** & **Work and Productivity** & **Exercise** \\ \hline \multirow{8}{*}{**Slots**} & Time to bed & Breakfast & Work done & What workout \\ & Sleep latency & Lunch & Rate productivity (1 to 10) & Workout duration \\ & Wake up at night & Dinner & Other concerns at work & Feeling after (skipping) workout \\ & Wake up time & Snacks & What went well & Fitness concerns \\ & Sleep quality rate (1 to 10) & Feelings after eating & & \\ \hline **Job identity** & Sleep Expert & Dietitian & Life Coach & Fitness Coach \\ \hline \end{tabular} \end{table} Table 1. The targeted slot data and “persona” for each chatbot. Figure 2. Prompt design combining two factors, information format and personality modifier, in the Food intake topic. The source code for the chatbot framework and the web interface is available at [https://naver-ai.github.io/llm-chatbot](https://naver-ai.github.io/llm-chatbot). ### Online Study #### 3.2.1. Experimental Conditions Combining the two design factors, we created four designs of prompts: SP (**S**tructured format with **P**ersonality modifier), SN (**S**tructured format with **N**o personality modifier), DP (**D**escriptive format with **P**ersonality modifier), and DN (**D**escriptive format with **N**o personality modifier). Each participant was assigned to one prompt design and engaged in conversations of all four topics. (Refer to the supplementary material for all 16 variations of GPT-3 prompts created for combinations on topic and condition.) To mitigate the ordering effect among topics, half of the participants conversed in the order of Work-Food-Exercise-Sleep, and the other half in the order of Exercise-Sleep-Work-Food. Additionally, for each topic, we requested participants to engage with the chatbot twice: one in the **Positive** path (_e.g._, report high-quality sleep) and one in the **Negative** path (_e.g._, report poor sleep). Refer to Appendix A.2 for an exhaustive list of paths and hints by topic provided by us to guide participants to compose their answers for each path accordingly. #### 3.2.2. Web Chat Session After signing an electronic consent form on the study website, participants went through eight conversations (4 _topics \(*\) 2 paths_). On the web chat interface (See Appendix A.1), we put guidelines including the instructions and the conversation path that participants should follow (See Blue text in Appendix A.1, right). Since we did not incorporate ending detection algorithms, we asked participants to click the 'Next' button to proceed to the next conversation when they thought the conversation was naturally over or the chatbot kept sending repetitive messages. The completed dialogues were stored in our server. #### 3.2.3. Exit Survey After completing eight conversations, the web page automatically redirected participants to an online survey. The survey consisted of three 5-point Likert scale questions and one open-ended feedback textfield. 
The Likert scale questions were: (1) _"Do you think the chatbot understands your answers?"_ (2) _"Do you think the chatbot takes into account of your answers when responding?"_ and (3) _"Do you think the chatbot talks more like a human who shows more empathy or more like a robot who behaves mechanically?"_ The open-ended feedback question stated, _"If you have any other comments or thoughts about the chatbot (e.g._, _things that you've liked or disliked), please share with us."_ The first two questions can measure whether participants think the chatbot acknowledges their answers and respond accordingly and the third question is an overall measure of whether the chatbot is perceived as being empathetic. With participants' subjective evaluations, we hope to see whether the personality modifier can impact the chatbot's way of talking. ### Participants We recruited participants by word-of-mouth and posting advertisements at a large tech company, social media, and online forums in local universities. We sent the link to our study website to 83 people who filled out a screener and met our inclusion criteria: (1) aged 19 or older; (2) fluent English speaker; and (3) have the experience in talking to chatbots of any kind. 54 people completed the online study session and submitted an exit survey. The entire study lasted less than 20 minutes and all participants received e-gift cards (equivalent to $5 USD) after they completed the study. We excluded six people's data from analysis; one made significant amount of grammatical errors and the rest completed less than half dialogues. Table 2 summarizes the demographic of the final 48 participants (aged 19 to 56, 18 females). Fourteen out of 48 (29%) participants were native/bilingual and 22 out of 48 (46%) participants had never heard of or used LLMs. Each prompt design condition included 12 participants. ### Data Analysis We collected rich dialogue data and valuable user subjective evaluation feedbacks. We performed both quantitative and qualitative analysis to examine chatbots' conversation styles, the slot filling performance, and participants' experiences with our chatbots. For each dialogue, we calculated commonly used descriptive metrics such as the number of turns and the average word counts per turn, which we report in Section 4.1. _Slot Filling Performance_. Our study aimed to investigate whether LLMs can drive chatbots to effectively ask defined questions and collect desired information specified in Table 1. To calculate the amount of information that can be obtained by our chatbots, one researcher _manually_ inspected and determined whether each of the pre-defined information slots could be extracted from collected dialogues. More specifically, for _sleep quality_ and _productivity rate_, which were specified as a scale of 1 to 10, we marked the slot as filled only if a numerical value (_e.g._, 9) rather than a vague phrase (_e.g._, good sleep) was given. For _feelings after eating_ in Food, we treated the slot to be filled if feelings regarding one or more meals were covered. We report the analysis of slot filling rate in Section 4.2. Based on the binary coding, we calculated the **slot filling rate**: the ratio of the number of information slots extracted from the dialogue against the total number of slots. We use the slot filling rate to infer the data collection performance of chatbots. _Dialogue Acts and User/Chatbot Behaviors_. 
To understand the conversational behaviors of the chatbots, we coded _dialogue act_ for each turn of conversations. Referring to some existing taxonomies of dialogue acts (Song et al., 2016; Wang et al., 2017; Wang et al., 2018), three researchers independently coded one participant's dialogues (132 turns; 1.8%) to identify emerging dialogue acts. Additionally, researchers labeled chatbot turns that did not fit in the conversation context or originated from the inherent artifacts of an LIM. We resolved discrepancies in coding and developed the first version of codebook with three dimensions of codes: (1) essential acts and (2) empathy & engagement behaviors, and (3) problematic chatbot turns. Then two researchers reiterated the independent coding of four other participants' dialogues (1 participant from each condition, 32 dialogues in total) with the codebook. The two researchers resolved discrepancies through multiple sessions of discussion until their inter-rater reliability (Cohen's Kappa) reached 0.96 for essential acts and 0.935 for empathy & engagement behaviors. Compared to these dialogue acts, the occurrence of errors was sparse. Hence, the two researchers discussed the entire problematic turns coded by each other together and reached the full agreement. With the finalized codebooks (See Table 5, 6, and 7), the first author coded the rest of the data. As a result, each turn was classified as one of the essential acts--_greeting, task opening, required question/answers (RQ/RA), secondary question/answers (SQ/SA), statement_, and _closing_. We assigned the most prominent act to turns consisting of multiple sentences. Independent of essential acts, we multi-coded each turn with the empathy & engagement behaviors described in Table 6. For example, to a general compliment "_That's great_" (**Statement**), we assigned only the **Appreciating** behavior, whereas we also treated "_That's great to hear that your legs are feeling stronger_" (**Statement**) to be both **Acknowledgments** and **Appreciating** as the compliment directly addressed to the user input. We were interested in such acknowledging behaviors because _specificity_ was an important indicator of the capability of open-domain chatbots Adiwardana et al. (Akiwardana et al., 2018). _Statistical Analysis._ To understand the impact of the study factors, including prompt design, conversation topic, and the conversation path, to the chatbots' slot filling performance and conversational flows, we used _mixed-effect models_ because these models can handle unbalanced data repeatedly measured from the same participants (Zhu et al., 2019). For each dialogue metric we want to assess, we fitted a mixed-effect model that predicts the metric, treating each dialogue as a data point. Starting from a full model containing participants as a random effect and the four main study factors-information format, personality modifier, topic, and path-and their interactions as fixed effects, we performed the step-wise backward elimination removing variables not significantly contributing the model, through Maximum-likelihood tests. For significant variables, we performed post-hoc pairwise comparisons of the least-squared means (LSM) of the metric using emmeans(Krishnan, 2019) package in R. _Subjective Feedback._ To assess the difference among the experimental conditions, we conducted Kruskal-Wallis tests over the four rating questions. 
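To illustrate the shape of this analysis, a minimal sketch in Python is given below; it is ours, not the paper's pipeline (the models were fitted in R, with the emmeans package for post-hoc contrasts), it omits the step-wise backward elimination, and all column and file names are hypothetical.

```python
import pandas as pd
import statsmodels.formula.api as smf
from scipy.stats import kruskal

# Hypothetical per-dialogue table: columns slot_rate, fmt, personality, topic,
# path, participant.  The names and the file are ours, for illustration only.
df = pd.read_csv("dialogues.csv")

# Mixed-effects model: participant as a random intercept, study factors as
# fixed effects.  (The paper starts from the full interaction model and prunes
# it by step-wise backward elimination; that loop is omitted here.)
fit = smf.mixedlm("slot_rate ~ fmt * personality + topic + path",
                  data=df, groups=df["participant"]).fit()
print(fit.summary())

# Kruskal-Wallis test of one exit-survey rating across the four prompt designs
survey = pd.read_csv("exit_survey.csv")     # columns: condition, rating, ...
print(kruskal(*[g["rating"].to_numpy() for _, g in survey.groupby("condition")]))
```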
We also referenced the open-ended feedback from when interpreting the participants' reactions to specific phenomena of the conversations. ## 4. Results In this section, we report the results of our study in six parts. In Section 4.1, we provide an overview of the dialogue dataset we collected. In Section 4.2, we report the data collection performance of our chatbots and factors that impact the performance. In Section 4.3, we report the types of the essential dialogue acts and assess how the prompt design and other factors impact the dialogue acts and, in turn, the data collection performance. In Section 4.4, we report the types of the empathetic and engaging behaviors of chatbots and assess how the prompt design and other factors impact such behaviors of the chatbots. In Section 4.5, we explore the problematic chatbot utterances mainly caused by the erroneous behaviors of a large language model. Lastly, in Section 4.6, we report on participants' subjective evaluation from the exit surveys. ### Descriptive Statistics From 48 participants, we collected 374 dialogues (7,442 turns in total); 91 from SP; 96 from SN; 95 from DP, and 91 from DN. Regarding the conversation topic, 94, 91, 94, and 95 dialogues were from Sleep, Work, Exercise, and Food Intake, respectively. Eight participants missed one dialogue per each and one missed two, mainly due to temporary server issues or accidental skips. Prompt designs impacted the word lengths and the number of turns of chatbots. Table 3 summarizes the number of turns and word counts by prompt design. The average number of turns per dialogue is around 20 with more average turns under the two descriptive conditions (DP and DN). The maximum number of dialogue turns is 75 under the DP condition (only 1 dialogue). In terms of word counts, dialogues under the descriptive conditions (DP, DN) had more words than those under the structured conditions (SP, SN): both chatbots and participants uttered more words under the descriptive conditions. The DP condition, in particular, leads to the most number of words of dialogues. \begin{table} \begin{tabular}{l r r r r r} \hline \hline & & **SP** & **SN** & **DP** & **DN** \\ \hline **Age** & (Mean, range) & 31.5 (21–56) & 28.0 (21–33) & 27.5 (20–40) & 30.4 (19–42) \\ \multirow{2}{*}{**Gender**} & Male & 7 & 8 & 8 & 7 \\ & Female & 5 & 4 & 4 & 5 \\ \hline \multirow{2}{*}{**English Proficiency**} & Native/Bilingual & 4 & 2 & 4 & 4 \\ & Proficient & 8 & 10 & 8 & 8 \\ \hline \multirow{4}{*}{**Education**} & High school & 1 & 2 & 1 & 1 \\ & Bachelor & 5 & 4 & 3 & 3 \\ & Master & 4 & 5 & 7 & 7 \\ & Doctor & 2 & 1 & 1 & 1 \\ \hline \multirow{4}{*}{**Familiarity with LLMs**} & Often use it & 2 & 1 & 2 & 1 \\ & Occasionally use them & 2 & 2 & 2 & 4 \\ \cline{1-1} & Used them once or twice & 2 & 4 & 2 & 2 \\ \cline{1-1} & Never heard of/used them & 6 & 5 & 6 & 5 \\ \hline **Participants** & Total & 12 & 12 & 12 & 12 \\ \hline \hline \end{tabular} \end{table} Table 2. Participant demographics by experimental condition. ### Slot Filling Rate Prompt designs significantly impacted the slot filling performance of chatbots. Table 4 summarizes the average slot filling rates of chatbots by conditions and topics. On average, all chatbots have reached over 70% slot filling rates. The dialogues in the SP-Exercise condition had the highest rate (93%) and those in the SN-Work condition had the lowest rate (64%). 
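The rates reported in this section and the model results that follow were derived from the manually coded slots and the mixed-effect models described in the Data Analysis section. As a minimal illustrative sketch only (not the analysis code used in the study, which relied on R with emmeans; the records and column names below are hypothetical), the computation could look like this in Python:

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical records: one row per dialogue with the manually coded slots.
# "slots" maps each pre-defined information slot to True if it could be
# extracted from the dialogue (the binary coding described earlier).
dialogues = [
    {"participant": "P01", "format": "Structured", "personality": "Yes",
     "topic": "Sleep", "path": "Positive",
     "slots": {"sleep_time": True, "wake_up_time": True, "sleep_quality": False}},
    # ... one entry per collected dialogue ...
]

rows = []
for d in dialogues:
    rows.append({
        "participant": d["participant"],
        "format": d["format"],
        "personality": d["personality"],
        "topic": d["topic"],
        "path": d["path"],
        # slot filling rate = extracted slots / total specified slots
        "slot_rate": sum(d["slots"].values()) / len(d["slots"]),
    })
df = pd.DataFrame(rows)

# Full mixed-effect model with participants as a random intercept; backward
# elimination would then drop fixed effects that do not improve the ML fit.
model = smf.mixedlm("slot_rate ~ format * personality + topic + path",
                    df, groups=df["participant"])
result = model.fit(reml=False)  # ML fit so that nested models can be compared
print(result.summary())
```

Post-hoc pairwise comparisons of the estimated means were then performed in R, as described above.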
The maximum-likelihood test revealed that there was no significant random effect of participants, indicating that participants have little impact on chatbots' data collection performance. On the other hand, there were significant random effects of the topics(\(p<.0001\)), the conversation paths (\(p=.01\)), and the interaction between the information formats and personality modifiers (\(p<.001\)). Figure 3 shows the significance over the 95% confidence intervals of the slot filling rate in each category of the significant variables. The dialogues in DP condition had significantly lower rates than those in SP (\(p=.01\)) and DN (\(p=.008\)). This suggests that the personality modifier impacted chatbots differently: with the modifier, chatbots with the structured prompt yield higher rates whereas chatbots with the descriptive format yield higher rates without the modifier (See Figure 2(a)). In terms of topic, Exercise dialogues had the highest rate of 88.4%, which was significantly higher than those in the other three topics: Sleep (\(p=.01\)), Work (\(p<.0001\)), and Food (\(p=.02\)) (See Figure 2(b)). Lastly, dialogues in the Positive path had significantly higher rates than those in the Negative path (\(p=.01\)) (See Figure 2(c)). As seen in Figure 4, there is a general trend that slots specified earlier in prompts were more likely to be covered by chatbots. For example, the first slots in all topics were covered in 90.3% of the dialogues, but the last specified slots in Sleep (sleep quality) and Work (what went well) were omitted around 40% of the dialogues. Interestingly, the last specified slot of Food (feelings after eating) was diligently covered: chatbots often asked how participants felt after talking about each meal rather than asking their feelings once towards the end. \begin{table} \begin{tabular}{|l|c c c c|c|} \hline & **Sleep** & **Work** & **Food Intake** & **Exercise** & **Total** \\ \hline **SP** & 0.83 (0.29) & 0.71 (0.25) & 0.85 (0.18) & 0.93 (0.14) & 0.83 (0.23) \\ \hline **SN** & 0.75 (0.30) & 0.64 (0.32) & 0.80 (0.33) & 0.88 (0.20) & 0.77 (0.30) \\ \hline **DP** & 0.67 (0.24) & 0.67 (0.28) & 0.72 (0.32) & 0.82 (0.20) & 0.72 (0.27) \\ \hline **DN** & 0.85 (0.16) & 0.83 (0.24) & 0.75 (0.24) & 0.91 (0.16) & 0.83 (0.21) \\ \hline **Total** & **0.77 (0.26)** & **0.71 (0.28)** & **0.78 (0.28)** & **0.88 (0.18)** & **0.79 (0.26)** \\ \hline \end{tabular} \end{table} Table 4. The slot filling rate (and \(SD\)) by topic and condition. Figure 3. 95% confidence intervals of slot filling rate by variables with a significant effect: (a) the combination of the information format and personality modifier represented as study condition; (b) topic; and (c) the conversation path. The asterisks with arms indicate significance between the connected categories. (Refer to Appendix A.3 for model details and statistics.) \begin{table} \begin{tabular}{|l|c c c c|} \hline & **SP** & **SN** & **DP** & **DN** \\ \hline Total number of dialogues (turns) & 91 (1,638) & 96 (1,889) & 95 (1,941) & 92 (1,975) \\ \hline Average no. of turns per dialogue (range) & 18.0 (7–45) & 19.7 (3–57) & 20.4 (7–75) & 21.47 (3–53) \\ \hline Average no. of words per dialogue & 212.3 & 240.8 & 321.7 & 277.1 \\ \hline Average no. of chatbot/user words per turn & 17.4 / 4.9 & 17.8 / 4.8 & 23.4 / 7.5 & 19.2 / 5.5 \\ \hline Percentage of organically ended conversations & 7.14.\% & 76.0\% & 77.9\% & 78.2\% \\ \hline Percentage of erroneous turns & 3.1\% & 4.3\% & 3.0\% & 3.6\% \\ \hline \end{tabular} \end{table} Table 3. 
Descriptive statistics of our dialogue dataset aggregated by four prompt designs. ### Essential Dialogue Acts To further understand how chatbots powered by different prompt designs talk, we categorized conversation turns into dialogue acts. We provide the summary of essential dialogue acts and their distributions in Table 5 Here, we report chatbots' essential acts regarding question/answering and non-question statements. #### 4.3.1. Required and Secondary Questions We identified two types of questions that the chatbots asked: required questions (RQ) and secondary questions (SQ). The RQs were directly related to the specified information slots, whereas SQs were not directly related to the information slots but rather follow-up details or elaboration. Despite being relevant to the conversation topic, SQs sometimes caused the conversation to digress. Although not very common (95 out of 1,029 SQ turns in total; 9.3%), participants also asked questions to the chatbot, which were all categorized as SQ/SA. The majority of collected dialogues consisted of question/answering: Overall, 4,879 out of 7,442 turns (_avg_. 64.72% of turns per dialogue; \(min=10.67\%\), \(max=93.54\%\)) were classified as RQ, RA, SQ, or SA (See Table 5). We first investigated the impact of prompt designs on the chatbots spoken RQ and SQ turn ratios using two mixed-effect models with each turn ratio as a dependent variable, respectively. Figure 5 shows the 95% confidence intervals of the chatbot-spoken RQ and SQ turn ratios by four study factors (information format, personality modifier, topic, and path). The structured format significantly increased the RQ turn ratio (\(p=.03\); see Figure 4(a)) but decreased the SQ turn ratio (\(p=.002\); see Figure 4(b)). On the other hand, the personality modifier did not impact either RQ (\(p=.94\)) nor SQ (\(p=.43\)) turn ratios and made no difference within the same information format (See Figure 4(b) and 4(f)). In terms of the conversation path, we find that, overall, the Positive path led to a higher RQ turn ratio (\(p=.01\); see Figure 4(c)) and a lower SQ turn ratio (\(p=.002\); see Figure 4(g)). However, under different topics, the conversation path had different impacts on RQ and SQ ratios. As seen in Figure 4(d), the Positive path increased the RQ turn ratio only in the Work (\(p=.02\)) and Exercise (\(p=.003\)) dialogues and also decreased the SQ turn ratio in the same topics (Work: \(p=.003\) and Exercise: \(p<.001\)). As discussed above, prompt designs, topics and conversation paths have significant impacts on chatbots' question-asking behaviors. The RQ and SQ ratios further impacted the slot filling rate. We ran the maximum-likelihood tests with two mixed-effect models fitting the slot filling rate, one with the chatbot-spoken RQ turn ratio (_i.e._, the ratio of the turns classified as RQ in a dialogue) as a fixed effect and the other with the SQ turn ratio, both with participants as a random effect. We found that the RQ turn ratio was positively correlated with the slot filling rate, whereas the SQ turn ratio was negatively correlated with it: \(\beta=1.08\), \(SE=0.12\), \(t(347.25)=8.80\), \(p<.0001\) for RQ and \(\beta=-0.77\), \(SE=0.13\), \(t(370.06)=-6.12\), \(p<.0001\) for SQ. In summary, our results suggest that chatbots with a Descriptive information format tend to ask more secondary questions, and negative answers of participants also naturally elicit more secondary Figure 4. Breakdowns of the percentage of filled slots by the order of questions for each topic. 
Work and Exercise consist of four slots. \begin{table} \begin{tabular}{|p{113.8pt}|p{113.8pt} p{113.8pt} p{113.8pt}|p{113.8pt} p{113.8pt}|} \hline **Dialogue Act** & \multicolumn{3}{c|}{**Turn Ratio (\%)**} & **Description** & **Examples** \\ & **SP** & **SN** & **DP** & **DN** \\ \hline **Greeting** & 12.40 & 12.94 & 10.67 & 10.50 & Initiation of a conversation. & How’s your day so far? \\ & & & & & & _1 feel refreshed and recharged._ \\ \hline **Task opening** & 1.92 & 1.56 & 0.54 & 0.04 & General questions that bring up the conversation topic. & How was your work and productivity \\ & & & & & & _1 fied_ \\ \hline **Required question** & 24.07 & 21.36 & 17.57 & 20.97 & QAs that are directly related to the specified information slots. & What was your lunch yesterday? (RQ) \\ **Required answer** & 23.74 & 19.91 & 17.35 & 20.75 & the specified information slots. & _1 had pork barbeque._ (RA) \\ **Secondary question** & 10.28 & 10.79 & 13.87 & 13.91 & QAs that are usually follow-ups and not specific to the slots. & What did you put on your toast? (SQ) \\ **Secondary answer** & 9.12 & 9.64 & 13.08 & 12.71 & and not specific to the slots. & _1 put strawberry jam on it_ (SA) \\ **Statement** & 14.22 & 19.54 & 22.85 & 16.94 & Non-Q&A messages such as commenting or summarizing. & It sounds like you had a great night’s sleep! \\ & & & & & & _1 fied_ \\ **Closing** & 3.88 & 4.31 & 3.96 & 4.14 & Farewell or ending messages. & You’re welcome. Have a great day! \\ \hline \end{tabular} \end{table} Table 5. Summary of essential dialogue acts with the ratio of the occurring turns per dialog (turn ratio) by condition, brief description, and exemplar turns (\(\frac{\alpha}{\alpha}:\) user turns, \(\frac{\alpha}{\alpha}:\) chatbot turns). questions (_e.g._, "_"I'm sorry to hear that you didn't workout yesterday. May I ask why?"). #### 4.3.2. Non-question Statements Statements from chatbots usually consist of chichats, comments and suggestions that do not fall within the question/answering (see Table 5 for references) and participants also tend to respond to statements with other statements. Usually, the participants' statements did not contain direct clues about the desired information slots. Overall, 1,482 out of 7,442 turns (_ag._ 18.45% per dialogue) were classified as Statement (See Table 5). Among the four conditions, dialogues of DP have the highest ratio of statements (24.99%) and SP had the lowest ratio (15.32%). ### Empathy & Engagement Behaviors Table 6 summarizes the empathy & engagement categories and their turn ratios by experimental condition. The majority of these behavior categories were coded to the chatbot turns\(-\)1,992 chatbot turns and 403 user turns were coded with one or more behavior categories\(-\)partly because participants uttered less words than chatbots (see Table 3) and chatbots usually led the conversation while participants simply responded. **Acknowledging** was the most common empathy behavior (see Table 6) as chatbots often referred to what participants previously said in generated messages. 
Also, we found that our chatbots often **appreciated** participants' accomplishment (_e.g._, taking good sleep, managed to exercise) or \begin{table} \begin{tabular}{|l|c c c|l|l|} \hline **Behavior Category** & \multicolumn{3}{c|}{**Turn Ratio (\%)**} & \multicolumn{1}{c|}{**Description**} & **Examples** \\ & **SP** & **SN** & **DP** & **DN** & \\ \hline **Acknowledging** & 18.22 & 18.91 & 26.24 & 22.04 & Directly referring to what & **’**_That’s great to hear that your legs are_ \\ & & & & the other said. & _feeling stronger!_ \\ \hline **Appreciating** & 9.04 & 7.36 & 11.18 & 9.86 & Complimenting the other. & **’**_That’s terrific!_ \\ \hline **Sympathizing** & 3.66 & 2.79 & 4.33 & 1.74 & Sympathizing with the other. & **’**’**_I’m sorry to hear that. What’s been going on?_ \\ \hline **Thanking** & 5.16 & 6.74 & 7.05 & 7.26 & Being grateful to the other. & **’**’**_I feel nice. Thanks for asking._ \\ \hline **Advice/suggesting** & 2.54 & 3.07 & 4.81 & 2.55 & Giving advice or suggestion. & **’**’**_I can give you some recommendations on exercises that will help you grow your_ \\ & & & & & _addictor muscles._ \\ \hline **Rejecting/disagreeing** & 0.57 & 0.66 & 0.40 & 0.66 & Rejecting or disagreeing with the other. & **’**_But the weather needs to be good for walking._ \\ \hline \end{tabular} \end{table} Table 6. Summary of empathy & engagement behaviors with turn ratio by condition, brief description, and exemplar turns. Note that the behaviors are multi-coded. Figure 5. 95% confidence intervals of the turn ratios of RQ (top; a–d) and SQ (bottom; e–h) by variables with a significant effect: The asterisks with arms indicate significance between the connected categories. Note that for (d) and (h) we did not display the significance across topics. (Refer to Appendices A.4.1 and A.4.2 for model details and statistics.) **sympathized** participants when they reported negative outcomes (_e.g._, poor sleep quality, failed at work). To investigate how the four study factors impact the empathetic behaviors of chatbots, we analyzed three mixed-effect models with the chatbot turn ratios of Acknowledging, Appreciating, and Sympathizing behaviors as a dependent variable, respectively. Figure 6 shows the 95% confidence intervals of turn ratios of the three behavior categories estimated against the study factors. The information format significantly influenced the Acknowledging and Appreciating turn ratios: Dialogues in the Descriptive format had higher ratios of the Acknowledging (\(p<.0001\); see Figure 6a) and Appreciating (\(p=.01\); see Figure 6e) turns. Personality modifier did not solely impact these two behaviors but it influenced in conjunction with the information format (See Figure 6c and 6g). However, the personality modifier in the prompt led chatbots to produce significantly more Sympathizing turns (\(p=.002\); see Figure 6j). Besides the prompt design, the conversation path strongly influenced all three empathetic behaviors: The Positive path led to higher turn ratio of Appreciating (\(p<.0001\); see Figure 6d) whereas The Negative path led to higher Acknowledging (\(p<.001\); see Figure 6h) and Sympathizing (\(p<.0001\); see Figure 6l) turn ratios. ### Problematic Chatbot Turns and User Responses In total, 6.7% of the chatbot turns (257 out of 3,916) were tagged erroneous and the four categories of erroneous turns are summarized in Table 7. 
These erroneous turns sometimes led to the non-organic termination of the conversation (_i.e._, participants ended the conversation before or without natural Closing messages). In the following, we cover these error types in detail. _Incorrect phrases._ In structured prompts (SP, SN), we used the symbol "->," a commonly-used delimiter for key-value pairs in LLMs, to specify the information slots. This caused GPT-3 to expose this information structure in its output as an artifact in 16 turns, all of which were generated under the structured format. In the example from Table 7, GPT-3 even incorrectly "predicted" the slot value ("45 minutes"; example (1) in Table 7) together with the symbols. Other times, GPT-3 also erroneously predicted the answers for participants (example (2) in Table 7). In particular, we identified 6 instances where GPT-3 predicted the user response and appended an extra user turn to the generated chatbot turns (3 turns in descriptive groups and 3 turns in structured groups). _Self-talk._ GPT-3 sometimes generated turns in a first-person narrative or turns not directed to participants, which looked quite similar to the "self-talk" of humans. In Fragment 1, for example, Turn 03 is obviously not directed to the participant. In our dataset, less than 1.1% of chatbot turns were self-talk (SP: 8; SN: 27; DP: 1; DN: 9). Participants who encountered such "self-talk" commented that these messages were "awkward" (P42), "strange" (P24), and "confusing" (P23). However, we found that participants always attempted to continue the conversation by following the self-talk and tried to resolve the errors (See Turn 04 in Fragment 1). Figure 6. 95% confidence intervals of the chatbot turn ratios for Acknowledging (a–d), Appreciating (e–h), and Sympathizing (i–l) behaviors by information format, personality modifier, study condition (combinations of format and personality modifier), and conversation path. Variables that are not significant are marked as ‘NS’ (Refer to Appendix A.5 for model details and statistics.) **Repetition.** We found that GPT-3 was susceptible to generating repetitive messages, either identically or linguistically repeating the previous chatbot turns. In total, 147 turns (3.8% of chatbot turns) were labeled as repetitive (SP: 11, SN: 39, DP: 48, DN: 50). Identically repetitive messages occurred for 23 participants (SP: 3, SN: 5, DP: 6, DN: 9) in 31 dialogues (SP: 3, SN: 9, DP: 8, DN: 10). Among the four topics, Work (14 dialogues) tended to have more identically repetitive messages and Exercise (2 dialogues) tended to have fewer. However, these messages usually served as SQ/SA in conversations, hence they rarely influenced slot filling and data collection. Linguistically repetitive messages usually share similar wording or phrase structures. Fragment 2 presents an example dialogue snippet. At Turns 01, 03, 05, and 07, the chatbot always started with a similar phrase (_i.e.,_ "_That's great..._") to compliment the participant and then asked a question starting with "_can you_". These linguistically repetitive messages were semantically correct and 58.3% of them took place in Acknowledging turns where the chatbot rephrased what the participant said and progressed the conversation organically. However, linguistically repetitive messages looked too similar, and they negatively impacted the user experience. 
A few participants suggested that the chatbots seemed to actually understand their responses, yet were using "a sentence template" (P29) to respond in "predefined ways" (P13). On the other hand, repetition could also lead to "dead loops" of conversations. **Miscellaneous.** There were 37 problematic chatbot turns that did not fall into the above categories. Among these turns, 19 were tagged with context errors (_i.e._, the chatbot did not grasp the context at all), 10 had semantic errors (_i.e._, messages that defy human common sense), and one had both context and semantic errors. In Fragment 3, for example, the chatbot entirely missed that the participant said "_1 hour of cardio_", but the chatbot also attempted to resolve the contextual misunderstanding by apologizing after the participant corrected it. While chatbots in our study appeared to understand people pretty well in most cases, their responses with semantic errors could be quite wrong and amusing. For instance, when asked about their workout yesterday, one participant told the chatbot that they skipped. Instead of considering that the participant did not work out, the chatbot interpreted skipping as a jumping workout and responded "_Skipping is a type of cardiovascular exercise that can help to improve your heart health and endurance._" Besides, there were two instances where the chatbot failed to detect the ending of the conversation and restarted with the first slot question again, which, of course, led participants to abandon the conversation. Lastly, our system went offline 5 times and caused chatbots to output empty messages, which was caused by over-frequent API calls to OpenAI. **Terminating Conversations.** With the Closing turns, we found that 75.7% of the conversations ended organically. Among the four conditions, SP had a lower percentage of naturally ended conversations (71.4%) than the other three conditions. For the 91 non-organically ended conversations, participants abandoned 48.3% of them without encountering any obvious problematic errors. 19.8% of the prematurely ended conversations were caused by identically repetitive messages and another 19.8% were caused by linguistically repetitive messages. Lastly, self-talk led three conversations to end early, and context errors and the system going offline caused the rest of the incomplete conversations. 
\begin{table} \begin{tabular}{|p{113.8pt}|p{113.8pt}|p{113.8pt}|p{113.8pt}|} \hline **Category** & **Turn Ratio** & **Description** & **Example** \\ \hline **Incorrect phrases** & 0.79\% & Messages with symbols or extra predictions. & **Workout duration ->** _45 minutes_ (1) \\ \hline **Self-talk** & 1.1\% & First-person narratives or commentary messages. & **The customer’s fitness concern is that they are feeling very tired after their workout.** \\ \hline **Repetition** & 3.8\% & Repeating the same or similar utterances multiple times. & Refer to Fragment 2. \\ \hline **Miscellaneous** & 0.9\% & Other miscellaneous errors. & **(1)** \\ \hline \end{tabular} \end{table} Table 7. Categories of the chatbots’ erroneous turns with turn ratio, brief descriptions, and examples. ### Subjective Evaluation Figure 7 summarizes the distributions of participants' ratings for (1) the ability to understand, (2) the ability to acknowledge user input, and (3) the level of empathy. The Kruskal-Wallis tests showed that there were no significant differences among conditions for all three questions. In general, most participants rated Q1 and Q2 highly, indicating that the chatbots could understand them as well as acknowledge their messages: Fifteen (31.3%) participants rated 5 and 24 (50.0%) rated 4 on Q1; 11 (22.9%) rated 5 and 31 (64.6%) rated 4 on Q2. Participants showed mixed perceptions of the level of empathy, with a median rating of 3. Some participants gave positive feedback in the open-ended question. P25, who frequently used LLMs, commented, "_I was surprised to see how accurate and detailed the bot's responses were_." P36, who did not have any experience with LLMs, gave a similar comment: "_I found it quite responsive and surprisingly considerate of my answers_." Despite the errors we presented above, P21 still complimented the chatbots: "_I liked/was satisfied of how the chatbot precisely gave info when I asked for it, and I felt that the relevance of the answer is very high and that it caught my point of question sharply_," and even suggested that, "_I felt keeping chatbot as a companion would be awesome. To regularly make casual conversation and be provided light insights about my daily life based on my casual chats_." Figure 7. Distributions of the subjective ratings from three scale questions in the exit survey, with breakdowns by the information format and the personality modifier. ## 5. Discussion Our results showed that our zero-shot chatbots achieved great abilities in asking the desired questions and understanding user responses, despite also having drawbacks. Here, we reflect on the performance of our chatbots and discuss opportunities, ethical considerations, and limitations for future work. 
### Designing Effective Prompts for Chatbots that Collect Self-Reports Our study showed that defining a first-person job identity as well as specifying information slots in prompts was an effective method to bootstrap chatbots that ask health-related questions. However, the slot filling performance and the chatbots' behaviors were sensitive to the prompt design, topic, and conversation path. We provide the following prompt design suggestions based on our findings: _Combine Information Format and Personality Modifier Wisely._ Although the information format and the personality modifier did not consistently impact slot filling rates individually, how they were combined had different impacts on slot filling rates. The information format affected the chatbots' question-asking behaviors: structured formats led to more RQs and fewer SQs, and vice versa for descriptive ones. In other words, structured formats steer chatbots to ask direct questions about the specified slots whereas descriptive ones focus more on eliciting surrounding context or additional details. The personality modifier had a synergy with descriptive formats: chatbots in DP had the lowest slot filling rates and ratios of RQs, but had the highest number of acknowledging messages. Referring to Table 4, SP and DN have comparable slot filling rates. Therefore, to build chatbots that can show a higher level of understanding through acknowledgment, using the Descriptive format without the personality modifier could be the best option. But chatbots with more direct acknowledgment may be at risk of sounding awkward and too robotic. Hence, when designing prompts to power chatbots for data collection, using the structured format with the personality modifier would be more desirable. _Evaluate Chatbots for Conversation Topic and Path._ There is a certain discrepancy in slot filling rates between topics. One reason for such a difference could be the nature of the topic. For example, Work tended to be the most open-ended topic as people report different types of work, which could lead to more subject switches and digressions than other topics. We suspect that GPT-3 is more suitable for steering chatbots on less divergent topics and for collecting self-reports that are more structured. Also, considering that the conversation path impacts the data collection rate, researchers may clearly specify different slots for both positive and negative paths. For example, developers can add "if the customer did not workout yesterday, I would ask them what workout they did in the past week" to the prompt of chatbots for the Exercise topic. _Composition of Slots Matters._ The number and data types of slots also impact the chatbots' performance in collecting slots. As the conversation goes longer, chatbots have a tendency to miss information slots that appear later in prompts. Also, while we collected a certain amount of numerical ratings for sleep quality and productivity rate, it is not guaranteed that the chatbots would cover a slot definition (_e.g._, a numerical scale) as intended. Sometimes, the chatbot would simply ask "_Would you say you had a good night's sleep?_" or "_Overall, how do you feel about your work and productivity yesterday?_" 
Hence, when using chatbots powered by GPT-3 to facilitate data collection, we suggest that important slots be put earlier in the prompt and that the number of questions of a specific data type be limited. If more data slots need to be collected, multi-stage prompts (Sundar et al., 2017) can be considered. ### Opportunities of LLM-driven Chatbots From the study, we learned that LLM-driven chatbots are advantageous compared with traditional chatbot platforms in multiple aspects. Here we cover some noteworthy aspects drawing on the results. _Versatile Responses and Follow-up Questions._ Compared to chatbots with pre-defined dialogues, the chatbots in our study could deliver a great variety of phrases. For example, for the scale questions, GPT-3 could output phrases such as "_Would you say that your sleep quality yesterday was a 10/10, 9/10, 8/10..._" GPT-3 could even provide clarifications for questions and ask follow-up questions that supplement the topic. However, these SQs were still on-topic and directly addressed user inputs (See SQ/SA in Table 5). Follow-up questions are commonly used in human-administered interviews to increase interactivity (Sundar et al., 2017) and many studies suggest that chatbots that can ask on-topic follow-up questions are considered more human-like (Sundar et al., 2017; Sundar et al., 2017). Although current chatbot frameworks (_e.g._, Amazon Alexa (Bradbury et al., 2017)) (Sundar et al., 2017) support follow-up/extended questions, developers need to specify both the expected slots and the follow-up phrases (Sundar et al., 2017). In contrast, GPT-3 could naturally ask follow-up questions, equipping chatbots with proper common sense on the topic. For example, our chatbot mapped "_Bulgogi, rice, and kimchi_" to "_a very traditional Korean meal_" in its response to the participant. Such responses engage people by showing a level of "understanding." _Social Attributes._ Given the importance of social features such as chit-chat for positive user experience (Sundar et al., 2017; Sundar et al., 2017), our results show that we can easily equip GPT-3 with such social aspects. For example, our chatbots could respond naturally to questions about their "personal life" (_e.g._, "_Do you workout yourself?_" was answered with "_Yes, I work out regularly myself. I find that it helps me to stay energized and focused throughout the day._"). Further, our chatbots were also able to give suggestions relevant to the topic. While mostly originating from common sense, some of the suggestions were in-depth and tailored. In one case, a participant who asked two questions in a row (probably due to a system error) was quite surprised to find that the chatbot provided a well-written response (See Turn 40 in Fragment 6). This participant even commented that "_I know a small bit about NLP but not much when it comes to generating responses. I find it fascinating that (it) can give such in-depth answers to specific topics as I find it hard to be able to train an AI to every kind of case involving that._" _Error Recovery._ Task-oriented chatbots usually have a limited number of pre-defined user intents to accomplish a specific goal. For instance, a banking chatbot can provide services such as currency-exchange conversion and introduction of credit cards (Sundar et al., 2017). However, such chatbots are usually unable to handle user messages that are out of the pre-defined intents (_e.g._, a user attempts to have small talk with the banking chatbot) (Sundar et al., 2017). 
Also, they may even mis-recognize in-scope messages due to the complexity of natural language (Bradbury et al., 2017). Strategies like highlighting keywords and switching topics (Bradbury et al., 2017) can help resolve conversation breakdowns at the price of making chatbots less human-like. In our case, LLM-driven chatbots could handle the out-of-scope conversations relatively well, since they could do improvisation actions relying on the ability of LLMs instead of defining intents intensively. In Fragment 4, a work chatbot with the job identity "life coach" handled the off-topic request ("_wake me up at 6 am_") by the participant smoothly and even provided tips on sleep. Even when misunderstanding occurred, chatbots sometimes attempted to resolve it. In Fragment 3, for example, the chatbot apologized for its misunderstanding and in5, GPT-3 resolved an empty message error (due to system offline) by making up an excuse for its absence. _Context Tracking._ Context is a key part in human conversations that connects multiple turns (Kang et al., 2017). Previous studies have suggested that chatbots should aim to sustain contexts to improve the dialogue efficiency (Sututut et al., 2020). Current conversational interfaces such as Google Assistant and Amazon Alexa shows certain abilities in maintaining contexts (Zhu et al., 2020); however, most of them are still criticized for not detecting contextual details (Krishnan et al., 2020; Krishnan et al., 2020). With Dialogflow, developers can define some contexts to be maintained within 5 turns2; however, it has yet achieved truly flexible conversations through this approach. In our case, the chatbots have shown impressive abilities in sustaining some contexts without dedicated mechanisms for managing contexts. P27 noted, "_I feel like it could keep track of the context well between sentences during the conversation._" Through dialogue snippets presented in Fragment 5 and 6, we can see that the context was maintained across 5 turns and 2 turns, respectively. In particular, the context (_intimidation_) would be difficult to specify with most chatbot frameworks. Further, one pattern that emerged in the dataset is that chatbots liked to give a summary of all the user input in the end of conversations, which usually covered the past conversation history and maintained contexts longer than 5 turns. Footnote 2: [https://cloud.google.com/dialogflow/es/docs/contexts-input-output](https://cloud.google.com/dialogflow/es/docs/contexts-input-output) **Low-effort Bootstrapping**. We show several opportunities of LLMs in powering chatbots above. Indeed, chatbots that provide many of the above functions, including chitchats, suggestions, and context perseverance, can be trained with rich datasets. However, collecting such dataset is challenging, and training models on big datasets is costly and often inaccessible (Beng et al., 2020). In terms of utilizing mainstream chatbot platforms to build voice applications, it is of great human effort to come up with user intents and example phrases as well as design conversation flows and logic (Sututut et al., 2020; Krishnan et al., 2020). In particular, the error handling is tricky to design as conversation breakdowns can be common and even unexpected in the wild (Krishnan et al., 2020; Krishnan et al., 2020; Krishnan et al., 2020). On the other hand, LLM's in-context learning capability enables us to skip collecting training dataset or configuring dialogue flows to create functional chatbots. 
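To illustrate this low-effort bootstrapping, the snippet below sketches how a zero-shot prompt with a job identity and information slots can drive a chatbot turn generator. It is a simplified approximation rather than the exact prompt, model, or decoding parameters used in the study, and it assumes the legacy Completions endpoint of the pre-1.0 openai Python package.

```python
import openai  # pre-1.0 openai package, legacy Completions endpoint

openai.api_key = "YOUR_API_KEY"

# Illustrative zero-shot prompt: job identity plus information slots in the
# structured format with the "->" delimiter. Not the verbatim study prompt.
BASE_PROMPT = (
    "I am a sleep expert and I am checking in on the customer's sleep last night.\n"
    "Information to collect:\n"
    "sleep time -> \n"
    "wake time -> \n"
    "sleep quality (1-10) -> \n\n"
)

def next_chatbot_turn(history):
    """history: list of (speaker, text) tuples accumulated so far."""
    dialogue = "".join(f"{speaker}: {text}\n" for speaker, text in history)
    response = openai.Completion.create(
        model="text-davinci-002",   # davinci-class GPT-3 model (assumed)
        prompt=BASE_PROMPT + dialogue + "Chatbot:",
        max_tokens=120,
        temperature=0.7,            # decoding parameters are illustrative only
        stop=["User:", "Chatbot:"],
    )
    return response["choices"][0]["text"].strip()

history = [("Chatbot", "Good morning! How are you feeling today?"),
           ("User", "Pretty good, thanks.")]
print(next_chatbot_turn(history))
```

Each new user message is simply appended to the accumulated dialogue, so the entire "dialogue manager" is the prompt itself.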
Further, our results show that simple alterations of prompts can significantly influence the conversation styles of chatbots. With robust prompt designs, it is possible that people without a background in AI can directly personalize chatbots using natural language. ### Drawbacks of LLM-driven Chatbots Although LLMs showed great potential in steering chatbots, we also encountered several drawbacks of LLM-driven chatbots. Here we cover two noteworthy drawbacks and strategies to overcome them. **Repetitiveness**. As LLMs tend to detect latent patterns in the prompts (Krishnan et al., 2020), the user messages accumulated in the prompt (See (b) in Figure 1) may unintentionally trigger patternized behaviors, making chatbots produce repetitive (although not always identical) turns. For example, many of the messages generated by GPT-3 start with "_It sounds like you..._", _parroting_ user responses and providing direct acknowledgment. Although such behaviors made many participants rate the chatbots as "understanding", participants also criticized the awkwardness of the parroting. We suspect that such repetition was partially caused by a well-known problem of LLMs: they tend to generate repetitive messages (Krishnan et al., 2020). In worse cases, chatbots got stuck in "dead loops" and could not progress the conversation further. P20 even responded "please enough" to the chatbot's repetitive questions. **Randomness**. LLMs predict the most probable text to follow a prompt. Therefore, the generations inherently exhibit a certain level of randomness (Krishnan et al., 2020), which is hard to explain or anticipate. Such randomness might have led to erroneous responses of our chatbots. For example, sometimes the chatbots "self-talked" (Zhu et al., 2020) or exposed machine representations in their responses. What makes it worse is that, as the conversation history accumulates, erroneous responses stay in the prompt and lead to further erroneous ones. Also, the chatbots sometimes did not react consistently to the same user input. When told that the participant skipped breakfast, one chatbot under the DN condition responded, "_That's not ideal_. _Skipping breakfast can make it harder to concentrate and can cause you to overeat later in the day_," while another chatbot under the same condition replied, "_That's okay! Some people choose to skip breakfast_". It is hard to explain whether such inconsistencies happened randomly or were caused by prior user inputs. The stochastic nature of LLMs does not guarantee that they will comply with all natural language instructions in prompts. As such, compared to rule-based chatbots that ask pre-defined scripts almost 100% of the time (Zhu et al., 2020), we can see that not all specified information slots were asked by our chatbots during the study. Despite the drawbacks discussed above, LLM-based chatbots can become a valuable and scalable tool for researchers to collect data for personal informatics (Sutut et al., 2020). Reflecting on our findings, we propose strategies to mitigate the erroneous behaviors of the chatbots. GPT-3 tends to generate long responses, which may make chatbots appear more robot-like. We suggest that researchers consider intentionally lengthening the response delay. A longer gap may not only help create a more human-like chatbot (Zhu et al., 2020) but also create time for the system to run filters and algorithms to pick better messages. 
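As an illustration of such filters, the sketch below screens candidate responses for the error types observed in Section 4.5 (leaked "->" artifacts, self-talk, and repetition) and prefers the candidate least similar to the previous chatbot turn. The heuristics and helper names are hypothetical and were not part of the deployed study system; the next paragraph describes the envisioned pipeline in more detail.

```python
from difflib import SequenceMatcher

def looks_problematic(candidate: str) -> bool:
    """Flag leaked slot artifacts, self-talk style messages, and empty output."""
    if "->" in candidate:  # leaked key-value delimiter from structured prompts
        return True
    lowered = candidate.lower()
    # crude self-talk heuristic: talking about "the customer" instead of to them
    if lowered.startswith("the customer") or "the customer's" in lowered:
        return True
    return len(candidate.strip()) == 0

def pick_response(candidates, last_chatbot_turn):
    """Drop problematic or identical candidates, then pick the least repetitive one."""
    filtered = [c for c in candidates
                if not looks_problematic(c) and c.strip() != last_chatbot_turn.strip()]
    if not filtered:
        return None  # caller could re-sample or fall back to a canned message
    # prefer the candidate with the lowest surface similarity to the last turn
    return min(filtered,
               key=lambda c: SequenceMatcher(None, c, last_chatbot_turn).ratio())

candidates = [
    "That's great to hear! What time did you go to bed?",
    "Sleep quality -> 8",
    "That's great to hear that you slept well last night!",
]
print(pick_response(candidates, "That's great to hear that you had a good rest!"))
```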
Drawing on problems identified from our analysis, we envision a chatbot system that generates three responses each turn (if the budget allows). Then, a repetition filter can be used to filter out identically repetitive messages. In terms of linguistically repetitive messages, the system can pick the message with the least linguistic similarity to the chatbot's last turn. The filter could also easily remove messages that have the self-talk errors or symbols. When the conversation is too long, a parallel prompt can be made to detect if the conversation is in a dead loop or a simple ending detection algorithm can deployed to end the conversation and improve the user experience. All these filters are cost-efficient to implement and could resolve many problems. For example, around 80% of errors occurred in SP are repetitive messages, self-talk, and system-offline, all of which could potentially be resolved with simple filters. Lastly, we acknowledge that running LLMs is always accompanied with uncertainty and the resultant chatbots may not be able to fulfill defined tasks every time. Hence, we recommend researchers conducting intensive testing of LLMs-powered chatbots to identify errors, understand the slot filling performance, and customize filters accordingly. ### Ethical consideration LLMs are trained on an existing corpus that may contain biased and problematic information (Krishnan et al., 2017; Krishnan et al., 2017). Many have also suggested that cautions should be taken when using LLMs, particularly in the field of healthcare delivery (Krishnan et al., 2017). In our study, we intentionally used hints to guide participants to compose their answers when conversing as we were unsure whether inappropriate content would be generated. We did not see any biased, harmful or dangerous messages from GPT-3 in our dataset. All the chatbots appeared to give conservative suggestions. For example, one participant tried to ask diet suggestions for weight loss, but the chatbot with the job identity as a _fitness coach_ suggested that "it's always best to speak with a doctor or registered dietitian before starting any weight loss plan." However, we also found some instances where chatbots failed to detect participants "teasing and nonsensical" questions and gave advice that could potentially be dangerous to follow. For example, to a participant who said "_I want to gain 50 kg of pure fat by the end of the year: How many snicker bars should I eat to complete that goal?_", the chatbot responded with a semantically problematic message: "_If you're trying to eat 3000 calories a day and you're only burning 2000, then eating 3 snickers bars a day (each bar has around 1000 calories) could help you reach your goal._" This message not only contains the incorrect fact (_i.e._, the calories of a snicker bar) but also is irrational. This example suggests the importance of giving precautions to users that the chatbots' messages do not guarantee medical or professional soundness (Krishnan et al., 2017). ### Study Limitation Due to the limited number of participants, we did not perfectly counterbalance the order of topics. Fatigue effects may not be fully mitigated for Food and Sleep topics which always came after Work and Exercise, respectively. Similarly, participants always conversed in the Positive conversation path before the Negative one, although we believe that having consistent path orders would cause less confusion and mistakes. 
Also, while we instructed participants to follow given conversation paths, some participants might not perfectly comply with the guides, possibly affecting significance of the pairwise comparisons. The targeted information slots consisted of only time, scale, binary, and open-ended data types. Incorporating other types of questions such as multiple choices may influence the chatbots' performance. In addition, slots in each topic had different composition of data types, so any differences among topics might be influenced by both the lexicon of the topic and the composition of data types. Also, we did not control the conversation style of participants. Since user inputs also become part of the prompts, their linguistic patterns may affect GPT-3's generations and in turn the slot filling performance or the conversation style of chatbots itself. We chose GPT-3 as the underlying LLM for our chatbots as it is mainstream and publicly accessible via a commercial API. Although the model we used shows overall state-of-the-art performance in accuracy, robustness, and fairness (_c.f._, (Xu et al., 2018)), given that LLMs can be sensitive to prompt designs (Krishnan et al., 2017), we reckon that our proposed prompts may not yield similar performance on other LLMs due to the differences in the training corpora and the model architecture. For example, newer LLMs that are improved to follow instructions in a prompt (_e.g._, text-davinci-003(Xu et al., 2018)) or optimized for dialogues (_e.g._, ChatGPT (Xu et al., 2018)) may be more diligent in filling slots. Therefore, future work may consider powering chatbots on other LLMs, with our proposed prompts as a starting point. ### Future Work Future work can explore ways to improve the performance of LLM-driven chatbots. In our study, we adopted zero-shot prompts. Researchers can try augmenting our prompts with few-shot learning by providing example dialogues (Krishnan et al., 2017), which may make chatbots have more robust question-asking abilities and can handle negative paths better (Krishnan et al., 2017). Measuring chatbots requires great human efforts so more future research into the effects of these parameters on prompts is needed to provide guidance for the development of better and more robust chatbots. Researchers can also investigate multi-stage prompting (Krishnan et al., 2017; Krishnan et al., 2017) (_i.e._, designing several prompts for different questions in one dialogue session) if they intend to collect more than 5 slots of information. Such approaches will require incorporating dialog state tracking techniques (_e.g._, (Xu et al., 2018)) for automated slot filling. Lastly, we hope future research can investigate the user perceptions of LLM-driven chatbots, or even voice-based ones like smart speakers (Krishnan et al., 2017). In this study, we focused exploring the chatbots' performance and behaviors rather than the user experience. Several participants were impressed by some of the chatbots' responses but were disappointed with repetitive messages at the same time. Hence, we are interested in seeing how people will interact with an improved version of our chatbots and whether their mental models of chatbots will change along with the advancement of chatbots (Krishnan et al., 2017). In addition, comparing user perception of LLM-driven chatbots with other mainstream chatbot frameworks (_c.f._, (Xu et al., 2018)) would provide holistic design implications for self-reporting chatbots with balanced data collection performance and user perception. 
## 6. Conclusion In this study, we explored how we can use GPT-3 to power chatbots that can reliably ask people health-related questions through natural conversations. In an empirical user study, we found that, simply through prompting, LLM-based chatbots could effectively deliver questions and collect the desired self-reports. In particular, we evaluated how two prompt design factors--format and personality modifier--impacted the resulting chatbots' slot filling ability and conversation styles. While LLMs can be a promising tool for building chatbots, we also discuss the problematic messages that occurred in our dataset. Reflecting on our results, we provide insights into prompt design for chatbots and give suggestions on how to improve future LLM-based chatbots. In closing, we hope this work can inform and inspire other researchers in the fields of HCI and Personal Informatics to effectively leverage LLMs to power enjoyable chatbots for robust data collection. ###### Acknowledgements. We thank our study participants for their time and efforts. We are also grateful to Eunkyung Jo and Vassilis Kostakos, who provided feedback on this paper.
2303.13858
Unveiling the gravitationally unstable disc of a massive star-forming galaxy using NOEMA and MUSE
Using new high-resolution data of CO (2-1), H-alpha and H-beta obtained with the Northern Extended Millimeter Array (NOEMA) and the Multi-Unit Spectroscopic Explorer (MUSE) at the Very Large Telescope, we have performed a Toomre-Q disc stability analysis and studied star formation, gas depletion times and other environmental parameters on sub-kpc scales within the z~0 galaxy SDSS J125013.84+073444.5 (LARS 8). The galaxy hosts a massive, clumpy disc and is a proto-typical analogue of main-sequence galaxies at z~1-2. We show that the massive (molecular) clumps in LARS 8 are the result of an extremely gravitationally unstable gas disc, with large scale instabilities found across the whole extent of the rotating disc, with only the innermost 500 pc being stabilized by its bulgelike structure. The radial profiles further reveal that - contrary to typical disc galaxies - the molecular gas depletion time decreases from more than 1 Gyr in the center to less than ~100 Myr in the outskirts of the disc, supporting the findings of a Toomre-unstable disc. We further identified and analysed 12 individual massive molecular clumps. They are virialized and follow the mass-size relation, indicating that on local (cloud/clump) scales the stars form with efficiencies comparable to those in Milky Way clouds. The observed high star formation rate must thus be the result of triggering of cloud/clump formation over large scales due to disc instability. Our study provides evidence that "in-situ" massive clump formation (as also observed at high redshifts) is very efficiently induced by large-scale instabilities.
Johannes Puschnig, Matthew Hayes, Oscar Agertz, Eric Emsellem, John M. Cannon, Alexandra Le Reste, Jens Melinder, Göran Östlin, Christian Herenz, Veronica Menacho
2023-03-24T08:39:31Z
http://arxiv.org/abs/2303.13858v1
# Unveiling the gravitationally unstable disc of a massive star-forming galaxy using NOEMA and MUSE+ ###### Abstract Using new high-resolution data of CO (2-1), H\(\alpha\) and H\(\beta\) obtained with the Northern Extended Millimeter Array (NOEMA) and the Multi-Unit Spectroscopic Explorer (MUSE) at the Very Large Telescope, we have performed a Toomre \(Q\) disc stability analysis and studied star formation, gas depletion times and other environmental parameters on sub-kpc scales within the \(z\sim 0\) galaxy SDSS J125013.84+07344.5 (LARS 8). The galaxy hosts a massive, clumpy disc and is a proto-typical analogue of main-sequence galaxies at \(z\sim 1-2\). We show that the massive (molecular) clumps in LARS 8 are the result of an extremely gravitationally unstable gas disc, with large scale instabilities found across the whole extent of the rotating disc, with only the innermost 500 pc being stabilized by its bulgelike structure. The radial profiles further reveal that - contrary to typical disc galaxies - the molecular gas depletion time decreases from more than 1 Gyr in the center to less than \(\sim\)100 Myr in the outskirts of the disc, supporting the findings of a Toomre-unstable disc. We further identified and analysed 12 individual massive molecular clumps. They are virialized and follow the mass-size relation, indicating that on local (cloud/clump) scales the stars form with efficiencies comparable to those in Milky Way clouds. The observed high star formation rate must thus be the result of triggering of cloud/clump formation over large scales due to disc instability. Our study provides evidence that "in-situ" massive clump formation (as also observed at high redshifts) is very efficiently induced by large-scale instabilities. keywords: galaxies: starburst - galaxies: star formation - galaxies: ISM - galaxies: kinematics and dynamics - techniques: interferometric - techniques: imaging spectroscopy ## 1 Introduction Several studies based on deep field observations have revealed that at redshifts \(-3\) galaxies with total (gas+stars) masses similar to the Milky Way (\(\sim 10^{11}\) M\({}_{\odot}\)) are already in place (Dessanges-Zavadsky et al., 2017; Elbaz et al., 2018; Tacconi et al., 2018; Cassata et al., 2020). Since the Universe was then only \(\sim\)2 Gyr old, these massive objects must have formed within a very short time, thus requiring very high star formation rates (SFRs) compared to the \(z\sim 0\) Universe. In recent years, observations of galaxies at redshifts between 0 and 4 have shown that the level of star formation is mainly dictated by stellar mass and regulated by secular processes (Popesso et al., 2019). This is manifested in a tight relation between stellar mass and SFR, the so called _main sequence of star forming galaxies_(Brinchmann et al., 2004; Noeske et al., 2007; Daddi et al., 2007; Elbaz et al., 2007; Peng et al., 2010; Wuyts et al., 2011; Whitaker et al., 2012, 2014; Tomczak et al., 2016). While the slope of the relation does not vary with redshift, its intercept shifts towards higher SFRs with increasing lookback time, see for example Wuyts et al. (2011); Rodighiero et al. (2011). As shown by several studies (Tacconi et al., 2013; Genzel et al., 2015; Scoville et al., 2017; Wiklind et al., 2019), the evolution of the main sequence is driven by an increase of the (molecular) gas fraction. As a result the gas depletion time, defined as the inverse of the SFR per unit molecular gas mass, remains roughly constant even out to \(z\sim\)4. 
The high SFRs observed on the main sequence at high-\(z\) are thus mainly driven by increasing gas fractions. Although the strong evolution of the SFR with time (redshift) is relatively well constrained (Madau & Dickinson, 2014), the underlying physical mechanisms that drive star formation in gas-rich discs is still a matter of debate. Beside their large gas fractions, high-\(z\) main-sequence galaxies are observed to have higher gas velocity dispersions (Forster Schreiber et al., 2009; Lehnert et al., 2009; Swinbank et al., 2012; Wisnioski et al., 2015) compared to local spirals. Additionally, their morphologies show extremely massive _clumps_, exhibiting considerable fractions of the total mass. Initially, this was interpreted in the context of "bottom up" structure formation as an ongoing process of merging. However, with the advent of near-infrared integral field spectroscopy, in some of the clumpy galaxies, disc structures were found that are characterized by significant rotation (Genzel et al., 2006, 2008), a sign for associated structures rather than mergers. However, some fraction of them may still be ongoing mergers (Weiner et al., 2006; Forster Schreiber et al., 2009; Puech, 2010; Rodrigues et al., 2017). The observed discs host giant clumps with masses of \(M_{\rm cl}\lesssim 10^{9}\) M\({}_{\odot}\). It has been proposed that such clumps result from the fragmentation of massive gas discs driven by gravitational instability (Agertz et al., 2009; Dekel et al., 2009; Bournaud et al., 2012; Romeo & Agertz, 2014). Thus, the mode of star formation in gas-rich systems seems fundamentally different compared to the star formation within spiral arms as found in most galaxies at \(z\sim 0\). Much numerical work has been undertaken in the last few years to study gravitational fragmentation scenarios. But in early simulations, inefficient thermal feedback of supernovae resulted in overcooling, which then enhanced disc instability and star formation and led to an overproduction of giant clumps (Ceverino & Klypin, 2009; Agertz et al., 2009). More recently, various works within cosmological simulations and simulations of isolated disc galaxies were again focusing on disc fragmentation at high-\(z\), but including novel feedback recipes which systematically led to less fragmentation even in massive gas-rich discs and generally lower clump masses in the range \(10^{7}\)-\(10^{8}\) M\({}_{\odot}\)(Tamburello et al., 2015; Behrendt et al., 2015; Moody et al., 2014; Mandelker et al., 2017; Oklopcic et al., 2017). It was further proposed that some of the most massive observed star forming clumps might not be the result of "in-situ" disc fragmentation, but rather they could be accreted cores of massive satellite galaxies (Mandelker et al., 2017; Oklopcic et al., 2017). Thus, to date the question of whether massive clumps are a result of "in-situ" disc fragmentation or the product of accreted cores of massive satellite galaxies remains unanswered and needs iterations on both ends, theory and observations. The observational difficulty is that at high-\(z\) the spatial resolution is often too coarse to constrain relevant physical parameters (turbulence, density, timescales related to star formation). Only gravitational lensing may help to reveal details about the clumpy discs at \(z\sim 1-3\)(Dessauges-Zavadsky et al., 2019). Despite the difficulties, recent observations support galaxy-wide disc instabilities as a cause of the clumpy nature, e.g. 
Dessauges-Zavadsky & Adamo (2018) showed that the clump mass function at \(z\sim\)1-3 follows a power-law consistent with turbulence being the driving mechanism. Moreover, the DYNAMO survey (Fisher et al., 2014), targeting extremely rare _local_ clumpy gas-rich disc galaxies as a proxy for high-\(z\) galaxies, revealed clump properties that favour clump formation induced by galaxy-wide disc instabilities (Fisher et al., 2017; White et al., 2017; Fisher et al., 2019). In this paper, we present new highly-resolved NOEMA CO (2-1) and MUSE H\(\alpha\) observations of LARS 8, a clumpy \(z\sim 0\) galaxy drawn from the _Lyman Alpha Reference Sample_(Ostlin et al., 2014; Hayes et al., 2014). Given the basic properties of the galaxy (see Table 1) with a stellar mass of \(\sim\)\(10^{11}\) M\({}_{\odot}\) and a SFR of \(\sim\)30 M\({}_{\odot}\) yr\({}^{-1}\), LARS 8 resembles main-sequence galaxies at high-\(z\). The LARS galaxy is also known to be rotationally supported (Hereinz et al., 2016; Puschnig et al., 2020), just like face-on disc galaxies typically observed at high-\(z\) (compare Figure 1). Hereinz et al. (2016) and Micheva et al. (2018) further revealed the existence of shells at large galactocentric radii, caused by a merger event that LARS 8 must have undergone recently. Using deep high-resolution 21 cm observations, Le Reste et al. (2022) found a large neutral gas reservoir westwards of the optical galaxy disk. The galaxy is known to have a relatively high gas fraction of 27 percent, a gas depletion time of \(\sim\)1.2 Gyr (Puschnig et al., 2020) and a clumpy morphology. These properties make LARS 8 an ideal laboratory to study clump formation in a gas-rich disc galaxy. The paper is organised as follows. In Section 2 we inform about the spectroscopic observations our results are based on. The methods and tools we use to convert the observables into physical parameters (e.g. star formation rates, mass surface densities, dynamical parameters) are outlined in Section 3. The results are presented in Section 4 and subsequently discussed and compared to related works in 5. Section 6 concludes the paper with a summary. Throughout the paper, we adopt a cosmology with \(H_{0}\)=70, \(\Omega_{\rm M}\)=0.3 and \(\Omega_{\rm vac}\)=0.7. ## 2 Observations and data reduction ### NOEMA CO (2-1) cube We observed LARS 8 in a single pointing under programs W16BS and E16AG with the IRAM Northern Extended Millimeter Array (NOEMA) using eight antennas in configurations A and D, providing maximum baselines of \(\sim\)760 m and \(\sim\)180 m respectively. The target line, CO (2-1), was observed with the WideX correlator (bandwidth \(\sim\)3.6 GHz) using a tuning frequency of 222.044 GHz, corresponding to the systemic velocity derived from H I observations (Pardy et al., 2014). We further performed _on-the-fly_ mapping of LARS 8 with the IRAM 30m telescope under programs 064-15 and 178-15, allowing us to include short spacing visibility data. Extended array observations were executed on December 15, 2016 for a total on-source time of 5.2 hours under good weather conditions with a precipitable water vapour (PWV) of \(\sim\)1.8 mm. Compact array observations were executed on three days during May 2017 for a total on-source time of 6.3 hours under average weather conditions with a PWV of \(\sim\)2-3 mm. The absolute flux scale of the configuration A data was calibrated on LKHA101 using a model flux of 0.54 Jy. 
\begin{table} \begin{tabular}{c c c c c c} \hline \hline \(D_{\rm L}\) & log \(M_{\star}\) & \(Z^{\dagger}\) & SFR & \(D_{25}^{\ddagger}\) & \(f_{\rm gas}\) \\ [Mpc] & [M\({}_{\odot}\)] & & [M\({}_{\odot}\) yr\({}^{-1}\)] & [\({}^{\prime\prime}\)] & [\%] \\ 167.5\(\pm\)12 & 10.97\(\pm\)0.10 & 8.51 & 30\(\pm\)8 & 30.8 & 27\(\pm\)16 \\ \hline \end{tabular} \end{table} Table 1: Properties of LARS 8 (see Tables 1, 9 and 10 of Puschnig et al., 2020). \(\dagger\) The metallicity \(Z\) was derived by Ostlin et al. (2014) using the \(R_{23}\)-\(P\) relation and is given in units of 12 + log(O/H). \(\ddagger\) The blue-band diameter of the 25 mag arcsec\({}^{-2}\) isophote was derived from SDSS g-band observations using an SQL query on SDSS DR7. Since the SDSS g-band isophote is typically \(\sim\)1.3 times larger than those measured in the Johnson B band, the g-band diameter was divided by that factor.

The sources 1222+216 and 3C273 were used as phase and amplitude calibrators. Average polarization mode was chosen for the amplitude calibration, because the signal was found to be polarized. 3C84 was used as bandpass calibrator. The absolute flux scale of the configuration D data obtained on May 8, 2017 was calibrated on MWC349 with a model flux of 1.87 Jy. For the observations executed on May 2 and May 3, MWC349 data was not available and 3C273 was used instead, assuming a model flux of 7.65 Jy, as measured on May 8. 1236+077 and 3C273 were used for phase and amplitude calibration, whereas 3C273 was also used as bandpass calibrator. All observations were calibrated using the IRAM reduction pipeline GILDAS/CLIC1. Data flagging was performed manually, taking into account tracking errors, pointing and focus offsets, quality assessment through outlier rejection in time versus amplitude and phase plots, and large phase discrepancies between the two polarizations. We remark that for the configuration A observations, tracking errors of more than 4" were reported for one of the antennas. All baselines including this antenna were thus rejected.

Footnote 1: [http://www.iram.fr/IRAMFR/GILDAS/](http://www.iram.fr/IRAMFR/GILDAS/)

Merging of the calibrated visibilities and short spacing correction were performed in the GILDAS/mapping environment, which was also used for imaging. Robust weighting of 0.5 was found to lead to a good compromise between sidelobe suppression and spatial resolution, both of which are important for our science case. Cleaning was done using the Hogbom algorithm (Hogbom, 1974) and a central circular support of 6" diameter. The final clean cube has an r.m.s. noise of 0.8 mJy/beam at a velocity resolution of 10 km/s. The synthetic beam size is 0.61"\(\times\)0.37" with a position angle of 36\({}^{\circ}\). As LARS 8 is substantially extended compared to NOEMA's \(\sim\)23" field of view at the observed frequency, a primary beam correction was finally performed using the PRIMARY task within the GILDAS/mapping environment.

### MUSE observations and data reduction

We observed LARS 8 with the Multi-Unit Spectroscopic Explorer (MUSE; Bacon et al., 2010) integral field spectrograph, mounted at Unit Telescope 4 of the Very Large Telescope (VLT). Spectra were obtained on the night of 18 May 2018 under conditions of new moon, airmass lower than 1.2, and with a V-band seeing of 0.8".
We obtained four observations of the main target, each rotated by 90 degrees compared to the previous one to minimize fixed pattern noise from the image slicers, using integration times of 650 seconds. Because LARS 8 occupies a major fraction of the MUSE field-of-view, we also obtained a separate sky frame from an adjacent empty pointing using an integration time of 120 seconds. Data were reduced using Version 2.6 of the ESO pipeline, using standard methods and paying special attention to the removal of low surface brightness emission in strong nebular lines.

Figure 1: High-\(z\) target from the PHIBSS survey (Tacconi et al., 2013) versus LARS 8 (Puschnig et al., 2020). The optical morphologies (_top panel_) as well as the CO line emission (_bottom panel_) are remarkably similar.

## 3 Methods

Here we briefly describe the routines and tools that we used to obtain physical parameters from the observations. In subsections 3.1-3.3 we explain how the observed data cubes are prepared for further scientific analysis, i.e. the convolution to a common resolution and the derivation of moment maps. Subsections 3.4-3.6 summarize the assumptions and constraints used to convert the observables into physical quantities such as star formation rates, stellar and gas surface densities. In 3.7 and 3.8 the routines for the characterisation of the dynamics of the galaxy's gaseous and stellar components are presented. These dynamical quantities finally allow us to constrain the gravitational instability via the Toomre parameter (see Section 3.9). We conclude in 3.11 and 3.12 with a brief description of how individual molecular gas clouds are identified in the NOEMA cube and how we use previously derived physical quantities to estimate the dynamic equilibrium pressure.

### Convolution of the NOEMA data cube to a common resolution

Given the slightly lower spatial resolution of the optical data cube compared to our radio data, we convolve the latter to match the resolution of the MUSE observations. To do so, we first deconvolve the elliptical NOEMA beam from the circularized target beam (based on the MUSE cube). The resulting convolution kernel is then applied onto the 3D NOEMA cube (plane-by-plane) using the scipy.signal.convolve function (Virtanen et al., 2020). The circularized synthetic beam size of the matched-resolution NOEMA CO (2-1) data cube is 0.78". Note that throughout the paper we make use of the native resolution CO (2-1) data whenever possible (clump identification, Toomre disc stability analysis). Only plots that include both star formation rates (from H\(\alpha\)) and properties derived from the CO observations are based on the matched-resolution data.

### MUSE line extraction of H\(\alpha\), H\(\beta\) and continuum subtraction

From the reduced MUSE cube, we first extract a fixed spectral range around the observed H\(\alpha\) and H\(\beta\) lines, using \(z\)=0.0382531 as the redshift and an extraction window of \(\pm\)420 km/s, centered on the systemic line-center (corresponding to \(z\)). We ensured that the [N II] lines are outside the extracted line window of H\(\alpha\). In order to define the continuum level at each line, individual spectral windows were defined blueward and redward of each emission line, after manual inspection of the spectral cube. For H\(\alpha\), suitable windows were found between -2500 and -1500 km/s and from 3000 to 4000 km/s. H\(\beta\) continuum levels were evaluated between -4000 and -2000 km/s as well as within the range of 2000 and 4000 km/s (a minimal sketch of how such velocity windows translate into wavelength masks is given below).
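To make the velocity-window bookkeeping above concrete, the following is a minimal sketch (not the actual extraction script) of how the quoted velocity intervals can be converted into observed-frame wavelength masks around H\(\alpha\); the array names and the use of plain numpy are illustrative assumptions.

```python
import numpy as np

# Illustrative sketch: convert the velocity windows quoted in the text into
# observed-frame wavelength masks around Halpha (not the actual reduction code).
c = 299792.458           # speed of light in km/s
z = 0.0382531            # systemic redshift of LARS 8
lam_rest_ha = 6562.8     # rest-frame Halpha wavelength in Angstrom
lam_sys = lam_rest_ha * (1.0 + z)   # observed systemic line center

def window_to_wavelength(v_lo, v_hi, lam0):
    """Convert a velocity window [km/s] around lam0 into a wavelength range [Angstrom]."""
    return lam0 * (1.0 + v_lo / c), lam0 * (1.0 + v_hi / c)

# +/-420 km/s extraction window centered on the systemic Halpha wavelength
line_lo, line_hi = window_to_wavelength(-420.0, 420.0, lam_sys)

# continuum windows blueward and redward of Halpha (values from the text)
blue_lo, blue_hi = window_to_wavelength(-2500.0, -1500.0, lam_sys)
red_lo, red_hi = window_to_wavelength(3000.0, 4000.0, lam_sys)

# 'wave' stands in for the wavelength axis of the MUSE cube (placeholder)
wave = np.linspace(6700.0, 6950.0, 500)
in_line = (wave >= line_lo) & (wave <= line_hi)
in_cont = ((wave >= blue_lo) & (wave <= blue_hi)) | ((wave >= red_lo) & (wave <= red_hi))
```

The in_cont mask would then feed the linear continuum fit described next, and in_line the moment computations.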
The continuum correction for each line was then performed via subtraction of a linear fit, obtained from regression (using the python lmfit package) of the flux within the given velocity intervals. ### Moment maps of CO (2-1) and optical emission lines We generate moment zero maps of CO (2-1), H\(\alpha\) and H\(\beta\) via summation of the flux in masked channels, using the approach of "dilated masking". In the NOEMA cube, peak channels were identified that have a more than 4-sigma strong signal in at least three adjacent channels. The mask was then expanded in velocity space as long as the flux in two adjacent channels was above a 2-sigma limit. Additionally, we only allow connected spatial regions that cover at least the size of the synthetic beam of our observations. Moment maps of H\(\alpha\) and H\(\beta\) were created in a very similar manner, i.e. we identified channels with 4-sigma peaks and subsequently grow the mask down to a level of 2-sigma. However, given the lower spectral resolution of the MUSE data cubes, we allow to mask even single channels in velocity space rather than a number of adjacent ones. First and second moment maps were created using the same masks, with moment one being the intensity-weighted mean velocity found under the masked channels and moment two being the intensity-weighted r.m.s. velocity scatter. Moment maps are shown in Figures 2. The uncertainties of our moment maps are calculated via Gaussian error propagation using the r.m.s. outside the line masks as an estimate for the uncertainty of each masked channel. ### Star formation rates from MUSE H\(\alpha\) In order to obtain the intrinsic, extinction-corrected H\(\alpha\) flux, we calculate the dust attenuation from the Balmer decrement using the Cardelli et al. (1989) attenuation law and assume case B recombination at \(10^{4}\) K and an intrinsic, theoretical H\(\alpha\)/H\(\beta\) ratio of 2.86. The average extinction E(B-V) within an aperture of 5 arcsec radius enclosing the center of the galaxy - and thus covering the main part of the NOEMA field-of-view - is 0.7 mag with a maximum value of 1.2 mag in the central pixel and values as low as 0.2 mag in the outer region. We first convert the extinction-corrected H\(\alpha\) flux from erg/s/cm\({}^{2}\) into the corresponding luminosity (\(L_{\rm H\alpha}\)) in erg/s using a luminosity-distance of 167.5 Mpc. The star formation rates (SFRs) in units of M\({}_{\odot}\) yr\({}^{-1}\) per pixel are then calculated using the calibration of Calzetti et al. (2012): SFR=5.5 10\({}^{-42}\)\(L_{\rm H\alpha}\). These SFRs are then converted into surface densities in units of M\({}_{\odot}\) yr\({}^{-1}\) kpc\({}^{-2}\) (\(\Sigma_{\rm SFR}\)) taking into account the cosine correction factor (cos \(i\)) for the galaxy inclination \(i\) of 50\({}^{\circ}\) that we found from the rotation curve (see Section 3.7). ### Stellar surface density A stellar mass map of the galaxy is constructed by performing a pixel spectral energy distribution fit using the HST FUV and optical broad band data from the LARS project. The fitting code "the Ly\(\alpha\) extraction Software" (Ostlin et al., 2014, Melinder et al. in preparation) uses two single stellar populations with four free parameters: stellar mass for the two components, stellar age, and stellar extinction (only one of the populations have a varying age and extinction, the other one is kept at an age of 10 Gyrs and an E(B-V)\({}_{\rm s}\) of 0). 
The fit is performed for each pixel (or spatial bin) to produce maps of stellar continuum fluxes, mass, age, and extinction. The uncertainties on the stellar masses are estimated within the code using Monte Carlo simulations, in which random noise (corresponding to the r.m.s. in each pixel after drizzling) is added to the originally measured value. The final uncertainty is then the standard deviation obtained from the measurements in all Monte Carlo simulations. For details on the code and the data used for LARS 8 we refer the reader to Ostlin et al. (2014). To find the stellar surface density radial profile we co-add the stellar mass maps of the two components and measure the mean mass surface density in elliptical annuli that exactly match those used for the NOEMA and MUSE data. Finally, the derived mean surface densities (\(\Sigma_{\star}\)) were corrected for inclination using the same quantities as for \(\Sigma_{\rm SFR}\). The scale length of the stellar disc, \(l_{\star}\), was derived from the stellar mass map by fitting an exponential function to the inclination-corrected mass profile.

### Molecular gas surface density, depletion time and gas fraction

We convert our measured CO (2-1) fluxes \(S_{\rm CO}\) (in units of Jy km s\({}^{-1}\) beam\({}^{-1}\)) to CO luminosities using the definition of \(L^{\prime}_{\rm CO}\) by Solomon & Vanden Bout (2005):

\[L^{\prime}_{\rm CO(2-1)}=3.25\times 10^{7}\ S_{\rm CO(2-1)}\ \nu_{\rm obs}^{-2}\ D_{\rm L}^{2}\ (1+z)^{-3} \tag{1}\]

\(L^{\prime}_{\rm CO(2-1)}\) is then given in K km s\({}^{-1}\) pc\({}^{2}\), \(z\) is the redshift, \(D_{\rm L}\) the luminosity distance in Mpc and \(\nu_{\rm obs}\) the observed frequency in GHz. For the conversion from \(L^{\prime}_{\rm CO(2-1)}\) to molecular gas masses we first need to down-convert to the luminosity of the J=1-0 line (\(L^{\prime}_{\rm CO(1-0)}\)), for which we assume a line ratio CO(2-1)/(1-0) of 0.7, as typically observed in several types of galaxies (Saintonge et al., 2017; den Brok et al., 2021). Subsequent multiplication with the conversion factor \(\alpha_{\rm CO}\) finally leads to the molecular gas masses (\(M_{\rm H_{2}}\)). Here we use \(\alpha_{\rm CO}\) = 4.5, which we derived previously using a metallicity-dependent approach (Puschnig et al., 2020). This value is similar to \(\alpha_{\rm CO}\) in the Milky Way (Bolatto et al., 2013). We stress that our choice of \(\alpha_{\rm CO}\) is based on a galaxy-wide average. A lower conversion factor might be applicable in the center of the galaxy due to lower CO optical depths driven by a large velocity dispersion. However, to date, no data (e.g. \({}^{13}\)CO) is available to assess any radial trend of the conversion factor in LARS 8. Again, the final gas mass surface density map (\(\Sigma_{\rm H_{2}}\)) was corrected for the inclination of the galaxy. The molecular gas depletion time \(\tau_{\rm depl}\) and the gas fraction \(f_{\rm gas}\) were calculated in the following way:

\[\tau_{\rm depl}=\frac{M_{\rm H_{2}}}{\rm SFR} \tag{2}\]

\[f_{\rm gas}=\frac{M_{\rm H_{2}}}{M_{\rm H_{2}}+M_{\star}} \tag{3}\]

A short worked example of this conversion chain is sketched below.
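As an illustration of the conversion chain, the sketch below applies Equation 1, the assumed CO(2-1)/(1-0) line ratio of 0.7 and \(\alpha_{\rm CO}=4.5\) to a single hypothetical line of sight; the input flux, SFR and stellar mass in the example are placeholders, not measurements from this paper.

```python
# Sketch of the CO(2-1) flux -> molecular gas mass conversion (Eqs. 1-3).
# All per-pixel input values below are placeholders, not measurements.
S_co21 = 1.0           # integrated CO(2-1) flux in Jy km/s (hypothetical pixel)
nu_obs = 222.044       # observed CO(2-1) frequency in GHz
D_L = 167.5            # luminosity distance in Mpc
z = 0.0382531          # redshift
r21 = 0.7              # assumed CO(2-1)/CO(1-0) line ratio
alpha_co = 4.5         # Msun (K km/s pc^2)^-1, metallicity-based value used here

# Eq. 1: CO(2-1) line luminosity in K km/s pc^2
L_co21 = 3.25e7 * S_co21 * nu_obs**-2 * D_L**2 * (1.0 + z)**-3

# down-convert to the 1-0 transition and apply the conversion factor
L_co10 = L_co21 / r21
M_H2 = alpha_co * L_co10                # molecular gas mass in Msun

# Eqs. 2 and 3 for hypothetical per-pixel SFR and stellar mass
SFR_pix = 0.05                          # Msun/yr (placeholder)
M_star_pix = 5.0e8                      # Msun (placeholder)
tau_depl = M_H2 / SFR_pix               # yr
f_gas = M_H2 / (M_H2 + M_star_pix)
print(f"M_H2 = {M_H2:.2e} Msun, tau_depl = {tau_depl/1e9:.2f} Gyr, f_gas = {f_gas:.2f}")
```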
### Molecular gas rotation curve analysis

We use \({}^{3D}\)BAROLO (Di Teodoro & Fraternali, 2015) to derive the galaxy rotation curve from the NOEMA CO (2-1) data cube. The software iteratively fits 3D tilted-ring models to the cube and solves, in each ring, for inclination, position angle (PA), rotation velocity and velocity dispersion. We ran the software several times to experiment with input parameters such as pixel coordinates of the kinematic center, fixing the systemic velocity, inclination and/or PA. Despite the fact that the algorithm robustly constrained inclination and PA, we ultimately decided to fix the two parameters (for each ring) to 50 and 160 degrees respectively, while leaving the position of the kinematic center as a free parameter. The free parameters derived for each 0.61" wide ring (i.e. the major axis of the beam) are thus the rotation velocity, the velocity dispersion (\(\sigma_{\rm g}\)) and the coordinates of the kinematic center. A summary of the results is shown in Table 2. The position-velocity diagram and the smoothed and interpolated rotation curve obtained using these constraints are shown in Figures 3 and 4. A comparison between observed and modeled quantities is found in Appendix A1.

Figure 2: Maps derived from NOEMA CO (_left column_), MUSE H\(\alpha\) (_middle column_) and VLA HI 21cm (_right column_; Le Reste et al. 2022) data cubes. The top row moment-0 maps are given in units of Jy km s\({}^{-1}\) beam\({}^{-1}\) on a logarithmic scale for CO, \(10^{-20}\) erg s\({}^{-1}\) cm\({}^{-2}\) on a logarithmic scale for H\(\alpha\) and \(10^{21}\) cm\({}^{-2}\) on a linear scale for HI. The moment-1 and moment-2 maps in the middle and bottom rows are given in km s\({}^{-1}\).

### Stellar velocity dispersion

We use the Penalized Pixel-Fitting method (pPXF) developed by Cappellari & Emsellem (2004); Cappellari (2017) to measure the stellar velocity dispersion (\(\sigma_{\rm*}\)). We used a python wrapper developed in the course of the PHANGS-MUSE survey by F. Belfiore and I. Pessa (Belfiore et al., 2022; Emsellem et al., 2022) and based on the gist package (Galaxy IFU Spectroscopy Tool; Bittner et al., 2019). As required by pPXF, we first resample the MUSE data to a logarithmic wavelength axis using a channel size of 50 km s\({}^{-1}\). Following Emsellem et al. (2022), this channel size is sufficient to Nyquist sample the line spread function of MUSE for wavelengths at approximately 7000 Å, while over-sampling it at the blue end. In order to avoid strong sky residuals the wavelength range for fitting was limited to 4850-7000 Å. In the following we briefly describe the fitting routine as implemented in pPXF. For the stellar continuum fitting the E-MILES simple stellar population models of Vazdekis et al. (2016) are used in combination with a Chabrier (2003) initial mass function, BaSTI isochrones (Pietrinferni et al., 2004), eight ages (0.15-14 Gyr), and four metallicities ([Z/H] = [-1.5, -0.35, 0.06, 0.4]). Thus, a total number of 32 templates is used. Spectral ranges of strong ionised gas emission lines are masked using a width of \(\pm\)400 km s\({}^{-1}\). Since E-MILES offers a higher resolution than our MUSE data, the templates are convolved to the spectral resolution of our data, using an appropriate wavelength-dependent kernel. We fitted four moments of the line-of-sight velocity distribution: velocity, velocity dispersion, h3 and h4. To derive the stellar kinematics we make use of additive Legendre polynomials (12th order, in the spectral direction), and no multiplicative polynomials. The uncertainties on the kinematic parameters are formal errors as given by pPXF. In the literature, the stellar velocity dispersion in galaxies is often estimated from the stellar surface density following the prescription of Leroy et al.
(2008):

\[\sigma_{\rm*}=1.67\ \sqrt{\frac{2\ \pi\ G\ l_{*}}{7.3}}\ \Sigma_{\rm*}^{0.5}, \tag{4}\]

with \(\Sigma_{\rm*}\) being the observed stellar surface density in SI units (kg m\({}^{-2}\)), and \(l_{*}\) being the stellar scale length (=630 pc, measured via fitting of an exponential profile to the data) in m. The underlying assumptions of the equation are the following: the exponential stellar scale height \(h_{*}\) of the galaxy does not vary with radius, and \(h_{*}\) is related to the stellar scale length \(l_{*}\) via \(l_{*}/h_{*}\)=7.3\(\pm\)2.2, i.e. the flattening ratio measured by Kregel et al. (2002). It is further assumed that the disc is isothermal in the z-direction; hydrostatic equilibrium then allows one to derive \(\sigma_{*}\) from the observed stellar surface density \(\Sigma_{*}\) and the estimated stellar scale height. Finally, a fixed ratio of 0.6 between the radial and vertical component of the velocity dispersion is assumed, which is reasonable for most late-type galaxies (Shapiro et al., 2003). We refer the reader to the appendix of Leroy et al. (2008) for more details.

\begin{table} \begin{tabular}{c c c c c} \hline \hline Radius & \(v_{\rm rot}\) & \(\sigma_{\rm g}\) & i & PA \\ [\({}^{\prime\prime}\)] & [km/s] & [km/s] & [\({}^{\circ}\)] & [\({}^{\circ}\)] \\ 0.305 & 154\(\pm\)17 & 11\(\pm\)4 & 50 & 160 \\ 0.915 & 154\(\pm\)11 & 28\(\pm\)7 & 50 & 160 \\ 1.525 & 142\(\pm\)7 & 20\(\pm\)4 & 50 & 160 \\ 2.135 & 151\(\pm\)7 & 17\(\pm\)4 & 50 & 160 \\ 2.745 & 176\(\pm\)12 & 17\(\pm\)6 & 50 & 160 \\ 3.355 & 215\(\pm\)12 & 11\(\pm\)7 & 50 & 160 \\ 3.965 & 223\(\pm\)10 & 9\(\pm\)5 & 50 & 160 \\ 4.575 & 210\(\pm\)9 & 6\(\pm\)5 & 50 & 160 \\ 5.185 & 240\(\pm\)15 & 8\(\pm\)4 & 50 & 160 \\ 5.795 & 272\(\pm\)30 & 7\(\pm\)4 & 50 & 160 \\ \hline \end{tabular} \end{table} Table 2: Kinematic properties for elliptical rings.

Figure 4: Smoothed and interpolated rotation curve (before inclination correction) derived for LARS 8 from the NOEMA CO (2–1) cube.

Figure 3: NOEMA CO (2–1) position-velocity diagram along the major axis (_top panel_) and minor axis (_bottom panel_) of LARS 8. The yellow-brown points indicate the radial bins of the derived rotation curve.

A comparison of radial averages of the velocity dispersion derived from MUSE and estimated as explained above is shown in Figure 5. We find for LARS 8 that in particular in the central region, where the velocity dispersion is highest, the estimates overshoot the true (MUSE-based) values by up to approximately 40 percent. A short numerical illustration of Equation 4 is given below.
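For concreteness, here is a minimal numerical sketch of Equation 4; the input stellar surface density is a placeholder value, and the point being illustrated is only the unit handling (the formula expects SI units).

```python
import numpy as np

# Sketch of the Leroy et al. (2008)-style estimate of sigma_* (Eq. 4).
# The input stellar surface density is a placeholder, not a measurement.
G = 6.674e-11                  # gravitational constant in m^3 kg^-1 s^-2
MSUN = 1.989e30                # kg
PC = 3.086e16                  # m

l_star = 630.0 * PC            # stellar scale length in m (630 pc, from the text)
Sigma_star = 500.0             # placeholder stellar surface density in Msun/pc^2
Sigma_star_SI = Sigma_star * MSUN / PC**2     # convert to kg/m^2

# Eq. 4: sigma_* = 1.67 * sqrt(2 pi G l_* / 7.3) * Sigma_*^0.5   (SI units)
sigma_star = 1.67 * np.sqrt(2.0 * np.pi * G * l_star / 7.3) * np.sqrt(Sigma_star_SI)
print(f"sigma_* ~ {sigma_star / 1e3:.0f} km/s")   # of order tens of km/s for this input
```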
### Toomre \(Q\) disc stability

As gravitational instability is believed to play a key role in driving gas turbulence (Agertz et al., 2009a; Krumholz and Burkhart, 2016), we consider a theoretical framework to evaluate this instability. One of the most common ways of quantifying it is Toomre's \(Q\) parameter (Toomre, 1964), which governs the stability of a smaller patch inside a disc system. The Toomre parameter for an axisymmetric, fluid disc with differential rotation can be determined by analysing the response of the disc to a small perturbation. The growth of this perturbation is driven by gravity, expressed as a surface density wave function. By evaluating the dispersion relation, first shown by Safronov (1960), Toomre (1964) found the condition:

\[Q_{g}\ =\ \frac{\kappa\ \sigma_{g}}{\pi\ G\ \Sigma_{g}}>1 \tag{5}\]

for the disc being locally stable against gravitational collapse; for \(Q_{g}<1\), the disc is locally unstable. In this equation, \(\kappa\) is the epicyclic frequency, \(\Sigma_{g}\) is the gas surface density, \(\sigma_{g}\) is the gas velocity dispersion (from the rotation curve analysis) and \(G\) is the gravitational constant. In this work we compute the epicyclic frequency as \(\kappa=1.41\,\frac{v(r_{\rm gal})}{r_{\rm gal}}\sqrt{1+\beta}\), with \(\beta=\frac{d\log v(r_{\rm gal})}{d\log r_{\rm gal}}\). The physical meaning of \(\kappa\) can be thought of as the rotational support against collapse, \(\sigma\) as the pressure support against collapse, while \(\Sigma\) sets the level of self-gravity driving the instability. An implementation of the method to compute \(Q_{g}\) is provided via GitHub by Puschnig (2020)2, including a working example.

Footnote 2: [https://github.com/astrojohames/toomreQ](https://github.com/astrojohames/toomreQ)

The above method can be expanded to a disc filled with star particles and differs only slightly from the approach for a fluid:

\[Q_{*}\ =\ \frac{\kappa\ \sigma_{*}}{3.36\ G\ \Sigma_{*}} \tag{6}\]

Combining the Toomre parameters for stars and gas is a necessary step to determine the stability of a multi-component disc, which is the case for most galaxies. We assume that \(\kappa\) is the same for both the gaseous and the stellar disc, i.e. that gas and stars follow the same rotation. We stress that the combined \(Q\) is derived such that it also obeys the instability criterion of \(Q\sim\)1. There have been several different approaches to combining \(Q\) parameters and an extensive look into different methods was done by Romeo and Falstad (2013). In this paper, we use the approximation of Romeo and Falstad (2013):

For \(Q_{*}>Q_{g}\) (gas dominated regime):

\[\frac{1}{Q}=\frac{1}{Q_{g}}+\frac{CF}{Q_{*}} \tag{7}\]

For \(Q_{*}\leq Q_{g}\) (star dominated regime):

\[\frac{1}{Q}=\frac{CF}{Q_{g}}+\frac{1}{Q_{*}} \tag{8}\]

The correction factor CF is given for both cases via:

\[CF=\frac{2\sigma_{*}\sigma_{g}}{\sigma_{*}^{2}+\sigma_{g}^{2}} \tag{9}\]

A compact sketch of how these expressions are combined along the radial profile is given below.
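The following is a minimal sketch (not the published analysis code) of how Equations 5-9 can be evaluated along a radial profile; all input profiles are placeholders, and the only point being illustrated is the bookkeeping and the unit convention.

```python
import numpy as np

G = 4.301e-3   # gravitational constant in pc Msun^-1 (km/s)^2

def epicyclic_frequency(v, r):
    """kappa = 1.41 * v/r * sqrt(1 + dlog v / dlog r), in (km/s) per pc."""
    beta = np.gradient(np.log(v), np.log(r))
    return 1.41 * v / r * np.sqrt(1.0 + beta)

def toomre_q(kap, sig_g, Sig_g, sig_s, Sig_s):
    """Q_gas, Q_star and the combined Q of Romeo & Falstad (2013), Eqs. 5-9."""
    Qg = kap * sig_g / (np.pi * G * Sig_g)              # Eq. 5
    Qs = kap * sig_s / (3.36 * G * Sig_s)               # Eq. 6
    CF = 2.0 * sig_s * sig_g / (sig_s**2 + sig_g**2)    # Eq. 9
    Qtot = np.where(Qs > Qg,
                    1.0 / (1.0 / Qg + CF / Qs),         # Eq. 7, gas dominated
                    1.0 / (CF / Qg + 1.0 / Qs))         # Eq. 8, star dominated
    return Qg, Qs, Qtot

# Placeholder radial profiles (NOT the measured LARS 8 profiles):
r = np.linspace(250.0, 5000.0, 20)         # galactocentric radius in pc
v = 220.0 * r / (r + 500.0)                # toy rotation curve in km/s
sig_g = np.full_like(r, 15.0)              # gas velocity dispersion in km/s
Sig_g = 200.0 * np.exp(-r / 2000.0)        # gas surface density in Msun/pc^2
sig_s = np.full_like(r, 60.0)              # stellar velocity dispersion in km/s
Sig_s = 800.0 * np.exp(-r / 630.0)         # stellar surface density in Msun/pc^2

Qg, Qs, Qtot = toomre_q(epicyclic_frequency(v, r), sig_g, Sig_g, sig_s, Sig_s)
```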
### Uncertainty of the Toomre Q parameter measurement

In order to assess the uncertainty of the derived Toomre \(Q\) parameter, we propagate our measurement uncertainties, i.e. the final variance of the Toomre \(Q_{g}\) parameter is given by the sum of the following products:

* squared partial derivatives of \(Q_{g}\) with respect to \(\kappa\) times the squared propagated uncertainty of \(\kappa\),
* squared partial derivatives of \(Q_{g}\) with respect to \(\sigma_{g}\) times the squared uncertainty of \(\sigma_{g}\),
* squared partial derivatives of \(Q_{g}\) with respect to \(\Sigma_{g}\) times the squared uncertainty of \(\Sigma_{g}\).

The uncertainty of \(Q_{*}\) is calculated in an analogous way, but with derivatives of \(Q_{*}\) with respect to \(\sigma_{*}\) and \(\Sigma_{*}\) and using the uncertainties on these parameters. For reference, we show the exact formulas we used below:

\[\mathrm{var}(Q_{g})=\frac{\sigma_{g}^{2}\ \kappa^{2}\ \mathrm{unc}(\Sigma_{g})^{2}}{\pi^{2}\ G^{2}\ \Sigma_{g}^{4}}+\frac{\sigma_{g}^{2}\ \mathrm{unc}(\kappa)^{2}}{\pi^{2}\ G^{2}\ \Sigma_{g}^{2}}+\frac{\kappa^{2}\ \mathrm{unc}(\sigma_{g})^{2}}{\pi^{2}\ G^{2}\ \Sigma_{g}^{2}} \tag{10}\]

\[\mathrm{var}(Q_{*})=\frac{\sigma_{*}^{2}\ \kappa^{2}\ \mathrm{unc}(\Sigma_{*})^{2}}{3.36^{2}\ G^{2}\ \Sigma_{*}^{4}}+\frac{\sigma_{*}^{2}\ \mathrm{unc}(\kappa)^{2}}{3.36^{2}\ G^{2}\ \Sigma_{*}^{2}}+\frac{\kappa^{2}\ \mathrm{unc}(\sigma_{*})^{2}}{3.36^{2}\ G^{2}\ \Sigma_{*}^{2}} \tag{11}\]

\[\mathrm{var}(\kappa)=\frac{1.9881\ \mathrm{unc}(v(r_{\rm gal}))^{2}}{r_{\rm gal}^{2}} \tag{12}\]

Uncertainty for \(Q_{*}>Q_{g}\) (gas dominated regime):

\[\mathrm{var}(Q_{\mathrm{tot}})=\frac{CF^{2}\ \mathrm{unc}(Q_{*})^{2}}{Q_{*}^{4}\left(\frac{CF}{Q_{*}}+\frac{1}{Q_{\mathrm{gas}}}\right)^{4}}+\frac{\mathrm{unc}(Q_{\mathrm{gas}})^{2}}{Q_{\mathrm{gas}}^{4}\left(\frac{CF}{Q_{*}}+\frac{1}{Q_{\mathrm{gas}}}\right)^{4}} \tag{13}\]

Uncertainty for \(Q_{*}\leq Q_{g}\) (star dominated regime):

\[\mathrm{var}(Q_{\mathrm{tot}})=\frac{CF^{2}\ \mathrm{unc}(Q_{\mathrm{gas}})^{2}}{Q_{\mathrm{gas}}^{4}\left(\frac{CF}{Q_{\mathrm{gas}}}+\frac{1}{Q_{*}}\right)^{4}}+\frac{\mathrm{unc}(Q_{*})^{2}}{Q_{*}^{4}\left(\frac{CF}{Q_{\mathrm{gas}}}+\frac{1}{Q_{*}}\right)^{4}} \tag{14}\]

The individual uncertainties that occur in the equations above are estimated in the following way. Uncertainties of the kinematic parameters derived with \({}^{3D}\)BAROLO are obtained via exploration of the parameter space around the best fit solutions using an MCMC approach (Iorio et al., 2017). Hence, the uncertainties for the gas velocity dispersion, rotation velocity and \(\kappa\) should be robust and statistically significant measures. We stress that asymmetric drift correction is negligible in our case, because the rotation velocities (see Table 2) of LARS 8 are more than ten times higher than the velocity dispersions (de Blok et al., 2008; Iorio et al., 2017). The uncertainties on the final rotation curve are thus equal to the uncertainties of the rotation velocity. Please refer to the individual sections on gas and stellar masses for a description of how their uncertainties were estimated.

Figure 5: Comparison of radial averages of the velocity dispersion derived from MUSE (\(y\)-_axis_) and estimated using the prescription of Leroy et al. (\(x\)-_axis_). The vertical errorbars show the propagated uncertainties of the measurements in each radial ring.

### Molecular clumps: identification, virial mass and virial parameter

We apply CPROPSTOO (Williams et al., 1994; Rosolowsky & Leroy, 2011; Leroy et al., 2015), an IDL package that is available through GitHub3 and was developed to identify and measure properties of molecular clouds or clumps in fits data cubes. In particular, CPROPSTOO corrects for the effects of beam convolution and sensitivity when measuring physical properties such as masses or sizes of identified clouds, allowing us to make unbiased (beam-independent) measurements. Using a growth-curve analysis on the observed emission line, the algorithm thus extrapolates the measurements to values one would expect in the case of perfect sensitivity. Additionally, CPROPSTOO corrects for finite resolution in both the velocity and spatial domain.
This is done via de-convolution of the telescope beam and the width of a spectral channel from the measured cloud size and line width. For more details, we refer the reader to the aforementioned publications. Here we report the main parameters for the find_local_max task that we applied for clump identification: delta=2, /snr, minpix=20, minarea=2, minvchan=2, friends=4, specfriends=2. Footnote 3: [https://github.com/akleroy/cpropstoo](https://github.com/akleroy/cpropstoo) The virial mass depends on measurements of the size and the observed line width (due to turbulence) of the cloud. If these quantities are known and the radial density profile is given, then following Solomon et al. (1987), the mass of the cloud under the assumptions of virial equilibrium and spherical symmetry can be calculated: \[M_{\mathrm{vir}}=\frac{3(5-2\gamma)}{G(3-\gamma)}\Delta v^{2}R \tag{15}\] The virial mass \(M_{\mathrm{vir}}\) is then given in M\({}_{\odot}\) and depends on the radial density distribution exponent \(\gamma\), the linear cloud size (R) in parsec and the full width at half-maximum (FWHM) of the line in km/s (\(\Delta\)v). Taking the frequently assumed \(\gamma\)=1 radial density distribution exponent, corresponding to a cloud radial density profile \(\rho\propto r^{-1}\)(MacLaren et al., 1988; Hughes et al., 2010), the above equation can be re-written as: \[M_{\mathrm{vir}}=1040\sigma^{2}R \tag{16}\] The units are M\({}_{\odot}\), km s\({}^{-1}\) and pc for M\({}_{\mathrm{vir}}\). \(\sigma\) and \(R\) respectively. In this equation the numerical coefficient accounts for the radial density profile, the conversion factor between FWHM and velocity dispersion (\(\Delta\)v=2.35\(\sigma\)) and the gravitational constant. Departures from virial equilibrium can be expressed via the virial parameter, \(\alpha_{\mathrm{vir}}\): \[\alpha_{\mathrm{vir}}=\frac{2K}{U}=\frac{5\sigma^{2}R}{GM_{\mathrm{lum}}}=1.12 \frac{M_{\mathrm{vir}}}{M_{\mathrm{lum}}} \tag{17}\] where \(K\) and \(U\) denote the kinetic energy and self-gravitational potential energy respectively. The quantity M\({}_{\mathrm{lum}}\) is the luminous molecular mass converted from low-J CO intensities using a conversion factor. Virialized clouds without surface pressure or magnetic support have \(\alpha_{\mathrm{vir}}\)=1, while both marginally bound clouds and clouds in free-fall collapse share energy equipartition (\(K=U\)) and thus have \(\alpha_{\mathrm{vir}}\sim\)2(Ballesteros-Paredes et al., 2011; Camacho et al., 2016; Ibanez-Mejia et al., 2016; Sun et al., 2018). ### Dynamical equilibrium pressure ISM pressure plays a crucial role in many theories of star formation (e.g. Ostriker & Shetty, 2011), as it determines the gas density distribution (Helfer & Blitz, 1997; Usero et al., 2015; Bigiel et al., 2016; Gallagher et al., 2018). Following Elmegreen (1989) we estimate the mid-plane dynamic equilibrium pressure, \(P_{\mathrm{de}}\), using the following prescription: \[P_{\mathrm{de}}\ =\ \frac{\pi\ G\ \Sigma_{\mathrm{gas}}^{2}}{2}\ +\ \Sigma_{\mathrm{gas}}\sqrt{2\ G\ \rho_{*}}\ \sigma_{\mathrm{gas}} \tag{18}\] Here, \(\Sigma_{\mathrm{gas}}\) is the total gas surface density, including the atomic and molecular component. Since our study only covers the central part of the galaxy, i.e. the high-density regime, in which most atomic gas is readily converted to molecular gas, we may only consider \(\Sigma_{\mathrm{H_{2}}}\) instead. 
The vertical velocity dispersion of the gas is denoted as \(\sigma_{\mathrm{gas}}\) and the parameter \(\rho_{*}\) is the mass volume density of stars and dark matter at the mid-plane, which we estimate following van der Kruit (1988) using the relation: \(\rho_{*}=\Sigma_{*}/(2\ h_{*})\), with the disc scale height \(h_{*}\). \(P_{\mathrm{de}}\) then expresses the pressure needed to balance the vertical gravity on the gas in the galaxy disc. The first term reflects the gas self-gravity, the second term reflects the weight of the gas in the potential well of the stars. Since the stellar potential in LARS 8 exceeds the gas self-gravity, we expect the second term to be dominant. ## 4 Results The methods outlined in the previous section enable us to quantify clump/cloud properties (Sections 4.1-4.2) as well as star formation relations and radial trends in LARS 8 (Section 4.3). Finally, the gravitational instability of the disc is shown in Section 4.4. ### Identification of molecular clumps in LARS 8 Applying CPROPST00 on our native resolution NOEMA CO (2-1) data cube with a channel width of 10 km/s, we could identify 12 molecular clumps in total (see Figures 6 and 7). The unbiased properties of the identified molecular clumps are summarized in Table 3. Their masses range from 10\({}^{8.1}\) to 10\({}^{9.3}\) M\({}_{\odot}\), covering linear (extrapolated) diameters between \(\sim\)600-2000 pc. Clump 7 was found to be the most massive one, located in the very center of the galaxy. The channel maps between approximately \(-\)180 km s\({}^{-1}\) and +210 km s\({}^{-1}\) in Figure 7 may suggest that CPROPST00 failed in associating extended gas to clumps in the central region of the galaxy. This is because the linewidth in the central few pixels of the galaxy is extremely wide (more than 300 km\({}^{-1}\)) due to beam smearing. Much higher resolution (spatially and spectrally) would be needed to identify individual clumps in that part of the galaxy. ### Mass-size relation for the massive clumps Figure 8 compares the derived masses and sizes of the molecular clumps identified in LARS 8 to the literature compilation of Nguyen-Luong et al. (2016), which contains giant molecular clouds (GMCs) of the Milky Way with sizes smaller than 10 pc, molecular cloud complexes (MCCs) with sizes between 10 and 1000 pc, as well as galaxies and structures larger than 1 kpc typically found at high redshift. Note that the identified structures or clumps in LARS 8 are resolved, i.e. their deconvolved diameters are at least as wide as the beam major axis. We thus conclude from Figure 8 that the clumps of diffuse molecular gas in LARS 8 are in fact scaled-up versions of the MCCs in the literature. In the mass-size relation they populate the range between MCCs and structures identified at high redshifts. However - despite an ongoing massive star formation process in LARS 8 - the clumps follow the same trend between mass and size. This finding implies a universal (constant) diffuse molecular mass surface density, even in highly star-forming galaxies such as LARS 8. The elevated star formation rates must thus result from processes _within_ the large diffuse molecular reservoirs we identified in CO (2-1). It might be that either the structures contain more over-densities (e.g. traced by HCN) or that the star formation is in some way more efficient. The latter is supported by observations of Messa et al. (2019), who have derived sizes and properties of clumps identified from very high-resolution UV photometry. 
They find that the range of clump sizes in LARS 8 is similar to those in normal star forming galaxies or at high redshift, i.e. 15-200 pc. However, the star formation rates per UV clump are higher and fall between those observed in the local and high-\(z\) Universe. Also, a combination of both more dense clumps and higher efficiency per clump may apply. ### Radial profiles and KS relation Figure 9 shows inclination-corrected, elliptical profiles of several quantities we have derived, centered on the maximum stellar surface density. The plots show that while the molecular gas surface density declines relatively smoothly from the center outwards, the stellar surface density is peaked in the innermost \(\sim\)500 pc. This peak may represent a bulgelike structure that is about to form, similar to observations in high redshift discs (Elmegreen et al., 2009) and as predicted by numerical simulations, e.g. in Elmegreen et al. (2008). Thereby, gas-rich disc galaxies show disc instabilities that first trigger clump formation. These clumps (and other disc matter) move inwards and merge, forming a bulge (or bulgelike-clump) that is characterized by a Sersic index n=4 (like a classical bulge) and rotation. See Rasekh et al. (2022) for a compilation of Sersic profiles for LARS galaxies. In contrast, the star formation rate density is highest in a ring-like structure located at a radius of \(\sim\)1.2 kpc. The lowering of the SFR towards the inner kiloparsec in combination with the low molecular gas fraction suggests that some process has quenched star formation in the center, e.g. AGN feedback. Alternatively, it might be that the extinction correction underestimates the true SFRs in the innermost parts, where H\(\alpha\) becomes optically thick. However, this would not explain the relatively low gas fraction in the center. Moreover, Figure 9 reveals that the molecular gas depletion time, \(\epsilon_{\rm{G}pl}\), strongly declines from more than 1 Gyr in the center to \(\sim\)100 Myr in the outer parts of the disc. This contrasts normal star-forming galaxies that typically have roughly constant (Bigiel et al., 2011) or even radially increasing gas depletion time scales (Leroy et al., 2008). This behaviour is further suggested by (some) gravity-driven theoretical models of star formation, e.g. Krumholz et al. (2012) argue that in the regime of normal star-formation the GMCs are basically decoupled from the rest of the ISM. The depletion time is then mainly set by the internal properties and processes of the GMCs - that are roughly constant in normal Milky-Way-like clouds - rather than by the large-scale behavior of the ISM. Krumholz et al. (2012) further argue that in starbursts (with a Toomre \(Q\) parameter \(\sim\)1) the depletion time should be set by the orbital (dynamical) time. However, given the fact that the orbital time increases with radius (flat rotation) one would expect from such theory that the depletion time increases with radius. This is not observed in LARS 8. The molecular Kennicutt-Schmidt relation for LARS 8 is presented in Figure 10. Each point in the plot represents an independent measurement (line-of-sight) that we calculated from the mean value within 2x2 bins (using numpy reshape). It is seen that the measurements of individual lines-of-sight exhibit a relatively large scatter within a range of roughly one order of magnitude. 
However, the central region forms an interesting feature that is characterised by a roughly constant star formation rate density while the molecular gas surface density varies by up to an order of magnitude, with a mean gas depletion time around \(\sim\)1 Gyr.

\begin{table} \begin{tabular}{r r r r r r} \hline \hline ID & \(R_{\rm eff}\) & log \(M_{\rm lum}\) & \(\alpha_{\rm vir}\) & \(\sigma\) & \(v_{\rm los}\) \\ & [pc] & [M\({}_{\odot}\)] & & [km s\({}^{-1}\)] & [km s\({}^{-1}\)] \\ 1 & 601 & 8.20 & 1.2 & 16.6 & -151 \\ 2 & 322 & 8.14 & 1.0 & 19.2 & -126 \\ 3 & 913 & 8.55 & 0.9 & 16.8 & -135 \\ 4 & 343 & 8.59 & 1.0 & 30.7 & -92 \\ 5 & 340 & 9.23 & 0.8 & 59.7 & -73 \\ 6 & 320 & 8.40 & 1.4 & 31.1 & -63 \\ 7 & 1023 & 9.26 & 0.5 & 27.8 & 4 \\ 8 & 469 & 8.51 & 1.7 & 32.0 & 53 \\ 9 & 436 & 8.76 & 1.1 & 34.5 & 56 \\ 10 & 609 & 8.89 & 0.4 & 20.5 & 135 \\ 11 & 480 & 8.57 & 1.0 & 26 & 164 \\ 12 & 436 & 8.21 & 1.9 & 24.7 & 181 \\ \hline \end{tabular} \end{table} Table 3: Properties of the identified molecular clumps. The clumps in Figures 6 and 7 are identified by the IDs given in the table. See also Equations 16 and 17.

Figure 6: NOEMA CO (2–1) channel maps of the northern part of LARS 8, showing velocities between -180 and +10 km/s with identified CPROPSTOO structures (shown as contours). Properties of the identified clumps are summarized in Table 3.

Figure 7: NOEMA CO (2–1) channel maps of the southern part of LARS 8, showing velocities between +20 and +210 km/s with identified CPROPSTOO structures (shown as contours). Properties of the identified clumps are summarized in Table 3.

### Disc stability - Toomre \(Q\) analysis

Using the smoothed rotation curve (see Figure 4) derived from the NOEMA CO (2-1) data cube, and subsequent calculation of the \(\beta\)-parameter and the epicyclic frequency \(\kappa\), the Toomre \(Q\) parameters for the molecular gas (\(Q_{\rm gas}\)), the stellar component (\(Q_{\star}\)) and the combined total instability parameter (\(Q_{\rm tot}\)) could be computed as a function of galactocentric radius (see Figure 11). Note that we have centered the previously discussed radial profiles on the stellar peak, while here we (have to) use the kinematic center. Between these two we find an offset of \(\sim\)0.8 arcsec or \(\sim\)650 pc. Such an offset is also found in numerical simulations of Elmegreen et al. (2008) during the phase of the formation of a central bulgelike-clump.
The relatively high star formation rate surface densities observed in LARS 8 over large scales are thus likely the result of enhanced disc fragmentation due to \(Q_{\rm tot}<<\)1. These instabilities thus trigger the formation of massive stellar and molecular clumps. However, it seems that purely gravity-driven theoretical models of star formation do not reproduce our observations, in particular e.g. Krumholz et al. (2012) predict for galaxies in the Toomre regime (as LARS 8) a positive correlation between the molecular gas depletion time and the orbital period. As explained, this is not observed in LARS 8. Other models assume that the star formation process is self-regulated and thus leads to pressure balance in the ISM. In particular, the star-forming system is then in balance between feedback processes from star formation and the external pressure. In case of a disc galaxy the relevant pressure is then \(P_{\rm de}\), the dynamical equilibrium pressure. Based on that, e.g. Ostriker and Shetty (2011) predict a linear relation between the star formation rate surface density and the ISM pressure. We test this prediction in the next section. Figure 8: Mass-size relation for the massive clumps identified in LARS 8 (_cyan points_) with the ID numbers as given in Table 3. The clumps are compared to the literature compilation of Nguyen-Laong et al. (2016), that is based on GMC (_plus signs_) data of Onishi et al. (2002); Heyer et al. (2009); Martus et al. (2010); Roman-Duval et al. (2010); Evans et al. (2014); Shimajiri et al. (2015), MCC (_stars_) data of Rosolowsky (2007); Murray (2011); Wei et al. (2012); Miura et al. (2012, 2014); Donovan Meyer et al. (2013); García et al. (2014) and galaxies (_diamonds_) from Leroy et al. (2013); Tacconi et al. (2013); Genzel et al. (2010). Figure 10: Resolved molecular Kennicutt-Schmidt relation for LARS 8, based on SFRs from extinction-corrected H\(\alpha\) and molecular masses from CO (2–1), both corrected for inclination. Each point corresponds to a measurement in a \(\sim\)650 pc sized region/pixel. The colors indicate distance from the center. Figure 9: _Top panel_: Elliptical inclination-corrected profiles of stellar (_blue diamonds_), molecular (_cyan circles_) and SFR surface densities (_black squares_). _Bottom panel_: Same as top panel for the molecular gas depletion time (\(\tau_{\rm dapil}\)), the H\(\alpha\)-based star formation rate surface density (\(\Sigma_{\rm SFR}\)) and the molecular gas fraction (\(f_{\rm gas}\)). ## 5 Discussion We showed in the previous section that the galactic disc of LARS 8 is highly unstable, in particular at radii outwards of \(\sim\)500 pc. The Toomre \(Q\) parameter is found to be significantly lower than one (see Figure 11) and we conclude that the formation of the observed massive molecular and stellar clumps is driven by fragmentation of the disc rather than accretion of external mass or merging. Contrarily, the central region of LARS 8 was found to be different. It has a Toomre \(Q\) parameter greater than one and is thus stable, it has a relatively low gas fraction and a low star formation rate density, and it has a depletion time of more than \(\sim\)1 Gyr (which is much longer than within the disc). Utomo et al. (2017) studied the molecular gas depletion time as a function of local environment in 52 non-AGN disc galaxies drawn from the EDGE-CALIFA (Sanchez et al., 2012; Bolatto et al., 2017) survey. 
They find that galaxies with increased central stellar surface densities (relative to the disc) typically show a decrease in \(\tau_{\rm dgpl}\) in the center. As stellar surface density is the determining factor for ISM pressure, Utomo et al. (2017) claim that the observed shorter central gas depletion times are a consequence of higher external pressure that facilitates cloud collapse. In the center of LARS 8 we also observe an increase in stellar surface density compared to the disc, but at the same time - for the center - we find _longer_ molecular gas depletion times. Additionally, our radial plots (Figure 9) show that the star formation rate surface density sharply drops towards the center, while the molecular gas surface density in LARS 8 decreases relatively smoothly from the center to the outskirts. Some process in the center must therefore lead to quenching of star formation. We find evidence that shear is mainly responsible for the suppression of star formation in the center, as we see that \(\kappa\) increases by a factor of \(\sim\)4 between a radius of \(\sim\)1 kpc towards the innermost central region. Also the gas velocity dispersion increases in that radial regime (from \(\sim\)1 kpc to the center), however only by roughly 50 percent. Thus, it is mainly shear that causes \(Q>>1\) in the center. Feedback from supernovae (that drive the gas velocity dispersion) thus plays only a minor role (if any) for the suppression of star formation. In fact our kinematic results (see Table 2) even suggest that the gas velocity dispersion drops towards the innermost region. This observation further rules out feedback from SNe, but we caution that the measurement of velocity dispersion in the center is relatively uncertain due to beam smearing caused by the steep increase of the rotation curve. However, further support against star formation quenching due to SNe is found from stellar population synthesis performed by Melinder et al (in preparation). They show that the central \(\sim\)500 pc of LARS 8 are dominated by old stars with ages \(>\)1 Gyr. We also compare our observations to the feedback models of e.g. Ostriker et al. (2010) or Faucher-Giguere et al. (2013), which are based on a balance between energy injected through feedback and disc vertical pressure. The models predict an inverse relation between \(\tau_{\rm dgpl}\) and the vertical gas velocity dispersion. Such relation was previously observed by Fisher et al. (2019) in a set of massive and highly turbulent discs. However, from our data of the central region of LARS 8 we cannot test any such correlation, because of spatial resolution and beam smearing that makes measurements of the velocity dispersion extremely challenging. Given our current data, we thus conclude that shear is the most likely cause for the relatively low star formation rates in the center and the long depletion times. As we mentioned in the previous section, it might also be the case that the computed star formation rates in the center are somewhat spurious due to the relatively high extinction found in this region. We stress that the total galaxy-wide SFR from extinction-corrected H\(\alpha\) is in fact 50 percent higher than the SFR we previously derived in Puschnig et al. (2020) from far infrared measurements. This might be an indication that the Balmer-decrement method overestimates the true fluxes/SFRs rather than underestimating it. 
On the other hand, the discrepancy between infrared and H\(\alpha\) based SFRs may be the result of a star formation history with a recent burst (to which H\(\alpha\) is more sensitive). In contrast, the environmental properties of the outer disc of LARS 8 are different; in particular, we find that the disc is highly unstable. The radial profiles in Figure 9 further revealed that the molecular gas depletion time in LARS 8 decreases with the galactic radius. This behaviour is contrary to what is typically observed in nearby disc galaxies, in which \(\tau_{\rm depl}\) either stays flat or slightly increases with radius (Leroy et al., 2008). Note that models of star formation in stable discs, e.g. Krumholz et al. (2012), predict exactly such behaviour for GMCs that are basically decoupled from the large-scale ISM. In these models star formation is mainly dictated by local properties rather than large-scale effects. Additionally, Semenov et al. (2017) showed that for regular, local spiral galaxies, \(\tau_{\rm depl}\) is \(\sim\)1-2 Gyr due to the long time the gas spends in the non-star-forming phase, while only a small fraction of the gas is converted into stars within a short time. The difference in the radial \(\tau_{\rm depl}\) profiles between LARS 8 and normal star forming disc galaxies (Romeo and Mogotsi, 2017) is thus the result of the observed large-scale Toomre instabilities in LARS 8, in which the ISM is dominated by dozens of supermassive star-forming clouds that prevent the star-forming regions from decoupling from the ambient ISM (as they make up the ISM). Krumholz et al. (2012) also made predictions of \(\tau_{\rm depl}\) for starbursts in the Toomre regime (\(Q\sim\)1), for which they find that \(\tau_{\rm depl}\) should mainly be dictated by the dynamical timescale, i.e. \(2\pi r/v_{\rm rot}\). Our observations, however, are not in agreement with this prediction of a radially increasing gas depletion time. We argue that the observed instabilities in LARS 8 are more violent (\(Q<<\)1) and thus involve more complex physical processes such as galaxy-scale shocks or inflows (Barnes, 2004; Teyssier et al., 2010; Powell et al., 2011) which were omitted by the models of Krumholz et al. (2012).

Figure 11: LARS 8 disc stability from radial Toomre \(Q\) analysis. Regions with \(Q<\)1 (_light red shaded area_) are considered unstable. The total instability parameter (\(Q_{\rm tot}\)) is shown as _black_ curve with the _grey shaded area_ indicating its uncertainty. The contribution of the stellar and the gaseous component are shown by the _blue_ and _cyan_ curves respectively.

We now test our observations against models that are based on the assumption that star formation is self-regulated through a balance between ISM pressure and feedback. For example, Ostriker and Shetty (2011) and Kim et al. (2013) predict in their semi-analytic models a (nearly) linear relationship between the pressure and the star formation rate surface density: \(\Sigma_{\rm SFR}=4\,f^{-1}\,P_{\rm de}\). As described in Fisher et al. (2019), the scaling factor \(f\) can be determined from \(\sigma_{\rm gas}=0.366\,(\tau_{\rm ff}/\tau_{\rm depl})\,f\) (Shetty and Ostriker, 2012). This leaves the free-fall timescale \(\tau_{\rm ff}\) as the only unknown. Krumholz et al. (2012) further estimate that the range of \(\tau_{\rm ff}\) should be between 1-10 Myr for starburst galaxies, i.e. in high density regimes. In Figure 12 we plot the star formation rate densities against pressure for LARS 8 and two comparison samples.
Each point in the plot represents an independent measurement (line-of-sight) that we calculated from the mean value within 2x2 bins. The dashed and dotted lines indicate the model predictions for the above mentioned range in \(\tau_{\rm ff}\) and a fixed gas depletion time of 300 Myr that we typically find in the disc of LARS 8. The Figure shows that the predicted linear relation does not fit the data, we rather find evidence for a sub-linear trend, similar to Fisher et al. (2019). The slope in LARS 8, however, seems even shallower, in particular in the low-pressure regime. We conclude that in the outskirts of the observed disc the star formation is out of equilibrium as described in feedback-regulated star formation models, and is dictated by large scale instabilities instead. The importance of the large-scale environment for star formation in LARS 8 is also reflected by the fact that the virial parameter of the identified diffuse molecular structures (see Table 3) has values that are roughly identical to those found in Milky Way GMCs or normal disc galaxies (Sun et al., 2018). Most clumps are found to be virialized with \(\alpha_{\rm vir}\sim 1\) in which kinetic and gravitational energy are roughly balanced. This provides further evidence that on the scale of a few hundred parsec the stars form in a roughly uniform environment. The high star formation rates observed in LARS 8 must thus be caused by an increase in the number of clouds that are triggered by large-scale gravitational instability (with low Toomre \(Q\)). Hence, the shorter gas depletion time scale - or higher star formation efficiency - observed in the outer disc does not imply that on our clump scales the process of star formation is more efficient, but rather that the formation of individual clumps is more efficient. Next, we discuss how the choice of a fixed CO conversion factor \(\alpha_{\rm CO}\) impacts our findings of the radial trend of \(\tau_{\rm depel}\) and the Toomre \(Q\) instability of the disc. We know from the MUSE data that there is a slight increase in metallicity towards the center of the galaxy. Hence, application of a metallicity-dependent conversion factor would only lead to a relatively lower value of \(\alpha_{\rm CO}\) in the center than in the outer part of the disc. As a result, this would only exaggerate the observed trend of decreasing molecular gas depletion with radius. For the results of our disc stability analysis, the fixed conversion factor has only minor impacts for two reasons. First, the instabilities are mainly driven by the stellar component (which is formally also shown in Romeo and Falstad, 2013). Second, it would only lead to slightly higher gas surface densities in the disc, lowering support of the disc against collapse and thus resulting in even lower \(Q\) values. However, not only the metallicity impacts the conversion factor. In infrared galaxies, but also in the centers of nearby galaxies, the nuclear zone is sometimes found to have lower \(\alpha_{\rm CO}\) caused by hotter molecular gas, thus higher velocity dispersion (which reduces the CO optical depth). This is seen e.g. in NGC 6946 (Meier and Turner, 2004). If CO optical depths in the center of LARS 8 were systematically lower, we would need to use a lower conversion factor. In the case of a typical ULIRG value (\(\alpha_{\rm CO}\sim 1\)), the central depletion time would then drop from 1.35 Gyr to 300 Myr. At the same time, this would provide even further support against collapse in the central zone. 
We plan to resolve this issue with observations of CO isotopologues in a future study.

## 6 Summary and conclusion

We have obtained new high-resolution NOEMA CO (2-1) and MUSE spectroscopy of the \(z\sim\)0 massive, clumpy and gas-rich disc galaxy LARS 8, drawn from the _Lyman Alpha Reference Sample_. The NOEMA data was used to study the diffuse molecular gas content and its kinematics at a resolution of \(\sim\)400 pc, while the MUSE data was used to derive extinction-corrected star formation rates from H\(\alpha\) at a resolution of \(\sim\)600 pc. This enabled us - together with readily available HST photometry - to perform a disc stability analysis using the Toomre \(Q\) criterion. The main result is presented in Figure 11, showing that the disc is highly unstable (\(Q<<\)1) over large scales. On the other hand, the center of LARS 8 was found to be stable (\(Q>\)1). The NOEMA molecular data cube was further examined with CPROPStoo, allowing us to identify and compute physical properties of 12 individual molecular clumps (Table 3). The clumps are found to be virialized (\(\alpha_{\rm vir}\sim\)1) and they follow the mass-size relation (Figure 8). We have further derived several physical parameters such as the molecular gas depletion time, the molecular gas fraction and the dynamical equilibrium pressure. Using our results from the CO-based rotation curve (Figure 4), all (surface) quantities could be corrected for inclination effects. The radial (elliptical) profiles are shown in Figure 9. Of particular interest is the smooth radial decline of the molecular gas depletion time, ranging from more than 1 Gyr in the center to \(\sim\)100 Myr in the outer disc. This trend is outstanding, as in normal star-forming galaxies the gas depletion time is observed to be constant or even slightly increasing with radius.

Figure 12: Star formation rate surface density (y-axis) versus dynamical equilibrium pressure (x-axis). Each point corresponds to a measurement in a \(\sim\)650 pc sized region/pixel. The colors indicate the distance from the center.

These results lead to the following conclusions:

* The disc of LARS 8 is highly unstable with \(Q<<\)1 and has relatively short gas depletion times. The identified diffuse molecular structures, however, are virialized and thus similar to GMCs in the Milky Way or nearby galaxies. Hence, the short gas depletion times in the disc cannot be explained by local (sub-kpc) effects such as a higher local star formation efficiency, but must be triggered by large-scale processes that cause the formation of more massive and _denser_ molecular clumps. The short gas depletion times observed in the disc must thus result from more dense gas being present on sub-clump scales, i.e. density PDFs shifted towards higher values. We argue that the high star formation rates observed in LARS 8 are the result of large-scale Toomre instabilities in the galaxy disc.
* The central region of LARS 8 is Toomre-stable, has the longest gas depletion time, a lower gas fraction and a reduced star formation rate surface density. Given the fact that the stellar surface density (and thus the ISM pressure) is found to be highest in the center, we argue that some process must lower star formation in the central \(\sim\)500 pc. From our dynamical analysis we find evidence that shear (and not feedback from SNe) is the main driving mechanism that suppresses the star formation in the center of LARS 8, as \(\kappa\) increases by a factor of 4 from \(r=1\) kpc to \(r=0\) kpc.
## Acknowledgements J.P. acknowledges funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement No.726384/Empire). M.H. is Fellow of the Knut and Alice Wallenberg Foundation. O.A. acknowledges support from the Knut and Alice Wallenberg Foundation and from the Swedish Research Council (grant 2019-04659). This research made use of Astropy, a community-developed core Python package for Astronomy (Astropy Collaboration et al., 2013). ## Data Availability The MUSE raw data underlying this article are available in the ESO public archive at [http://archive.eso.org](http://archive.eso.org) and can be accessed with the program ID 0101.B-0703(A). The reduced MUSE data cube and the NOEMA data underlying this article will be shared on reasonable request to the corresponding author.
2310.13960
Linguistically Motivated Sign Language Segmentation
Sign language segmentation is a crucial task in sign language processing systems. It enables downstream tasks such as sign recognition, transcription, and machine translation. In this work, we consider two kinds of segmentation: segmentation into individual signs and segmentation into phrases, larger units comprising several signs. We propose a novel approach to jointly model these two tasks. Our method is motivated by linguistic cues observed in sign language corpora. We replace the predominant IO tagging scheme with BIO tagging to account for continuous signing. Given that prosody plays a significant role in phrase boundaries, we explore the use of optical flow features. We also provide an extensive analysis of hand shapes and 3D hand normalization. We find that introducing BIO tagging is necessary to model sign boundaries. Explicitly encoding prosody by optical flow improves segmentation in shallow models, but its contribution is negligible in deeper models. Careful tuning of the decoding algorithm atop the models further improves the segmentation quality. We demonstrate that our final models generalize to out-of-domain video content in a different signed language, even under a zero-shot setting. We observe that including optical flow and 3D hand normalization enhances the robustness of the model in this context.
Amit Moryossef, Zifan Jiang, Mathias Müller, Sarah Ebling, Yoav Goldberg
2023-10-21T10:09:34Z
http://arxiv.org/abs/2310.13960v2
# Linguistically Motivated Sign Language Segmentation

###### Abstract

Sign language segmentation is a crucial task in sign language processing systems. It enables downstream tasks such as sign recognition, transcription, and machine translation. In this work, we consider two kinds of segmentation: segmentation into individual signs and segmentation into _phrases_, larger units comprising several signs. We propose a novel approach to jointly model these two tasks. Our method is motivated by linguistic cues observed in sign language corpora. We replace the predominant IO tagging scheme with BIO tagging to account for continuous signing. Given that prosody plays a significant role in phrase boundaries, we explore the use of optical flow features. We also provide an extensive analysis of hand shapes and 3D hand normalization. We find that introducing BIO tagging is necessary to model sign boundaries. Explicitly encoding prosody by optical flow improves segmentation in shallow models, but its contribution is negligible in deeper models. Careful tuning of the decoding algorithm atop the models further improves the segmentation quality. We demonstrate that our final models generalize to out-of-domain video content in a different signed language, even under a zero-shot setting. We observe that including optical flow and 3D hand normalization enhances the robustness of the model in this context.

## 1 Introduction

Signed languages are natural languages used by deaf and hard-of-hearing individuals to communicate through a combination of manual and nonmanual elements Sandler and Lillo-Martin (2006). Like spoken languages, signed languages have their own distinctive grammar and vocabulary that have evolved through natural processes of language development Sandler (2010). Sign language transcription and translation systems rely on the accurate temporal segmentation of sign language videos into meaningful units such as signs Santemiz et al. (2009); Renz et al. (2012) or signing sequences corresponding to subtitle units1 Bull et al. (2020). However, sign language segmentation remains a challenging task due to the difficulties in defining meaningful units in signed languages De Sisto et al. (2021). Our approach is the first to consider two kinds of units in one model. We simultaneously segment single signs and phrases (larger units) in a unified framework. Footnote 1: Subtitles may not always correspond directly to sentences. They frequently split within a sentence and could be temporally offset from the corresponding signing segments. Previous work typically approached segmentation as a binary classification task (including segmentation tasks in audio signal processing and computer vision), where each frame/pixel is predicted to be either part of a segment or not. However, this approach neglects the intricate nuances of continuous signing, where segment boundaries are not strictly binary and often blend in reality. One sign or phrase can immediately follow another, transitioning smoothly, without a frame between them being distinctly _outside_ (Figure 1 and §3.1). We propose incorporating linguistically motivated cues to address these challenges and improve sign language segmentation.

Figure 1: Per-frame classification of a sign language utterance following a BIO tagging scheme. Each box represents a single frame of a video. We propose a joint model to segment _signs_ (top) and _phrases_ (bottom) at the same time. B=beginning, I=inside, O=outside. The figure illustrates continuous signing where signs often follow each other without an O frame between them.
To cope with continuous signing, we adopt a BIO-tagging approach Ramshaw and Marcus (1995), where in addition to predicting a frame to be _in_ or _out_ of a segment, we also classify the _beginning_ of the segment, as shown in Figure 2. Since phrase segmentation is primarily marked with prosodic cues (i.e., pauses, extended sign duration, facial expressions) Sandler (2010); Ormel and Crasborn (2012), we explore using optical flow to explicitly model them (§3.2). Since signs employ a limited number of hand shapes, we additionally perform 3D hand normalization (§3.3). Evaluating on the Public DGS Corpus Prillwitz et al. (2008); Hanke et al. (2020) (DGS stands for German Sign Language), we report enhancements in model performance following specific modifications. We compare our final models after hyperparameter optimization, including parameters for the decoding algorithm, and find that our best architecture using only the poses is comparable to the one that uses optical flow and hand normalization. Reassuringly, we find that our model generalizes when evaluated on additional data from different signed languages in a zero-shot approach. We obtain segmentation scores that are competitive with previous work and observe that incorporating optical flow and hand normalization makes the model more robust for out-of-domain data. Lastly, we conduct an extensive analysis of pose-based hand manipulations for signed languages (Appendix C). Although this analysis does not improve our segmentation model, owing to noise from current 3D pose estimation models, we emphasize its potential value for future work involving skeletal hand poses. Based on this analysis, we propose several measurable directions for improving 3D pose estimation. Our code and models are available at [https://github.com/sign-language-processing/transcription](https://github.com/sign-language-processing/transcription).

## 2 Related Work

### Sign Language Detection

Sign language detection Borg and Camilleri (2019); Moryossef et al. (2020); Pal et al. (2023) is the task of determining whether signing activity is present in a given video frame. A similar task in spoken languages is voice activity detection (VAD) Sohn et al. (1999); Ramirez et al. (2004), the detection of when a human voice is present in an audio signal. As VAD methods often rely on speech-specific representations such as spectrograms, they are not necessarily applicable to videos. Borg and Camilleri (2019) introduced the classification of frames taken from YouTube videos as either signing or not signing. They took a spatial and temporal approach based on a VGG-16 CNN Simonyan and Zisserman (2015) to encode each frame and used a Gated Recurrent Unit (GRU) Cho et al. (2014) to encode the sequence of frames in a window of 20 frames at 5 fps. In addition to the raw frame, they either encoded optical-flow history, aggregated motion history, or frame difference. Moryossef et al. (2020) improved upon their method by performing sign language detection in real time. They identified that sign language use involves movement of the body and, as such, designed a model that works on top of estimated human poses rather than directly on the video signal. They calculated the optical flow norm of every joint detected on the body and applied a shallow yet effective contextualized model to predict for every frame whether the person is signing or not.
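A minimal sketch of this pose-based optical-flow feature, following the verbal description above; the handling of missing keypoints via a confidence mask is our assumption and not taken from the original implementation:

```python
import numpy as np

def pose_optical_flow(poses, fps, confidence=None):
    """Per-joint motion norm between consecutive frames, scaled by the frame rate.

    poses: array of shape (frames, keypoints, axes), e.g. pose-estimation output.
    confidence: optional (frames, keypoints) array; a joint missing in either of
        two consecutive frames contributes zero flow.
    Returns an array of shape (frames - 1, keypoints).
    """
    displacement = poses[1:] - poses[:-1]                # (frames-1, keypoints, axes)
    flow = np.linalg.norm(displacement, axis=-1) * fps   # (frames-1, keypoints)
    if confidence is not None:
        visible = (confidence[1:] > 0) & (confidence[:-1] > 0)
        flow = np.where(visible, flow, 0.0)
    return flow

# Example: 100 frames of 75 body and hand keypoints in 3D, at 25 fps
rng = np.random.default_rng(0)
dummy_poses = rng.normal(size=(100, 75, 3))
flow = pose_optical_flow(dummy_poses, fps=25.0)
per_frame_motion = flow.mean(axis=1)   # one scalar motion value per frame transition
```

A per-frame summary such as the mean over joints is what makes pauses (near-zero motion) stand out as candidate phrase boundaries.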
While these recent detection models achieve high performance, we need well-annotated data including interference and non-signing distractions for proper real-world evaluation. Pal et al. (2023) conducted a detailed analysis of the impact of signer overlap between the training and test sets on two sign detection benchmark datasets (Signing in the Wild (Borg and Camilleri, 2019) and the DGS Corpus (Hanke et al., 2020)) used by Borg and Camilleri (2019) and Moryossef et al. (2020). By comparing the accuracy with and without overlap, they noticed a relative decrease in performance for signers not present during training. As a result, they suggested new dataset partitions that eliminate overlap between train and test sets and facilitate a more accurate evaluation of performance.

Figure 2: The annotation of the first phrase in a video from the test set (dgskorpus_goe_02), in yellow, signing: “Why do you smoke?” through the use of three signs: _WHY_ (+mouthed), _TO-SMOKE_, and a gesture (+mouthed) towards the other signer. At the top, our phrase segmentation model predicts a single phrase that initiates with a B tag (in green) above the B-threshold (green dashed line), followed by an I (in light blue), and continues until falling below a certain threshold. At the bottom, our sign segmentation model accurately segments the three signs.

### Sign Language Segmentation

Segmentation consists of detecting the frame boundaries for signs or phrases in videos to divide them into meaningful units. While the most canonical way of dividing a spoken language text is into a linear sequence of words, due to the simultaneity of sign language, the notion of a sign language "word" is ill-defined, and sign language cannot be fully linearly modeled. Current methods resort to segmenting units loosely mapped to signed language units (Santemiz et al., 2009; Farag and Brock, 2019; Bull et al., 2020; Renz et al., 2021, 2021) and do not explicitly leverage reliable linguistic predictors of sentence boundaries such as prosody in signed languages (i.e., pauses, extended sign duration, facial expressions) (Sandler, 2010; Ormel and Crasborn, 2012). De Sisto et al. (2021) call for a better understanding of sign language structure, which they believe is the necessary ground for the design and development of sign language recognition and segmentation methodologies. Santemiz et al. (2009) automatically extracted isolated signs from continuous signing by aligning the sequences obtained via speech recognition, modeled by Dynamic Time Warping (DTW) and Hidden Markov Models (HMMs) approaches. Farag and Brock (2019) used a random forest classifier to distinguish frames containing signs in Japanese Sign Language based on the composition of spatio-temporal angular and distance features between domain-specific pairs of joint segments. Bull et al. (2020) segmented French Sign Language into segments corresponding to subtitle units by relying on the alignment between subtitles and sign language videos, leveraging a spatio-temporal graph convolutional network (STGCN; Yu et al. (2017)) with a BiLSTM on 2D skeleton data. Renz et al. (2021) located temporal boundaries between signs in continuous sign language videos by employing 3D convolutional neural network representations with iterative temporal segment refinement to resolve ambiguities between sign boundary cues. Renz et al. (2021) further proposed the Changepoint-Modulated Pseudo-Labelling (CMPL) algorithm to solve the problem of source-free domain adaptation. Bull et al.
(2021) presented a Transformer-based approach to segment sign language videos and align them with subtitles simultaneously, encoding subtitles by BERT (Devlin et al., 2019) and videos by CNN video representations. ## 3 Motivating Observations To motivate our proposed approach, we make a series of observations regarding the intrinsic nature of sign language expressions. Specifically, we highlight the unique challenges posed by the continuous flow of sign language expressions (SS3.1), the role of prosody in determining phrase boundaries (SS3.2), and the influence of hand shape changes in indicating sign boundaries (SS3.3). ### Boundary Modeling When examining the nature of sign language expressions, we note that the utterances are typically signed in a continuous flow, with minimal to no pauses between individual signs. This continuity is particularly evident when dealing with lower frame rates. This continuous nature presents a significant difference from _text_ where specific punctuation marks serve as indicators of phrase boundaries, and a semi-closed set of tokens represent the _words_. Given these characteristics, directly applying conventional segmentation or sign language detection models--that is, utilizing IO tagging in a manner similar to image or audio segmentation models--may not yield the optimal solution, particularly at the sign level. Such models often fail to precisely identify the boundaries between signs. A promising alternative is the Beginning-Inside-Outside (BIO) tagging (Ramshaw and Marcus, 1995). BIO tagging was originally used for named entity recognition, but its application extends to any sequence chunking task beyond the text modality. In the context of sign language, BIO tagging provides a more refined model for discerning boundaries between signs and phrases, thus significantly improving segmentation performance (Figure 1). To test the viability of the BIO tagging approach in comparison with the IO tagging, we conducted an experiment on the Public DGS Corpus. The entire corpus was transformed to various frame rates and the sign segments were converted to frames using either BIO or IO tagging, then decoded back into sign segments. Figure 4 illustrates the results of this comparison. Note that the IO tagging was unable to reproduce the same number of segments as the BIO tagging on the gold data. This underscores the importance of BIO tagging in identifying sign and phrase boundaries. ### Phrase Boundaries Linguistic research has shown that prosody is a reliable predictor of phrase boundaries in signed languages Sandler (2010); Ormel and Crasborn (2012). We observe that this is also the case in the Public DGS Corpus dataset used in our experiments. To illustrate this, we model pauses and movement using optical flow directly on the poses as proposed by Moryossef et al. (2020). Figure 3 demonstrates that a change in motion signifies the presence of a pause, which, in turn, indicates a phrase boundary. ### Sign Boundaries We observe that signs generally utilize a limited number of hand shapes, with the majority of signs utilizing a maximum of two hand shapes. For example, linguistically annotated datasets, such as ASL-LEX Sehyr et al. (2021) and ASLLVD Neidle et al. (2012), only record one initial hand shape per hand and one final hand shape. Mandel (1981, p. 87) argued that there can only be one set of selected fingers per sign, constraining the number of handshapes in signs. This limitation is referred to as the _Selected Fingers Constraint_. 
And indeed, Sandler et al. (2008) find that the optimal form of a sign is monosyllabic, and that handshape change is organized by the syllable unit. To illustrate this constraint empirically, we show a histogram of hand shapes per sign in SignBank2 for \(705,151\) signs in Figure 5. Footnote 2: [https://signbank.org/signpuddle2.0/](https://signbank.org/signpuddle2.0/) Additionally, we found that a change in the dominant hand shape often signals the presence of a sign boundary. Specifically, out of \(27,658\) sentences, comprising \(354,955\) pairs of consecutive signs, only \(17.38\%\) of consecutive signs share the same base hand shape3. Based on these observations, we propose using 3D hand normalization as an indicative cue for hand shapes to assist in detecting sign boundaries. We hypothesize that performing 3D hand normalization makes it easier for Figure 4: Reproduced sign segments in the Public DGS Corpus by BIO and IO tagging at various frame rates. \(99.7\%\) of segments reproduced at 25fps by BIO tagging. Figure 5: Number of hand shapes per sign in SignBank. Figure 3: Optical flow for a conversation between two signers (signer 1 top, signer 2 bottom). The x-axis is the progression across 30 seconds. The yellow marks the annotated phrase spans. (Source: Moryossef et al. (2020)) the model to extract the hand shape. We expand on this process and show examples in Appendix C. ## 4 Experimental Setup In this section, we describe the experimental setup used to evaluate our linguistically motivated approach for sign language segmentation. This includes a description of the Public DGS Corpus dataset used in the study, the methodology employed to perform sign and phrase segmentation, and the evaluation metrics used to measure the performance of the proposed approach. ### Dataset The Public DGS Corpus (Prillwitz et al., 2008; Hanke et al., 2020) is a distinctive sign language dataset that includes both accurate sign-level annotation from continuous signing, and well-aligned phrase-level translation in spoken language. The corpus comprises 404 documents / 714 videos4 with an average duration of 7.55 minutes, featuring either one signer or two signers, at 50 fps. Most of these videos feature gloss transcriptions and spoken language translations (German and English), except for the ones in the "Joke" category, which are not annotated and thus excluded from our model5. The translations are comprised of full spoken language paragraphs, sentences, or phrases (i.e., independent/main clauses). Footnote 4: The number of videos is nearly double the number of documents because each document typically includes two signers, each of whom produces one video for segmentation. Each gloss span is considered a gold sign segment, following a tight annotation scheme (Hanke et al., 2012). Phrase segments are identified by examining every translation, with the segment assumed to span from the start of its first sign to the end of its last sign, correcting imprecise annotation. This corpus is enriched with full-body pose estimations from OpenPose (Cao et al., 2019; Schulder and Hanke, 2019) and Mediapipe Holistic (Grishchenko and Bazarevsky, 2020). We use the _3.0.0-uzh-document_ split from Zhang et al. (2023). After filtering the unannotated data, we are left with 296 documents / 583 videos for training, 6 / 12 for validation, and 9 / 17 for testing. The mean number of signs and phrases in a video from the training set is 613 and 111, respectively. 
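The gold sign and phrase spans described above become frame-level training targets, and at inference time frame-level predictions are turned back into spans by the greedy decoder described in the Methodology below. The following is a minimal sketch of both directions, assuming segments are given as (start, end) times in seconds; it is a simplification of the pseudocode in Appendix B, not the authors' exact implementation.

```python
import numpy as np

B, I, O = 0, 1, 2  # tag indices

def segments_to_bio(segments, num_frames, fps):
    """Convert gold (start_sec, end_sec) segments into per-frame BIO tags."""
    tags = np.full(num_frames, O, dtype=np.int64)
    for start, end in segments:
        first = int(round(start * fps))
        last = min(int(round(end * fps)), num_frames - 1)
        if first > last:
            continue
        tags[first] = B
        tags[first + 1 : last + 1] = I
    return tags

def greedy_decode(probs, threshold_b=0.5, threshold_o=0.5):
    """Decode per-frame probabilities of shape (frames, 3) over [B, I, O] into segments.

    A segment opens at the first frame whose B probability exceeds threshold_b and
    closes just before the next frame whose B or O probability exceeds its threshold.
    """
    segments, start, t = [], None, 0
    num_frames = len(probs)
    while t < num_frames:
        if start is None:
            if probs[t, B] > threshold_b:
                start = t
        elif probs[t, B] > threshold_b or probs[t, O] > threshold_o:
            segments.append((start, t - 1))
            start = t if probs[t, B] > threshold_b else None
        t += 1
    if start is not None:
        segments.append((start, num_frames - 1))
    return segments

# Round trip on a toy example at 25 fps
gold = [(0.2, 1.0), (1.04, 1.6)]     # two adjacent signs, no O frame in between
tags = segments_to_bio(gold, num_frames=50, fps=25)
one_hot = np.eye(3)[tags]            # pretend the model is perfectly confident
print(greedy_decode(one_hot))        # [(5, 25), (26, 40)]
```

Running the round trip on two adjacent segments with no O frame in between illustrates why the B tag is needed to keep them apart.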
### Methodology Our proposed approach for sign language segmentation is based on the following steps: 1. **Pose Estimation** Given a video, we first adjust it to 25 fps and estimate body poses using the MediaPipe Holistic pose estimation system. We do not use OpenPose because it lacks a \(Z\)-axis, which prevents 3D rotation used for hand normalization. The shape of a pose is represented as \((\text{frames}\times\text{keypoints}\times\text{axes})\). 2. **Pose Normalization** To generalize over video resolution and distance from the camera, we normalize each of these poses such that the mean distance between the shoulders of each person equals \(1\), and the mid-point is at \((0,0)\)(Celebi et al., 2013). We also remove the legs since they are less relevant to signing. 3. **Optical Flow** We follow the equation in Moryosse et al. (2020, Equation 1). 4. **3D Hand Normalization** We rotate and scale each hand to ensure that the same hand shape is represented in a consistent manner across different frames. We rotate the 21 \(XYZ\) keypoints of the hand so that the back of the hand lies on the \(XY\) plane, we then rotate the hand so that the metacarpal bone of the middle finger lies on the \(Y\)-axis, and finally, we scale the hand such that the bone is of constant length. Visualizations are presented in Appendix C. 5. **Sequence Encoder** For every frame, the pose is first flattened and projected into a standard dimension (\(256\)), then fed through an LSTM encoder (Hochreiter and Schmidhuber, 1997). 6. **BIO Tagging** On top of the encoder, we place two BIO classification heads for sign and phrase independently. \(B\) denotes the beginning of a sign or phrase, \(I\) denotes the middle of a sign or phrase, and \(O\) denotes being outside a sign or phrase. Our cross-entropy loss is proportionally weighted in favor of \(B\) as it is a _rare_ label6 compared to \(I\) and \(O\). Footnote 6: B:I:O is about 1:5:18 for signs and 1:58:77 for phrases. 7. **Greedy Segment Decoding** To decode the frame-level BIO predictions into sign/phrase segments, we define a segment to start with the first frame possessing a \(B\) probability greater than a predetermined threshold (de-faulted at \(0.5\)). The segment concludes with the first frame among the subsequent frames, having either a \(B\) or \(O\) probability exceeding the threshold. We provide the pseudocode of the decoding algorithm in Appendix B. ### Experiments Our approach is evaluated through a series of six sets of experiments. Each set is repeated three times with varying random seeds. Preliminary experiments were conducted to inform the selection of hyperparameters and features, the details of which can be found in Table 3 in Appendix A. Model selection is based on validation metrics. 1. **E0: IO Tagger** We re-implemented and reproduced7 the sign language detection model proposed by Moryossef et al. (2020), in PyTorch Paszke et al. (2019) as a naive baseline. This model processes optical flow as input and outputs \(I\) (is signing) and \(O\) (not signing) tags. Footnote 7: The initial implementation uses OpenPose Cao et al. (2019), at 50 fps. Preliminary experiments reveal that these differences do not significantly influence the results. Footnote 8: We reduce the dense _FACE_LANDMARKS_ in Mediapipe Holistic to the contour keypoints according to the variable _mediapipe_solutions_._holistic_._FACEMESH_CONTOURS_. 2. **E1: Bidirectional BIO Tagger** We replace the IO tagging heads in _E0_ with BIO heads to form our baseline. 
Our preliminary experiments indicate that inputting only the 75 hand and body keypoints and making the LSTM layer bidirectional yields optimal results. Footnote 8: We reduce the dense _FACE_LANDMARKS_ in Mediapipe Holistic to the contour keypoints according to the variable _mediapipe_solutions_._holistic_._FACEMESH_CONTOURS_. 3. **E2: Adding Reduced Face Keypoints** Although the 75 hand and body keypoints serve as an efficient minimal set for sign language detection/segmentation models, we investigate the impact of other nonmanual sign language articulators, namely, the face. We introduce a reduced set of 128 face keypoints that signify the signer's _face contour_8. Footnote 8: We reduce the dense _FACE_LANDMARKS_ in Mediapipe Holistic to the contour keypoints according to the variable _mediapipe_solutions_._holistic_._FACEMESH_CONTOURS_. 4. **E3: Adding Optical Flow** At every time step \(t\) we append the optical flow between \(t\) and \(t-1\) to the current pose frame as an additional dimension after the \(XYZ\) axes. Footnote 8: We reduce the dense _FACE_LANDMARKS_ in Mediapipe Holistic to the contour keypoints according to the variable _mediapipe_solutions_._holistic_._FACEMESH_CONTOURS_. 5. **E4: Adding 3D Hand Normalization** At every time step, we normalize the hand poses and concatenate them to the current pose. Footnote 9: The initial implementation uses OpenPose Cao et al. (2019), at 50 fps. Preliminary experiments reveal that these differences do not significantly influence the results. Footnote 8: We reduce the dense _FACE_LANDMARKS_ in Mediapipe Holistic to the contour keypoints according to the variable _mediapipe_solutions_._holistic_._FACEMESH_CONTOURS_. 6. **E5: Autoregressive Encoder** We replace the encoder with the one proposed by Jiang et al. (2023) for the detection and classification of great ape calls from raw audio signals. Specifically, we add autoregressive connections between time steps to encourage consistent output labels. The logits at time step \(t\) are concatenated to the input of the next time step, \(t+1\). This modification is implemented bidirectionally by stacking two autoregressive encoders and adding their output up before the Softmax operation. However, this approach is inherently slow, as we have to fully wait for the previous time step predictions before we can feed them to the next time step. ### Evaluation Metrics We evaluate the performance of our proposed approach for sign and phrase segmentation using the following metrics: * **Frame-level F1 Score** For each frame, we apply the _argmax_ operation to make a local prediction of the BIO class and calculate the macro-averaged per-class F1 score against the ground truth label. We use this frame-level metric during validation as the primary metric for model selection and early stopping, due to its independence from a potentially variable segment decoding algorithm (SS5.2). * **Intersection over Union (IoU)** We compute the IoU between the ground truth segments and the predicted segments to measure the degree of overlap. Note that we do not perform a one-to-one mapping between the two using techniques like DTW. Instead, we calculate the total IoU based on all segments in a video. * **Percentage of Segments (%)** To complement IoU, we introduce the percentage of segments to compare the number of segments predicted by the model with the ground truth annotations. It is computed as follows: #predicted segments #ground truth segments. The optimal value is 1. 
* **Efficiency** We measure the efficiency of each model by the number of parameters and the training time of the model on a Tesla V100-SXM2-32GB GPU for 100 epochs10. Footnote 10: Exceptionally the autoregressive models in _E5_ were trained on an NVIDIA A100-SXM4-80GB GPUA100 which doubles the training speed of V100, still the training is slow. ## 5 Results and Discussion We report the mean test evaluation metrics for our experiments in Table 1. We do not report F1 Score for _E0_ since it has a different number of classes and is thus incomparable. Comparing _E1_ to _E0_, we note that the model's bidirectionality, the use of poses, and BIO tagging indeed help outperform the model from previous work where only optical flow and IO tagging are used. While _E1_ predicts an excessive number of phrase segments, the IoUs for signs and phrases are both higher. Adding face keypoints (_E2_) makes the model worse, while including optical flow (_E3_) improves the F1 scores. For phrase segmentation, including optical flow increases IoU, but over-segments phrases by more than 300%, which further exaggerates the issue in _E1_. Including hand normalization (_E4_) on top of _E3_ slightly deteriorates the quality of both sign and phrase segmentation. From the non-exhaustive hyperparameter search in the preliminary experiments (Table 3), we examined different hidden state sizes (\(64\), \(128\), \(256\), \(512\), \(1024\)) and a range of \(1\) to \(8\) LSTM layers, and conclude that a hidden size of \(256\) and \(4\) layers with \(1e-3\) learning rate are optimal for _E1_, which lead to _E1s_. We repeat the setup of _E2_, _E3_, and _E4_ with these refined hyper-parameters, and show that all of them outperform their counterparts, notably in that they ease the phrase over-segmentation problem. In _E2s_, we reaffirmed that adding face keypoints does not yield beneficial results, so we exclude face in future experiments. Although the face is an essential component to understanding sign language expressions and does play some role in sign and phrase level segmentation, we believe that the 128 face contour points are too dense for the model to learn useful information compared to the 75 body points, and may instead confuse the model. In addition, the benefits of explicitly including optical flow (_E3s_) fade away with the increased model depth and we speculate that now the model might be able to learn the optical flow features by itself. Surprisingly, while adding hand normalization (_E4s_) still slightly worsens the overall results, it has the best phrase percentage. From _E4s_ we proceeded with the training of _E5_, an autoregressive model. Unexpectedly, counter to our intuition and previous work, _E5_ underachieves our baseline across all evaluation metrics10. Footnote 10: _E5_ should have more parameters than _E4s_, but because of an implementation bug, each LSTM layer has half the parameters. Based on the current results, we assume that autoregressive connections (even with more parameters) will not improve our models. ### Challenges with 3D Hand Normalization While the use of 3D hand normalization is well-justified in SS3.3, we believe it does not help the model due to poor depth estimation quality, \begin{table} \begin{tabular}{l l c c|c c c|c c} \hline \hline & \multicolumn{3}{c}{**Sign**} & \multicolumn{3}{c}{**Phrase**} & \multicolumn{2}{c}{**Efficiency**} \\ \cline{2-9} **Experiment** & **F1** & **IoU** & **\%** & **F1** & **IoU** & **\%** & **\#Params** & **Time** \\ \hline **E0** & **Moryossef et al. 
(2020)** & — & \(0.46\) & \(1.09\) & — & \(0.70\) & **1.00** & **102K** & **0.50:17** \\ \hline **E1** & **Baseline** & \(0.56\) & \(0.66\) & \(0.91\) & \(0.59\) & \(0.80\) & \(2.50\) & 454K & 1:01:50 \\ **E2** & **E1 + Face** & \(0.53\) & \(0.58\) & \(0.64\) & \(0.57\) & \(0.76\) & \(1.87\) & 552K & 1:50:31 \\ **E3** & **E1 + Optical Flow** & \(0.58\) & \(0.62\) & \(1.12\) & \(0.60\) & \(0.82\) & \(3.19\) & 473K & 1:20:17 \\ **E4** & **E3 + Hand Norm** & \(0.56\) & \(0.61\) & \(1.07\) & \(0.60\) & \(0.80\) & \(3.24\) & 516K & 1:30:59 \\ \hline **E1s** & **E1 + Depth=4** & **0.63** & **0.69** & \(1.11\) & **0.65** & \(0.82\) & \(1.63\) & 1.6M & 4:08:48 \\ **E2s** & **E2 + Depth=4** & \(0.62\) & **0.69** & \(1.07\) & \(0.63\) & \(0.84\) & \(2.68\) & 1.7M & 3:14:03 \\ **E3s** & **E3 + Depth=4** & \(0.60\) & \(0.63\) & \(1.13\) & \(0.64\) & \(0.80\) & \(1.53\) & 1.7M & 4:08:30 \\ **E4s** & **E4 + Depth=4** & \(0.59\) & \(0.63\) & \(1.13\) & \(0.62\) & \(0.79\) & \(1.43\) & 1.7M & 4:35:29 \\ \hline **E1s*** & **E1s + Tuned Decoding** & — & **0.69** & **1.03** & — & **0.85** & 1.02 & — & — \\ **E4s*** & **E4s + Tuned Decoding** & — & 0.63 & 1.06 & — & 0.79 & 1.12 & — & — \\ \hline **E5** & **E4s + Autoregressive** & \(0.45\) & \(0.47\) & \(0.88\) & \(0.52\) & \(0.63\) & \(2.72\) & 1.3M & \textasci{-}3 days \\ \hline \hline \end{tabular} \end{table} Table 1: Mean test evaluation metrics for our experiments. The best score of each column is in bold and a star (*) denotes further optimization of the decoding algorithm without changing the model (only affects _IoU_ and %). Table 4 in Appendix A contains a complete report including validation metrics and standard deviation of all experiments. as further corroborated by recent research from De Coster et al. (2023). Therefore, we consider it a negative result, showing the deficiencies in the 3D pose estimation system. The evaluation metrics we propose in Appendix C could help identify better pose estimation models for this use case. ### Tuning the Segment Decoding Algorithm We selected _E1s_ and _E4s_ to further explore the segment decoding algorithm. As detailed in SS4.2 and Appendix B, the decoding algorithm has two tunable parameters, \(threshold_{b}\) and \(threshold_{o}\). We conducted a grid search with these parameters, using values from 10 to 90 in increments of 10. We additionally experimented with a variation of the algorithm that conditions on the most likely class by _argmax_ instead of fixed threshold values, which turned out similar to the default version. We only measured the results using IoU and the percentage of segments at validation time since the F1 scores remain consistent in this case. For sign segmentation, we found using \(threshold_{b}=60\) and \(threshold_{o}=40/50/60\) yields slightly better results than the default setting (\(50\) for both). For phrase segmentation, we identified that higher threshold values (\(threshold_{b}=90,threshold_{o}=90\) for _E1s_ and \(threshold_{b}=80,threshold_{o}=80/90\) for _E4s_) improve on the default significantly, especially on the percentage metric. We report the test results under _E1s*_ and _E4s*_, respectively. Despite formulating a single model, we underline a separate sign/phrase model selection process to archive the best segmentation results. Figure 6 illustrates how higher threshold values reduce the number of predicted segments and skew the distribution of predicted phrase segments towards longer ones in _E1sE1s*_. As Bull et al. 
(2020b) suggest, advanced priors could also be integrated into the decoding algorithm. ### Comparison to Previous Work We re-implemented and re-purposed the sign language detection model introduced in Moryossef et al. (2020) for our segmentation task as a baseline since their work is the state-of-the-art and the only comparable model designed for the Public DGS Corpus dataset. As a result, we show the necessity of replacing IO tagging with BIO tagging to tackle the subtle differences between the two tasks. For _phrase_ segmentation, we compare to Bull et al. (2020b). We note that our definition of sign language phrases (spanning from the start of its first sign to the end of its last sign) is tighter than the subtitle units used in their paper and that we use different training datasets of different languages and domains. Nevertheless, we implemented some of their frame-level metrics and show the results in Table 2 on both the Public DGS Corpus and the MEDIAPI-SKEL dataset (Bull et al., 2020a) in French Sign Language (LSF). We report both zero-shot out-of-domain results11 and the results of our models trained specifically on their dataset without the spatio-temporal graph convolutional network (ST-GCN) (Yan et al., 2018) used in their work for pose encoding. Footnote 11: The zero-shot results are not directly comparable to theirs due to different datasets and labeling approaches. \begin{table} \begin{tabular}{l l|c c} \hline \hline **Data** & **Model** & **ROC-AUC** & **F1-M** \\ \hline \multirow{8}{*}{LSF} & **full (theirs)** & 0.87 & — \\ & **body (theirs)** & 0.87 & — \\ \cline{2-4} & **E1s (ours, zero-shot)** & 0.71 & 0.41 \\ & **E4s (ours, zero-shot)** & 0.76 & 0.44 \\ \cline{2-4} & **E1s (ours, trained)** & 0.87 & 0.49 \\ & **E4s (ours, trained)** & 0.87 & 0.51 \\ \hline \multirow{2}{*}{DGS} & **E1s (ours)** & 0.91 & 0.65 \\ & **E4s (ours)** & 0.90 & 0.62 \\ \hline \hline \end{tabular} \end{table} Table 2: Evaluation metrics used in Bull et al. (2020b). _ROC-AUC_ is applied exclusively on the _O_-tag. For comparison _F1-M_ denotes the macro-averaged per-class F1 used in this work across all tags. The first two rows are the best results taken from Table 1 in their paper. The next four rows represent how our models perform on their data in a zero-shot setting, and in a supervised setting, and the last two rows represent how our models perform on our data. Figure 6: Probability density of phrase segment lengths. For _sign_ segmentation, we do not compare to Renz et al. (2021, 2021) due to different datasets and the difficulty in reproducing their segment-level evaluation metrics. The latter depends on the decoding algorithm and a way to match the gold and predicted segments, both of which are variable. ## 6 Conclusions This work focuses on the automatic segmentation of signed languages. We are the first to formulate the segmentation of individual signs and larger sign phrases as a joint problem. We propose a series of improvements over previous work, linguistically motivated by careful analyses of sign language corpora. Recognizing that sign language utterances are typically continuous with minimal pauses, we opted for a BIO tagging scheme over IO tagging. Furthermore, leveraging the fact that phrase boundaries are marked by prosodic cues, we introduce optical flow features as a proxy for prosodic processes. Finally, since signs typically employ a limited number of hand shapes, to make it easier for the model to understand handshapes, we attempt 3D hand normalization. 
Our experiments conducted on the Public DGS Corpus confirmed the efficacy of these modifications for segmentation quality. By comparing to previous work in a zero-shot setting, we demonstrate that our models generalize across signed languages and domains and that including linguistically motivated cues leads to a more robust model in this context. Finally, we envision that the proposed model has applications in real-world data collection for signed languages. Furthermore, a similar segmentation approach could be leveraged in various other fields such as co-speech gesture recognition (Moryossef, 2023) and action segmentation (Tang et al., 2019). ## Limitations ### Pose Estimation In this work, we employ the MediaPipe Holistic pose estimation system (Grishchenko and Bazarevsky, 2020). There is a possibility that this system exhibits bias towards certain protected classes (such as gender or race), underperforming in instances with specific skin tones or lower video quality. Thus, we cannot attest to how our system would perform under real-world conditions, given that the videos utilized in our research are generated in a controlled studio environment, primarily featuring white participants. ### Encoding of Long Sequences In this study, we encode sequences of frames that are significantly longer than the typical 512 frames often seen in models employing Transformers (Vaswani et al., 2017). Numerous techniques, ranging from basic temporal pooling/downsampling to more advanced methods such as a video/pose encoder that aggregates local frames into higher-level 'tokens' (Renz et al., 2021), graph convolutional networks (Bull et al., 2020), and self-supervised representations (Baevski et al., 2020), can alleviate length constraints, facilitate the use of Transformers, and potentially improve the outcomes. Moreover, a hierarchical method like the Swin Transformer (Liu et al., 2021) could be applicable. ### Limitations of Autoregressive LSTMs In this paper, we replicated the autoregressive LSTM implementation originally proposed by Jiang et al. (2023). Our experiments revealed that this implementation exhibits significant slowness, which prevented us from performing further experimentation. In contrast, other LSTM implementations employed in this project have undergone extensive optimization (Appleyard, 2016), including techniques like combining general matrix multiplication operations (GEMMs), parallelizing independent operations, fusing kernels, rearranging matrices, and implementing various optimizations for models with multiple layers (which are not necessarily applicable here). A comparison of CPU-based performance demonstrates that our implementation is x6.4 times slower. Theoretically, the number of operations performed by the autoregressive LSTM is equivalent to that of a regular LSTM. However, while the normal LSTM benefits from concurrency based on the number of layers, we do not have that luxury. The optimization of recurrent neural networks (RNNs) (Que et al., 2020, 2021, 2022) remains an ongoing area of research. If proven effective in other domains, we strongly advocate for efforts to optimize the performance of this type of network. ### Interference Between Sign and Phrase Models In our model, we share the encoder for both the sign and phrase segmentation models, with a shallow linear layer for the BIO tag prediction associated with each task. It remains uncertain whether these two tasks interfere with or enhance each other. 
An ablation study (not presented in this work) involving separate modeling is necessary to obtain greater insight into this matter. #### Noisy Training Objective Although the annotations utilized in this study are of expert level, the determination of precise sign Hanke et al. (2012) and phrase boundaries remains a challenging task, even for experts. Training the model on these annotated boundaries might introduce excessive noise. A similar issue was observed in classification-based pose estimation Cao et al. (2019). The task of annotating the exact anatomical centers of joints proves to be nearly impossible, leading to a high degree of noise when predicting joint position as a 1-hot classification task. The solution proposed in this previous work was to distribute a Gaussian around the annotated location of each joint. This approach allows the joint's center to overlap with some probability mass, thereby reducing the noise for the model. A similar solution could be applied in our context. Instead of predicting a strict 0 or 1 class probability, we could distribute a Gaussian around the boundary. #### Naive Segment Decoding We recognize that the frame-level greedy decoding strategy implemented in our study may not be optimal. Previous research in audio segmentation Venkatesh et al. (2022) employed a You Only Look Once (YOLO; Redmon et al. (2015)) decoding scheme to predict segment boundaries and classes. We propose using a similar prediction atop a given representation, such as the LSTM output or classification logits of an already trained network. Differing from traditional object detection tasks, this process is simplified due to the absence of a \(Y\) axis and non-overlapping segments. In this scenario, the network predicts the segment boundaries using regression, thereby avoiding the class imbalance issue of the BIO tagging. We anticipate this to yield more accurate sign language segmentation. #### Lack of Transcription Speech segmentation is a close task to our sign language segmentation task on videos. In addition to relying on prosodic cues from audio, the former could benefit from automatic speech transcription systems, either in terms of surrogating the task to text-level segmentation and punctuation Cho et al. (2015), or gaining additional training data from automatic speech recognition / spoken language translation Tsiamas et al. (2022). However, for signed languages, there is neither a standardized and widely used written form nor a reliable transcription procedure into some potential writing systems like SignWriting Sutton (1990), HamNoSys Prillwitz and Zienert (1990), and glosses Johnston (2008). Transcription/recognition and segmentation tasks need to be solved simultaneously, so we envision that a multi-task setting helps. Sign spotting, the localization of a specific sign in continuous signing, is a simplification of the segmentation and recognition problem in a closed-vocabulary setting Wong et al. (2022); Varol et al. (2022). It can be used to find candidate boundaries for some signs, but not all. ## Acknowledgements This work was funded by the EU Horizon 2020 project EASIER (grant agreement no. 101016982), the Swiss Innovation Agency (Innousisse) flagship IICT (PFFS-21-47) and the EU Horizon 2020 project iEXTRACT (grant agreement no. 802774). We also thank Rico Sennrich and Chantal Amrhein for their suggestions.
2308.07587
Broadband Continuous Spectral Control of a Single Wavelength Polymer-Based Solid-State Random Laser
We demonstrate temperature-controlled spectral tunability of a partially-pumped single-wavelength random laser in a solid-state random laser based on DCM (4-dicyanomethylene-2-methyl-6-(p-dimethylaminostyryl)-4H-pyran) doped PMMA (polymethyl methacrylate) dye. By carefully shaping the spatial profile of the pump, we first achieve low-threshold, single-mode random lasing with excellent side lobes rejection. Notably, we show how temperature-induced changes in the refractive index of the PMMA-DCM layer result in a blue-shift of this single lasing mode. Continuous tunability of the lasing wavelength is demonstrated over an 8nm-wide bandwidth.
Bhupesh Kumar, Sebastian Schulz, Patrick Sebbah
2023-08-15T06:19:10Z
http://arxiv.org/abs/2308.07587v2
# Broadband Continuous Spectral Control of a Single Wavelength Polymer-Based Solid-State Random Laser ###### Abstract We demonstrate temperature-controlled spectral tunability of a partially-pumped single-wavelength random laser in a solid-state random laser based on DCM (4-dicyanomethylene-2-methyl-6-(p-dimethylaminostyryl)-4H-pyran) doped PMMA (polymethyl methacrylate) dye. By carefully shaping the spatial profile of the pump, we first achieve low-threshold, single-mode random lasing with excellent side lobes rejection. Notably, we show how temperature-induced changes in the refractive index of the PMMA-DCM layer result in a blue-shift of this single lasing mode. Continuous tunability of the lasing wavelength is demonstrated over a 8nm-wide bandwidth. ## 1 Introduction Random lasers are unconventional laser sources in which feedback is provided by randomly-distributed scattering particles. In the past two decades, random lasers have been the subject of intense theoretical and experimental study [1, 2, 3, 4, 5, 6, 7, 8, 9]. Tunability and directionality are important features that determine the application scope of any laser device in fields such as integrated spectroscopy, remote sensing, and optical communication [10]. Single wavelength random laser tunability is challenging due to the random and multimode nature of the emission spectrum, fluctuations in the emission spectrum, and lack of precise, non-invasive and reversible tuning mechanism. Spectral tunability in random lasers has been demonstrated via multiple mechanisms, including optical fiber-based random lasers [11], pump size control or scatterer concentration variation [12], gain medium thickness variation [13], dye molecule selection [14, 15], or mechanical stretching [16, 17]. Other tuning mechanisms include engineering absorption of light emission [18], switching modes associated with different lengths of silver nanorods in plasmonic random lasers [19], as well as electric-field-induced tunability [20]. However, all these mechanisms have been limited to the spectral tuning of broad linewidth or multimode random lasers. Tunable single-mode random laser has been reported in rare-earth-doped fiber random laser, emitting exclusively in the mid-near-infrared [22, 23, 24, 25]. Single-mode random lasing temperature-based tunability in the visible has been demonstrated in liquid crystal-embedded random lasers, but tunability was found to be limited to a few nanometers by the nematic-isotropic transition temperature [21]. To the best of our knowledge, continuous broadband tuning of a singlemode random laser in the visible as not yet been reported. Recently, we have demonstrated that using an iterative pump shaping optimization method, selective excitation of a particular mode of the multimode emission spectrum and single-mode operation with high side-lobe rejection can be achieved [26, 27, 28]. However, it is not possible to achieve lasing at any arbitrary desired wavelength, but only at wavelengths corresponding to lasing modes of the discrete multimode emission spectrum. Overcoming this limitation would add tunability to this technique and offer full spectral control of the random laser. In this paper, we report single-wavelength continuous broadband spectral tuning in a dye-doped solid-state random laser. Random lasers (RLs) typically produce multimode coherent light due to the random distribution of scattering particles. 
By combining the technique of spatial pump shaping together with the negative thermal coefficient of the refractive index of PMMA polymer, we are able to achieve single wavelength spectral control of a random laser on a single device. Specifically, we show continuous broadband tunability over a bandwidth of 8 nm.

## 2 Sample fabrication

Disordered structures were fabricated using e-beam lithography on a 600 nm layer of poly(methyl methacrylate) or PMMA that was doped with 5% by weight of DCM laser dye. The fluorescence spectrum of the dye is centered around 600 nm; it has a high fluorescence quantum yield and a large Stokes shift (100 nm), which means it does not reabsorb the emitted light. The PMMA used had a molecular weight of 49500 and was used at a concentration of 6% by weight in anisole. The active layer was obtained by spin-coating a doped-polymer solution at 1000 rpm for 60 seconds on a fused silica wafer (Edmund Optics) and post-baking it at 120\({}^{\circ}\)C for 2 hours in an oven [28]. The silica wafer has a refractive index of 1.45, compared to 1.54 for the PMMA-DCM layer. To fabricate one-dimensional (1D) disordered samples, 125 randomly distributed parallel grooves, each 200 nm wide and 50 \(\mu\)m long, covering a total length of 1000 \(\mu\)m, were carved into the PMMA-DCM layer using e-beam lithography (CRESTEC/CABL-9000C) (see Fig. 1(a)), resulting in dielectric sections with an average length of 8 \(\mu\)m separated by air gaps of 200 nm. The fabrication method ensures a high refractive index contrast of 0.54 between air grooves and the polymer layer, which helps in achieving random lasing action at a low threshold.

Figure 1: (a) High-resolution SEM image of a small section of the sample from the top, showing air grooves carved into the PMMA-DCM layer. (b) Schematic of the experimental setup. BE1 and BE2: beam expanders; P: polarizer; SLM: spatial light modulator; M2: mirror; S: sample; TC: temperature controller; CP: copper plate.

## 3 Experimental Setup

A schematic of the experimental setup is shown in Fig. 1(b). The setup consists of a frequency-doubled mode-locked Nd:YAG laser (EKSPLA PL2230: 532 nm, 20 ps, maximum output energy 28 mJ, repetition rate 1-50 Hz). The beam is expanded 5X and spatially modulated by a 1952 x 1088 pixel reflective spatial light modulator (SLM) (Holoeye HES 6001, pixel size 8.0 \(\mu\)m). The SLM is placed in the object plane of a telescope with 4X reduction and is imaged on the sample. This setup is used to create a laser strip whose width and length are adjusted according to the sample dimensions. The disorder structure is precisely aligned with the laser strip under a fixed-stage Zeiss microscope (AxioExaminer A1) and imaged using an Andor Zyla sCMOS camera (22 mm diagonal view, 6.5 \(\mu\)m pixel size) with a 10X objective. The laser emission is collected through a multimode optical fiber connected to a Horiba iHR550 imaging spectrometer equipped with a 2400 \(\mathrm{mm}^{-1}\) grating and a Synapse CCD detection system (sampling rate 1 MHz, 1024 x 256 pixels, 26 \(\mu\)m pixel pitch). The entrance slit is 50 \(\mu\)m, resulting in a spectral resolution of 20 pm.

## 4 Results

When the sample undergoes optical pumping, amplified spontaneous emission experiences multiple scattering.
This in turn provides coherent optical feedback, enabling random lasing oscillations with sharp emission linewidths. Once the pump intensity rises above the lasing threshold, the output intensity increases many-fold. This nonlinear increase in output intensity confirms the onset of random lasing action. From a linear fit, the lasing threshold is found to be 35 nJ for a pump size of 450 \(\mu\)m X 50 \(\mu\)m (Fig. 2(c)). Above the threshold, multimode lasing is achieved (illustrated in Fig. 2(a)). The resulting spectrum displays randomly positioned, distinct lasing peaks, each exhibiting a typical linewidth of 0.2 nm, constrained by the resolution of the spectrometer. The directivity of our random laser is assessed, with half of the emission intensity confined within \(\pm\) 2.5 degrees from the center of the sample. This is found by scanning a 20X microscope objective placed at a distance of 10 cm from the sample edge to collect emission in a direction orthogonal to the sample length. Directional measurements are shown in the inset of Fig. 2(c). We demonstrate spectral reproducibility by manufacturing three samples with identical disorder configurations on the same substrate. These samples are then subjected to uniform pumping at the same pump power, and their respective emission spectra are recorded. Remarkably, all three samples yield identical emission spectra, as depicted in Fig. 2(b). The high degree of spectral reproducibility is attributed to the highly precise e-beam nanolithography fabrication technique. We also examined the photostability of our quasi-1D random laser by subjecting the sample to uniform pumping using the same experimental conditions as described earlier. The pump energy was held constant at 50 nJ, employing a pump size of 450 \(\mu\)m x 50 \(\mu\)m, and a pump laser repetition rate of 10 Hz. In Figure 2(d), we present the integrated emission intensity as a function of the laser pulse number. The photostability of the laser was observed to be 28380 pulses, equivalent to approximately 47.3 minutes, during which the integrated laser emission intensity reduced to half of its original value. To achieve single-mode lasing at any of the lasing wavelengths of the multimode emission spectrum (Fig. 3(b)), we apply a nonuniform intensity profile to the pump beam by modulating the SLM. We implement an iterative optimization method [26, 28] to find the optimal pump profile for which modes other than the target mode are suppressed. Here we use the Nelder-Mead simplex algorithm implemented in the fminsearch function of MATLAB to optimize the pump profile. We slightly modified the fminsearch function by setting the initial step (usually the \(\delta\) parameter in fminsearch) to 1.0 in order to explore a larger region of the 32-dimensional space. The number of pixels (32) was chosen as a compromise between sensitivity and computation time. The MATLAB-generated image that is sent to the SLM is made up of 32 intensity blocks, each of which is encoded on 256 levels of grey and projected onto the sample. To start the optimization process, we choose 32 column-vectors \(V_{i}\) from the 32 x 32 binary Hadamard matrix as the initial vertices. The pump profile P(x) is therefore written as \[P(x)=\tfrac{1}{255}\sum_{i=1}^{32}\beta_{i}V_{i}\] where \(\beta_{i}\) takes discrete values in the range [0,255].
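As a rough illustration, the sketch below reimplements this optimization loop in Python with SciPy's Nelder-Mead rather than MATLAB's fminsearch. The toy acquire_spectrum function stands in for the actual SLM and spectrometer control, the mode wavelengths are borrowed from Figure 4(d) merely as plausible values, the cost follows the extinction-ratio criterion described in the next paragraph, and the construction of a 33-vertex initial simplex from Hadamard columns is our adaptation of the 32-vertex initialization described above; none of this is the authors' code.

```python
import numpy as np
from scipy.linalg import hadamard
from scipy.optimize import minimize

N_BLOCKS = 32        # the pump stripe is divided into 32 intensity blocks
TARGET_NM = 602.60   # wavelength of the mode to be selected
WINDOW_NM = 0.3      # half-width separating the target peak from all other modes

# Toy stand-in for the experiment: a few fixed modes, each fed by a different
# random section of the pump stripe. Replace with real SLM + spectrometer control.
rng = np.random.default_rng(1)
MODE_WL = np.array([600.2, 601.5, 602.6, 603.3, 604.1])
MODE_PROFILES = rng.random((MODE_WL.size, N_BLOCKS))

def acquire_spectrum(beta):
    gain = MODE_PROFILES @ (np.asarray(beta) / 255.0)
    return MODE_WL, np.maximum(gain - 0.5 * gain.mean(), 0.0) ** 2

def cost(beta):
    beta = np.clip(beta, 0, 255)
    wavelengths, intensities = acquire_spectrum(beta)
    in_window = np.abs(wavelengths - TARGET_NM) < WINDOW_NM
    i_target = intensities[in_window].max()
    i_others = intensities[~in_window].max()
    return i_others / max(i_target, 1e-12)   # inverse extinction ratio, to be minimized

# Initial simplex from binary Hadamard columns scaled to [0, 255]; SciPy requires
# 33 vertices for a 32-dimensional simplex, so a uniform grey vertex is appended.
H = (hadamard(N_BLOCKS) + 1) // 2            # entries in {0, 1}
initial_simplex = np.vstack([255.0 * H.T, np.full(N_BLOCKS, 128.0)])

result = minimize(
    cost,
    x0=initial_simplex[0],
    method="Nelder-Mead",
    options={"initial_simplex": initial_simplex, "maxiter": 250, "fatol": 1e-4},
)
optimal_pump_profile = np.clip(result.x, 0, 255)
```

In the real experiment, each cost evaluation corresponds to projecting the 32 grey levels onto the sample and acquiring an averaged emission spectrum, as described next.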
Each set of coefficients \(\beta_{i}\) corresponds to a particular pump profile, associated with a particular emission spectrum \(I(\lambda)\). The optimization algorithm uses the inverse of the extinction ratio, \(\eta=I_{o}/I_{t}\), as its cost function and aims for its minimization, where \(I_{t}\) is the peak intensity of the targeted mode and \(I_{o}\) is the highest intensity among all other modes except the targeted mode. To find the optimized pump profile, the algorithm iteratively generates a new pump profile, applies it to the pump stripe, acquires the emission spectrum averaged over 10 shots, and computes the cost function. Mode selection and single-mode operation at the lasing wavelength of \(\lambda\) = 602.60 nm are achieved after the convergence of the iterative optimization, as shown in Fig. 3(c). After 250 iterations, the algorithm converges to a pump profile that suppresses modes other than the targeted mode.

Figure 2: (a) Multimode random laser emission spectrum obtained after pumping the sample uniformly above the threshold; sample length 1000 \(\mu\)m, pump size 1000 \(\mu\)m x 50 \(\mu\)m. (b) Emission spectra recorded for 3 different samples of identical disorder configuration at the same pump power. (c) The integrated intensity of the emission spectrum with increasing pump energy (nJ). Inset: Angular distribution of the integrated output intensity of the random laser emission at a distance of 10 cm from the sample edge; the blue line is a Gaussian fit. (d) Integrated intensity as a function of the number of pump pulses for the random laser. The pump energy is about 50 nJ for a uniform pump of size 450 \(\mu\)m x 50 \(\mu\)m at a repetition rate of 10 Hz. The blue line is an exponential fit.

The selected lasing mode has a sidelobe rejection ratio (measured as the ratio of the peak intensity count of the target mode to the maximum noise level in the spectrum) of 800, which corresponds to 53 dB. The next highest intensity count in the emission spectrum, excluding the target mode, is 4 counts, which is close to the average noise level present in the spectrum. We also tested the robustness of our pump profile optimization method for achieving single-mode lasing by selecting 50 different lasing modes on different samples. Figure 3(d) shows the sidelobe rejection of the selected modes. All the modes are selected with an excellent sidelobe rejection of more than 40 dB. Next, we demonstrate how the emission wavelength of the selected lasing mode can be tuned by changing the temperature of the sample. The RL sample was positioned on a copper plate featuring a rectangular hole. Placing the sample over the hole enabled us to pump the RL sample from the bottom. The copper plate was connected to a heating probe to heat the sample and a sensor to monitor the temperature. Figure 4(a) shows the blue shift of the single-mode random laser emission, initially centered at \(\lambda\) = 602.60 nm, when the temperature is increased. The inset in Fig. 4(a) shows the linear temperature dependence of the spectral shift. A linear fit gives a slope of d\(\lambda\)/dT = -0.02 nm/\({}^{\circ}\)C, which means that a 5 \({}^{\circ}\)C increase results in a spectral shift of 0.1 nm. Overall, we obtained a total shift of 1 nm between 25 \({}^{\circ}\)C and 75 \({}^{\circ}\)C. When the sample is cooled down to 25 \({}^{\circ}\)C, the initial spectral position is recovered and the process is perfectly reversible.
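As a concrete illustration of the pump-shaping loop described above, the following minimal Python sketch reproduces its structure: 32 grayscale pump blocks, a Nelder-Mead simplex seeded with columns of the binary Hadamard matrix, and a cost function given by the ratio of the strongest competing mode to the targeted mode. The original experiment used MATLAB's fminsearch and real hardware; here emission_spectrum is a purely illustrative linear toy model standing in for the "program the SLM, acquire the spectrum averaged over 10 shots" step, so the numbers it produces carry no physical meaning.

```python
import numpy as np
from scipy.linalg import hadamard
from scipy.optimize import minimize

rng = np.random.default_rng(0)

N_BLOCKS = 32        # independently controlled pump blocks on the SLM
N_MODES = 12         # competing lasing modes in the toy model
TARGET = 3           # index of the mode to be selected
NOISE_FLOOR = 1e-3   # mimics the detector noise floor

# Toy stand-in for the experiment: each mode's intensity is modelled as its
# spatial overlap with the pump profile (in reality: program the SLM and
# acquire the emission spectrum averaged over 10 pump shots).
mode_overlap = rng.uniform(0.0, 1.0, size=(N_MODES, N_BLOCKS))

def emission_spectrum(beta):
    beta = np.clip(beta, 0, 255) / 255.0
    return mode_overlap @ beta

def cost(beta):
    """Inverse extinction ratio: strongest competing mode over targeted mode."""
    spec = emission_spectrum(beta)
    target = spec[TARGET]
    competing = np.max(np.delete(spec, TARGET))
    return (competing + NOISE_FLOOR) / (target + NOISE_FLOOR)

# Seed the Nelder-Mead simplex with the 32 columns of the binary Hadamard
# matrix (rescaled to the 0-255 grayscale range) plus uniform pumping.
H = ((hadamard(N_BLOCKS) + 1) // 2) * 255
initial_simplex = np.vstack([np.full(N_BLOCKS, 255.0), H.T.astype(float)])

res = minimize(cost, np.full(N_BLOCKS, 255.0), method="Nelder-Mead",
               options={"initial_simplex": initial_simplex,
                        "maxiter": 250, "xatol": 1.0, "fatol": 1e-4})

print("inverse extinction ratio after optimization:", round(res.fun, 3))
print("optimized pump profile (0-255):", np.round(np.clip(res.x, 0, 255)))
```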
Figure 3: (a) Optical microscope image of the field intensity distribution near the sample surface when pumped above the threshold. (b) Multimode emission spectrum for a uniform pump strip of size 450 \(\mu\)m x 50 \(\mu\)m. (c) Singlemode lasing at \(\lambda\) = 602.60 nm after the iterative pump optimization process. The inset shows the optimized pump profile in grayscale. (d) Iterative optimization of the pump shaping has been applied to select 50 lasing modes. The sidelobe rejection ratio (SR) of each individually selected lasing mode with respect to the maximum noise count in the emission spectrum is plotted on a log scale (dB).

This behavior is easily explained by the fact that the PMMA polymer has a negative thermal coefficient of the refractive index [29, 30, 31]. Its refractive index therefore decreases with increasing temperature, which results in a decrease of the optical path length within the polymer layer and a blue shift. This process is reversible. We also performed a spectral stability test at a constant temperature higher than room temperature in a minimum-airflow lab environment. An optimally pumped lasing sample emitting light at \(\lambda\) = 602.60 nm is supplied with constant heat to achieve a wavelength shift of 0.60 nm at a sample temperature of 55 \({}^{\circ}\)C. The emission spectra are recorded at 3 intervals of 15 minutes each. The lasing emission at \(\lambda\) = 602.00 nm remains stable within \(\pm\) 0.1 nm, as shown in Fig. 4(b). Since all lasing modes within the multimode spectrum can be selected individually [28], the tuning range can in principle be extended over the whole gain curve by hopping from mode to mode. To demonstrate continuous tunability by mode hopping over a broader frequency range, we first need to identify disordered structures that provide a free spectral range smaller than the frequency shift we can achieve with a single lasing mode (typically 1 nm). Ideally, we need 10-15 modes within the 8 to 10 nm spectral bandwidth of the gain curve. Interestingly, by varying the degree of disorder, the spectral density of lasing modes varies, as shown in Fig. 4(c), where the average number of lasing modes is plotted as a function of the disorder deviation from the mean periodic position.

Figure 4: (a) The spectral blue shift of 1 nm in the single-mode lasing spectrum at \(\lambda\) = 602.60 nm with increasing temperature of the polymer layer. The inset shows the temperature vs wavelength plot. (b) Emission spectrum stability at constant temperature. The emission spectrum was recorded at intervals of 15, 30, and 45 minutes while the sample was heated to a constant temperature of 55\({}^{\circ}\)C. (c) Number of lasing modes for three different sample sets plotted as a function of increasing deviation (D) from the mean periodic position of 8 \(\mu m\) in a sample of length 1000 \(\mu m\) having 125 air grooves. (d) Continuous tunability over a bandwidth of 8 nm by tuning of 10 different individually selected lasing modes: M1 = 600.2 nm, M2 = 600.9 nm, M3 = 601.50 nm, M4 = 602.6 nm, M5 = 603.3 nm, M6 = 604.1 nm, M7 = 605.00 nm, M8 = 605.800 nm, M9 = 606.900 nm, M10 = 608.00 nm.

Here, a 1000 \(\mu\)m sample with 125 air grooves has been considered, with increasing disorder ranging from 0.1 \(\mu\)m to 3 \(\mu\)m deviation from the mean spatial period of 8 \(\mu\)m. The reason is that the spatial confinement of the eigenmodes increases with increasing disorder, resulting in an increasing number of lasing modes able to reach threshold.
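As a rough plausibility check of the thermal tuning slope reported above, a first-order perturbation estimate d\(\lambda\)/dT \(\approx\) (\(\lambda\)/n)\(\Gamma\)(dn/dT) can be evaluated numerically. The PMMA thermo-optic coefficient and the modal confinement factor \(\Gamma\) used below are assumed, typical values rather than quantities measured in this work, so the result should only be read as an order-of-magnitude comparison with the measured -0.02 nm/\({}^{\circ}\)C.

```python
# Order-of-magnitude estimate of the thermal tuning slope d(lambda)/dT for a
# lasing mode guided mostly in the PMMA-DCM layer.
# Assumed (not measured here): dn/dT of PMMA and the modal confinement factor.
wavelength_nm = 602.6
n_mode = 1.50        # approximate modal index of the polymer waveguide (assumed)
dn_dT = -1.1e-4      # 1/degC, typical literature value for PMMA (assumed)
confinement = 0.5    # fraction of the mode residing in the polymer (assumed)

slope = wavelength_nm / n_mode * confinement * dn_dT          # nm per degC
print(f"estimated d(lambda)/dT ~ {slope:.3f} nm/degC")         # ~ -0.02 nm/degC
print(f"estimated shift over 50 degC ~ {slope * 50:.1f} nm")   # ~ -1 nm
```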
We found that a disorder pattern with a deviation of \(\pm\) 0.5 \(\mu\)m is enough to yield an average of 13 lasing modes within the wavelength range of 600 nm to 608 nm. We choose a pump length of 450 \(\mu\)m and run the iterative optimization of the pump profile to select 10 lasing modes distributed over the range of 600-608 nm. The thermally induced spectral shift is recorded for all the modes. Single-wavelength continuous broadband tunability by mode hopping is achieved over 8 nm, as shown in Fig. 4(d).

## 5 Conclusion

In this paper, we have used a stable solid-state dye-based random laser to demonstrate temperature-induced tunability in the visible. By enforcing singlemode operation using the pump-shaping method, we have shown how a temperature-induced change of the refractive index can provide spectral tunability. In contrast to other mechanisms reported in the literature, our proposed method ensures singlemode tuning; it is non-invasive and does not require any modification of the sample. This random laser offers the freedom to control the free spectral range (FSR) by changing the degree of disorder. We have demonstrated that the emission wavelength of any individually selected lasing mode can be continuously blue-shifted by up to 1 nm when increasing the temperature of the PMMA-DCM layer. By hopping from one mode to another, we have demonstrated a remarkable tunability range of 8 nm. This tunability range can in principle be further increased by exploring different disorder configurations and dyes. Such a tunable random laser offers the advantages of simple fabrication, a large tunable bandwidth, a compact size, and the ability to operate in harsh environments, compared to conventional tunable lasers that may be bulky, require precise tuning and control mechanisms, and have limited spectral bandwidth. This single-wavelength tunable random laser holds promising potential for future applications as an on-chip tunable laser source as well as a wearable temperature sensor.

## 6 Acknowledgement

We extend our heartfelt appreciation to Dr. Leonid Wolfson for his unwavering dedication to the lab, and to Dr. Yossi Abulafia for his valuable assistance with the fabrication process. We are grateful to the Bar-Ilan Institute of Nanotechnology and Advanced Materials for providing us access to their fabrication facilities.

## 7 Disclosures

The authors declare no conflicts of interest.

## 8 Data Availability Statement

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.
2306.08697
Modulation-free Laser Stabilization Technique Using Integrated Cavity-Coupled Mach-Zehnder Interferometer
Stable narrow-linewidth light sources play a significant role in many precision optical systems. Electro-optic laser frequency stabilization systems, such as the well-known Pound-Drever-Hall (PDH) technique, have been key components of stable laser systems for decades. These control loops utilize an optical frequency noise discriminator (OFND) to measure frequency noise and convert it to an electronic servo signal. Despite their excellent performance, there has been a trade-off between complexity, scalability, power consumption, and noise measurement sensitivity. Here, we propose and experimentally demonstrate a modulation-free laser stabilization technique using an integrated cavity-coupled Mach-Zehnder interferometer (MZI) as an OFND. The proposed architecture maintains the sensitivity and performance of the PDH architecture without the need for any modulation. This significantly improves overall power consumption, simplifies the architecture, and makes it easier to miniaturize into an integrated photonic platform. An on-chip microring resonator with a loaded quality factor of 2.5 million is used as the frequency reference. The implemented chip suppresses the frequency noise of a semiconductor laser by 4 orders of magnitude. The integral linewidth of the free-running laser is suppressed from 6.1 MHz to 695 KHz. The passive implemented photonic integrated circuit occupies an area of 0.456 mm$^2$ and is integrated on AIM Photonics 180 nm silicon-on-insulator process.
Mohamad Hossein Idjadi, Kwangwoong Kim
2023-06-14T18:38:21Z
http://arxiv.org/abs/2306.08697v1
Modulation-free Laser Stabilization Technique Using Integrated Cavity-Coupled Mach-Zehnder Interferometer ###### Abstract Stable narrow-linewidth light sources play a significant role in many precision optical systems. Electro-optic laser frequency stabilization systems, such as the well-known Pound-Drever-Hall (PDH) technique, have been key components of stable laser systems for decades. These control loops utilize an optical frequency noise discriminator (OFND) to measure frequency noise and convert it to an electronic servo signal. Despite their excellent performance, there has been a trade-off between complexity, scalability, power consumption, and noise measurement sensitivity. Here, we propose and experimentally demonstrate a modulation-free laser stabilization technique using an integrated cavity-coupled Mach-Zehnder interferometer (MZI) as an OFND. The proposed architecture maintains the sensitivity and performance of the PDH architecture without the need for any modulation. This significantly improves overall power consumption, simplifies the architecture, and makes it easier to miniaturize into an integrated photonic platform. An on-chip microring resonator with a loaded quality factor of 2.5 million is used as the frequency reference. The implemented chip suppresses the frequency noise of a semiconductor laser by 4 orders of magnitude. The integral linewidth of the free-running laser is suppressed from 6.1 MHz to 695 KHz. The passive implemented photonic integrated circuit occupies an area of 0.456 mm\({}^{2}\) and is integrated on AIM Photonics 180 nm silicon-on-insulator process. ## I Introduction Precise laser frequency control is a crucial requirement in various applications, including optical communication [1; 2], optical atomic clocks [3], microwave photonic [4], and sensing [5] which makes stable and narrow linewidth lasers indispensable part of precision optical experiments. Researchers have explored various methods to suppress unwanted laser frequency noise using optical feedback[6; 7], electro-optic feedback [8; 9], and electro-optic feed-forward [10] techniques. The Optical feedback method relies on the Rayleigh scattering [11] within the cavity and can be an effective and promising way to suppress the laser frequency noise. This mandates the co-design of laser system and optical signals with an ultra-high quality factor (Q-factor) cavity. Electro-optic techniques, on the other hand, leverage state-of-the-art and mature electronic devices and systems to control the laser frequency precisely. This will push some of the challenges from optical domain into the electrical one where control and manipulation of the signals and systems are comparatively more manageable and cost-effective. A key building block in electro-optic laser stabilization techniques is an optical frequency noise discriminator. An OFND measures frequency fluctuations by comparing to a frequency reference and translates it into an electronic signal that can be processed in the electrical domain. Different OFND configurations have been explored such as "squash" locking technique [12; 13], the PDH laser stabilization method [8], and the unbalanced MZI [9; 14]. The PDH loop stands out as the most well-known precision laser instrumentation technique among the extensively utilized OFNDs [15; 16; 17; 18]. Using the PDH technique, a sharp asymmetric error signal can be generated, which can then be utilized as a servo signal to stabilize the laser frequency. 
Despite its excellent performance, the PDH scheme requires an electro-optic phase modulator and relatively fast and complex electronics for modulation and demodulation, which increases the power consumption and area of an integrated PDH chip [19; 20]. Alternatively, a passive-only unbalanced MZI can serve as an OFND, where the two arms of the MZI are phase-locked at the quadrature point [21]. Although the unbalanced MZI has a simple architecture to implement on a chip, achieving a frequency detection sensitivity comparable to that of the PDH method requires either a large optical delay line or a substantial electronic gain, which comes at the cost of chip area or overall system power consumption. Here, we propose and experimentally demonstrate a modulation-free laser stabilization technique using a cavity-coupled MZI on a silicon photonic chip as an OFND. The proposed frequency noise discrimination technique utilizes a high Q-factor cavity coupled to an MZI, which breaks the trade-off between sensitivity, complexity, chip area, and power consumption. With a careful design of the on-chip high Q-factor cavity coupled to the MZI, 4 orders of magnitude suppression of the frequency noise of a commercially available distributed feedback (DFB) laser is achieved. On-chip thermal tuners are implemented for the potential trimming of fabrication-induced errors and also for facilitating the broadband operation of the OFND. The proposed architecture combines the advantages of a passive-only structure, which offers simplicity and lower electronic power consumption for error signal processing, with the high sensitivity of the widely utilized PDH technique. The proof-of-concept integrated photonic chip is fabricated in the commercially available AIM Photonics 180 nm silicon-on-insulator (SOI) process. The photonic chip occupies an area of 0.456 mm\({}^{2}\) and consumes only about 50 \(\mu\)W of power for reverse biasing the balanced photodetector. The proposed architecture offers a promising solution for achieving sensitive, simple, and low-power laser frequency stabilization systems and sets the stage for the development of low-cost, scalable, and stable integrated lasers.

## II Results

### The principle of operation

Figure 1(a) shows the block diagram of an electro-optic laser frequency noise reduction loop. As mentioned earlier, the key part of this loop is an OFND that senses the frequency fluctuations of the incoming laser signal, compares them with an optical frequency reference (\(f_{ref}\)), and generates an electronic signal whose amplitude is proportional to the intensity of the frequency fluctuations. This error signal is amplified in the electrical domain and fed back into the laser to stabilize its frequency. The details of the closed-loop operation and the linearized block diagram of the control loop are discussed comprehensively in Supplementary Note 1.

Figure 1: **The cavity-coupled MZI frequency discriminator.** (a) The block diagram of an electro-optic laser frequency stabilization loop using an optical frequency noise discriminator (OFND). The OFND response is asymmetric around the frequency reference point, \(f_{ref}\), and hence small frequency fluctuations are translated into an electrical signal by the OFND gain. (b) The conceptual diagram of the proposed cavity-coupled MZI. \(T(\omega)\) is the transfer function of a frequency reference (_e.g._ optical cavity) coupled into the top arm of the MZI.
(c) Numerical analysis comparing the normalized error signal of the proposed cavity-coupled MZI, a conventional unbalanced MZI, and the PDH architecture. \(\Delta f\) and \(\Delta\nu\) are the laser offset frequency relative to \(f_{ref}\) and the free spectral range of the cavity.

The OFND can mathematically be represented by an asymmetric transfer function where the small frequency perturbation around \(f_{ref}\) is amplified by a gain (\(K_{FD}\)) and converted into an electronic error signal. The slope of the transfer function also indicates the sensitivity of the OFND in measuring frequency noise. Figure 1(b) shows the proposed cavity-coupled MZI as an OFND that maintains a simple passive structure without any need for fast optical phase modulation. In this method, the incoming laser intensity is split equally into the two MZI branches using a broadband Y-junction. The top branch of the MZI is coupled into an optical frequency reference (_e.g._ an optical resonator or a cavity). The amplitude and phase of the electric field at the output of the cavity are shaped by the transfer function of the frequency reference, \(T(\omega)\). The bottom arm of the MZI is used to interfere with the output electric field of the frequency reference using a directional coupler. A balanced photodetector is used to photodetect the output of the MZI and, by subtracting the currents, the error signal (\(i_{error}\)) is generated. In other words, the proposed architecture is a coherent detector that uses the input laser signal (bottom arm of the MZI) to down-convert the signal at the output of the cavity. In this way, the sharp asymmetric phase transition in the transfer function of the cavity at the frequency \(f_{ref}\) translates into a sharp electrical error signal that can be used to lock the laser in a feedback loop. As discussed in Supplementary Note 2, the error signal can be written as \[i_{error}(\omega)=RP_{0}|T(\omega)|\times\sin(\psi(\omega)-\phi), \tag{1}\] where \(|T(\omega)|,\psi(\omega),\phi,P_{0}\), and \(R\) are the amplitude and phase of the optical reference at the frequency \(\omega\), the phase difference between the arms of the MZI controlled by a thermal phase shifter, the intensity of the electric field at the input of the MZI, and the responsivity of the photodetectors, respectively. Figure 1(c) shows the numerical analysis of the normalized error signal of the PDH, a conventional unbalanced MZI, and the proposed cavity-coupled MZI structure using Eq. (1). To ensure a fair comparison, assuming a fixed area to implement the OFND, a 1 mm circumference ring resonator is used as the frequency reference in both the PDH and the cavity-coupled MZI. The length mismatch between the arms of the unbalanced MZI is also set to 1 mm. The waveguide loss is assumed to be 0.2 dB/cm. The details of the numerical comparison are presented in Supplementary Note 3. As shown in Fig. 1(c), the error signal of the cavity-coupled MZI is significantly sharper than that of the conventional unbalanced MZI for the same settings, and is indeed comparable to that of the PDH, offering the same level of sensitivity with a much simpler architecture.

### Modulation-free laser frequency noise suppression system

Figure 2(a) shows the block diagram of the implemented modulation-free laser stabilization scheme using the cavity-coupled MZI as an OFND. As illustrated in Fig. 2(a), the laser output is coupled into the chip via a grating coupler and divided in half by a Y-junction.
In the top branch, a high Q-factor microring resonator is used as an optical frequency reference, which filters the amplitude and phase of the incoming light. The phase response of the ring resonator exhibits a rapid and asymmetric change around its resonance. When combined with the amplitude response, this characteristic provides sufficient information to determine the offset between the laser frequency and \(f_{ref}\), as well as whether the laser frequency is higher or lower than \(f_{ref}\).

Figure 2: **The modulation-free laser stabilization scheme.** (a) The proposed cavity-coupled MZI is used as an OFND in a feedback loop for laser frequency noise suppression. A high Q-factor Euler microring resonator is utilized as a frequency reference. Half of the on-chip laser intensity is injected into the microring resonator, coherently interfered with the bottom branch, and converted to an electrical signal using balanced Germanium photodetectors. A small portion of the ring resonator output is used to monitor its response. The generated error signal, \(i_{error}\), is then amplified and fed into the PID controller. (b) The micro-photograph of the integrated photonic chip in the AIM Photonics 180 nm silicon-on-insulator (SOI) process. The size of the photonic integrated circuit measures 0.95 mm \(\times\) 0.48 mm. PID: proportional-integral-derivative, TIA: trans-impedance amplifier.

The output of the MZI uses a 2x2 adiabatic broadband coupler [22] terminated by balanced photodetectors. As described in detail in Supplementary Note 2, the output error signal is asymmetric around \(f_{ref}\) and can be used as a servo signal. The error signal is amplified and converted to a voltage using an off-chip trans-impedance amplifier (TIA). The voltage signal is then fed into a proportional-integral-derivative (PID) controller, which modulates the laser current and corrects any frequency error. To compensate for potential fabrication-induced errors and to adjust the microring resonance frequency and the optical phase of the bottom MZI branch, thermal phase shifters that are thermally isolated by a deep trench are utilized. Figure 2(b) shows the micro-photograph of the photonic integrated circuit implemented in the AIM Photonics 180 nm SOI process. The silicon photonic chip area is 0.456 mm\({}^{2}\).

### High Q-factor silicon microring resonator

The utilization of a microring resonator as an optical frequency reference plays a crucial role in the OFND performance. A stable high Q-factor microring resonator, when coupled to the MZI, effectively enhances the sensitivity of the OFND. This, in turn, directly impacts the closed-loop operation and the ultimate laser frequency noise suppression. It is important to highlight that the proposed architecture can be implemented not only on various material platforms, such as low-loss silicon nitride, but also using bench-top ultra-low-expansion and stable etalons. Choosing silicon as a platform to implement the optical frequency reference is a trade-off between different design considerations such as potential co-integration with CMOS electronics, chip area, scalability, cost, and ultimate frequency stability. In order to achieve a high Q-factor microring cavity, it is necessary to minimize the intra-cavity losses, including those of the silicon nanophotonic waveguides. The propagation loss is influenced by several factors, with interface scattering and bend radiation loss being the most significant in a state-of-the-art silicon photonic foundry process [23; 24; 25].
The top-bottom surface roughness of a waveguide is well controlled by the foundry and is not a design parameter. However, a careful design of the waveguide width can significantly improve the propagation loss. Figure 3 illustrates the implemented microring resonator in silicon. Utilizing multi-mode waveguides reduces the TE-mode overlap with the side-wall roughness and greatly enhances the waveguide transmission [23]. The implemented multi-mode waveguide is 2.2 \(\mu\)m wide, and the theoretical fit model applied to the measured microring response suggests a propagation loss of approximately 0.2 dB/cm. Although wide multi-mode waveguides can greatly reduce the interfacial scattering loss, the bend radiation loss increases significantly if a tight bend is used [26], especially for a highly multi-mode waveguide. An Euler bend is employed to achieve a compact multi-mode bend with minimal excitation of higher-order modes and mode cross-talk. The ring resonator is designed to achieve critical coupling to maximize the OFND sensitivity; however, the fabricated ring is slightly under-coupled due to potential fabrication-induced errors. Supplementary Note 4 discusses the sensitivity of the OFND gain to the waveguide loss and the ring coupling ratio. As illustrated in Fig. 3, the fundamental TE-mode remains preserved within the bent multi-mode waveguide. The implemented microring resonator has a circumference of about 950 \(\mu\)m, which corresponds to a free spectral range (FSR) of about 83.5 GHz, with a loaded Q-factor of about 2.5 million at the resonance wavelength of 1550.73 nm.

Figure 3: **High Q-factor silicon microring resonator.** In order to minimize the interface scattering loss due to waveguide edge roughness, a wide multi-mode waveguide is utilized. Moreover, to avoid excitation of higher-order modes, an Euler bend is used. As shown by the Finite-Difference Time-Domain simulation of the waveguide cross-sections (y-z and x-y planes), the fundamental TE-mode is well preserved inside the cavity. The higher-order mode excitation is less than -23 dB.

### The open-loop operation: device characterization and error signal

Figure 4(a) shows the schematic of the measurement setup used to characterize the open-loop performance and the error signal. A tunable continuous-wave laser (TOPTICA CTL 1550) with a wavelength of 1550.7 nm is coupled into the chip via the on-chip grating coupler. To ensure linear operation and thermal stability of the high Q-factor microring resonator, the laser power coupled to the grating coupler is set to 0.7 mW using a variable optical attenuator (VOA). The silicon chip temperature is stabilized at 27\({}^{\circ}\)C. The laser frequency is continuously scanned within a range of 30 GHz. A calibrated fiber-based MZI with an FSR of 20 MHz is used for time-frequency translation. As shown in Fig. 4(a), a sniffer photodetector with a 5% coupling ratio is utilized to monitor the resonance response. The resonance response of the ring and the error signal are measured simultaneously using an oscilloscope. Figure 4(b) shows the response of the ring, where the Q-factor is about 2.5 million. The measured extinction ratio is about 4.8 dB, which suggests that the fabricated microring is slightly under-coupled, likely due to a fabrication error in the coupling gap. Figure 4(c) shows the measured normalized asymmetric error signal, which suggests an OFND sensitivity of 1.3\(\times\)10\({}^{-8}\) Hz\({}^{-1}\) that agrees well with analytical models.
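The quoted ring geometry, loss, extinction ratio, Q-factor, and OFND sensitivity can be cross-checked against the standard all-pass ring-resonator model. The short Python sketch below uses textbook formulas rather than the authors' code; the group index is inferred from the measured FSR, and the self-coupling coefficient is inferred from the measured 4.8 dB extinction ratio under the assumption that the ring is under-coupled, so these intermediate values are estimates rather than reported numbers.

```python
import numpy as np

c = 299792458.0
lam = 1550.73e-9          # resonance wavelength [m]
circ = 950e-6             # ring circumference [m]
fsr = 83.5e9              # measured free spectral range [Hz]
loss_db_cm = 0.2          # propagation loss from the fit model [dB/cm]
er_db = 4.8               # measured extinction ratio [dB]

# Group index implied by the measured FSR: FSR = c / (n_g * L)
n_g = c / (fsr * circ)                                     # ~3.8

# Intrinsic Q implied by the propagation loss: Q_i = 2*pi*n_g / (lambda*alpha)
alpha = loss_db_cm * 100 / (10 * np.log10(np.e))           # power loss [1/m]
q_intrinsic = 2 * np.pi * n_g / (lam * alpha)              # ~3.3 million

# All-pass ring model: single-pass amplitude transmission a from the loss, and
# self-coupling t inferred from the extinction ratio assuming under-coupling,
# |T_min| = (t - a) / (1 - t*a).
a = 10 ** (-(loss_db_cm * circ * 100) / 20)
t_min = 10 ** (-er_db / 20)
t = (a + t_min) / (1 + a * t_min)

ta = t * a
fwhm = fsr * (1 - ta) / (np.pi * np.sqrt(ta))              # loaded linewidth [Hz]
q_loaded = (c / lam) / fwhm

# Normalized OFND gain: slope of |T(omega)| sin(psi(omega)) vs laser detuning
def error_signal(df):
    phi = 2 * np.pi * df / fsr
    T = (t - a * np.exp(1j * phi)) / (1 - ta * np.exp(1j * phi))
    return np.abs(T) * np.sin(np.angle(T))

df = np.linspace(-5e6, 5e6, 1001)
k_fd = np.gradient(error_signal(df), df)[len(df) // 2]

print(f"group index          ~ {n_g:.2f}")
print(f"intrinsic Q          ~ {q_intrinsic/1e6:.1f} million")
print(f"loaded Q (model)     ~ {q_loaded/1e6:.1f} million")
print(f"normalized OFND gain ~ {abs(k_fd):.1e} per Hz")
```

With these inputs the model returns a loaded Q of roughly 2.6 million and a normalized discriminator gain on the order of 1\(\times\)10\({}^{-8}\) Hz\({}^{-1}\), in line with the measured 2.5 million and 1.3\(\times\)10\({}^{-8}\) Hz\({}^{-1}\); the intrinsic-Q estimate of about 3.3 million is likewise consistent with the Q-factor quoted for 0.2 dB/cm waveguides in Supplementary Note 2.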
Since the ring and the MZI are optimized for 1550 nm, there was no need to utilize Heaters 1 and 2 during the measurement.

Figure 4: **The open loop response.** (a) Measurement setup for the open-loop and microring resonator characterization. (b) The microring resonator response is measured via the top photodetector sniffer. The ring has a measured extinction ratio (ER) of 4.8 dB and a loaded Q-factor of 2.5 million. (c) The measured asymmetric error signal. Heaters are off during the test. CW: continuous wave, OSA: optical spectrum analyzer, GC: grating coupler, BPD: balanced photodetector.

### The closed-loop operation and laser frequency noise suppression

As mentioned previously, the asymmetric error signal can serve as the servo signal to precisely adjust and lock the laser frequency to the resonance frequency of the cavity. Figure 5(a) shows the block diagram of the closed-loop measurement setup. As shown in Fig. 5(a), a DFB laser (AeroDiode 1550LD-2-0-0-1) with a free-running integral linewidth of 6.1 MHz is used. The laser output is directed into the chip by utilizing a 90%/10% coupler, which is then followed by a VOA.

Figure 5: **The closed-loop operation.** (a) Measurement setup for the closed-loop operation and laser locking experiment. A small portion of the laser output power is used in the delayed self-heterodyne (DSH) setup for frequency noise measurement. Heaters are off during this measurement. (b) The power spectral density of the frequency noise of the AeroDiode DFB laser under free-running and closed-loop operation. The highlighted regions indicate the frequency bands which contribute to the laser linewidth. The frequency noise of the DFB laser is suppressed by 40 dB at low Fourier frequencies and is limited by the silicon microring thermorefractive noise (TRN). VOA: variable optical attenuator, TIA: trans-impedance amplifier, CW: continuous wave, GC: grating coupler.

This arrangement allows for the adjustment of the power coupled into the chip to about 0.5 mW. The generated error signal is amplified using a low-noise TIA with a gain of 5 k\(\Omega\) and a total input-referred current noise of 3.4 pA/\(\sqrt{Hz}\), which corresponds to a frequency noise contribution three orders of magnitude lower than the thermorefractive noise (TRN) of the microring. Part of the laser power is used in a fiber-based delayed self-heterodyne (DSH) interferometer, and the output signal is digitized and sent to a computer for processing [27]. Figure 5(b) shows the frequency noise measurement results for the free-running and locked laser. As shown in Fig. 5(b), the cavity-coupled MZI OFND can effectively suppress the frequency noise of the free-running DFB laser by 40 dB at low Fourier frequencies. The frequency noise reduction bandwidth is about 300 KHz, which is mainly limited by the frequency modulation response of the laser [28] and the modulation bandwidth of the current driver (\(<\)1 MHz). The \(\beta\)-separation line indicates the (highlighted) regions that contribute to the laser linewidth [29]. As shown by the highlighted regions in Fig. 5(b), the integral linewidth of the free-running laser is reduced from 6.1 MHz to 695 KHz under the closed-loop condition. It is worth mentioning that the suppressed frequency noise is limited by the TRN of the silicon microring resonator [30; 31].

## III Discussion

Ideally, a cavity that supports a larger mode volume and is made from a temperature-insensitive material can enable a lower TRN limit, resulting in a more stable laser.
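The integral linewidths quoted above are obtained from the measured frequency-noise PSD via the \(\beta\)-separation-line method of Ref. [29]. The short Python sketch below illustrates that procedure. The white-noise test case has a known analytic answer (a Lorentzian of FWHM \(\pi S_{0}\)), which the estimator should reproduce; the flicker-type PSDs are purely illustrative toy curves, not the measured data of Fig. 5(b), and the 5 ms observation time matches the DSH record length given in the Methods.

```python
import numpy as np

def beta_separation_linewidth(f, s_nu, t_obs=5e-3):
    """FWHM linewidth from a one-sided frequency-noise PSD s_nu(f) [Hz^2/Hz]
    using the beta-separation-line method (Di Domenico et al., Ref. [29]).
    Only the region where s_nu(f) > 8*ln(2)*f/pi^2 contributes, and Fourier
    frequencies below 1/t_obs (the observation time) are ignored."""
    beta_line = 8 * np.log(2) * f / np.pi ** 2
    mask = (s_nu > beta_line) & (f > 1.0 / t_obs)
    area = np.trapz(np.where(mask, s_nu, 0.0), f)
    return np.sqrt(8 * np.log(2) * area)

f = np.logspace(2, 7, 5000)                  # 100 Hz .. 10 MHz Fourier frequency

# Sanity check against the analytic result for white frequency noise.
S0 = 1.0e4                                   # Hz^2/Hz
print(beta_separation_linewidth(f, np.full_like(f, S0)))   # ~3.1e4 Hz
print(np.pi * S0)                                          # analytic: ~3.14e4 Hz

# Illustrative flicker-dominated PSD and a toy servo with ~40 dB suppression
# below ~300 kHz (NOT the measured curves of Fig. 5(b)).
s_free = 8e11 / f
s_locked = s_free / (1.0 + 1e4 / (1.0 + (f / 3e5) ** 2))
print(beta_separation_linewidth(f, s_free))    # toy free-running linewidth [Hz]
print(beta_separation_linewidth(f, s_locked))  # toy locked linewidth [Hz]
```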
Ultra-low loss silicon nitride and micro-fabricated mirrors [32] present themselves as promising technology platforms for fulfilling these requirements. However, this improvement comes at the cost of chip area, packaging complexity, and availability of the technology for large-scale and robust integrated photonic chip manufacturing. On the other hand, silicon photonic platforms are evolving rapidly and while silicon may not be the most suitable choice for achieving highly stable micro-cavities, it certainly serves as an excellent alternative in numerous applications where strict level of laser stability may not be required. The combination of robust large-scale silicon photonic manufacturing and the ability to monolithically integrate with mature CMOS electronics provides integrated silicon photonics with a significant advantage over other alternatives. It is important to note that the choice of technology depends on the specific application. Our proposed architecture can be implemented on a suitable technology platform to effectively meet the requirements of an application. In conclusion, we have experimentally demonstrated the first modulation-free laser frequency noise stabilization using integrated cavity-coupled MZI. The integrated cavity-coupled MZI consists of a high-Q Euler microring resonator with a loaded quality factor of about 2.5 million and extinction ratio of about 4.8 dB which suggests normalized OFND gain of 1.3\(\times\)10\({}^{-8}\) Hz\({}^{-1}\). The cavity-coupled MZI is used to suppress the frequency noise of a commercially available DFB laser by 40 dB at 1 KHz offset frequency. This corresponds to the integral linewidth reduction of the free-running laser from 6.1 MHz to 695 KHz. The implemented chip occupies 0.456 mm\({}^{2}\) and is integrated on AIM Photonics 180 nm process. ## IV Methods ### Photonic chip implementation All photonic devices are monolithically integrated on AIM Photonics commercially available 180 nm SOI process. The laser is coupled into the chip vertically using a grating coupler. A Y-junction with excess loss of less than 0.5 dB is utilized at the input of the MZI to split the laser intensity equally. A high Q-factor microring resonator is coupled to the top arm of the MZI. The ring resonator is made with wide multi-mode waveguides and Euler bends to both reduce the interfacial scattering loss and avoid excitation of higher-order modes. Thermal phase shifters are integrated into the photonic chip for potential adjustment of phase or compensation for fabrication-induced errors. The two arms of the MZI are combined using an adiabatic broadband directional coupler. A balanced photodetector is used to generate the error signal. The responsivity and dark current of the on-chip Germanium photodetector at 2.5 V reverse bias voltage are 1.16 A/W (at 1550 nm) and 40 nA, respectively. The photonic chip occupies area of 0.456 mm\({}^{2}\). ### The closed-loop operation in presence of noise sources Different noise sources contribute to the ultimate frequency noise limit. The two main sources of noise are the TRN of the reference cavity and the total electronic noise that includes input referred noise of electronics and shot noise of balanced photodetector. The closed-loop operation and minimum achievable frequency noise in the presence of these noise sources are discussed in Supplementary Note 1. ### Delayed self-heterodyne frequency noise measurement A fiber-based delayed self-heterodyne setup is built for laser frequency noise measurement. 
The first null of the MZI is at 8.7 MHz, corresponding to a single-mode fiber length of about 23 m. An acousto-optic frequency shifter is used in the second arm of the MZI to up-convert the laser phase noise to 27 MHz, which helps to mitigate electronic flicker noise for better measurement sensitivity. The two arms of the MZI are cross-coupled and fed into two independent low-noise TIAs. The outputs of the TIAs are digitized at 250 MS/s, recorded for a duration of 5 ms, and sent to a computer for processing.

### The OFND gain sensitivity analysis

The design of the high Q-factor microring resonator in the OFND directly impacts the frequency noise detection gain and the laser frequency noise suppression. Supplementary Note 4 discusses the sensitivity of the OFND gain to both the coupling ratio of the ring and the waveguide propagation loss.

## Data availability

The data that support the plots within this paper and other findings of this study are available from the corresponding author upon reasonable request.

## Acknowledgements

We thank Nicolas K. Fontaine and Andrea Blanco-Redondo for helpful discussions and support.

## Author contributions

M.H.I. conceived the project idea. M.H.I. designed, simulated and laid out the integrated photonic circuits and conducted measurements. M.H.I. designed the printed circuit board and the on-board electronics. K.K. packaged the integrated photonic chip. M.H.I. wrote the manuscript.

## Competing interests

The authors declare no competing interests.

## References

* [1] H. Al-Taiy, N. Wenzel, S. Preussler, J. Klinger, and T. Schneider (2014) Optics Letters 39 (1), pp. 5826-5834.
* [2] D. J. Blumenthal, H. Ballani, R. O. Behunin, J. E. Bowers, P. Costa, D. Lenoski, P. A. Morton, S. B. Papp, and P. T. Rakich (2020) Journal of Lightwave Technology 38 (1), pp. 3376-3380.
* [3] A. D. Ludlow, M. M. Boyd, J. Ye, E. Peik, and P. O. Schmidt (2015) Reviews of Modern Physics 87 (1), pp. 637-640.
* [4] J. Li, H. Lee, and K. J. Vahala (2013) Nature Communications 4 (1), pp. 2097-2098.
* [5] G. Marra, C. Clivati, R. Luckett, A. Tampellini, J. Kronjager, L. Wright, A. Mura, F. Levi, S. Robinson, A. Xuereb, et al. (2018) Science 361 (1), pp. 486-488.
* [6] W. Liang, V. Ilchenko, D. Eliyahu, A. Savchenkov, A. Matsko, D. Seidel, and L. Maleki (2015) Nature Communications 6 (1), pp. 7371-7379.
* [7] B. Dahmani, L. Hollberg, and R. Drullinger (1987) Optics Letters 12 (1), pp. 876-878.
[MISSING_PAGE_POST]
* Bagheri et al. [2009] M. Bagheri, F. Aflatouni, A. Imani, A. Goel, and H. Hashemi, Optics Letters **34**, 2979 (2009).
* Chen et al. [2017] G. F. Chen, J. R. Ong, T. Y. Ang, S. T. Lim, C. E. Png, and D. T. Tan, Scientific Reports **7**, 7246 (2017).
* Bauters et al. [2011] J. F. Bauters, M. J. Heck, D. John, D. Dai, M.-C. Tien, J. S. Barton, A. Leinse, R. G. Heideman, D. J. Blumenthal, and J. E. Bowers, Optics Express **19**, 3163 (2011).
* Vlasov and McNab [2004] Y. A. Vlasov and S. J. McNab, Optics Express **12**, 1622 (2004).
* Fahrenkopf et al. [2019] N. M. Fahrenkopf, C. McDonough, G. L. Leake, Z. Su, E. Timurdogan, and D. D. Coolbaugh, IEEE Journal of Selected Topics in Quantum Electronics **25**, 1 (2019).
* Jiang et al. [2018] X. Jiang, H. Wu, and D. Dai, Optics Express **26**, 17680 (2018).
* Yuan et al. [2022] Z. Yuan, H. Wang, P. Liu, B. Li, B. Shen, M. Gao, L. Chang, W. Jin, A. Feshali, M. Paniccia, et al., Optics Express **30**, 25147 (2022).
* Vankwikelberge et al. [1989] P.
Vankwikelberge, F. Buytaert, A. Franchois, R. Baets, P. Kuindersma, and C. Fredriksz, IEEE journal of quantum electronics **25**, 2239 (1989). * Di Domenico et al. [2010]G. Di Domenico, S. Schilt, and P. Thomann, Applied optics **49**, 4801 (2010). * Huang et al. [2019]G. Huang, E. Lucas, J. Liu, A. S. Raja, G. Lihachev, M. L. Gorodetsky, N. J. Engelsen, and T. J. Kippenberg, Physical Review A **99**, 061801 (2019). * Panuski et al. [2020]C. Panuski, D. Englund, and R. Hamerly, Physical Review X **10**, 041046 (2020). * Jin et al. [2022]N. Jin, C. A. McLemore, D. Mason, J. P. Hendrie, Y. Luo, M. L. Kelleher, P. Kharel, F. Quinlan, S. A. Diddams, and P. T. Rakich, Optica **9**, 965 (2022). **Supplementary Information:** **Modulation-free Laser Stabilization Technique Using Integrated Cavity-Coupled Mach-Zehnder Interferometer** Mohamad Hossein Idjadi\({}^{*}\) and Kwangwoong Kim _Nokia Bell Labs, 600 Mountain Ave, Murray Hill, NJ 07974, USA._ \({}^{*}\)Corresponding author: [email protected] **Supplementary Note 1: the closed-loop operation in presence of noise sources** Supplementary Figure 1 shows the simplified block diagram of the closed-loop laser frequency noise suppression. The control loop is linearized around the reference frequency (\(f_{ref}\)), in presence of different noise sources. As shown in Supplementary Figure 1, there are three main noise sources in the loop contributing to the closed-loop noise performance. These include laser intrinsic frequency noise (\(\delta f_{n}\)), thermorefractive noise (TRN, \(\delta f_{TRN}\)) of the frequency reference used in the optical frequency noise discriminator (OFND), and the total electronics noise (\(\delta i_{n}\)) that includes input referred noise of electronics and shot noise of balanced photodetector. The small signal current perturbation due to the frequency fluctuations is \[\delta i_{error}=K_{FD}\times(\delta f_{laser}+\delta f_{TRN}), \tag{1}\] **Supplementary Figure 1. The linearized closed-loop block diagram. \(\delta i_{n},\delta f_{n},\) and \(\delta f_{TRN}\) are input refereed noise of electronics, intrinsic frequency noise of laser, and thermorefractive noise of the cavity used in the OFND.** where \(\delta f_{laser}\) and \(K_{FD}\) [Hz/A] are laser frequency noise under closed-loop condition and conversion gain of OFND, respectively. The small change in the error signal is amplified by electronic gain, \(K_{E}\) [A/A]. The total current noise at the input of the electronics, \(\delta i_{n}\), includes the input-referred current noise of the amplifiers and shot noise of the balanced photodetector. The laser current perturbation is \[\delta i_{laser}=K_{E}\times(\delta i_{n}+\delta i_{error}). \tag{2}\] The laser current perturbations modulate the frequency of the laser by current to frequency conversion gain \(K_{L}\) [Hz/A], \[\delta f_{error}=K_{L}\times\delta i_{laser}, \tag{3}\] where \(\delta f_{error}\) is the small perturbation in the frequency of the laser. Under the closed-loop operation, \[\delta f_{laser}=\delta f_{n}-\delta f_{error}. \tag{4}\] Using Supplementary Equations (1) to (4) (assuming \(K_{E}K_{L}K_{FD}>>1\)), \[\delta f_{laser}\approx(\frac{1}{K_{E}K_{L}K_{FD}})\delta f_{n}-(\frac{1}{K_ {FD}})\delta i_{n}-\delta f_{TRN}. 
\tag{5}\] Since \(\delta f_{n},\delta i_{n}\), and \(\delta f_{TRN}\) are independent random variables with an average of zero, the power spectral density (PSD) of the laser frequency noise is \[S_{laser}(f)\approx(\frac{1}{K_{E}K_{L}K_{FD}})^{2}S_{0}(f)+(\frac{1}{K_{FD}} )^{2}S_{n}(f)+S_{TRN}(f), \tag{6}\] where \(S_{laser}(f),S_{0}(f),S_{n}(f)\), and \(S_{TRN}(f)\) are the power spectral densities of stabilized laser frequency noise, free-running laser frequency noise, electronic noise, and cavity TRN, respectively. As suggested by Supplementary Equation (6), the PSD of the free-running laser is suppressed by the open-loop gain. It is worth noting that the modulation gain of laser, \(K_{L}\) can not be changed after laser fabrication. Although the open loop gain can be increased electronically, it comes at the cost of loop bandwidth, power consumption, and even loop stability. Utilizing a sensitive OFND can significantly enhance frequency noise detection gain (\(K_{FD}\)) and enhance the frequency noise suppression. Moreover, the ultimate laser frequency noise under closed-loop operation is limited by \[S_{limit}(f)=(\frac{1}{K_{FD}})^{2}S_{n}(f)+S_{TRN}(f). \tag{7}\] According to Supplementary Equation (7), the TRN of the cavity can impose a limit on the achievable stability. Therefore, considering the specific requirements of an application, the ultimate achievable frequency noise can be engineered by a careful design and proper selection of the material platform to implement the frequency reference [1]. **Supplementary Note 2: the enhanced frequency noise discriminator using cavity-coupled MZI** Supplementary Figure 2(a) shows the block diagram of a generalized cavity-coupled MZI OFND architecture. Within this model, we consider a generalized transfer function of an optical frequency reference, \(T(.)\). The electric field at the input of the OFND can be expressed as \[E_{in}(t)=\sqrt{P_{0}}e^{jwt}, \tag{8}\] where \(\omega\) and \(P_{0}\) are the instantaneous laser frequency and laser power, respectively. A Y-junction is used to split the power equally. The electric fields before the output coupler are \[E_{1}(t)=\sqrt{\frac{P_{0}}{2}}T(\omega)e^{jwt}, \tag{9}\] \[E_{2}(t)=\sqrt{\frac{P_{0}}{2}}e^{j\phi}e^{jwt}, \tag{10}\] where \(T(\omega)\) and \(\phi\) are the complex optical reference transfer function and the phase shift introduced by the thermal phase shifter at the bottom branch of MZI, respectively. A directional coupler combines the electric field. The interfered output fields at the top and bottom ports can be written as \[E_{o1}(t) =\frac{1}{\sqrt{2}}(E_{1}(t)+jE_{2}(t)), \tag{11}\] \[E_{o2}(t) =\frac{1}{\sqrt{2}}(E_{2}(t)+jE_{1}(t)). \tag{12}\] The optical signal at the output of the MZI is photodetected using a balanced photodetector. The photocurrents \(i_{1}\) and \(i_{2}\) are calculated as \[i_{1}(t) =R|E_{o1}(t)|^{2}, \tag{13}\] \[i_{2}(t) =R|E_{o2}(t)|^{2}, \tag{14}\] where \(R\) is the responsivity of the photodetectors. Using Supplementary Equations (11) to (14), the error signal, \(i_{1}(t)-i_{2}(t)\), can be written as \[i_{error}(t)=2R\Im\bigg{(}E_{1}(t)E_{2}(t)^{*}\bigg{)}, \tag{15}\] where \(\Im(.)\) denotes the imaginary operator. Combining Supplementary Equations (9),(10), and (15), the error signal can be simplified to \[i_{error}(\omega)=RP_{0}|T(\omega)|\times Sin(\psi(\omega)-\phi), \tag{16}\] where \(|T(\omega)|\) and \(\psi(\omega)\) are the amplitude and phase of the optical reference at the frequency of \(\omega\). 
Please note that \(\omega\) can be written as \[\omega(t)=\omega_{0}+\delta\omega_{n}(t), \tag{17}\] where \(\omega_{0}\) and \(\delta\omega_{n}(t)\) are the nominal frequency and frequency noise of the laser. Supplementary Figure 2(b) and 2(c) show the numerical simulation of the frequency reference response and the asymmetric error signal, respectively. For numerical analysis purposes, we assume that the frequency reference is a critically coupled ring resonator with an approximately 80 GHz free-spectral range (FSR) (equivalent to a 1 mm circumference) and a Q-factor of around 3.3 million, corresponding to a propagation loss of approximately 0.2 dB/cm. As shown in Supplementary Figure 2(c), the error signal exhibits an asymmetric response around the frequency reference that can be used as a servo signal to lock the laser. **Supplementary Note 3: comparative analysis of the PDH, an unbalanced MZI, and the cavity-coupled MZI** Numerous OFNDs have been proposed and demonstrated in the past, including the well-known PDH architecture and the unbalanced MZI [2]. Supplementary Figures 3 (a) to 3 (c) show the block diagram of the PDH, an unbalanced MZI, and the proposed cavity-coupled MZI OFNDs, respectively [3]. As shown in Supplementary Figure 3(a), an electrical local oscillator modulates the incoming electric field using an optical phase modulator. The phase modulated signal is filtered by a cavity (optical frequency reference) followed by a photodetector. The photodetected signal is amplified using a trans-impedance amplifier and down-converted by the same local oscillator frequency using a frequency mixer. The down-converted signal is lowpass filtered to generate the error signal. The detailed mathematical derivation of the PDH error signal has been published previously [4]. Supplementary Figure 3(b) shows the schematic of an unbalanced MZI where the frequency reference is the length difference (true-time delay) in one arm of the MZI. Using Supplementary Equation (16) and substituting \(T(\omega)=e^{j\omega\tau}\), we can conclude that the error signal is sinusoidal. Supplementary Figure 3(c) shows the block diagram of the cavity-coupled MZI where the frequency reference is a microring resonator. Supplementary Figure 3(d) and 3(e) display the calculated error signal and the zoomed-in view of it, respectively. To ensure a fair comparison, given the same chip area, the circumference of the ring in both the PDH scheme and the cavity-coupled MZI configuration is set to 1 mm, which is equivalent to the length mismatch in the unbalanced MZI arrangement. For the purpose of this analysis, a propagation loss of 0.2 dB/cm is assumed. Also, the local oscillator frequency in PDH architecture is 1 GHz, and the optical and electrical phase shifters (\(\phi\)) are optimized accordingly. As shown in Supplementary Figure 3(d), all three error signals are asymmetric around the reference frequency (\(f_{ref}\)). As suggested by Supplementary Figure 3(d), the proposed cavity-coupled configurations has significantly more sensitivity compared to a conventional unbalanced MZI (\(K_{FD}=\)8\(\times\)10\({}^{-11}\) Hz\({}^{-1}\)). As plotted in Supplementary Figure 3(e), despite having a less complex design and potentially lower power consuming electronics, the proposed architecture exhibits a sensitivity (\(K_{FD}=\)1.7\(\times\)10\({}^{-8}\) Hz\({}^{-1}\)) comparable to that of the PDH (\(K_{FD}=\)1.14\(\times\)10\({}^{-8}\) Hz\({}^{-1}\)). 
It is important to highlight that in order to achieve the same OFND gain as the cavity-coupled MZI, an unbalanced MZI requires an equivalent delay of 17 nsec. However, implementing such a delay line, which is equivalent to approximately 1.27 m of TE-mode silicon waveguide, is not practical to implement on a silicon chip. Additionally, it avoids the need for phase modulation, which would introduce unwanted residual amplitude modulation [5]. **Supplementary Note 4: the OFND gain sensitivity analysis** As discussed in Supplementary Note 1, the gain of the OFND (\(K_{FD}\)) plays significant role in suppressing laser frequency noise. Therefore, careful design and parameter sensitivity analysis are necessary to ensure optimal performance. The main building block in the proposed cavity-coupled MZI OFND is the frequency reference which is an on-chip high Q-factor microring resonator. To achieve higher OFND gain, a microring resonator with higher frequency selectivity is essential. This can be accomplished by having a higher Q-factor and a large extinction ratio. The ring resonator coupling ratio (\(\kappa\)) and waveguide propagation loss (\(\alpha\)) are the two crucial parameters that ultimately affect the aforementioned ring properties and OFND gain. Supplementary Figure 4 shows the simulation of the normalized OFND gain sensitivity to the ring resonator coupling and waveguide loss. In this simulation, a ring resonator with circumference of 1 mm, waveguide loss (\(\alpha_{0}\)) of 0.2 dB/cm, and critical coupling condition (\(\kappa_{0}=0.5\%\)) is considered. Maximum gain of OFND (\(K_{FD0}\)) happens at the lowest waveguide loss (\(\alpha/\alpha_{0}=1\)) and critical coupling condition (\(\kappa/\kappa_{0}=1\)), which in this scenario is about 1.7\(\times 10^{-8}\) Hz\({}^{-1}\). Supplementary Figure 4 demonstrates that for operating close to optimum performance (i.e., \(K_{FD}/K_{FD0}>0.9\)), it is necessary to have the waveguide loss within 10% of the nominal value. However, the coupling ratio margin in this case does not require such a stringent constraint. It is worth mentioning that, even the fabricated chip may be in the lower performance regions because of the fabrication induced errors and inaccurate estimation of roughness, the overall loop gain can simply be compensated by the electronic amplifiers. **Supplementary Figure 4. The OFND gain sensitivity analysis to the ring resonator coupling and waveguide loss.** In this simulation, the OFND gain is normalized to the ideal scenario for a critically coupled ring resonator (\(\kappa_{0}=0.5\%\)) with circumference of 1 mm and waveguide loss \(\alpha_{0}\) of 0.2 dB/cm. The maximum normalized gain (\(K_{FD0}\)) is 1.7\(\times 10^{-8}\) Hz\({}^{-1}\). ## Supplementary Table
2305.00943
On the non-abelian Hodge locus I
We partially resolve conjectures of Deligne and Simpson concerning $\mathbb Z$-local systems on quasi-projective varieties that underlie a polarized variation of Hodge structure. For local systems of "compact type", we prove (1) a relative form of Deligne's finiteness theorem, for any family of quasi-projective varieties, and (2) algebraicity of the corresponding non-abelian Hodge locus.
Philip Engel, Salim Tayou
2023-05-01T16:56:41Z
http://arxiv.org/abs/2305.00943v1
# On the non-abelian Hodge locus I ###### Abstract. We partially resolve conjectures of Deligne and Simpson concerning \(\mathbb{Z}\)-local systems on quasi-projective varieties that underlie a polarized variation of Hodge structure. For local systems of "compact type", we prove (1) a relative form of Deligne's finiteness theorem, for any family of quasi-projective varieties, and (2) algebraicity of the corresponding non-abelian Hodge locus. ###### Contents * 1 Introduction * 2 Variations of Hodge structures * 3 Boundedness of monodromy representations * 4 Douady spaces of polarized distribution manifolds * 5 Algebraicity of the non-abelian Hodge locus ## 1. Introduction Let \(\Pi=\pi_{1}(Y,*)\) be the fundamental group of a smooth quasi-projective variety. A fundamental result of Deligne [1] is that, up to conjugacy, only finitely many representations \(\rho\colon\Pi\to\operatorname{GL}_{n}(\mathbb{Z})\) underlie a \(\mathbb{Z}\)-polarized variation of Hodge structure (\(\mathbb{Z}\)-PVHS) over \(Y\). We are primarily concerned with two questions here: * If instead, one has a family \(\mathcal{Y}\to\mathcal{S}\) of smooth quasi-projective varieties, then do only finitely many representations of \(\Pi\) underlie a \(\mathbb{Z}\)-PVHS on some (unspecified) \(Y_{s}\)? * In the relative moduli space of flat connections \(M_{\operatorname{dR}}(\mathcal{Y}/\mathcal{S},\operatorname{GL}_{n})\), is the locus underlying a \(\mathbb{Z}\)-PVHS algebraic? The first question is due to Deligne [1, Question 3.13]. Simpson [11, Conjecture 12.3] posed and made progress on the second question, proving that this locus is analytic. Note that the two questions are related: Q2 implies Q1 because an algebraic set will have only finitely many connected components, and the representation of \(\Pi\) is locally constant along a locus of flat connections underlying a \(\mathbb{Z}\)-PVHS. We answer both questions, under the following assumption: **Definition 1.1**.: Let \(\rho\colon\Pi\to\operatorname{GL}_{n}(\mathbb{Z})\) be a group representation and let \(\mathbf{H}(\mathbb{R})\) denote the Zariski-closure of \(\operatorname{im}(\rho)\) in \(\operatorname{GL}_{n}(\mathbb{R})\). Let \[\mathbf{H}(\mathbb{Z}):=\mathbf{H}(\mathbb{R})\cap\operatorname{GL}_{n}( \mathbb{Z}).\] We say that \(\rho\) is of _compact type_ if \(\mathbf{H}(\mathbb{Z})\subset\mathbf{H}(\mathbb{R})\) is cocompact. **Theorem 1.2**.: _Let \(\mathcal{Y}\to\mathcal{S}\) be a family of smooth quasi-projective varieties. Then the flat connections in \(M_{\operatorname{dR}}(\mathcal{Y}/\mathcal{S},\operatorname{GL}_{n})\) underlying a \(\mathbb{Z}\)-PVHS with compact type monodromy form an algebraic subvariety._ _In particular, if \(\Pi=\pi_{1}(Y_{0},*)\) for some \(0\in\mathcal{S}\), then only finitely many compact type representations of \(\Pi\) underlie a \(\mathbb{Z}\)-PVHS on some fiber \(Y_{s}\), up to an appropriate identification._ The appropriate identification mentioned in the theorem above is explained in Definition 3.1. A useful feature of the compact type case is that, due to Griffiths' generalization of the Borel extension theorem, a \(\mathbb{Z}\)-PVHS on \(Y_{s}\) extends over a projective, simple normal crossings compactification \(\overline{Y}_{s}\). We may stratify \(\mathcal{S}\) into loci over which \(\mathcal{Y}\) admits a relative simple normal crossings compactification. This is achieved by induction on dimension, applying resolution of singularities over the generic point of each stratum. 
Note that both Q1 and Q2 are Zariski-local on \(\mathcal{S}\). So both Q1 and Q2 (in the compact type case) reduce to families of smooth projective varieties. Hence, for the remainder of the paper, we assume that \(\mathcal{Y}\to\mathcal{S}\) is smooth projective, and \(\mathcal{S}\) is quasiprojective. Our result also answers a question asked by Landesman and Litt [10, Question 8.2.1], again in the cocompact case. ### The non-abelian Hodge locus In [20], Simpson defined \(M_{\operatorname{Dol}}(\mathcal{Y}/\mathcal{S},\operatorname{GL}_{n})\), resp. \(M_{\operatorname{dR}}(\mathcal{Y}/\mathcal{S},\operatorname{GL}_{n})\), the relative Dolbeault space, resp. the relative de Rham space: \(M_{\operatorname{Dol}}(\mathcal{Y}/\mathcal{S},\operatorname{GL}_{n})\) is a relative moduli space of semistable Higgs bundles \((\mathcal{E},\phi)\) with vanishing rational Chern classes and \(M_{\operatorname{dR}}(\mathcal{Y}/\mathcal{S},\operatorname{GL}_{n})\) is a relative moduli space of vector bundles with flat connection. Let \(N_{\operatorname{Dol}}\subset M_{\operatorname{Dol}}(\mathcal{Y}/\mathcal{S}, \operatorname{GL}_{n})\) be the fixed point set of the \(\mathbb{G}_{m}\)-action \((\mathcal{E},\phi)\mapsto(\mathcal{E},t\phi)\) and let \(N_{\operatorname{dR}}\) be its image in \(M_{\operatorname{dR}}(\mathcal{Y}/\mathcal{S},\operatorname{GL}_{n})\) under the non-abelian Hodge correspondence. Define \[M_{\operatorname{dR}}(\mathcal{Y}/\mathcal{S},\operatorname{GL}_{n}(\mathbb{ Z}))\subset M_{\operatorname{dR}}(\mathcal{Y}/\mathcal{S},\operatorname{GL}_{n})\] to be the flat bundles having integral monodromy representations on a fiber of \(\mathcal{Y}\to\mathcal{S}\). Following Simpson [22, SS12], we define the non-abelian Hodge locus, called the Noether-Lefschetz locus in _loc. cit._, \[\operatorname{NHL}(\mathcal{Y}/\mathcal{S},\operatorname{GL}_{n}):=N_{\mathrm{ dR}}\cap M_{\mathrm{dR}}(\mathcal{Y}/\mathcal{S},\operatorname{GL}_{n}(\mathbb{Z})).\] These are the flat vector bundles underlying a \(\mathbb{Z}\)-PVHS. The precise phrasing of Simpson's conjecture on the non-abelian Hodge locus [22, Conjecture 12.3] is: **Conjecture 1.3**.: \(\operatorname{NHL}(\mathcal{Y}/\mathcal{S},\operatorname{GL}_{n})\) _is an algebraic variety and the inclusions into \(M_{\mathrm{dR}}(\mathcal{Y}/\mathcal{S},\operatorname{GL}_{n})\) and \(M_{\mathrm{Dol}}(\mathcal{Y}/\mathcal{S},\operatorname{GL}_{n})\) are algebraic morphisms._ When the base \(\mathcal{S}\) is projective, Conjecture 1.3 is a consequence of Serre's GAGA theorem, as explained in [22, Corollary 12.2]. Furthermore, we have a decomposition \[\operatorname{NHL}(\mathcal{Y}/\mathcal{S},\operatorname{GL}_{n})= \operatorname{NHL}_{c}(\mathcal{Y}/\mathcal{S},\operatorname{GL}_{n})\sqcup \operatorname{NHL}_{nc}(\mathcal{Y}/\mathcal{S},\operatorname{GL}_{n})\] according to whether the monodromy representation is compact type or non-compact type. Our main Theorem 1.2 proves \(\operatorname{NHL}_{c}(\mathcal{Y}/\mathcal{S},\operatorname{GL}_{n})\) is algebraic. The case of non-compact type monodromy will be explored in future work of the authors. ### Strategy of the proof The proof splits into two parts, each of a rather different nature. First, Q1 is proven, using techniques from hyperbolic and metric geometry. Then, the resolution of Q1 is used to prove Q2, by applying more algebraic techniques. #### 1.2.1. 
By slicing by hyperplanes, Q1 can be reduced to the case of curves, and in turn, to the universal family \(\mathcal{C}_{g}\to\mathcal{M}_{g}\), for \(g\geq 2\). Let

\[\Phi\colon C\to\Gamma\backslash\mathbb{D}\]

be the period map associated to a \(\mathbb{Z}\)-PVHS of compact type on some \(C\in\mathcal{M}_{g}\). Every genus \(g\) Riemann surface \(C\) admits a hyperbolic metric, and Deligne's finiteness result relies critically on the length-contracting property of \(\Phi\) [10, 10.1]. But as the curve \(C\in\mathcal{M}_{g}\) degenerates, the length-contracting property alone ceases to be useful: the monodromy representation will be determined by curves whose hyperbolic geodesic representatives have length growing to infinity. These geodesics grow in length as they cross hyperbolic collars forming near the nodes of the limiting curve. Thus, our key lemma, see Proposition 3.16, is that the image of a length-decreasing harmonic map from a hyperbolic collar to a symmetric space is bounded, even as the transverse length of the collar grows to infinity.

#### 1.2.2. Algebraicity of \(\mathrm{NHL}_{c}(\mathcal{Y}/\mathcal{S},\mathrm{GL}_{n})\)

Our main tool for proving Q2 is an algebraization theorem for Douady spaces of Griffiths transverse, compact analytic subspaces of arithmetic manifolds \(\Gamma\backslash\mathbb{D}\) which parameterize period images of \(\mathbb{Z}\)-PVHS's with big monodromy. The local analytic branches of the non-abelian Hodge locus are the isomonodromic deformations of a fixed integral representation which underlie a \(\mathbb{Z}\)-PVHS. Hence the fibers of \(\mathcal{Y}\to\mathcal{S}\) along such a branch admit a period map \(\Phi_{s}\colon Y_{s}\to\Gamma\backslash\mathbb{D}\). The images \(\Phi_{s}(Y_{s})\) of such period maps are closed analytic spaces, tangent to the Griffiths distribution on \(\Gamma\backslash\mathbb{D}\), of bounded volume with respect to the Griffiths line bundle. When \(\Gamma\backslash\mathbb{D}\) is compact, we prove that such period images are parameterized by a product of a compact Moishezon space and a sub-period domain of \(\mathbb{D}\) accounting for the factors where the monodromy representation is finite. We identify the non-abelian Hodge locus as a relative space of maps of bounded degree from \(\mathcal{Y}/\mathcal{S}\) to the universal family over the Moishezon space. Then Q2 follows for period maps with a fixed target \(\Gamma\backslash\mathbb{D}\). The set of such arithmetic quotients \(\Gamma\backslash\mathbb{D}\) which can appear is bounded using the resolution of Q1. Theorem 1.2 follows.

### Organization of the paper

In §2 we recall some background results on polarized variations of Hodge structures and period domains. In §3, we prove the relative version of Deligne's finiteness theorem, for representations of compact type. Then in §4, we introduce the Douady and Barlet spaces in the general context of polarized distribution manifolds and prove their key properties. In §5, we prove algebraicity of the compact type non-abelian Hodge locus.

### Acknowledgements

The first author thanks P. Smillie for suggesting a proof of Proposition 3.12, R. Krishnamoorthy for many helpful discussions, and B. Bakker and D. Litt for their insights. The second author thanks A. Landesman for bringing this question to his attention and for useful conversations, and D. Maulik, Y.-T. Siu, and N. Tholozan for useful conversations.

## 2. Variations of Hodge structures
We recall in this section some background results on polarized variations of Hodge structures and fix notation. Our main references are [1, 17].

### Monodromy and Mumford-Tate group

Let \(Y\) be a complex manifold and let \(\mathbb{V}:=(V_{\mathbb{Z}},F^{\bullet},\psi)\) be a polarized variation of Hodge structure of weight \(k\) on \(Y\). Here \(V_{\mathbb{Z}}\) is the \(\mathbb{Z}\)-local system, \(F^{\bullet}\) is the Hodge filtration on \(V_{\mathbb{Z}}\otimes\mathcal{O}_{Y}\), and \(\psi\) is the polarization. Let \(\mathbf{G}\) be the _generic Mumford-Tate_ group of the variation and let \(\mathbf{H}\) be the algebraic monodromy group of \(\mathbb{V}\). We recall that \(\mathbf{G}\) is the Mumford-Tate group of the Hodge structure over a very general point of \(Y\), and \(\mathbf{H}\) is defined as follows: fix a base point \(*\in Y\) and denote the monodromy representation associated to the local system \(V_{\mathbb{Z}}\) by \(\rho\colon\pi_{1}(Y,*)\to\mathrm{GL}(V_{\mathbb{Z},*})\), which lands in the subgroup \(\mathrm{Sp}(V_{\mathbb{Z},*})\) or \(\mathrm{O}(V_{\mathbb{Z},*})\) depending on the parity of the weight. Then \(\mathbf{H}\) is the identity component of the \(\mathbb{Q}\)-Zariski closure of the image of \(\rho\). The groups \(\mathbf{G}\) and \(\mathbf{H}\) are reductive algebraic groups over \(\mathbb{Q}\) and, by a classical theorem of Deligne and Andre, \(\mathbf{H}\) is a normal subgroup of \(\mathbf{G}^{\mathrm{der}}\), the derived group of \(\mathbf{G}\). It follows that we have a decomposition over \(\mathbb{Q}\) of the adjoint groups \(\mathbf{G}^{\mathrm{ad}}=\mathbf{H}^{\mathrm{ad}}\times\mathbf{H}^{\prime}\).

Let \(\mathbb{D}\) be the _Mumford-Tate domain_ associated to the variation. It is a complex analytic space, homogeneous for \(G:=\mathbf{G}^{\mathrm{ad}}(\mathbb{R})^{+}\), and it can be identified with a quotient \(G/K\) where \(K\subset G\) is a compact subgroup. In terms of Hodge structures, \(K\) is the real subgroup preserving each \(V^{p,q}\) and the Hodge pairing between \(V^{p,q}\) and \(V^{q,p}\). From the theory of symmetric spaces, \(\mathbb{D}\) is an analytic open subset of the _compact dual_ \(\mathbb{D}^{\vee}\), a projective subvariety of a symplectic or an orthogonal flag variety with specified Mumford-Tate group. There then exists a parabolic subgroup \(P\subset G_{\mathbb{C}}\) such that \(\mathbb{D}^{\vee}=G_{\mathbb{C}}/P\) and \(P\cap G=K\).

The variation of Hodge structure \(\mathbb{V}\) on \(Y\) is completely described by its holomorphic _period map_:

\[\Phi:Y\to\Gamma\backslash\mathbb{D},\]

where \(\Gamma\subset\mathbf{G}(\mathbb{Z})\) is a finite index subgroup preserving \(V_{\mathbb{Z}}\) such that the monodromy representation factors through \(\Gamma\). Up to taking a finite etale cover of \(Y\), we can assume that \(\Gamma\) is neat, hence acting freely on \(\mathbb{D}\). Then the quotient \(X_{\Gamma}:=\Gamma\backslash\mathbb{D}\) is a connected complex manifold, called a _connected Hodge manifold_, see [17, Definition 3.18]. It is the classifying space of polarized \(\mathbb{Z}\)-Hodge structures on \(V_{\mathbb{Z}}\) whose generic Mumford-Tate group is contained in \(\mathbf{G}\), with level structure corresponding to \(\Gamma\). In general, \(X_{\Gamma}\) is not algebraic unless \(\mathbb{D}\) is Hermitian symmetric.
In that case, \(X_{\Gamma}\) is in fact quasiprojective by the Baily-Borel theorem [1], and \(\Phi\) is algebraic by the Borel hyperbolicity theorem [10], see also [1] for another proof. We can furthermore refine the period map by taking into account the algebraic monodromy group \(\mathbf{H}\). The Mumford-Tate domain \(\mathbb{D}\) decomposes according to the decomposition \(\mathbf{G}^{\mathrm{ad}}=\mathbf{H}^{\mathrm{ad}}\times\mathbf{H}^{\prime}\) of adjoint groups as \(\mathbb{D}=\mathbb{D}_{H}\times\mathbb{D}_{H^{\prime}}\) where \(\mathbb{D}_{H}\) is an \(H:=\mathbf{H}^{\mathrm{ad}}(\mathbb{R})^{+}\)-homogeneous space. Up to a finite etale cover of \(Y\), we can assume that the lattice \(\Gamma\) decomposes as \(\Gamma=\Gamma_{H}\times\Gamma_{H^{\prime}}\) where \(\Gamma_{H}\subset\mathbf{H}(\mathbb{Z})\) and \(\Gamma_{H^{\prime}}\subset\mathbf{H}^{\prime}(\mathbb{Z})\) are arithmetic subgroups. Then the projection of the period map \(\Phi\) is constant on the second factor and hence the period map takes the following shape: \[\Phi:S\to\Gamma_{H}\backslash\mathbb{D}_{H}\times\{t_{Y}\}\hookrightarrow \Gamma\backslash\mathbb{D},\] where \(t_{Y}\) is a Hodge generic point in \(\mathbb{D}_{H^{\prime}}\). Then \(X_{\Gamma_{H}}\times\mathbb{D}_{H^{\prime}}\) serves as a classifying space of \(\mathbb{Z}\)-PVHS on a lattice isometric to \(V_{\mathbb{Z},*}\) whose generic Mumford-Tate group is contained in \(\mathbf{G}\), and whose monodromy factors through \(\Gamma_{H}\). The classifying map for such a variation factors through the inclusion of \(X_{\Gamma_{H}}\times\{t\}\) for some fixed \(t\). ### Automorphic vector bundles Given any complex linear representation of \(\chi\colon K\to\mathrm{GL}(W)\), there is an associated holomorphic vector bundle \(G\times_{K}W\to\mathbb{D}\) which is \(\Gamma\)-equivariant and hence descends to a holomorphic vector bundle over \(X_{\Gamma}\). In particular, for any \(p\), the natural representation of \(K\) on \(V^{p,q}\) defines a holomorphic vector bundle on \(\mathbb{D}\) which is identified to the \(p\)th graded piece \(F^{p}/F^{p+1}\) of the Hodge filtration. Any character \(\chi\colon K\to\mathbb{S}^{1}\) defines an equivariant holomorphic line bundle \(L_{\chi}\to\mathbb{D}\). For example, if the character \(\chi\) is the determinant of the action of \(K\) on \(V^{p,q}\), we get the line bundle \(L_{p}=\det(F^{p}/F^{p+1})\). Any such equivariant line bundle admits a unique (up to scaling) left \(G\)-invariant hermitian metric \[h\colon L_{\chi}\otimes\overline{L}_{\chi}\to\mathbb{C}.\] **Definition 2.1**.: The _Griffiths bundle_\(L\to X_{\Gamma}\) is defined by \[L:=\bigotimes_{p\geq 0}(L_{p})^{\otimes p}.\] We denote the descent to \(X_{\Gamma}\) of the equivariant vector bundles \(F^{p}\), line bundles \(L_{p}\), and the hermitian metrics \(h\) by the same symbols. **Remark 2.2**.: While \(F^{\bullet}\) defines a filtration of holomorphic vector bundles over \(X_{\Gamma}\), it does not, in general, define a \(\mathbb{Z}\)-PVHS over \(X_{\Gamma}\) for the tautological local system, because Griffiths' transversality fails. Recall that the tangent space to the Grassmannian at a subspace \(W\subset V\) is canonically isomorphic to \(\mathrm{Hom}(W,V/W)\). 
Since \(\mathbb{D}\) is an open subset of a flag variety \(\mathbb{D}^{\vee}\), we have an inclusion

\[T\mathbb{D}\subset\bigoplus_{p}\mathrm{Hom}(F^{p},V/F^{p}).\]

The Griffiths transversality condition on a \(\mathbb{Z}\)-PVHS over \(Y\) implies that the differential \(d\Phi\) of the period map lands in an appropriate subspace of the tangent space:

**Definition 2.3**.: The _Griffiths distribution_ \(T^{||}\subset T\mathbb{D}\) is the holomorphic subbundle of the tangent bundle defined by

\[T^{||}_{F^{\bullet}}:=T_{F^{\bullet}}\mathbb{D}\cap\bigoplus_{p}\operatorname{Hom}(F^{p},F^{p-1}/F^{p}).\]

It is \(G\)-invariant, and so descends to a distribution in \(TX_{\Gamma}\) which we also denote by \(T^{||}\). The following proposition is [1, Prop. 7.15].

**Proposition 2.4**.: _Let \(\omega_{L}:=\frac{i}{2\pi}\partial\overline{\partial}\log h\in\Lambda^{1,1}(X_{\Gamma},\mathbb{R})\) be the curvature form of the Hermitian metric \(h\) on \(L\). Then \(\omega_{L}\big{|}_{T^{||}}\) is positive definite, in the sense that for any nonzero \(v\in T^{||}_{\mathbb{R}}\),_

\[\omega_{L}(v,Jv)>0.\]

From this, Griffiths concluded that the image of \(\Phi\) admits a holomorphic line bundle with positive curvature. In particular, using a generalization of the Kodaira embedding theorem due to Grauert, he proved, see [1, Thm. 9.7]:

**Theorem 2.5**.: _Let \(\Phi\colon Y\to X_{\Gamma}\) be the period map of a \(\mathbb{Z}\)-PVHS on a compact, complex manifold \(Y\). Then \(\Phi(Y)\), with its reduced analytic space structure, is a projective algebraic variety._

It seems, though, that some conditions of Grauert's theorem do not always hold. In particular, it may not be the case that \(T\Phi(Y)\subset T^{||}\) due to singularities on \(\Phi(Y)\). An independent proof and strengthening to the non-compact case was given in [1, Thm. 1.1].

## 3. Boundedness of monodromy representations

Let \(\mathcal{S}\) be a smooth connected quasi-projective complex algebraic variety and let \(\pi:\mathcal{Y}\to\mathcal{S}\) be a smooth projective morphism. Our goal in this section is to prove that there are only finitely many representations \(\pi_{1}(Y_{0})\to\operatorname{GL}_{n}(\mathbb{Z})\), up to conjugacy, which underlie a \(\mathbb{Z}\)-PVHS of compact type on some fiber \(Y_{s}\) of \(\pi:\mathcal{Y}\to\mathcal{S}\), after an identification \(\pi_{1}(Y_{0},*)\simeq\pi_{1}(Y_{s},*)\) obtained by moving the base point in the universal family. Slicing \(\mathcal{Y}\) by hyperplanes, we can apply the Lefschetz theorem to reduce to the case of a relative smooth projective curve \(\mathcal{C}\to\mathcal{S}\) (passing to a finite Zariski cover of \(\mathcal{S}\) if necessary). Then, we may as well assume that \(\mathcal{S}=\mathcal{M}_{g}\) and that \(\mathcal{C}=\mathcal{C}_{g}\) is the universal curve. This is a particular instance of a question asked by Deligne, for representations of compact type, see [12, Question 3.13].

We can decompose \(\mathcal{M}_{g}\) into two subsets, the _thick_ part and the _thin_ part. Let \(C\in\mathcal{M}_{g}\) be a Riemann surface of genus \(g\) and let \(\gamma\in\pi_{1}(C)\) be a loop. Then \(C\) has a unique hyperbolic metric of constant curvature \(-1\), in the conformal equivalence class defined by the complex structure on \(C\). There is a unique representative of the free homotopy class of \(\gamma\) which is a hyperbolic geodesic for this metric. Let \(\ell_{C}(\gamma)\) denote its hyperbolic length.
Then, the thick part of \(\mathcal{M}_{g}\) is a compact subset \(\mathcal{M}_{g}^{\geq\epsilon}\subset\mathcal{M}_{g}\) consisting of all curves \(C\in\mathcal{M}_{g}\), for which \(\ell_{C}(\gamma)\geq\epsilon\) for all \(\gamma\in\pi_{1}(C)\), see [13, Cor. 3]. First, we deal with the thick part. The proof follows, nearly vertabim, Deligne's proof [12] of finiteness of monodromy representations underlying \(\mathbb{Z}\)-PVHS on a fixed curve \(C\). **Definition 3.1**.: Let \(\Pi_{g}\) be the surface group: \[\Pi_{g}=\langle\alpha_{1},\beta_{1},\dots,\alpha_{g},\beta_{g}\,\big{|}\,\prod _{i=1}^{g}\alpha_{i}\beta_{i}\alpha_{i}^{-1}\beta_{i}^{-1}=1\rangle.\] Fix a pointed Riemann surface \((C_{0},*_{0})\in\mathcal{M}_{g,1}=\mathcal{C}_{g}\) of genus \(g\) and an isomorphism \(\pi_{1}(C_{0},*_{0})\simeq\Pi_{g}\). Then a path in \(\mathcal{C}_{g}\) connecting \((C_{0},*_{0})\) to \((C,*)\) produces an identification \[\pi_{1}(C,*)\simeq\pi_{1}(C_{0},*_{0})\simeq\Pi_{g}.\] We call such an identification _admissible_. Two such admissible identifications can be compared by an automorphism of \(\Pi_{g}\) induced by a path from \((C_{0},*_{0})\) to itself, i.e., an element of \(\pi_{1}(\mathcal{C}_{g},(C_{0},*_{0}))\). The paths connecting \((C_{0},*_{0})\) to itself keeping \(C_{0}\in\mathcal{M}_{g}\) constant in moduli induce the inner automorphisms \(\operatorname{Inn}(\Pi_{g})\). The paths connecting \((C_{0},*_{0})\) to itself by moving \(C_{0}\in\mathcal{M}_{g}\) in moduli induce an inclusion of the mapping class group \(\operatorname{Mod}_{g}\subset\operatorname{Out}(\Pi_{g})\) as an index \(2\) subgroup of \(\operatorname{Out}(\Pi_{g})\), corresponding to orientation. So any isomorphism \(\pi_{1}(C,*)\simeq\Pi_{g}\) induced by an oriented homeomorphism \((C,*)\to(C_{0},*_{0})\) is admissible. **Proposition 3.2**.: _Let \(\rho\colon\pi_{1}(C,*)\to\operatorname{GL}_{n}(\mathbb{Z})\) be the monodromy representation of a \(\mathbb{Z}\)-PVHS of rank \(n\) on some \(C\in\mathcal{M}_{g}^{\geq\epsilon}\) in the thick part of the moduli space. There is an admissible identification \(\pi_{1}(C,*)\simeq\Pi_{g}\) identifying \(\rho\) with one of a finite list of representations \(\Pi_{g}\to\operatorname{GL}_{n}(\mathbb{Z})\), up to conjugacy._ Proof.: A theorem of Procesi [14] states that, up to conjugacy, a semisimple representation \(\rho\colon\Pi\to\operatorname{GL}_{n}(\mathbb{C})\) from any finitely generated group \(\Pi\) is uniquely determined by the function \[\{1,\ldots,m\} \to\mathbb{C}\] \[j \mapsto\operatorname{tr}(\rho(\delta_{j}))\] for some finite generating set \((\delta_{j})_{1\leq j\leq m}\) of the group, where \(m\) depends only on \(\Pi\) and \(n\). Choose, for once and all, such a generating set \(\delta_{1},\ldots,\delta_{m}\) for the surface group \(\Pi_{g}\). We call this set the _Procesi generators._ Deligne's argument relies on the famous length-contracting property of period maps, due to Griffiths [10, 10.1]: **Theorem 3.3**.: _There is a \(G\)-invariant metric on \(\mathbb{D}=G/K\) for which any holomorphic, Griffiths transverse map \(\Delta\to\mathbb{D}\) from a holomorphic disk is length-contracting for the hyperbolic metric on \(\Delta\)._ Choose a cover of \(\mathcal{M}_{g}^{\geq\epsilon}\) by a finite number of contractible, compact subsets \(\{V_{i}\}_{i\in I}\). Choosing a base-point consistently over \(V_{i}\), the fundamental groups \(\pi_{1}(C,*)\) for all \(C\in V_{i}\) are uniquely identified, by the contractibility of \(V_{i}\). 
Let \(\pi_{1}(C,*)\simeq\Pi_{g}\) be an admissible identification, and consider the resulting family of Procesi generators \((\delta_{j})_{1\leq j\leq m}\) of \(\pi_{1}(C,*)\) for \(C\in V_{i}\). Then \(\ell_{C}(\delta_{j})\) is a continuous function on \(V_{i}\) which, by compactness, is bounded. Hence there exists some \(M\) for which \(\ell_{C}(\delta_{j})\leq M\) for all \(1\leq j\leq m\) and all \(C\in V_{i}\). Suppose that \(\rho\colon\pi_{1}(C,*)\to\Gamma\) is the monodromy representation of a \(\mathbb{Z}\)-PVHS for some \(C\in V_{i}\). Then, applying Theorem 3.3 to the hyperbolic uniformization \(\Delta\to C\), we conclude that there exists a point \(x\in\mathbb{D}\) for which \(d_{\mathbb{D}}(x,\rho(\delta_{j})\cdot x)\leq M\). In particular, \(x\) may be taken as the period image of some point on the lift to \(\Delta\) of the hyperbolic geodesic representing \(\delta_{j}\). Thus, \(\rho(\delta_{j})\) has bounded translation length, and thus, bounded trace, by Lemma 3.4. See [11, Corollaire 1.9]. **Lemma 3.4**.: _Let \(g\in G\) and suppose that \(d_{\mathbb{D}}(x,g\cdot x)\leq M\) for some \(x\in\mathbb{D}\). There is a bound \(N\), depending only on \(\mathbb{D}\) and \(M\), for \(\operatorname{tr}(g)\)._ Proof.: Fix a base point \(x_{0}\in\mathbb{D}\) and choose some \(h\in G\) for which \(h\cdot x_{0}=x\). Then \[d_{\mathbb{D}}(x,g\cdot x)=d_{\mathbb{D}}(h\cdot x_{0},gh\cdot x_{0})=d_{ \mathbb{D}}(x_{0},h^{-1}gh\cdot x_{0})\leq M.\] Since the closed ball of radius \(M\) around \(x_{0}\) is compact, and the map \(G\to G/K=\mathbb{D}\) has compact fibers, we conclude that the set \[\{k\in G\,\big{|}\,d_{\mathbb{D}}(x_{0},k\cdot x_{0})\leq M\}\] is compact. As the trace is a continuous function, we conclude that \(\operatorname{tr}\) is bounded on the above set, in terms of \(M\) alone. We conclude that \(\operatorname{tr}(h^{-1}gh)=\operatorname{tr}(g)\) is bounded. Hence the trace \(\operatorname{tr}(\rho(\delta_{j}))\) is bounded in terms of \(\ell_{C}(\delta_{j})\leq M\), and hence it is bounded globally on \(V_{i}\) by some integer \(N\). It is furthermore an integer, as \(\rho\) lands in \(\operatorname{GL}_{n}(\mathbb{Z})\). Since there are only finitely many possibilities for a map \(\{1,\dots,m\}\to\{-N,\dots,N\}\), there are only finitely many monodromy representations achieved for a \(\mathbb{Z}\)-PVHS over any \(C\in V_{i}\). Since the indexing set \(I\) is finite, we conclude the same over \(\mathcal{M}_{g}^{\geq\epsilon}\), up to conjugacy. Thus, it remains to consider the thin part of the moduli space \(\mathcal{M}_{g}^{<\epsilon}\) consisting of smooth curves with systole less than \(\epsilon\). **Definition 3.5**.: A _collar_\(A\) is the Riemann surface with boundary \[\left\{re^{i\theta}\in\mathbb{H}\left|\begin{array}{l}1\leq r\leq r_{0}\\ \theta_{0}\leq\theta\leq\pi-\theta_{0}\end{array}\right.\right\}\sim\] where \(\tau\sim r_{0}\tau\). A _half-collar_ is the subregion where \(\theta\leq\frac{\pi}{2}\). We recall a famous result due to Keen [10]. The sharpness is due to Buser [14, Thm. C]. **Lemma 3.6** (Collar Lemma).: _Every simple closed geodesic \(\gamma\) of length \(\ell\) on a complete hyperbolic surface \(C\) is contained a hyperbolic collar \(A_{\gamma}\subset C\) of transverse length \(\ln\left(\frac{e^{\ell/2}+1}{e^{\ell/2}-1}\right)\). 
Furthermore, any two such collars associated to disjoint geodesics are disjoint._ The function \[F(\ell):=\ln\left(\frac{e^{\ell/2}+1}{e^{\ell/2}-1}\right)\] satisfies \(\lim_{\ell\to 0^{+}}F(\ell)=+\infty\), and is monotonically decreasing towards zero as \(\ell\to+\infty\). In terms of the constants \(r_{0},\theta_{0}\) of Definition 3.5, we have \(r_{0}=e^{\ell}\) and \(\theta_{0}=\cos^{-1}(e^{-\ell/2})\). The perimeter of a boundary component of this collar is \(\ell(1-e^{-\ell})^{-1/2}.\) More generally, the formula is \(\operatorname{Per}(A)=\frac{\ell}{\sin(\theta_{0})}\). For \(C\in\mathcal{M}_{g}^{<\epsilon}\), let \(\{\gamma_{1},\dots,\gamma_{k}\}\) be the set of simple closed curves of hyperbolic length less than \(\epsilon\). Choosing \(\epsilon\) smaller than the fixed point of the function \(F(\ell)\), we conclude that all such curves are disjoint. So \(k\leq 3g-3\), with equality when \(\{\gamma_{1},\dots,\gamma_{k}\}\) form a pair-of-pants decomposition of \(C\). We now recall the result of Bers [1, 1]: **Theorem 3.7**.: _There exists a constant \(B_{g}\) for which any hyperbolic surface of genus \(g\) admits a pair-of-pants decomposition, all of whose curves have length bounded above by \(B_{g}\)._ By choosing \(\epsilon\) so that \(F(\epsilon)>B_{g}\), any such pair of pants decomposition _must_ contain all simple closed curves of length less than \(\epsilon\), as any pair of pants decomposition not including \(\gamma_{j}\) would include a curve that crossed the collar of Lemma 3.6. Thus, we may extend the set \(\{\gamma_{1},\ldots,\gamma_{k}\}\) to a full pair of pants decomposition \(\{\gamma_{1},\ldots,\gamma_{3g-3}\}\) in such a way that \(\ell_{C}(\gamma_{j})\leq B_{g}\) for all \(j\). A pair of pants \(P(\ell_{1},\ell_{2},\ell_{3})\) is uniquely specified by the three cuff lengths \(\ell_{1},\ell_{2},\ell_{3}\in\mathbb{R}^{+}\). Two adjacent pairs of pants, glued along \(\gamma_{i}\) in a pants decomposition of \(C\), contain a collar \(A_{\gamma_{i}}\) of transverse length at least \(F(\ell(\gamma_{i}))\), but with the bounds \(B_{g}\) on the chosen pairs of pants, we can do better: **Proposition 3.8**.: _Suppose \(P(\ell_{1},\ell_{2},\ell_{3})\) is a pair of pants with \(\ell_{i}\leq B_{g}\). There exists a constant \(C_{g}>0\) for which each cuff is contained in a half-collar of perimeter at least \(C_{g}\)._ Proof.: The key is to observe that even as \(\ell_{i}\to 0\), the geometry of \(P(\ell_{1},\ell_{2},\ell_{3})\) converges, with the cuff \(\gamma_{i}\) limiting to a hyperbolic cusp, and the half-collars limiting to the horoball neighborhoods. Therefore \(P(\ell_{1},\ell_{2},\ell_{3})\) makes sense, for all \(0\leq\ell_{i}\leq B_{g}\). For each such surface, each cuff (resp. cusp) has a definite half-collar (resp. horoball) neighborhood of non-zero perimeter. So the maximal such perimeter is a continuous function on the compact set \([0,B_{g}]^{3}\), never equal to zero, and thus has a nonzero minimum. **Definition 3.9**.: The _truncated pair of pants_\(P^{o}(\ell_{1},\ell_{2},\ell_{3})\) (Fig. 1) is the complement of the half-collars in \(P(\ell_{1},\ell_{2},\ell_{3})\) with perimeter \(C_{g}\). If \(\ell_{i}\geq C_{g}\) we need not truncate the corresponding cuff. Making \(C_{g}\) sufficiently small, we may assume that the (up to) three half-collars we cut are disjoint. 
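For the reader's convenience, here is the elementary computation behind the perimeter formulas quoted after Lemma 3.6; it is a consistency check rather than new material, using only the model of Definition 3.5 with the hyperbolic metric \(|dz|/\operatorname{Im}z\) on \(\mathbb{H}\) and the values \(r_{0}=e^{\ell}\), \(\theta_{0}=\cos^{-1}(e^{-\ell/2})\). Along the boundary curve \(\theta=\theta_{0}\) one has \(\operatorname{Im}z=r\sin\theta_{0}\), so

\[\operatorname{Per}(A)=\int_{1}^{r_{0}}\frac{dr}{r\,\sin\theta_{0}}=\frac{\log r_{0}}{\sin\theta_{0}}=\frac{\ell}{\sin\theta_{0}}=\ell\,(1-e^{-\ell})^{-1/2},\]

which tends to \(0\) as \(\ell\to 0^{+}\) since \(1-e^{-\ell}\sim\ell\); this is the limiting behaviour invoked in Remark 3.10 below.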
**Remark 3.10**.: The issue with truncating pairs of pants by the universal (half-)collar of Lemma 3.6 is that the limit of its perimeter is \[\lim_{\ell\to 0^{+}}\ell(1-e^{-\ell})^{-1/2}=0.\] So the universal collar is not sufficient to bound the geometry (e.g. as measured by the hyperbolic diameter) of the truncated pair of pants, when \(\ell\to 0\). Hence the need for Proposition 3.8. Consider the three seam geodesics connecting cuffs of \(P(\ell_{1},\ell_{2},\ell_{3})\). These seams intersect each boundary component of \(P^{o}(\ell_{1},\ell_{2},\ell_{3})\) and \(P(\ell_{1},\ell_{2},\ell_{3})\) at two points. We call these (six total) points the _distinguished boundary points_ of \(P^{o}(\ell_{1},\ell_{2},\ell_{3})\) and \(P(\ell_{1},\ell_{2},\ell_{3})\). Note that the distinguished points on a given cuff are diametrically opposite. So when two pants are glued, the four total distinguished points on the cuff alternate which pants decomposition they come from. **Proposition 3.11**.: _Suppose \(\ell_{1},\ell_{2},\ell_{3}\leq B_{g}\) for some constant \(B_{g}\). Let \(\mu\) be a homotopy class of paths on the truncated pair of pants \(P^{o}(\ell_{1},\ell_{2},\ell_{3})\), terminating at two distinguished points of the boundary. Then, \(\mu\) has a representative of bounded distance \(D_{\mu}\) independent of \(\ell_{i}\)._ Proof.: The minimal length representative of \(\mu\) on any truncated pair of pants is finite, and furthermore, this minimal length is continuous as one varies the \(\ell_{i}\). This holds even when some \(\ell_{i}=0\), corresponding to cusped pairs of pants. The proposition follows because \((\ell_{1},\ell_{2},\ell_{3})\) is restricted to lie in the compact set \([0,B_{g}]^{3}\). Thus, the length-contracting property will be useful inside the truncated pair of pants. But the following issue arises: As the core curve of a collar \(A\) shrinks, the length of any transverse geodesic grows. So the length-contracting property ceases to be useful on the collars, at least on its own. Thus, the next proposition is absolutely crucial. **Proposition 3.12**.: _Let \((M,g)\) be a simply connected Riemannian manifold with non-positive sectional curvature and let \(\Psi\colon A\to M\) be a length-contracting, harmonic map from a collar. Assume the perimeter of \(A\) is bounded above by \(C_{g}\). Then, the image of \(A\) is contained in a ball of bounded radius \(\frac{1}{2}(C_{g}+\pi)\)._ Proof.: Recall that the collar \(A\) is parameterized by polar coordinates \((r,\theta)\in\mathbb{H}\) (Def. 3.5) where \(r\in\mathbb{R}_{>0}/(r_{0})^{\mathbb{Z}}\) is the circle coordinate on the collar, and \(\theta\in[\theta_{0},\pi-\theta_{0}]\) is the transverse coordinate. Let \(p_{0}\) be a point on the boundary component of \(A\) defined by \(\theta=\theta_{0}\). Define \[d\colon A \to\mathbb{R}_{\geq 0}\] \[q \mapsto\operatorname{dist}_{g}(\Psi(p_{0}),\Psi(q)).\] As \(M\) has non-positive sectional curvature and \(\pi_{1}(M)\) is trivial, the distance function \(\operatorname{dist}_{g}(\Psi(p_{0}),\cdot)\colon M\to\mathbb{R}_{\geq 0}\) is convex. The composition of a convex function with a harmonic function is subharmonic, so the function \(d\) is subharmonic. Let \(S^{1}(q)\) denote the circle containing \(q\in A\) (varying only the coordinate \(r\)) and define \[d_{\max}(\theta):=\max_{q^{\prime}\in S^{1}(q)}\,d(q^{\prime}),\] which is now circularly symmetric, and so is only a function of \(\theta\). It suffices to prove that \(d_{\max}\) is bounded. 
Since the rotation action on \(A\) is conformal, the pullback along the rotation action of \(d(q)\) is subharmonic. Thus \(d_{\max}(\theta)\), as a maximum of subharmonic functions, is also subharmonic. The hyperbolic metric is \(y^{-2}(dx^{2}+dy^{2})\) on the upper half-plane, and so \(g_{\operatorname{hyp}}(\frac{\partial}{\partial\theta},\frac{\partial}{ \partial\theta})=1\) when \(\theta=\frac{\pi}{2}\). So the length-contracting property, along with the triangle inequality, implies \[\big{|}\tfrac{\partial}{\partial\theta}(d(q))\big{|}\leq 1\text{ when } \theta(q)=\tfrac{\pi}{2},\text{ and so}\] \[\big{|}\tfrac{d}{d\theta}(d_{\max}(\theta))\big{|}\leq 1\text{ when } \theta=\tfrac{\pi}{2}.\] Thus \(d_{\operatorname{rel}}(\theta):=d_{\max}(\theta)-\theta\) has a non-positive derivative at \(\theta=\frac{\pi}{2}\). On the other hand, \(\theta\) is harmonic so \(d_{\operatorname{rel}}(\theta)\) is again subharmonic. As a subharmonic function with a non-positive derivative at \(\frac{\pi}{2}\), we have that \(d_{\operatorname{rel}}(\theta)\) is bounded above by its value at the left endpoint \(p_{0}\) for all \(\theta\leq\frac{\pi}{2}\). Let \(D\leq\frac{1}{2}\mathrm{Per}(A)\leq\frac{1}{2}C_{g}\) denote the hyperbolic diameter of a boundary component of \(A\). By the length-contracting property, we have \(d_{\operatorname{rel}}(\theta_{0})\leq D-\theta_{0}\) so \[d_{\max}(\theta)\leq D+(\theta-\theta_{0})<\tfrac{1}{2}(C_{g}+\pi)\text{ for all } \theta\leq\tfrac{\pi}{2}.\] Applying the same argument to a point \(p_{0}\) on the other boundary component of the collar, we conclude that for a point \(p^{\prime}\) on the core curve, the ball of radius \(\frac{1}{2}(C_{g}+\pi)\) about its image contains the image of the boundary of \(A\) entirely. We conclude the result by the maximum principle, as \(q\mapsto\operatorname{dist}_{g}(\Psi(p^{\prime}),\Psi(q))\) is subharmonic. **Lemma 3.13**.: _There is a constant \(\mu_{n}>0\) depending only on \(n\) such that: For any arithmetic group \(\Gamma\) acting on a period domain \(\mathbb{D}\) classifying \(\mathbb{Z}\)-PVHS of rank at most \(n\), and for any \(p\in\mathbb{D}\), we have_ \[d_{\mathbb{D}}(p,\gamma(p))>\mu_{n}\text{ for all }\gamma\in\Gamma\text{ non-quasi-unipotent}.\] Proof.: There are only finitely many possible spaces \(\mathbb{D}\), corresponding to real Lie groups \(G\) of Hodge type and bounded rank, and compact subgroups \(K\subset G\). Let \(\chi_{\gamma}(t)\) denote the characteristic polynomial of \(\gamma\). Since it is monic of degree \(n\), we can apply the following effective form of Kronecker's theorem: **Theorem 3.14** ([1]).: _Let \(\alpha\) be an algebraic integer of degree \(d\leq n\). Either \(\alpha\) is a root of unity, or the largest Galois conjugate of \(\alpha\) has absolute value at least_ \[c_{n}=1+\frac{1}{52n\log(6n)}.\] Factoring \(\chi_{\gamma}(t)\) into irreducible factors, this theorem bounds the norm of the largest eigenvalue of \(\gamma\) away from \(1\), whenever \(\gamma\) is non-quasi-unipotent. Let \(\lambda_{1},\dots,\lambda_{n}\) be these eigenvalues and let \[L_{\gamma}:=\inf_{p\in\mathbb{D}}d_{\mathbb{D}}(p,\gamma(p))\] be the translation length. As \(L_{\gamma}\) is conjugation-invariant, it is solely a function \(F_{\mathbb{D}}(\lambda_{1},\dots,\lambda_{n})\) of the eigenvalues. Let \(S=G/K_{\max}\) be the symmetric space associated to the real group \(G\). Here \(K_{\max}\subset G\) is a maximal compact subgroup containing \(K\). 
Consider the map \[\mathbb{D}=G/K\xrightarrow{\pi}G/K_{\max}=S.\] For appropriate left \(G\)-invariant metrics, this map is length-contracting. Then, \(L_{\gamma}\geq\inf_{p\in S}d_{S}(p,\gamma(p))\). The formula for the translation length on \(S\) is the same as for the symmetric space \(\operatorname{SL}_{n}(\mathbb{R})/\operatorname{SO}_{n}(\mathbb{R})\): \[\inf_{p\in S}d_{S}(p,\gamma\cdot p)=\sqrt{(\log|\lambda_{1}|)^{2}+\dots+(\log |\lambda_{n}|)^{2}}.\] Hence, taking \(\mu_{n}<\log|c_{n}|\) and applying Theorem 3.14, we conclude that \(L_{\gamma}>\mu_{n}\) for non-quasi-unipotent \(\gamma\). **Corollary 3.15**.: _Consider a \(\mathbb{Z}\)-PVHS of rank \(n\) of compact type over a curve \(C\). Up to passing to a finite etale cover of fixed degree, there is an \(\epsilon>0\) such that, for any \(\gamma\in\pi_{1}(C)\) with \(\ell_{C}(\gamma)<\epsilon\), the monodromy of \(\gamma\) is trivial: \(\rho(\gamma)=I\in\Gamma\)._ Proof.: This follows from Lemma 3.13, the length-contracting property, and the fact that in the compact type case, the only quasi-unipotent elements of \(\Gamma\) are of finite order. Note that for all possible \(\Gamma\subset\operatorname{GL}_{n}(\mathbb{Z})\), the torsion can be killed at a fixed finite level, since this holds for the entire group \(\operatorname{GL}_{n}(\mathbb{Z})\). **Proposition 3.16**.: _Let \((C,\gamma)\) and \(\epsilon\) be as above, and let \(A\) be a hyperbolic collar on \(C\) containing \(\gamma\), of perimeter \(C_{g}\). Then the period map \(A\to\Gamma\backslash\mathbb{D}\) lifts to a period map \(\Phi\colon A\to\mathbb{D}\). Furthermore, the image of \(\Phi\) is contained in a ball of bounded radius \(B\)._ Proof.: The restriction of the period map to \(A\) lifts to \(\mathbb{D}=G/K\) by Corollary 3.15, because the monodromy of the core curve is trivial, and the core curve generates \(\pi_{1}(A)\). Define \(\Psi=\pi\circ\Phi\) to be the composition of the period map \(\Phi\colon H\to\mathbb{D}\) with the quotient map \(\pi\colon\mathbb{D}\to S=G/K_{\max}\) to the symmetric space. Then \(\pi\) is harmonic. So \(\Psi\), as the composition of a holomorphic and a harmonic map, is harmonic. Applying Proposition 3.12, we conclude that for \(p,q\) two points on the two boundary components of \(A\), the distance \[d_{S}(\Psi(p),\Psi(q))\] is bounded. Here we use that \(S\) is non-positively curved, simply connected, and that \(\pi\colon\mathbb{D}\to S\) is distance-decreasing, so \(\pi\circ\Phi\) is also distance-decreasing. The fibers of \(\pi\) are isometric, compact submanifolds \(K_{\max}/K\subset\mathbb{D}\). We conclude that the distance between \(\Phi(p)\) and \(\Phi(q)\) is also bounded, for instance, by the above distance plus the covering radius of a fiber of \(\pi\). **Theorem 3.17**.: _Up to admissible identification and conjugation, there are only finitely many representations \(\rho\colon\Pi_{g}\to\operatorname{GL}_{n}(\mathbb{Z})\), of compact type, which underlie a \(\mathbb{Z}\)-PVHS on some curve in \(\mathcal{M}_{g}\)._ Proof.: Let \(\Phi\colon C\to\Gamma\backslash\mathbb{D}\) be the period map of a \(\mathbb{Z}\)-PVHS of rank \(n\) on some curve \(C\in\mathcal{M}_{g}^{<\epsilon}\). Take a Bers pair-of-pants decomposition of \(C\) as in Theorem 3.7. There are only finitely many topological types of pants decomposition for surfaces of a given genus \(g\). Fix a set of "reference" pants decompositions \(\{R_{k}\}\), one for each possible topological type. 
We also fix a "reference" triple of seams on each pair of pants in \(R_{k}\) in such a way that the four points on each cuff alternate (or coincide in pairs; there are 4 topologically distinct ways to do this for each cuff). On each reference decomposition, choose an admissible identification \(\pi_{1}(R_{k},*)\simeq\Pi_{g}\). This specifies a "reference" set of Procesi generators \((\delta_{j})_{1\leq j\leq m}\) for each topological type \(R_{k}\) of pants decomposition. Then, we fix representatives of each Procesi generator \(\delta_{j}\) which decompose into a collection of segments of the following two types:

1. segments contained in a pair of pants, which terminate at marked points on the cuffs, and
2. segments circling around the cuff which connect two marked points coming from adjacent pairs of pants.

Using the Bers pants decomposition of \(C\), we may identify \(C\to R_{k}\) with one of the references, by an oriented homeomorphism preserving the decomposition, and sending the hyperbolic seams to the reference seams. This induces an admissible identification with \(\Pi_{g}\), and the given representatives of the Procesi generators \(\delta_{j}\) can be pulled back to \(C\). Applying Proposition 3.8, we further decompose each generator \(\delta_{j}\) into three types of segments:

1. segments geodesically crossing a half-collar of perimeter \(C_{g}\),
2. segments geodesically winding around a cuff, of a fixed homotopy class \(\nu\) relative to two distinguished points coming from opposite pairs of pants, and
3. segments in a fixed homotopy class \(\mu\) relative to two distinguished points on a truncated pair of pants \(P^{o}(\ell_{1},\ell_{2},\ell_{3})\) satisfying \(\ell_{i}\leq B_{g}\).

Let \(\widetilde{\Phi}\colon\widetilde{C}\to\mathbb{D}\) be the lift of the period map to the universal cover of \(C\) and let \([0,1]\to\widetilde{C}\) be a lift of the loop \(\delta_{j}\) to a segment in \(\widetilde{C}\). Then:

1. \(D_{\mu}\) bounds the length of a representative of a relative homotopy class \(\mu\) in the truncated pairs of pants (Prop. 3.11),
2. \(L_{\nu}=B_{g}\cdot\text{winding}(\nu)\) bounds the length of the geodesic representing \(\nu\) purely in terms of the relative homotopy class,
3. \(B\) bounds the radius of a ball covering the image of a collar (Prop. 3.16) whose core curve has length less than \(\epsilon\),
4. \(B^{\prime}\) bounds the length of a transverse geodesic on a half-collar with core curve of length at least \(\epsilon\) and perimeter \(C_{g}\), and
5. \(e\) is the total number of collars crossed.

Summing these contributions over the boundedly many segments of \(\delta_{j}\) shows that \(d_{\mathbb{D}}(\widetilde{\Phi}(0),\widetilde{\Phi}(1))\) is bounded. We conclude by Lemma 3.4 that, in turn, the trace \(\operatorname{tr}(\rho(\delta_{j}))\) is bounded. Then, the theorem follows as in Proposition 3.2.

**Corollary 3.18**.: _Let \(\mathcal{S}\) be a smooth connected quasi-projective complex algebraic variety and let \(\pi:\mathcal{Y}\to\mathcal{S}\) be a smooth projective morphism. There are only finitely many representations \(\pi_{1}(Y_{0},*)\to\operatorname{GL}_{n}(\mathbb{Z})\), up to conjugacy, which underlie a \(\mathbb{Z}\)-PVHS of compact type on some fiber \(Y_{s}\) of \(\pi:\mathcal{Y}\to\mathcal{S}\), after an identification \(\pi_{1}(Y_{0},*)\simeq\pi_{1}(Y_{s},*)\) induced by moving \(*\) in \(\mathcal{Y}\)._

Proof.: This follows from the discussion at the beginning of the section, using the Lefschetz hyperplane theorem.

## 4.
Douady spaces of polarized distribution manifolds In this section we abstract some key elements of the Hodge manifolds, in the case where \(\Gamma\) is cocompact. **Definition 4.1**.: A _distribution manifold_\((X,T^{||})\) is a compact, complex manifold \(X\), together with a holomorphic subbundle \(T^{||}\subset TX\) of its tangent bundle (i.e. a holomorphic distribution).1 Footnote 1: We do not require the distribution to be integrable. Let \(L\to X\) be a holomorphic line bundle and let \(h\) be a Hermitian metric on \(L\). We say that \((L,h)\) is _positive_ on \((X,T^{||})\) if the \((1,1)\)-form \(\omega_{L}:=\frac{i}{2\pi}\partial\overline{\partial}\log h\) satisfies \(\omega_{L}\big{|}_{T^{||}}>0\). We call \((L,h)\) a _polarization_ of the distribution manifold \((X,T^{||})\). We now recall fundamental results on the analogues of the Hilbert and Chow varieties for complex manifolds and analytic spaces. **Definition 4.2**.: An _analytic cycle_ on \(X\) is a finite formal \(\mathbb{Z}\)-linear combination \(\sum_{i}n_{i}[Z_{i}]\) of irreducible, closed, reduced analytic subspaces \(Z_{i}\subset X\) of a fixed dimension. An analytic cycle is _effective_ if \(n_{i}\geq 0\). We have then the following fundamental result of Barlet, see [1]. **Theorem 4.3**.: _Effective analytic cycles on \(X\) are parameterized by a countable union of analytic spaces, locally of finite type._ Call a connected component \(\mathfrak{B}\) of this analytic space a _Barlet space_. **Remark 4.4**.: In general, a Barlet space of an analytic space \(X\) of finite type need not be of finite type, even if \(X\) is a smooth, proper \(\mathbb{C}\)-variety. A famous counterexample is due to Hironaka: let \(C,D\subset M\) be two smooth curves in a smooth projective \(3\)-fold \(T\), with \(C\cap D=\{p,q\}\). We can consider the variety \[\widehat{M}:=Bl_{\widehat{C}}Bl_{D}(M\setminus q)\cup Bl_{\widehat{D}}Bl_{C}( M\setminus p),\] that is, we blow up \(M\) along \(C\) and \(D\), but in opposite orders at \(p\) and \(q\). If \(f\) is a fiber of one of the exceptional divisors, then the Barlet space containing \(f\) is not of finite type, as \(f\) admits a deformation to a cycle of the form \(f+(Z_{1}+Z_{2})\) where \(Z_{1}\) and \(Z_{2}\) are the strict transforms of the fibers at \(p\) and \(q\) of the first blow-up in the second blow-up. **Definition 4.5**.: Let \((X,T^{||})\) be a distribution manifold. A _parallel Barlet space_\(\mathfrak{B}^{||}\) of \((X,T^{||})\) is a connected component of the sublocus of \(\mathfrak{B}\) defined by the following property: \[\sum_{i}n_{i}[Z_{i}]\in\mathfrak{B}^{||}\text{ iff there is a dense open set}\] \[Z^{o}\subset\cup_{i}Z_{i}\text{ for which }TZ^{o}\subset T^{||}.\] This is visibly a locally closed analytic condition on the Barlet space. In fact, much more is true: **Theorem 4.6**.: _Let \((X,T^{||},L,h)\) be a polarized distribution manifold. Any parallel Barlet space \(\mathfrak{B}^{||}\) is a proper analytic space._ _Furthermore, there are only finitely many Barlet spaces parameterizing cycles of pure codimension \(d\) on which \(c_{1}(L)^{n-d}\) is bounded._ Proof.: Let \(g\) be an arbitrary hermitian metric on \(X\), for instance, we can construct \(g\) via a partition of unity. Define a smooth distribution \(T^{\perp}\subset TX\) by \(T^{\perp}_{x}:=(T^{||}_{x})^{\perp g}\). Then, we have a \(g\)-orthogonal splitting \(TX=T^{||}\oplus T^{\perp}\) as smooth \(\mathbb{C}\)-vector bundles. 
Let \(g^{\perp}\) denote the degenerate, semi-positive hermitian form on \(TX\) which is defined by \((0,g\big{|}_{T^{\perp}})\) with respect the decomposition \(TX=T^{||}\oplus T^{\perp}\). Let \(N>0\) and define a symmetric tensor by \[\widetilde{g}(v,w):=\omega_{L}(v,Jw)+Ng^{\perp}(v,w)\in S^{2}T^{*}X.\] We claim that \(\widetilde{g}\) is a Hermitian metric on \(X\) for sufficiently large \(N\). This follows from \(\omega_{L}(v,Jw)\) being positive-definite on \(T^{||}\), \(g^{\perp}\) vanishing on \(T^{||}\) and being positive definite on \(T^{\perp}\), and compactness of \(X\). For any codimension \(d\) analytic cycle \(Z:=\sum_{i}n_{i}[Z_{i}]\in\mathfrak{B}^{||}\), define \[\operatorname{vol}_{L}(Z)=\sum_{i}n_{i}\int_{Z_{i}}c_{1}(L)^{n-d}=[Z]\cdot c_{ 1}(L)^{n-d}.\] Observe that \(c_{1}(L)^{n-d}\) is pointwise positive on \(Z_{i}^{o}\subset Z_{i}\). Furthermore \(\operatorname{vol}_{L}(Z)\) is constant on a connected component of \(\mathfrak{B}^{||}\) because it is given as the intersection number on the right. Next, we define \[\operatorname{vol}_{\widetilde{g}}(Z):=\sum_{i}n_{i}\int_{Z_{i}}\operatorname{ vol}_{\widetilde{g}|Z_{i}}\] and observe \(\operatorname{vol}_{\widetilde{g}}(Z)=\operatorname{vol}_{L}(Z)\) because \(\widetilde{g}(\cdot,\cdot)\big{|}_{T^{||}}=\omega_{L}(\cdot,J\cdot)\big{|}_{ T^{||}}\) and \(TZ_{i}^{o}\subset T^{||}\). Thus, \(X\) admits a hermitian metric \(\widetilde{g}\) in which \(\operatorname{vol}_{\widetilde{g}}(Z)\) is constant on a connected component of \(\mathfrak{B}^{||}\), equal to \([Z]\cdot c_{1}(L)^{n-d}\). Let \(Z^{(1)},Z^{(2)},\cdots\) be a countable sequence of effective analytic cycles in (possibly different) connected components \(\mathfrak{B}^{||,(i)}\), for which \(\operatorname{vol}_{L}=\operatorname{vol}_{\widetilde{g}}\) remains bounded. By a theorem of Harvey-Schiffman, [10, Thm. 3.9] we can extract a convergent subsequence that converges to an effective analytic cycle \(Z^{(\infty)}\) for which \(\operatorname{vol}_{\widetilde{g}}(Z^{(i)})\) converges to \(\operatorname{vol}_{\widetilde{g}}(Z^{(\infty)})\). Such convergence defines the topology on \(\mathfrak{B}\). By [11, Prop. 2.3], the \(Z^{(i)}\) converge in the sense of currents of integration to \(Z^{(\infty)}\), and in particular, the integrals \(\int_{Z^{(i)}}\omega_{L}^{n-d}\) must converge to \(\int_{Z^{(\infty)}}\omega_{L}^{n-d}\) and so remain bounded. Additionally, we have \(\operatorname{vol}_{\widetilde{g}}(Z^{(\infty)})=\operatorname{vol}_{L}(Z^{( \infty)})\) and this equality holds for any choice \(N\) in the definition \(\widetilde{g}=\omega_{L}+Ng^{\perp}\). We conclude that there is a Zariski-dense open subset \(Z^{o}\subset Z^{\infty}\) for which \(TZ^{o}\subset(T^{\perp})^{\perp}=T^{||}\), as otherwise \(\operatorname{vol}_{\widetilde{g}}(Z^{(\infty)})\) would increase as \(N\) increases. Thus, the union of all components \(\mathfrak{B}^{||}\) for which \(c_{1}(L)^{n-d}\) is bounded is sequentially compact. Hence each component of \(\mathfrak{B}^{||}\) is a compact analytic space, and there are only finitely many components with bounded \(\operatorname{vol}_{L}\). The theorem follows. We now consider the analogue of Hilbert spaces. A _Douady space_ of \(X\) is an analytic space \(\mathfrak{D}\) parametrizing flat families of closed analytic subspaces of \(X\), see [13, SS9.1] for a precise definition. By the main theorem of Douady [13, pp. 
83-84], there is a universal analytic subspace \(\mathcal{Z}\subset\mathfrak{D}\times X\) which is flat over \(X\), and any flat family parameterized by a base \(M\) is the pullback along an analytic classifying morphism \(M\to\mathfrak{D}\). In general, a Douady space may only be locally of finite type, for similar reasons as the Barlet spaces. Given a sub-analytic space \(Z\subset X\), we can define an effective analytic cycle \([Z]\in\mathfrak{B}\) called the _support_. It is the positive linear combination \(\sum_{i}n_{i}[Z_{i}]\) where \(Z_{i}\) are the irreducible components of the reduction of \(Z\) that have top-dimensional set-theoretic support, and \(n_{i}\) is the generic order of non-reducedness of \(Z\) along \(Z_{i}\), see [11, Sec. 3.1]. There is an analogue, the _Douady-Barlet morphism_\([\cdot]\colon\mathfrak{D}\to\mathfrak{B}\), of the Hilbert-Chow morphism, sending an analytic space to its support. **Theorem 4.7** ([11, Prop. 3.4]).: _The Douady-Barlet morphism is proper on each component \(\mathfrak{D}\) of the Douady space._ **Definition 4.8**.: A _parallel Douady space_\(\mathfrak{D}^{||}\) is a connected component of the sublocus of \(Z\in\mathfrak{D}\) for which \([Z]\in\mathfrak{B}^{||}\). **Remark 4.9**.: It is important to note that the Zariski tangent space of \(Z\in\mathfrak{D}^{||}\) is not required to lie in \(T^{||}\). For instance, consider a flat family \(\mathcal{Z}^{*}\to C^{*}=C\setminus 0\) of complex submanifolds of \(X\), with the tangent bundle \(T\mathcal{Z}_{t}\) lying in \(T^{\parallel}\) for all \(t\in C^{*}\). The flat limit \(Z_{0}\) over the puncture might be nilpotently thickened in directions outside of \(T^{\parallel}\), if the total space of the family itself does not have a tangent bundle \(T\mathcal{Z}^{*}\) lying in \(T^{\parallel}\), and this could even occur generically along \(Z_{0}\). **Corollary 4.10**.: _Let \((X,T^{\parallel},L,h)\) be a polarized distribution manifold. Then, each connected component of \(\mathfrak{D}^{\parallel}\) is a proper analytic space._ Proof.: This follows directly from Theorem 4.7 and Theorem 4.3. **Theorem 4.11**.: _Let \(Z\in\mathfrak{D}^{\parallel}\) lie in a parallel Douady space. Then \(Z\) is projective, and \(L\big{|}_{Z}\) is an ample line bundle._ Proof.: A simplification of the proof in [1, Thm. 1.1] applies. It follows from Siu and Demailly's resolution [11, 12, 13] of the Grauert-Riemenschneider conjecture, applied to a resolution of \(Z\), that \(Z\) is Moishezon. We have the following lemma: **Lemma 4.12**.: _Let \(S\) be an open stratum of the singular stratification of \(\cup_{i}Z_{i}\). Then \(TS\subset T^{\parallel}\)._ Proof.: By assumption, there is a dense open \(Z^{o}\subset\cup_{i}Z_{i}\) for which \(TZ^{o}\subset T^{\parallel}\). We claim that \(TS\subset\overline{TZ^{o}}\) lies in the Zariski closure of \(TZ^{o}\) in \(TX\). Then the result will follow as \(T^{\parallel}\) is Zariski-closed in \(TX\). Let \(Z_{i}\) be an irreducible component containing \(S\) and consider the map \(d\pi_{i}\colon T\widetilde{Z}_{i}\to TX\) from a resolution. Let \(\widetilde{Z}_{i}^{o}:=\pi_{i}^{-1}(Z_{i}\cap Z^{o}).\) As \(d\pi_{i}\) is continuous and \(d\pi_{i}(T\widetilde{Z}_{i}^{o})\subset TZ^{o}\), we have \(\operatorname{im}(d\pi_{i})\subset\overline{TZ^{o}}\). The claim follows if we can show \(\operatorname{im}(d\pi_{i})\supset TS^{\prime}\), for a dense open \(S^{\prime}\subset S\), i.e. can we lift a generic tangent vector of \(S^{\prime}\) to \(\widetilde{Z}_{i}\)? 
This is immediate from the generic smoothness of \(\pi_{i}\big{|}_{\pi_{i}^{-1}(S)^{\operatorname{red}}}\). Lemma 4.12 implies we have \(L^{d}\cdot V>0\) for any subvariety \(V\) of dimension \(d\), because \(TV\) is generically contained in the tangent bundle of some singular stratum \(S\) and \(\frac{i}{2\pi}\partial\overline{\partial}\log(h)\) is positive definite on \(T^{\parallel}\). So \(Z\) satisfies the Nakai-Moishezon criterion. Then, a theorem of Kollar [14, Thm. 3.11] implies that \(Z\) is projective. **Definition 4.13**.: Let \(\mathfrak{C}\subset(\mathfrak{D}^{\parallel})^{\operatorname{red}}\) be an irreducible component of a parallel Douady space. For \(Z_{t}\in\mathfrak{C}\) let \(L_{t}:=L\big{|}_{Z_{t}}\). We say that \(\mathfrak{C}\) has _maximal variation_ if there exists an analytic open set \(U\subset\mathfrak{C}\) for which \((Z_{s},L_{s})\not\simeq(Z_{t},L_{t})\) for all \(s,t\in U\), \(s\neq t\). **Theorem 4.14**.: _Let \(\mathfrak{C}\) be an irreducible component of a parallel Douady space of \((X,T^{\parallel},L,h)\) with maximal variation. Then \(\mathfrak{C}\) is Moishezon._ Proof.: Let \(u\colon\mathfrak{Z}^{||}\to\mathfrak{C}\) be the universal flat family and let \(\mathfrak{L}\to\mathfrak{Z}^{||}\) be the universal polarizing line bundle. For any fixed \(n\in\mathbb{N}\), the locus \(\mathfrak{C}_{n}\subset\mathfrak{C}\) of projective (Thm. 4.11) schemes \(Z\in\mathfrak{C}\) on which \(nL=n\mathfrak{L}\big{|}_{Z}\) is not very ample is closed. Taking the sequence \[\cdots\subset\mathfrak{C}_{3!}\subset\mathfrak{C}_{2!}\subset\mathfrak{C}_{1! }\subset\mathfrak{C}\] gives a nested sequence of closed analytic subspaces. The intersection is empty since for all \(Z\in\mathfrak{C}\), there is some \(n_{Z}\in\mathbb{N}\) for which \(n_{Z}L\) is very ample, and \(n_{Z}\mid i!\) for all \(i\geq n_{Z}\). We conclude some \(\mathfrak{C}_{n}\) is empty for large enough \(n\), so \(|nL|\) is a projective embedding for all \(Z\in\mathfrak{C}\). Furthermore, the locus on which \(H^{i}(Z,nL)\) jumps in dimension is also closed, and so by the same argument, we may assume \(h^{i}(Z,nL)=0\) for all \(i>0\) and all \(Z\in\mathfrak{C}\). Then \(u_{*}(n\mathfrak{L})\) is a vector bundle of rank \[N+1:=\chi(Z,nL)=h^{0}(Z,nL).\] It is a vector bundle because \(\chi\) is constant in (analytic) flat families. Let \(\mathbb{P}\to\mathfrak{C}\) be the projective frame bundle of \(u_{*}(n\mathfrak{L})\), a principal holomorphic \(J=\operatorname{PGL}(N+1)\)-bundle. Points of \(\mathbb{P}\) correspond to some \(Z\subset X\), and a basis of sections of \(H^{0}(Z,nL)\), modulo scaling. We have an analytic map \[\phi\colon\mathbb{P}\to\mathcal{H}\] where \(\mathcal{H}\subset\operatorname{Hilb}(\mathbb{P}^{N})\) is the component of the Hilbert scheme with Hilbert polynomial \(\chi\), sending \((Z,[s_{0}:\cdots:s_{N}])\in\mathbb{P}\) to the closed subscheme of \(\mathbb{P}^{N}\) with the given embedding. Note \(\mathcal{H}\) is projective and \(\phi\) is equivariant with respect to the natural \(J\)-action on both sides. Consider the _set_ of algebraic cycles \(\operatorname{O}:=\{\overline{J\cdot x}\,\big{|}\,x\in\mathcal{H}\}\subset \operatorname{Chow}(\mathcal{H})\). A point of \(\operatorname{O}\) uniquely determines a \(J\)-orbit, since a \(J\)-orbit is recoverable from its closure. 
Since the action of \(J\) is algebraic on \(\mathcal{H}\), the space \(\operatorname{O}\) is stratified by algebraic varieties \[\operatorname{O}=\operatorname{O}_{1}\sqcup\cdots\sqcup\operatorname{O}_{m}\] with each \(\operatorname{O}_{j}\) an irreducible, locally closed set of some component \(\operatorname{Chow}_{j}(\mathcal{H})\) of the Chow variety. Let \(\mathcal{H}_{j}\subset\mathcal{H}\) be the locally closed set of points \(x\) for which \(\overline{J\cdot x}\in\operatorname{O}_{j}\) and choose the \(j\) such that \(\mathcal{H}_{j}\) is the largest-dimensional space intersecting \(\phi(\mathbb{P})\). These are the "generic" \(J\)-orbits which arise from choosing a basis of \(H^{0}(Z,nL)\). Since \(\mathbb{P}\) is irreducible, we have \(\phi(\mathbb{P})\subset\overline{\mathcal{H}}_{j}\). Observe that there is a rational map (a morphism on \(\mathcal{H}_{j}\)) \[\psi\colon\overline{\mathcal{H}}_{j}\dashrightarrow\overline{ \operatorname{O}}_{j}\] \[x\mapsto\overline{J\cdot x}\] with the closure of the latter taken in \(\operatorname{Chow}_{j}(\mathcal{H})\), which is projective. Let \(U\subset\mathfrak{C}\) be a small analytic open around a given point. There is a local analytic section of \(\mathbb{P}\big{|}_{U}\to U\), call it \(s_{U}\). Then, \(\phi\circ s_{U}\colon U\to\overline{\mathcal{H}}_{j}\) is analytic and \(\psi\) is rational, so the composition \[\psi\circ\phi\circ s_{U}\colon U\dasharrow\overline{\mathrm{O}}_{j}\] is a meromorphic map. Furthermore, since \(\psi\) collapses \(J\)-orbits, and \(\phi\) is \(J\)-equivariant, we conclude that this local meromorphic map is independent of choice of local section \(s_{U}\). So these maps patch together to give a meromorphic map \(\alpha\colon\mathfrak{C}\dasharrow\overline{\mathrm{O}}_{j}.\) Since \(\alpha\) is meromorphic, by Hironaka, there is a resolution of indeterminacy \[\mathfrak{C}\stackrel{{\beta}}{{\leftarrow}}\widetilde{ \mathfrak{C}}\xrightarrow{\gamma}\overline{\mathrm{O}}_{j}\] of \(\alpha=\gamma\circ\beta^{-1}\) with \(\beta\) bimeromorphic. Finally, we apply the assumption of maximal variation: There exists some analytic open \(U\subset\mathfrak{C}\) for which \((Z_{s},L_{s})\not\simeq(Z_{t},L_{t})\) for all \(s,t\in U\), \(s\neq t\). This implies that \((Z_{t},nL_{t})\not\simeq(Z_{s},nL_{s})\) for all \(s\neq t\) in a possibly smaller neighborhood. Thus, the \(J\)-orbits in \(\mathcal{H}\) corresponding to \((Z_{t},nL_{t})\) are distinct in an analytic open set. Hence \(\gamma\) is generically finite. We conclude that \(\widetilde{\mathfrak{C}}\) and thus \(\mathfrak{C}\) is Moishezon. **Remark 4.15**.: The assumption of maximal variation is necessary. For instance, let \(X\) be an arbitrary compact, complex manifold, and consider the distribution manifold for which \(T^{||}=0\). It admits a polarization by setting \(L=\mathcal{O}_{X}\) with \(h\) the trivial metric. Then, the Douady space of points in \(X\) is a parallel Douady space, isomorphic to \(X\) itself. But of course, \(X\) need not be Moishezon, so not all parallel Douady spaces are Moishezon in this generality. **Meta-Definition 4.16**.: We define _data of GAGA type_ on \(X\) to be a collection of holomorphic data \(\mathrm{Data}_{X}\) to which the GAGA theorem applies, upon restriction to a projective scheme \(Z\in\mathfrak{D}^{||}\). 
**Example 4.17**.: An example of data of GAGA type would be \(\mathrm{Data}_{X}=(F^{\bullet},\nabla)\) where \(F^{\bullet}\) is a descending filtration of holomorphic vector bundles on \(X\) and \(\nabla\) is a holomorphic connection on \(F^{0}\). For any parallel analytic space \(Z\in\mathfrak{D}^{||}\), the restriction of \(F^{\bullet}\) to \(Z\) is a filtration \(F^{\bullet}_{Z}\) of algebraic vector bundles, by Serre's GAGA theorem [10]. Similarly, a well-known extension of GAGA implies that the restriction of \(\nabla\) to a connection \(\nabla_{Z}\) on \(F^{0}_{Z}\) is an algebraic connection. **Meta-Theorem 4.18**.: _Let \(\mathrm{Data}_{X}\) be data of GAGA type on \(X\)._ _We say that an irreducible, closed analytic subspace \(\mathfrak{D}_{0}\subset\mathfrak{D}^{||}\) has maximal variation with respect to \(\mathrm{Data}_{X}\) if the isomorphism type of the restriction of this data to \(Z\in\mathfrak{D}_{0}\) is determinative in an analytic open set \(U\subset\mathfrak{D}_{0}\): \((Z_{s},\operatorname{Data}_{s})\not\simeq(Z_{t},\operatorname{Data}_{t})\) for all \(s\neq t\in U\)._ _Then Theorem 4.14 still holds: \(\mathfrak{D}_{0}\) is Moishezon._ Proof.: By GAGA, the restriction of \(\operatorname{Data}_{X}\) to any \(Z\in\mathfrak{D}_{0}\) is algebraic data, denoted \(\operatorname{Data}_{Z}\). The general form of such algebraic data, together with \(Z\), is parameterized by an algebraic variety (adding rigidifying data corresponding to an algebraic group action as necessary), admitting an algebraic compactification \(\mathcal{H}_{\operatorname{Data}}\). Then, we apply the same argument as in Theorem 4.14 to the classifying map \[\mathfrak{D}_{0}\dashrightarrow\mathcal{H}_{\operatorname{Data}}\] \[Z\mapsto(Z,\operatorname{Data}_{Z})\] to conclude that \(\mathfrak{D}_{0}\) is Moishezon. **Example 4.19**.: For the data of GAGA type \((F^{\bullet},\nabla)\) discussed in Example 4.16, \(\mathcal{H}_{\operatorname{Data}}\) can be concretely constructed as follows. Let \(\mathfrak{D}_{0}\) be an irreducible component of \(\mathfrak{D}^{\|}\) containing \(Z\) and with maximal variation with respect to \((F^{\bullet},\nabla)\). Denote by \(\pi:\mathfrak{Z}\to\mathfrak{D}_{0}\) the universal flat family and \(f:\mathfrak{L}\to\mathfrak{Z}\) the universal polarizing bundle. Let \(\mathcal{H}\) be the component of the Hilbert scheme that \(|nL|\) maps \(Z\) into. The Hilbert polynomials \(P^{\bullet}\) of the vector bundles \(F^{\bullet}_{Z}\) which arise from restricting \(F^{\bullet}\) are constant along \(Z\in\mathfrak{D}_{0}\). We may choose integers \(m_{p},n_{p}\gg 0\) for which any vector bundle (even coherent sheaf) with Hilbert polynomial \(P^{p}\) over any \(Z\in\mathcal{H}\) is a quotient of the form \[(-m_{p}L)^{\oplus n_{p}}\twoheadrightarrow F^{p}_{Z}.\] For instance, choose \(m_{p}\) uniformly over all of \(\mathcal{H}\) so that \(F^{p}_{Z}(m_{p}L)\) is globally generated with vanishing higher cohomology. Then for a fixed \(n_{p}\), there is a surjection \(\mathcal{O}_{Z}^{\oplus n_{p}}\twoheadrightarrow F^{p}_{Z}(m_{p}L)\) corresponding to a basis of global sections. Furthermore, this quotient is uniquely determined by the induced surjection \[H^{0}(Z,(k_{p}L)^{\oplus n_{p}})\twoheadrightarrow H^{0}(Z,F^{p}_{Z}((m_{p}+k_ {p})L))\] for all \(k_{p}\) large enough. We can ensure that \(h^{0}(Z,k_{p}L)\) is constant over all of \(\mathcal{H}\). 
So this defines an embedding of the relative moduli space of coherent sheaves with Hilbert polynomial \(P^{p}\) over \(\mathcal{H}\) into a Grassmannian bundle \(\operatorname{Gr}(V_{p})\) of the vector bundle \(V_{p}:=\pi_{*}(k_{p}L)^{\oplus n_{p}}\). This is the standard construction, due to Grothendieck [10], of an embedding of the quot-scheme into a Grassmannian, performed relatively over \(\mathcal{H}\). The inclusion \(F^{p}_{Z}\hookrightarrow F^{p-1}_{Z}\) is an element \(H^{0}(Z,(F^{p}_{Z})^{*}\otimes F^{p-1}_{Z})\). This vector space includes into \(H^{0}(Z,(m_{p}L)^{\oplus n_{p}}\otimes F^{p-1}_{Z})\) and by choosing \(m_{p}\gg m_{p-1}\), we can ensure that the latter receives a surjection from \(H^{0}(Z,(m_{p}L)^{\oplus n_{p}}\otimes(-m_{p-1}L)^{\oplus n_{p-1}}).\) Thus, the inclusion \(F_{Z}^{p}\hookrightarrow F_{Z}^{p-1}\) is determined by an \(n_{p}\times n_{p-1}\)-matrix of global sections of \((m_{p}-m_{p-1})L\), uniquely up to a subspace of this vector space of matrices. Choosing \(k_{p}\) so that \(k_{p-1}+m_{p-1}=k_{p}+m_{p}\) we can insure that \(F_{Z}^{p}\hookrightarrow F_{Z}^{p-1}\) is induced by an inclusion \(V_{p,Z}\hookrightarrow V_{p-1,Z}\) of the fibers over \(Z\in\mathcal{H}\). Thus, the isomorphism type of \(F_{Z}^{\bullet}\) as a filtered vector bundle can be rigidified in terms of flag-like data \(\operatorname{Fl}(F_{Z}^{\bullet})\) involving subspaces of, and morphisms between, the \(V_{p}\) vector bundles. Furthermore, the isomorphism type of \(F_{Z}^{\bullet}\) is uniquely determined by a \(J^{\prime}\)-orbit on \(\operatorname{Fl}(F_{Z}^{\bullet})\), for \(J^{\prime}\) an algebraic group. Concretely, \(J^{\prime}\) is the group of changes-of-basis of \(H^{0}(Z,F_{Z}^{p}(m_{p}L))\) and changes-of-lift of the inclusions \(F_{Z}^{p}\hookrightarrow F_{Z}^{p-1}\). Let \(\mathcal{H}_{\operatorname{fit}}\) be the principal \(J^{\prime}\)-bundle consisting of a filtered vector bundle \(F_{Z}^{\bullet}\) on some \(Z\in\mathcal{H}\) with Hilbert polynomial \(P^{\bullet}\), together with its rigidifying data in \(\operatorname{Fl}(F_{Z}^{\bullet})\). We have a forgetful map \(\mathcal{H}_{\operatorname{fit}}\to\mathcal{H}\). Over \(\mathcal{H}_{\operatorname{fit}}\), we construct the relative moduli space \(\mathcal{H}_{\operatorname{Data}}^{o}\to\mathcal{H}_{\operatorname{fit}}\) of algebraic connections \(\nabla\) on \(F^{0}\). We could also work with the algebraic subloci of \(\mathcal{H}_{\operatorname{Data}}^{o}\) for which \(\nabla\) is flat, or \(\nabla(F^{p})\subset F^{p-1}\otimes\Omega^{1}\) on \((Z^{\operatorname{red}})_{\operatorname{sm}}\). Take an algebraic compactification \(\mathcal{H}_{\operatorname{Data}}^{o}\hookrightarrow\mathcal{H}_{\operatorname {Data}}\). As in Theorem 4.14, we have a principal \(J\)-bundle \(\mathbb{P}\to\mathfrak{D}_{0}\) with \(J=\operatorname{PGL}(N+1)\) corresponding to changes of basis of \(H^{0}(Z,nL)\). Over \(\mathbb{P}\), we have a principal \(J^{\prime}\)-bundle \(\mathbb{P}^{\prime}\to\mathbb{P}\) consisting of the space of all rigidifying data for \(F_{Z}^{\bullet}\) as above. We also have a flat, algebraic connection \(\nabla_{Z}\) on \(F_{Z}^{0}\). So there is a holomorphic classifying map \(\mathbb{P}^{\prime}\to\mathcal{H}_{\operatorname{Data}}\), which is \(J^{\prime}\)- and \(J\)-equivariant for the actions on the source and target. The remainder of the argument of Theorem 4.14 applies. ## 5. 
Algebraicity of the non-abelian Hodge locus We now apply the general results of the previous section to the polarized distribution manifold \((X_{\Gamma},T^{||},L,h)\) where \(X_{\Gamma}=\Gamma\backslash\mathbb{D}\) for \(\Gamma\) co-compact, \(T^{||}\) is the Griffiths distribution, \(L\) is the Griffiths line bundle, and \(h\) is the equivariant hermitian metric. Let \(G=G_{1}\times\cdots\times G_{k}\) be the decomposition of the semisimple group \(G=\mathbf{G}^{\operatorname{ad}}(\mathbb{R})^{+}\) into \(\mathbb{R}\)-simple factors. These give the \(\mathbb{C}\)-simple factors of \(G_{\mathbb{C}}\) by [2, 4.4.10]. We have a decomposition \(\mathbb{D}=\mathbb{D}_{1}\times\cdots\times\mathbb{D}_{k}\) and on each factor \(\mathbb{D}_{i}\) we have a filtered vector bundle with flat connection. Let \((F_{i}^{\bullet},\nabla_{i})\) be the pullbacks of these to \(\mathbb{D}\). Then, they descend to \(X_{\Gamma}\) even when \(\Gamma\) does not split as a product of lattices \(\Gamma_{i}\subset G_{i}\). Let \(V_{i}\) denote the \(\mathbb{C}\)-local system on \(X_{\Gamma}\) of flat sections of \((F_{i}^{0},\nabla_{i})\). **Definition 5.1**.: We define the _Hodge data of GAGA type_ \[\operatorname{Hodge}_{X_{\Gamma}}=\{(F_{i}^{\bullet},\nabla_{i})\}_{i=1,\ldots,k}\] to be this \(k\)-tuple of filtered flat vector bundles. **Remark 5.2**.: It is important to remark that the universal filtered flat vector bundle \((F^{\bullet},\nabla)=\bigoplus_{i}(F^{\bullet}_{i},\nabla_{i})\) is not the same data of GAGA type as above! For instance, it may be impossible to tell how \((F^{\bullet},\nabla)\) splits, upon restriction to some \(Z\subset X_{\Gamma}\). **Remark 5.3**.: Let \(Z\in\mathfrak{D}^{||}\) be reduced and irreducible. Suppose \(\widetilde{Z}\to Z\) is a resolution of singularities. Then \(\widetilde{Z}\) admits a \(\mathbb{Z}\)-PVHS by pulling back \((V_{\mathbb{Z}},F^{\bullet},\nabla)\). The pullback of \(\operatorname{Hodge}_{X_{\Gamma}}=\{(F^{\bullet}_{i},\nabla_{i})\}_{i=1,\dots,k}\) constitutes the data of a splitting of the corresponding \(\mathbb{R}\)-VHS into factors. Let \(V\) be the local system of flat sections of \(\nabla_{\widetilde{Z}}\). The \(\mathbb{Z}\)-PVHS on \(\widetilde{Z}\), and thus, the period map \(\Phi\colon\widetilde{Z}\to X_{\Gamma}\), is recoverable from \((Z,\operatorname{Hodge}_{Z})\) and one critical missing piece of information: the location of the integral lattice \(V_{\mathbb{Z},*}\hookrightarrow V_{*}\) in a fiber over some base point \(*\in\widetilde{Z}\)--this is the only data which cannot be captured coherently on \(X_{\Gamma}\) itself, and to which GAGA cannot be applied. Now, we leverage the fact that the lattice \(V_{\mathbb{Z},*}\) must be invariant under parallel transport. **Proposition 5.4**.: _Let \(Z\in\mathfrak{D}^{||}\) be irreducible and reduced, and suppose \(\widetilde{Z}\to Z\) is a resolution of singularities. Let \((V_{\mathbb{Z}},F^{\bullet})\) be the corresponding pullback \(\mathbb{Z}\)-PVHS and let \(*\in\widetilde{Z}\) be a base point. Let_ \[\rho\colon\pi_{1}(\widetilde{Z},*)\to\operatorname{GL}(V_{\mathbb{Z},*})\] _be the monodromy representation and let \(H=\prod_{i\in I}G_{i}\subset G\) be the collection of simple factors in which \(\operatorname{im}\rho\) is Zariski-dense. 
Fixing a frame of \(V_{\mathbb{Z},*}\), the infinitesimal changes-of-frame which give rise to a lattice preserved by \(\rho\) are contained in_ \[\prod_{i\in I}\mathbb{C}\times\prod_{i\notin I}\mathfrak{gl}(V_{i}).\] Proof.: An infinitesimal change-of-frame \(a\in\mathfrak{gl}(V_{*})\) resulting in a new monodromy-invariant lattice is exactly a matrix commuting with \(\operatorname{im}(\rho)\), and thus commuting with \(\mathbf{H}(\mathbb{R})\). Since \(V_{i}\) is an irreducible representation of \((G_{i})_{\mathbb{C}}\), Schur's lemma implies that \(a\) acts by a scalar \(\lambda_{i}\) on each summand \(V_{i}\subset V\) for which \(G_{i}\subset H\). **Definition 5.5**.: Given any analytic subspace \(Z\subset X_{\Gamma}\) we define \(\Gamma_{Z}\) as the image of \(\pi_{1}(\widetilde{Z})\to\Gamma\) for some resolution of singularities \(\widetilde{Z}\to Z^{\operatorname{red}}\). **Lemma 5.6**.: _Let \(Z^{\nu}\to Z^{\operatorname{red}}\) be the normalization. Then, \(\Gamma_{Z}\subset\Gamma\) is the image of \(\pi_{1}(Z^{\nu})\). It is also the image of \(\pi_{1}(U)\) for any dense open subset \(U\subset(Z^{\operatorname{red}})_{\operatorname{sm}}\)._ Proof.: Let \(Z^{\nu}_{\rm sm}\) denote the nonsingular locus. Then \(\pi_{1}(Z^{\nu}_{\rm sm})\twoheadrightarrow\pi_{1}(Z^{\nu})\) is surjective. The same property holds for the inverse image of \(Z^{\nu}_{\rm sm}\) or \(U\) in any desingularization. Thus, \(\pi_{1}(\widetilde{Z})\), \(\pi_{1}(Z^{\nu}_{\rm sm})\), \(\pi_{1}(Z^{\nu})\), \(\pi_{1}(U)\) all have the same image in \(\Gamma=\pi_{1}(X_{\Gamma})\). **Proposition 5.7**.: _Let \(Z\in\mathfrak{D}^{||}\) be irreducible and reduced. The group \(\Gamma_{Z}\) only jumps in size, in an open neighborhood of \(Z\in\mathfrak{D}^{||}\)._ Proof.: Let \((C,0)\to\mathfrak{D}^{||}\) be an analytic arc, and consider the pullback family \(\mathfrak{Z}\to(C,0)\), with \(\mathfrak{Z}_{0}=Z\). Let \(\mathcal{W}=\mathfrak{Z}^{\nu}\) be the normalization of the total space. The general fiber \(\mathcal{W}_{t}\) is normal, so \(\Gamma_{Z_{t}}={\rm im}(\pi_{1}(\mathcal{W}_{t}))\) by Lemma 5.6. This is the same group for all \(t\in C\setminus 0\) if we assume (as we may) that \(\mathcal{W}\) is a fiber bundle over \(C\setminus 0\). There is a deformation-retraction \(\mathcal{W}\to\mathcal{W}_{0}\) to the central fiber. Tracing an element of \(\pi_{1}(\mathcal{W}_{t})\) through the retraction, we get a free homotopy from any \(\gamma_{t}\in\pi_{1}(\mathcal{W}_{t})\) to an element \(\gamma_{0}\in\pi_{1}(\mathcal{W}_{0})\). Conversely, we can lift any element of \(\pi_{1}(\mathcal{W}_{0})\) to an element of \(\pi_{1}(\mathcal{W}_{t})\): We have \(\pi_{1}(\mathcal{W}_{0})=\pi_{1}(\mathcal{W})=\pi_{1}(\mathcal{W}\setminus( (\mathcal{W}_{0})_{\rm sing}\cup\mathcal{W}_{\rm sing}))\) because \(\mathcal{W}\) is normal and \((\mathcal{W}_{0})_{\rm sing}\cup\mathcal{W}_{\rm sing}\) has codimension \(2\). Thus, any element of \(\pi_{1}(\mathcal{W}_{0})=\pi_{1}(\mathcal{W})\) can be represented by a loop in \(\mathcal{W}\) avoiding both \((\mathcal{W}_{0})_{\rm sing}\) and \(\mathcal{W}_{\rm sing}\). Then, this loop can be deformed off its intersection with \((\mathcal{W}_{0})_{\rm sm}\) as \((\mathcal{W}_{0})_{\rm sm}\) is a locally smooth divisor in \(\mathcal{W}_{\rm sm}\). So we can represent the loop in \(\mathcal{W}\setminus\mathcal{W}_{0}\). 
Finally, \(\pi_{1}(\mathcal{W}\setminus\mathcal{W}_{0})\) is a \(\mathbb{Z}\)-extension of \(\pi_{1}(\mathcal{W}_{t})\) because it is a fiber bundle over the punctured disk \(C\setminus 0\). Thus, \(\Gamma_{Z_{t}}={\rm im}(\pi_{1}(\mathcal{W}_{0}))\). Then the natural morphism \(\mathcal{W}_{0}\to\mathfrak{Z}_{0}=Z\) is a finite birational morphism because \(Z\) is reduced. Thus, it factors the normalization \(Z^{\nu}\to\mathcal{W}_{0}\to Z\) and so \({\rm im}(\pi_{1}(Z^{\nu}))=\Gamma_{Z}\subset\Gamma_{Z_{t}}={\rm im}(\pi_{1}( \mathcal{W}_{0})).\) Thus \(\Gamma_{Z}\) only jumps in size. **Remark 5.8**.: The same statement holds, up to passing to a finite index subgroup of \(\Gamma_{Z}\), when \(Z\) is generically non-reduced. **Theorem 5.9**.: _If \(Z\subset X_{\Gamma}\) is irreducible, reduced, and \(\Gamma_{Z}\) is Zariski-dense in \(G\), then any irreducible component \(\mathfrak{C}\subset\mathfrak{D}^{||}\) containing \(Z\) has maximal variation with respect to \({\rm Hodge}_{X_{\Gamma}}\). In particular, \(\mathfrak{C}\) is Moishezon by Meta-Theorem 4.18._ Proof.: We must find an analytic open set \(U\subset\mathfrak{C}\) for which \[(Z_{s},\{(F_{i}^{\bullet},\nabla_{i})\})\not\simeq(Z_{t},\{(F_{i}^{\bullet}, \nabla_{i})\})\] for all \(s\neq t\in U\). Choose \(U\) to be a small neighborhood of \(Z\in\mathfrak{C}\). Since \(Z\) is irreducible and reduced, we can assume that \(Z_{t}\) is irreducible and reduced for all \(t\in U\). Applying Proposition 5.7, we may ensure that all \(Z_{t}\in U\) satisfy the property that \(\Gamma_{Z_{t}}\) is Zariski-dense in \(G\). It suffices to show there are no holomorphic arcs \(C\to U\) for which the isomorphism type of \((Z_{t},\{(F_{i}^{\bullet},\nabla_{i})\})\) is constant over all \(t\in C\). Choose a smooth base point \(*\in Z_{t}\). Then by Proposition 5.4, the only deformations of the lattice \(V_{\mathbb{Z},*}\subset V_{*}\) which remain invariant under \(\nabla_{\bar{Z}_{t}}\) are those which differ by scaling each summand of \(V=\bigoplus V_{i}\) by some \(\lambda_{i}\in\mathbb{C}^{*}\). But such scaling does not change the period map, as the Hodge flag \(F^{\bullet}=\bigoplus F_{i}^{\bullet}\) is also preserved by this scaling action. Thus, \(\operatorname{Hodge}_{X_{\Gamma}}\) is determinative on \(U\)--if it were non-determinative, the fixed data \((Z_{t},\{(F_{i}^{\bullet},\nabla_{i})\})\) would admit a flat deformation of the local system \(V_{\mathbb{Z}}\) which induced a non-isomorphic Hodge filtration. **Remark 5.10**.: One could as easily have worked with Barlet spaces, since the support morphism \([\cdot]\colon\mathfrak{C}\to\mathfrak{B}^{||}\) will be bimeromorphic onto its image, under the assumptions of Theorem 5.9. The disadvantage is that the embedding into a compact, algebraic parameter space, as in Example 4.19, is unclear for Barlet spaces. **Theorem 5.11**.: _Let \(\mathcal{Y}\to\mathcal{S}\) be a smooth projective family over a quasiprojective variety \(\mathcal{S}\). Then the non-abelian Hodge locus of compact type \(\operatorname{NHL}_{c}(\mathcal{Y}/\mathcal{S},\operatorname{GL}_{n})\) is algebraic._ Proof.: Let \(Y_{s}\) be a fiber. As we saw in Section 2, the data of a \(\mathbb{Z}\)-PVHS on \(Y_{s}\) with generic Mumford-Tate group \(\mathbf{G}\subset\operatorname{GL}_{n}\) and monodromy \(\mathbf{H}\) is completely determined by 1. a holomorphic, Griffiths transverse period map \(\Phi_{s}:Y_{s}\to X_{\Gamma_{H}}\) whose monodromy image is Zariski-dense, and 2. 
a point in \(\mathbb{D}_{H^{\prime}}\) corresponding to a summand on which the \(\mathbb{Z}\)-PVHS is locally constant. Thus, up to passing to a finite index subgroup of fixed level, the monodromy representation of such a \(\mathbb{Z}\)-PVHS has a reduction of structure to the product \(\mathbf{G}=\mathbf{H}\times\mathbf{H}^{\prime}\) where the corresponding local system has trivial monodromy on the summand associated to \(\mathbf{H}^{\prime}\). Hence, possibly passing to a smaller value of \(n\), we can restrict our attention to the \((Y_{s},\nabla_{s})\in\operatorname{NHL}_{c}(\mathcal{Y}/\mathcal{S}, \operatorname{GL}_{n})\) which underlie a \(\mathbb{Z}\)-PVHS \(\mathbb{V}\) with Zariski-dense monodromy in the generic Mumford-Tate group. By Corollary 3.18, only finitely many representations of \(\pi_{1}(Y_{s})\) of compact type can appear in this manner. Thus, there is a finite list of compact Hodge manifolds \(X_{\Gamma}\) which receive all the period maps for such \((Y_{s},\nabla_{s})\). So to prove the theorem, we may restrict our attention to a single compact period target \(\Gamma\backslash\mathbb{D}=X_{\Gamma}\). It remains to show: The space of pairs \((Y_{s},\Phi_{s})\) of a fiber of \(\mathcal{Y}\to\mathcal{S}\), together with a Griffiths' transverse map \(\Phi_{s}\colon Y_{s}\to X_{\Gamma}\) with Zariski-dense monodromy is an algebraic variety (and the maps into the relative de Rham and Dolbeault spaces are algebraic). We first prove that each irreducible analytic component of the space of pairs \((Y_{s},\Phi_{s})\) is algebraic, then we prove that the number of components is finite. Fix an irreducible analytic component \(B\subset\operatorname{NHL}_{c}(\mathcal{Y}/\mathcal{S},\operatorname{GL}_{n}).\) There is an analytic Zariski open subset \(B^{o}\subset B\) on which \(\operatorname{im}(\Phi_{s})\), taken with its reduced scheme structure, form a flat family of closed analytic subspaces of \(X_{\Gamma}\) over \(B^{o}\). So there is an irreducible component \(\mathfrak{C}\subset\mathfrak{D}^{||}\) for which \(\operatorname{im}(\Phi_{s})\in\mathfrak{C}\) for \((Y_{s},\Phi_{s})\in B^{o}\). Since \(Y_{s}\) is smooth, the morphism \(Y_{s}\to\Phi_{s}(Y_{s})\) factors through the normalization \(Y_{s}\to\Phi_{s}(Y_{s})^{\nu}\). Thus, \(\Gamma_{\operatorname{im}(\Phi_{s})}\) contains the image of \(\pi_{1}(Y_{s})\) in \(\Gamma\). Since we have restricted to the case where the monodromy is Zariski-dense, \(\mathfrak{C}\) is Moishezon by Theorem 5.9. Let \(\mathfrak{Z}\to\mathfrak{C}\) be the universal family. For all \((Y_{s},\Phi_{s})\in B^{o}\), the period mapping \(\Phi_{s}\) factors through the inclusion \(\operatorname{im}(\Phi_{s})\hookrightarrow\mathfrak{Z}\) as a fiber of the universal family. That is, we have a map \(\Xi\colon\mathcal{Y}\times_{\mathcal{S}}B^{o}\to\mathfrak{Z}\) for which \(\Phi=\pi_{X_{\Gamma}}\circ\Xi\). The analytic deformations of \((Y_{s},\Phi_{s})\) in \(B\) are exactly the isomonodromic deformations of the local system \(V_{\mathbb{Z}}\) on \(Y_{s}\) to nearby fibers, which underlie a \(\mathbb{Z}\)-PVHS. But for \((Y_{s},\Phi_{s})\in B^{o}\), these are exactly the ways to deform the inclusion \(\Xi_{s}\colon Y_{s}\hookrightarrow\mathfrak{Z}\) of fibers. 
Since \(\mathcal{Y}\to\mathcal{S}\) is algebraic and \(\mathfrak{Z}\to\mathfrak{C}\) is Moishezon, the irreducible component of \(\operatorname{Hom}_{\operatorname{fiber}}(\mathcal{S},\mathfrak{Z})\), the space of morphisms from a fiber of \(\mathcal{Y}\) to a fiber of \(\mathfrak{Z}\), which contains \((Y_{s},\Xi_{s})\in B^{o}\), is Moishezon. The inclusion into \(M_{\operatorname{dR}}(\mathcal{Y}/\mathcal{S},\operatorname{GL}_{n})\) is Moishezon because \(\nabla_{s}\) is the pull back along \(\Xi_{s}\) of the relative connection on \(F^{0}\) on the universal family over \(\mathfrak{Z}\to\mathfrak{C}\). The relative connection on \(F^{0}\) is Moishezon, by GAGA. Thus, \(B^{o}\) and its closure \(B\) are algebraic, as they are Moishezon subsets of an algebraic variety. The inclusion into \(M_{\operatorname{Dol}}(\mathcal{Y}/\mathcal{S},\operatorname{GL}_{n})\) is Moishezon by the same reasoning, applied to the associated graded of the universal Hodge flag over \(\mathfrak{Z}\to\mathfrak{C}\), equipped with its Higgs field. Finally, it remains to prove that (1) only finitely many irreducible components \(\mathfrak{C}\) of the parallel Douady space appear, and (2) for each one that appears, the number of irreducible components of the space \(\operatorname{Hom}_{\operatorname{fiber}}(\mathcal{Y},\mathfrak{Z})\) is finite. Let \(F^{\bullet}\) be the Hodge filtration on \(Y_{s}\) coming from a period map \(\Phi_{s}\colon Y_{s}\to X_{\Gamma}\) and let \(A\to\mathcal{Y}\) be an ample line bundle on the universal family. Then by Simpson [22, Lemma 3.3], the vector bundles \(F^{p}\) enjoy the following property: If \(m_{s}\) is an integer for which \(T_{Y_{s}}(m_{s}A)\) is globally generated, then \(\mu_{A}(F^{p+1})\leq\mu_{A}(F^{p})+mn\). Here \(\mu_{A}\) is the slope with respect to \(A\). Note that \(\mu_{A}(F^{0})=0\) because \(F^{0}\) has a flat structure. We may choose an \(m_{s}=m\) uniformly over all of \(\mathcal{S}\). We conclude that the slopes \(\mu_{A}(F^{p})\) are bounded, in a way depending only on \(\mathcal{Y}\to\mathcal{S}\). In turn, \(A^{d-1}\cdot\det(F^{p})\) is bounded for all \(p\), and so there is an a priori bound on \(A^{d-1}\cdot L\), where \(L\) is the Griffiths bundle. It follows that \(A^{d-r}\cdot L^{r}\) is bounded for any \(r\). This bounds the Griffiths volume of the image \(\Phi_{s}(Y_{s})\) of any period map, and so by Theorem 4.6, only finitely many components of the parallel Barlet space \(\mathfrak{B}^{\|}\) of \(X_{\Gamma}\) occur as the support of period images from \(Y_{s}\). The same finiteness holds for relevant components \(\mathfrak{C}\) of the parallel Douady space, as we are taking period images with their reduced scheme structure, see Remark 5.10. Finally, the bounds on \(A^{d-r}\cdot L^{r}\) also bound the volume of the graph \(\Gamma(\Xi_{s})\) of a morphism \((Y_{s},\Xi_{s})\in\operatorname{Hom}_{\operatorname{fiber}}(\mathcal{Y}, \mathfrak{Z})\), viewed as a subvariety of \(\mathcal{Y}\times\mathfrak{Z}\). We conclude that there must be only finitely many components of \(\operatorname{Hom}_{\operatorname{fiber}}(\mathcal{Y},\mathfrak{Z})\).
2310.04993
Prompt-augmented Temporal Point Process for Streaming Event Sequence
Neural Temporal Point Processes (TPPs) are the prevalent paradigm for modeling continuous-time event sequences, such as user activities on the web and financial transactions. In real-world applications, event data is typically received in a \emph{streaming} manner, where the distribution of patterns may shift over time. Additionally, \emph{privacy and memory constraints} are commonly observed in practical scenarios, further compounding the challenges. Therefore, the continuous monitoring of a TPP to learn the streaming event sequence is an important yet under-explored problem. Our work paper addresses this challenge by adopting Continual Learning (CL), which makes the model capable of continuously learning a sequence of tasks without catastrophic forgetting under realistic constraints. Correspondingly, we propose a simple yet effective framework, PromptTPP\footnote{Our code is available at {\small \url{ https://github.com/yanyanSann/PromptTPP}}}, by integrating the base TPP with a continuous-time retrieval prompt pool. The prompts, small learnable parameters, are stored in a memory space and jointly optimized with the base TPP, ensuring that the model learns event streams sequentially without buffering past examples or task-specific attributes. We present a novel and realistic experimental setup for modeling event streams, where PromptTPP consistently achieves state-of-the-art performance across three real user behavior datasets.
Siqiao Xue, Yan Wang, Zhixuan Chu, Xiaoming Shi, Caigao Jiang, Hongyan Hao, Gangwei Jiang, Xiaoyun Feng, James Y. Zhang, Jun Zhou
2023-10-08T03:41:16Z
http://arxiv.org/abs/2310.04993v2
# Prompt-augmented Temporal Point Process for Streaming Event Sequence ###### Abstract Neural Temporal Point Processes (TPPs) are the prevalent paradigm for modeling continuous-time event sequences, such as user activities on the web and financial transactions. In real-world applications, event data is typically received in a _streaming_ manner, where the distribution of patterns may shift over time. Additionally, _privacy and memory constraints_ are commonly observed in practical scenarios, further compounding the challenges. Therefore, the continuous monitoring of a TPP to learn the streaming event sequence is an important yet under-explored problem. Our work addresses this challenge by adopting Continual Learning (CL), which makes the model capable of continuously learning a sequence of tasks without catastrophic forgetting under realistic constraints. Correspondingly, we propose a simple yet effective framework, PromptTPP1, by integrating the base TPP with a continuous-time retrieval prompt pool. The prompts, small learnable parameters, are stored in a memory space and jointly optimized with the base TPP, ensuring that the model learns event streams sequentially without buffering past examples or task-specific attributes. We present a novel and realistic experimental setup for modeling event streams, where PromptTPP consistently achieves state-of-the-art performance across three real user behavior datasets. Footnote 1: Our code is available at [https://github.com/yanyanSann/PromptTPP](https://github.com/yanyanSann/PromptTPP). ## 1 Introduction Event sequences are ubiquitous in a wide range of applications, such as healthcare, finance, social media, and so on. Neural TPPs Mei and Eisner (2017); Shchur et al. (2020); Zuo et al. (2020); Zhang et al. (2020); Yang et al. (2022) have emerged as the dominant paradigm for modeling such data, thanks to their ability to leverage the rich representation power of neural networks. However, most existing works assume a _static_ setting, where the TPP model is trained on the entire data, and parameters remain fixed after training. In contrast, real-world event data usually arrives in a _streaming_ manner, rendering it impractical to store all data and retrain the model from scratch at each time step due to computational and storage costs. As shown in Figure 1, a common approach is to use sliding windows to frame the data for model training and prediction. Traditional schemes include pretraining a TPP that is then used for all the following test periods, retraining the TPP on the data of each sliding window, and online TPPs. However, they either may fail to adapt to new data or suffer from _catastrophic forgetting_ (see Appendix A for an empirical analysis). In our work, we approach the problem by adopting Continual Learning (CL) Hadsell et al. (2020); Hao et al. (2023); Chu and Li (2023); Chu et al. (2023), a relevant area studying how systems learn sequentially from a continuous stream of correlated data. Yet, classical CL models are not fully applicable to our problem. A major line of CL methods (Cha et al., 2021; Buzzega et al., 2020) rely on a rehearsal buffer to retrain on a portion of past examples. However, they become ineffective when a rehearsal buffer is not allowed - for example, in real-world scenarios where data privacy matters (Shokri and Shmatikov, 2015) or there are resource constraints. 
Another branch of works (Ke et al., 2020) bypass the forgetting issue by assuming known task identity at test time, but knowing task identity at test time restricts practical usage. Furthermore, the problem of sequential tasks of event sequence in continuous time have barely been studied. To develop a CL algorithm for such data in real-world scenarios with applicability and generality, we draw inspiration from recent advances in prompt-augmented learning (Liu et al., 2022; Varshney et al., 2022; Cho et al., 2022; Li et al., 2023; Chu et al., 2023; Wang et al., 2023). Prompt-augmented learning is a form of machine learning that involves adding additional information or prompts to the training data in order to further improve the performance of the model. This can include adding labels or annotations to the data, providing additional context to help the model better understand the data, or incorporating feedback from human experts to guide the learning process. By incorporating these prompts, the model is able to learn more effectively and make more accurate predictions. Prompt-augmented learning has been used successfully in a variety of applications, including natural language processing, computer vision, and speech recognition. Intuitively, prompt-augmented learning reformulates learning downstream tasks from directly adapting model weights to designing prompts that "instruct" the model to perform tasks conditionally while maintaining model plasticity. Thus, it is promising to leverage prompts to sequentially learn knowledge and further store learned knowledge of event sequence in the CL context. While prompt learning (Wang et al., 2022, 2022) already demonstrates its effectiveness on multiple CL benchmarks in language modeling, we wish to extend their success to the models of neural TPPs. To this end, we propose **PromptTPP**, a novel CL framework whose basis is a **continuous-time retrieval prompt pool** for modeling streaming event sequences. Specifically, we develop a module of _temporal prompt_ that learns knowledge and further store the learned knowledge for event sequences in _continuous time_. To improve the applicability, building upon prior works (Wang et al., 2022), we structure the prompts in a key-value shared memory space called the _retrieval prompt pool_, and design a retrieval mechanism to dynamically lookup a subset of task-relevant prompts based on the instance-wise input of event sequences. The retrieval prompt pool, which is optimized jointly with the generative loss, ensures that shared (unselected) prompts encode shared knowledge for knowledge transfer, and unshared (selected) prompts encode task-specific knowledge that helps maintain model plasticity. PromptTPP has two distinctive characteristics: (i) **applicability**: despite the effectiveness in augmenting TPP with CL, the prompt pool and the event retrieval mechanism removes the necessity of a rehearsal buffer and knowing the task identity, making the method applicable to modeling the event streams in a more realistic CL setting, i.e., memory efficient and task agnostic. (ii) **generality**: our approach is general-purpose in the sense that it can be integrated with any neural TPPs. In summary, our main contributions are: * We introduce PromptTPP, a novel prompt-augmented CL framework for neural TPPs. It represents a new approach to address the challenges of modeling streaming event sequences by learning a pool of continuous-time retrieval prompts. 
These prompts serve as parameterized instructions for base TPP models to learn tasks sequentially, thus enhancing the performance of the model. * We formalize an experimental setup for evaluating the streaming event sequence in the context of CL and demonstrate the effectiveness of our proposed method across three real user datasets. * By connecting the fields of TPP, CL, and prompt learning, our method provides a different perspective for solving frontier challenges in neural TPPs. Figure 1: Overview of the classical schemes and PromptTPP framework for streaming event sequences. Preliminaries **Generative Modeling of Event Sequences.** Suppose we observe \(I\) events at a fixed time interval \([0,T]\). Each event is denoted mnemonically as \(e@t\) (i.e., "type e at time t") and the sequence is denoted as \(s_{[0,T]}=[e_{1}@t_{1},\ldots,e_{I}@t_{I}]\) where \(0<t_{1}<\ldots<t_{I}\leq T\) and \(e_{i}\in\{1,\ldots,E\}\) is a discrete event type. Note that representations in terms of time \(t_{i}\) and the corresponding inter-event time \(\tau_{i}=t_{i}-t_{i-1}\) are isomorphic, **we use them interchangeably**. Generative models of event sequences are TPPs. Specifically, TPPs define functions \(\lambda_{e}\) that determine a finite **intensity**\(\lambda_{e}(t\mid s_{[0,t)})\geq 0\) for each event type \(e\) at each time \(t>0\) such that \(p_{e}(t\mid s_{[0,t)})=\lambda_{e}(t\mid s_{[0,t)})dt\). Then the log-likelihood of a TPP given the entire event sequence \(s_{[0,T]}\) is \[\mathcal{L}_{ll}=\sum_{i=1}^{I}\log\lambda_{e_{i}}(t_{i}\mid s_{[0,t_{i})})- \int_{t=0}^{T}\sum_{e=1}^{E}\lambda_{e}(t\mid s_{[0,t)})dt, \tag{1}\] Instead of posing strong parametric assumptions on the intensity function, neural TPPs (Du et al., 2016; Mei and Eisner, 2017; Zhang et al., 2020; Zuo et al., 2020; Yang et al., 2022) use expressive representations for the intensity function via neural networks and maximize the associated log-likelihood equation 1 via stochastic gradient methods. **CL Problem Formulation for Streaming Event Sequences.** The typical CL problem is defined as training models on a continuum of data from a sequence of tasks. Given a sequence \(s_{[0,T]}\), we split it based on a sliding window approach shown in Figure 1 and form a sequence of tasks over the time \(\{\mathcal{D}_{0},...,\mathcal{D}_{N}\}\), where the \(\mathcal{T}\)-th task \(\mathcal{D}_{\mathcal{T}}=(s_{train}^{\mathcal{T}},s_{test}^{\mathcal{T}})\) contains a tuple of train and test set of event sequences and the two sets have no overlap in time. Data from the previous tasks are not available when training for future tasks. We use the widely-adopted assumption that the task boundaries are clear and the task switch is sudden at training time (Pham et al., 2021). Our goal is to continually learn the sequences while avoiding catastrophic forgetting from the previous tasks. **Prompt Learning.** Prompt learning methods propose to simply condition frozen language models (LMs) to perform down-stream tasks by learning prompt parameters that are prepended to the input tokens to instruct the model prediction. 
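Before turning to prompts for TPPs, it may help to make the training objective in equation 1 above concrete. The sketch below evaluates the negative log-likelihood of a single event sequence given an arbitrary conditional intensity function; the Monte Carlo approximation of the integral term, the PyTorch interface, and the function name are illustrative assumptions rather than the implementation used in the paper.

```python
import torch

def tpp_negative_log_likelihood(intensity_fn, event_times, event_types, T, num_mc=200):
    """Negative log-likelihood of one sequence under a TPP (cf. equation 1).

    intensity_fn(t) -> tensor of shape (E,): intensities of all E event types at
                       time t, conditioned on the history of events before t.
    event_times    -> tensor of shape (I,): observed event times in (0, T].
    event_types    -> tensor of shape (I,): integer type labels in {0, ..., E-1}.
    """
    # First term of equation 1: log-intensity of each observed event at its time.
    log_term = sum(torch.log(intensity_fn(t)[e]) for t, e in zip(event_times, event_types))
    # Second term: integral of the total intensity over [0, T]; approximated here
    # by Monte Carlo sampling of time points (an illustrative choice).
    sample_times = torch.rand(num_mc) * T
    total_intensity = torch.stack([intensity_fn(t).sum() for t in sample_times])
    integral_term = T * total_intensity.mean()
    return -(log_term - integral_term)
```

In a neural TPP, `intensity_fn` would be parameterized by the encoder-decoder architecture described in Section 3 and this quantity would be minimized by stochastic gradient methods.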
Compared with ordinary fine-tuning, the literature shows that prompting can adapt a frozen model with far fewer trainable parameters while achieving competitive performance. In our context, a naive application of prompt learning is to prepend learnable parameters \(\mathbf{P_{s}}\in\mathbb{R}^{L_{p}\times D}\), called a prompt, to the event embedding \(\mathbf{h}=[\mathbf{P_{s}}||\mathbf{x}]\), where \(\mathbf{x}\in\mathbb{R}^{D}\) denotes the output of a TPP's embedding layer of an event, and then feed it to the model function \(g(\mathbf{h})\), i.e., a decoder, to perform downstream tasks. Instead of this naive application, in our proposed method, we design a novel prompt learning mechanism to properly model the event streams (see section 3.3). ## 3 Prompt-augmented TPP We introduce a simple and general prompt-augmented CL framework for neural TPPs, named PromptTPP. As shown in Figure 2, PromptTPP consists of three components: a base TPP model, a pool of continuous-time retrieval prompts and a prompt-event interaction layer. In this section, we omit the task index \(\mathcal{T}\) in our notation as our method is general enough to handle the task-agnostic setting. ### Base TPP A neural TPP model autoregressively generates events one after another via neural networks. For the \(i\)-th event \(e_{i}@t_{i}\), it computes the embedding of the event \(\mathbf{x}_{i}\in\mathbb{R}^{D}\) via an embedding layer, which takes the concatenation 2 of the type and temporal embedding \(\mathbf{x}_{i}=[\mathbf{x}_{i}^{\text{\tiny{type}}}||\mathbf{x}_{i}^{\text{\tiny{temp}}}]\) where \(||\) denotes the concatenation operation and \(\mathbf{x}_{i}^{\text{\tiny{type}}}\in\mathbb{R}^{D_{1}},\mathbf{x}_{i}^{\text{\tiny{temp}}}\in\mathbb{R}^{D_{2}},D=D_{1}+D_{2}\). Then one can draw the next event conditioned on the hidden states that encode history information sequentially: Footnote 2: The sum operation is also used in some literature. In this paper, we apply concatenation for event embedding. \[t_{i+1},e_{i+1}\sim\mathbb{P}_{\theta}(t_{i+1},e_{i+1}|\mathbf{h}_{i}),\quad\mathbf{h}_{i}=f_{r}(\mathbf{h}_{i-1},\mathbf{x}_{i}), \tag{2}\] where \(f_{r}\) could be either an RNN (Du et al., 2016; Mei and Eisner, 2017) or a more expressive attention-based recursion layer (Zhang et al., 2020; Zuo et al., 2020; Yang et al., 2022). For simplicity of notation, we denote the embedding layer and recursion layer together as **the encoder** \(f_{\phi_{enc}}\) parameterized by \(\phi_{enc}\). Our proposed PromptTPP is general-purpose in the sense that it is straightforward to incorporate any version of neural TPP into the framework. ### Continuous-time Retrieval Prompt Pool The motivations for introducing the Continuous-time Retrieval Prompt Pool (**CtRetroPromptPool**) are two-fold. First, existing prompt-learning works focus on classification tasks in NLP or CV domains, whose methods are not directly applicable to sequential tasks of learning event streams in continuous time (see section 4.2). Second, the practical setup for modeling event streams is close to the task-agnostic CL setting, where we do not know the task identity at test time, so that training task-dependent prompts is not feasible. Even if we use extra sources to memorize the task identity, naive usage of prompts (Liu et al., 2022, 2021; Tam et al., 2022) is still found to result in catastrophic forgetting. For the first motivation, we construct a _temporal prompt_ that properly encodes the knowledge of temporal dynamics of the event sequence. 
To address the second, we build a store of prompts in a key-value shared space to transfer knowledge sequentially from one task to another without distinguishing between the common features among tasks versus the features that are unique to each task. **Temporal Prompt.** In contrast to the standard prompt, the _temporal prompt_ is a time-varying learnable matrix that encodes not only the structural but also the temporal knowledge of the event sequence. We define the temporal prompt \(\mathbf{P}=[\mathbf{P}_{s};\mathbf{P}_{t}]\in\mathbb{R}^{L_{p}\times D}\), where \(L_{p}\) is the prompt length and \(\mathbf{P}_{s}\in\mathbb{R}^{L_{p}\times D_{1}},\mathbf{P}_{t}\in\mathbb{R}^{L_{p}\times D_{2}}\) denote the structural component and temporal component, respectively. While \(\mathbf{P}_{s}\) is a learnable submatrix, the temporal component \(\mathbf{P}_{t}\) is set to be continuous-time positional encodings of the estimated conditional time so as to take the timing into account. More concretely, given the \(i\)-th event, we estimate the arithmetic mean of inter-event times **up to \(t_{i-1}\)**, denoted by \(\mathcal{E}_{i}=\mathbb{E}[\{\tau_{j}\}_{j<i}]\), and add this estimated inter-event time to \(t_{i-1}\) to get the estimated conditional time \(t_{p}\coloneqq\hat{t}_{i}=t_{i-1}+\mathcal{E}_{i}\). In line with Yang et al. (2022), we compute the temporal embedding \(\text{TE}(t_{p})\in\mathbb{R}^{D_{2}}\) by \[\text{TE}(t)=\cos\left(\frac{t}{n_{te}}\cdot(\frac{5N_{te}}{n_{te}})^{\frac{d-1}{D_{2}}}\right)\text{ if }d\text{ is odd},\quad\text{TE}(t)=\sin\left(\frac{t}{n_{te}}\cdot(\frac{5N_{te}}{n_{te}})^{\frac{d}{D_{2}}}\right)\text{ if }d\text{ is even} \tag{3}\] where \(\{N_{te},n_{te}\in\mathbb{N}\}\) are hyperparameters selected according to the time scales in different periods. As \(\text{TE}(t_{p})\) is a vector, we concatenate it repeatedly to form \(\mathbf{P}_{t}\), i.e., \(\mathbf{P}_{t}=[\text{TE}(t_{p})||,...,||\text{TE}(t_{p})]\in\mathbb{R}^{L_{p}\times D_{2}}\). Note that the **structural component \(\mathbf{P}_{s}\) is learnable while the temporal component \(\mathbf{P}_{t}\) is computed deterministically**. An important consideration in employing such a mechanism is that the mean characterizes the most important property (the long-run average) of the inter-event time distribution, and the computation is straightforward. By taking the temporal embedding of the estimated average conditional time, the prompt efficiently encodes the time-varying knowledge up to the current event, which facilitates learning prediction tasks. We verify the effectiveness of the temporal prompt in section 4.2. **From Prompt to Prompt Pool.** Ideally, one would learn a model that is able to share knowledge when tasks are similar while maintaining knowledge independently otherwise. Figure 2: Overview of PromptTPP. Up: At training time, PromptTPP selects a subset of temporal prompts from a key-value paired CtRetroPromptPool based on our proposed retrieval mechanism; then it prepends the selected prompts to the event representations; finally it feeds the extended event representations into the prompt-event interaction and intensity layer, and optimizes the CtRetroPromptPool through the loss defined in equation 11. Down Left: Illustration of how to parameterize a temporal prompt. Down Right: Illustration of prompt tuning in the prompt-event interaction layer. Thus, instead of
applying a single prompt, we introduce a **pool of temporal prompts** to store encoded knowledge, which can be flexibly grouped as an input to the model. The pool is defined as \[\mathbf{P}=[\mathbf{P}_{1},...,\mathbf{P}_{M}], \tag{4}\] where \(M\) denotes the total number of prompts and \(\mathbf{P}_{i}\in\mathbb{R}^{L_{p}\times D}\) is a single temporal prompt. Following the notation in section 3.1, recall \(\mathbf{h}_{i}\in\mathbb{R}^{D}\) denotes the hidden representation of the \(i\)-th event in the sequence 3 which encodes the event history up to \(t_{i}\) via the recursion by equation 2 and let \(\{\mathbf{P}_{r_{j}},j=1,...,N\}\) be a subset of \(N\) selected prompts, we then incorporate them into the event sequences as **in-context augmentation** as follows: Footnote 3: As \(D_{1}+D_{2}=D\), we use \(D\) and \((D_{1}+D_{2})\) interchangeable throughout the paper. \[[\mathbf{P}_{r_{1}}||,...,||\mathbf{P}_{r_{N}}||\mathbf{h}_{i}], \tag{5}\] Prompts are free to compose, so they can jointly encode knowledge for the model to process, which provides flexibility and generality in the sense that a more fine-grained knowledge sharing scheme can be achieved via _prompt retrieval mechanism_. Under this mechanism, a combination of prompts is selected for each task - similar inputs tend to share more common prompts, and vice versa. **Retrieval Prompt Pool.** The retrieval prompt pool shares some design principles with methods in other fields, such as RETRO (Borgeaud et al., 2022). Specifically, the prompt pool is augmented to be a key-value store \((\mathcal{K},\mathcal{V})\), defined as the set of learnable keys \(\mathbf{k}\in\mathbb{R}^{D}\) and values - temporal prompts \(\mathbf{P}\) in equation 4: \[(\mathcal{K},\mathcal{V})=\{(\mathbf{k}_{i},\mathbf{P}_{i})\}_{i=1}^{M} \tag{6}\] The retrieval prompt pool may be flexible to edit and can be asynchronously updated during the training procedure. The input sequence itself can decide which prompts to choose through query-key matching. Let \(\varphi:\mathbb{R}^{D}\times\mathbb{R}^{D}\) be the cosine distance function to score the match between the query and prompt key. Given a query \(\mathbf{h}_{i}\), the encoded event vector, we search for the closest keys over \(\mathcal{K}\) via maximum inner product search (MIPS). The subset of top-N selected keys is denoted as: \[\mathrm{K}_{top-N}=\operatorname*{argmin}_{\{r_{j}\}_{j=1}^{N}}\sum_{i=1}^{N} \varphi(\mathbf{h}_{i},\mathbf{k}_{r_{j}}) \tag{7}\] Importantly, the design of this strategy brings two benefits: (i) it decouples the query learning and prompt learning processes, which has been empirically shown to be critical (see section 4.2); (ii) the retrieval is performed in an instance-wise fashion, which makes the framework become _task agnostic_, meaning the method works without needing to store extra information about the task identity at test time. This corresponds to a _realistic setting_ for modeling event streams in real applications. ### Prompt-Event Interaction The interaction operation controls the way we combine prompts with the encoded event states, which directly affects how the high-level instructions in prompts interact with low-level representations. Thus, we believe a well-designed prompting function is also vital for the overall CL performance. The interaction mechanism is also called **prompting function** in the NLP community. 
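Before describing how the retrieved prompts interact with the event representation, the following sketch makes the retrieval step of equations 6-7 (together with the deterministic temporal component of equation 3) concrete. The PyTorch interface, tensor shapes, and placeholder hyperparameter values are assumptions made for illustration; they are not taken from the authors' released code.

```python
import torch
import torch.nn.functional as F

def temporal_embedding(t, D2, N_te=2000.0, n_te=100.0):
    """Sinusoidal embedding TE(t) of a timestamp (cf. equation 3).
    N_te and n_te are time-scale hyperparameters; the defaults here are placeholders."""
    d = torch.arange(1, D2 + 1, dtype=torch.float32)
    exponent = torch.where(d % 2 == 1, d - 1, d) / D2
    angle = (t / n_te) * (5.0 * N_te / n_te) ** exponent
    return torch.where(d % 2 == 1, torch.cos(angle), torch.sin(angle))

def retrieve_prompts(query, keys, prompts, N):
    """Top-N prompt selection by query-key matching (cf. equations 6-7).

    query:   (D,)         encoded event h_i used as the retrieval query
    keys:    (M, D)       learnable keys attached to the prompts
    prompts: (M, L_p, D)  temporal prompts stored in the pool
    """
    # Minimizing cosine distance is the same as maximizing cosine similarity.
    similarity = F.cosine_similarity(query.unsqueeze(0), keys, dim=-1)  # (M,)
    top_idx = torch.topk(similarity, k=N).indices
    return prompts[top_idx], top_idx

# Toy usage with made-up sizes: a pool of M=10 prompts of length L_p=10.
M, L_p, D1, D2, N = 10, 10, 16, 16, 4
keys = torch.randn(M, D1 + D2, requires_grad=True)
prompts = torch.randn(M, L_p, D1 + D2, requires_grad=True)
h_i = torch.randn(D1 + D2)
selected_prompts, selected_idx = retrieve_prompts(h_i, keys, prompts, N)
```

In the full model, the temporal part \(\mathbf{P}_{t}\) of each prompt would be obtained by stacking `temporal_embedding(t_p, D2)` \(L_{p}\) times and concatenating it with the learnable structural part \(\mathbf{P}_{s}\), and the selected keys would additionally be pulled towards the query through the surrogate term of the training loss introduced below.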
We apply the multi-head self-attention mechanism (Vaswani et al., 2017) (MHSA) for modeling the interactions and adopt the mainstream realization of the prompting function - Prefix Tuning (Pre-T) (Li and Liang, 2021). Denote the input query, key, and values as \(\mathbf{z}_{Q},\mathbf{z}_{K},\mathbf{z}_{V}\); the MHSA layer is constructed as: \[\text{MHSA}(\mathbf{z}_{Q},\mathbf{z}_{K},\mathbf{z}_{V})=[\mathbf{z}_{1}||,...,||\mathbf{z}_{m}]\mathbf{W}^{O}, \tag{8}\] where \(\mathbf{z}_{i}=\text{Attn}(\mathbf{z}_{Q}\mathbf{W}_{i}^{Q},\mathbf{z}_{K}\mathbf{W}_{i}^{K},\mathbf{z}_{V}\mathbf{W}_{i}^{V})\), and \(\mathbf{W}^{O},\mathbf{W}_{i}^{Q},\mathbf{W}_{i}^{K},\mathbf{W}_{i}^{V}\) are projection matrices. In our context, letting \(\{\mathbf{P}_{r_{i}}\}_{i=1}^{N}\) be the retrieved prompts from the pool, we set \(\mathbf{h}_{i}\) to be the query, split each prompt \(\mathbf{P}_{r_{i}}\) into \(\mathbf{P}_{r_{i}}^{K},\mathbf{P}_{r_{i}}^{V}\in\mathbb{R}^{L_{p}/2\times D}\) and prepend them to keys and values, respectively, while keeping the query as-is: \[\mathbf{h}_{i}^{Pre-T}=\text{MHSA}(\mathbf{h}_{i},[\mathbf{P}^{K}||\mathbf{h}_{i}],[\mathbf{P}^{V}||\mathbf{h}_{i}]), \tag{9}\] where \(\mathbf{P}^{K}=[\mathbf{P}_{r_{1}}^{K}||,...,||\mathbf{P}_{r_{N}}^{K}]\), \(\mathbf{P}^{V}=[\mathbf{P}_{r_{1}}^{V}||,...,||\mathbf{P}_{r_{N}}^{V}]\). Consequently, the key and value satisfy \(\mathbf{z}_{K},\mathbf{z}_{V}\in\mathbb{R}^{(\frac{L_{p}*N}{2}+1)\times D}\) and the output \(\mathbf{h}_{i}^{Pre-T}\in\mathbb{R}^{D}\). Note that there exist other prompting methods, such as _Prompt Tuning_ (Pro-T), where all the prompts are concurrently prepended to the query, key and values: \[\mathbf{h}_{i}^{Pro-T}=\text{MHSA}([\mathbf{P}^{Q}||\mathbf{h}_{i}],[\mathbf{P}^{K}||\mathbf{h}_{i}],[\mathbf{P}^{V}||\mathbf{h}_{i}]), \tag{10}\] where \(\mathbf{P}^{Q}=\mathbf{P}^{K}=\mathbf{P}^{V}=[\mathbf{P}_{r_{1}}||,...,||\mathbf{P}_{r_{N}}]\). As a result, the query, key, value and output satisfy \(\mathbf{z}_{Q},\mathbf{z}_{K},\mathbf{z}_{V},\mathbf{h}_{i}^{Pro-T}\in\mathbb{R}^{(L_{p}*N+1)\times D}\). Despite being less efficient in computation, we empirically demonstrate that Pre-T brings better performance. See Analysis III in section 4.2. The output of the MHSA is then passed into an intensity layer (an MLP with softplus activation) to generate the intensity \(\lambda_{e}(t_{i}),e\in\{1,...,E\}\). For simplicity, we denote the prompt-event interaction and intensity layer together as **the decoder** \(f_{\phi_{dec}}\) parameterized by \(\phi_{dec}\). ### Model Optimization The full picture of PromptTPP at training and test time is described in Algorithm 1 and Algorithm 2 in Appendix C.1. At every training step, each event \(e_{i}@t_{i}\) is recursively fed into the encoder \(f_{\phi_{enc}}\); after selecting \(N\) prompts following the aforementioned retrieval strategy, the intensity \(\mathbf{\lambda}(t_{i})\) is computed by the decoder \(f_{\phi_{dec}}\). Overall, we seek to minimize the end-to-end loss function: \[\min_{\mathbf{P},\phi_{enc},\phi_{dec},\mathcal{K}}\mathcal{L}_{nll}(\mathbf{P},f_{\phi_{enc}},f_{\phi_{dec}})+\alpha\sum_{i}\sum_{K_{top-N}}\varphi(f_{\phi_{enc}}(e_{i}@t_{i}),\mathbf{k}_{r_{j}}), \tag{11}\] where the first term is the negative log-likelihood of the event sequence (\(\mathcal{L}_{nll}\) equals \(-\mathcal{L}_{ll}\) defined in equation 1) and the second term refers to a surrogate loss to pull the selected keys closer to the corresponding query in the retrieval process. 
\(\alpha\) is a scalar to control the importance of the surrogate loss. Given the learned parameters, we may wish to make a minimum Bayes risk prediction about the next event via the thinning algorithm (Mei & Eisner, 2017; Yang et al., 2022). **Asynchronous Refresh of Prompt Pool.** The prompts may lead to varying contextual representations of the events as the parameters of the base model are continually updated. To accelerate training, we propose to asynchronously update all embeddings in the prompt pool every \(C\) training epochs. ## 4 Experiments ### Experimental setup **Datasets and Evaluation Setup.** We conduct our real-world experiments on three sequential user-behavior datasets. In each dataset, a sequence is defined as the records pertaining to a single individual. The **Taobao**(Alibaba, 2018) dataset contains time-stamped user click behaviors on Taobao shopping pages with the category of the item involved noted as the event type. The **Amazon**(Ni, 2018) dataset contains time-stamped records of user-generated reviews of clothing, shoes, and jewelry with the category of the reviewed product defined as the event type. The **StackOverflow**(Leskovec & Krevl, 2014) dataset contains two years of user awards on a question-answering website: each user received a sequence of badges with the category of the badges defined as the event type. See Appendix D.1 for dataset details. We partition the Taobao and Amazon datasets into \(10\) consecutively rolling slides (namely \(10\) tasks) and partition the StackOverflow dataset into \(6\) rolling slides (namely \(6\) tasks). For the Taobao dataset, each slide covers approximately \(1\) day of time; for the Amazon dataset, each slide covers \(2\) years of time; for the StackOverflow dataset, each slide covers approximately \(5\) months of time. The subset in each task is split into training, validation, and test sets with a \(70\%\), \(10\%\), \(20\%\) ratio by chronological order. Each task has no overlap in the test set. For a detailed discussion, a demonstration of the evaluation process is provided in Figure 9 in Appendix D.3. **Metrics.** Following the common next-event prediction task in TPPs (Du et al., 2016; Mei & Eisner, 2017), each model attempts to predict every held-out event \((t_{i},k_{i})\) from its history \(\mathcal{H}_{i}\). We evaluate the prediction \(\hat{k}_{i}\) with the error rate and evaluate the prediction \(\hat{t}_{i}\) with the RMSE. **Base models.** While our proposed methods are amenable to neural TPPs of arbitrary structure, we choose two strong neural TPPs as our base models: **NHP**(Mei & Eisner, 2017) and **AttNHP**(Yang et al., 2022), an attention-based TPP whose performance is comparable to or better than that of the NHP as well as other attention-based models (Zuo et al., 2020; Zhang et al., 2020). **Competitors.** With NHP and AttNHP as base models, we trained _PromptNHP_ (**Pt-NHP**) and _PromptAttNHP_ (**Pt-ANHP**) in the proposed prompt-augmented setup and compared them with \(7\) baselines. * _PretrainedTPP_. _PretrainedNHP_ (**Pre-NHP**) and _PretrainedAttNHP_ (**Pre-ANHP**) represent NHP and AttNHP learned at the first task (time step) and not trained any longer. * _RetrainedTPP_. _RetrainedNHP_ (**Re-NHP**) and _RetrainedAttNHP_ (**Re-ANHP**) refer to TPPs retrained at every sliding window. * _OnlineTPP_. As there is no prior work on online neural TPPs, we use the online Hawkes process _OnlineMHP_ (**O-TPP**) (Yang et al., 2017), trained in an online manner without any consideration for knowledge consolidation. 
* _CLTPP_. The concurrent work (Dubey et al., 2022), to the best of our knowledge, is the only neural TPP with CL abilities proposed so far. Based on their work 4, we implement _CL-NHP_ (**CL-NHP**) and _CLAttNHP_ (**CL-ANHP**) as two variants of the hypernetwork-based CLTPPs. Footnote 4: They have not published the code yet. **Implementation and Training Details.** For a fair comparison, they (except O-TPP, which is a classical TPP model) are of similar model size (see Table 2 in Appendix D.4). For Pt-NHP and Pt-ANHP, we set \(M=10,N=4,L_{p}=10\) for both datasets. During training, we set \(C=2\) by default and explore the effect of asynchronous training in Analysis IV of section 4.2. More details of the implementation and training of all the methods are in Appendix D.5. ### Results and Analysis The main results are shown in Figure 3. Pre-NHP and Pre-ANHP work the worst in most cases because of their inadequate ability to handle the distribution shift in the event sequence. Besides, O-TPP has a similarly poor performance for two reasons: first, it is a classical (non-neural) TPP with weaker representation power for modeling event sequences compared to its neural counterparts; second, as a traditional online learning method, it easily loses memory of previously encountered data and suffers from _catastrophic forgetting_. Figure 3: Performance of all the methods on Taobao (up), Amazon (middle) and StackOverflow (down). In each figure, the subfigures from left to right are the evolution of type error rate and the time RMSE of each task, the average error rate, and the average time RMSE of all the tasks. Retraining at every task (Re-NHP and Re-ANHP) achieves moderate results but it also causes _catastrophic forgetting_. Not surprisingly, CL-NHP and CL-ANHP perform better than retraining, by applying a regularized hypernetwork to avoid forgetting. However, the hypernetwork relies on task descriptors built upon rich meta data, which limits its applicability and performance in our setup (and in real applications as well!). Lastly, our methods (both Pt-NHP and Pt-ANHP) work significantly better than all these baselines across the three datasets: they substantially beat the non-CL methods; they also consistently outperform CL-NHP and CL-ANHP by a relative \(4\%-6\%\) margin on both metrics, thanks to our novel design of the CtRetroPromptPool, which successfully reduces catastrophic forgetting (see Analysis 0). **Analysis 0: How do models perform on previous tasks after learning new events?** We aim to validate that the improvement in performance is indeed due to the alleviation of catastrophic forgetting instead of simply a better fit on the current task. We take ANHP trained on task \(9\) and Pt-ANHP _continuously_ trained on task \(9\), re-evaluate them on previous tasks, and see how the metrics change. Specifically, in Figure 4, (i) each number on the curves of Re-ANHP and CL-ANHP corresponds to the performance difference on the test set of task \(i,i<9\) using ANHP trained on task \(9\) vs ANHP trained on task \(i\); (ii) each number on the curves of Pt-ANHP corresponds to the performance difference on the test set of task \(i,i<9\) using Pt-ANHP trained until (including) task \(9\) vs Pt-ANHP trained until task \(i\). As seen from Figure 4, on both metrics the drop in performance (i.e., the increase in error rate / RMSE) of Pt-ANHP is much less significant than that of ANHP, indicating that Pt-ANHP stores the knowledge of previous tasks well, which largely alleviates catastrophic forgetting. 
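The re-evaluation protocol behind Analysis 0 can be summarized by a small bookkeeping helper: for every earlier task, compare the metric obtained by the checkpoint available right after that task with the metric obtained by the final checkpoint. The function and argument names below are hypothetical and only illustrate the procedure; they do not correspond to the authors' code.

```python
def forgetting_profile(evaluate, checkpoints, test_sets, final_task):
    """Per-task performance drop after training up to `final_task`.

    evaluate(model, test_set) -> scalar metric (e.g. error rate or time RMSE)
    checkpoints[i]            -> model state obtained right after task i
    test_sets[i]              -> held-out events of task i
    """
    final_model = checkpoints[final_task]
    drops = {}
    for i in range(final_task):
        at_the_time = evaluate(checkpoints[i], test_sets[i])
        afterwards = evaluate(final_model, test_sets[i])
        drops[i] = afterwards - at_the_time  # positive values indicate forgetting on task i
    return drops
```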
Figure 4: The performance drop when re-evaluating the 0-8-th tasks using model trained on the \(9\)-th task on Amazon dataset. Figure 5: Effect of temporal prompt and prompting function of PromptTPP. Figure 6: Effect of hyperparameters of PromptTPP on Amazon dataset. **Analysis I: Does stronger base TPP model naively improve CL?** Our method builds upon a backbone TPP and understanding this question is important for fair comparisons and future research. From Figure 3, Re-ANHP makes no consistent improvement against Re-NHP on average CL performance, which indicates a stronger TPP is not a solution for CL without being appropriately leveraged. Besides, for the CL-based methods, CL-ANHP is tied with CL-NHP on Taobao and makes a limited advancement against CL-NHP on Amazon, while Pt-NHP and Pt-ANHP perform closely on both datasets. Therefore, we can conclude that, although AttNHP is a more robust base model than common non attention-based TPP, i.e., NHP, it is not necessarily translated to CL performance. **Analysis II: Temporal prompt vs standard prompt.** For a fair comparison, we initialize a pool of standard prompts without time-varying parameters by fixing their temporal components \(P_{t}\) to be an all-ones matrix and incorporate it into the base model AttNHP. This method is named Pt-ANHP-std. With other components fixed, we compare Pt-ANHP-std with Pt-ANHP to validate the effectiveness of the temporal prompt introduced in section 3.2. Figure 4(a) shows that Pt-ANHP achieves better performance on both datasets: the introduction of temporal prompts slightly improves the RMSE metric and reduces the error rate with a larger margin. We did the paired permutation test to verify the statistical significance of the improvements. See Appendix D.6 for details. Overall, on both datasets, we find that the performance improvements by using the temporal prompts are enormously significant on error rate (p-value \(<0.05\) ) and weakly significant on RMSE (p-value \(\approx 0.08\)). **Analysis III: How to better attach prompts?** We explore how to attach prompts and enhance their influences on overall performance. We compare three types of prompting: 1_Naive Prompting_ (N-P), where the retrieval and prompting are performed after the event embedding layer: we replace \(\mathbf{h}_{i}\) with \(\mathbf{x}_{i}\) in equation 7, prepend the selected prompts to \(\mathbf{x}_{i}\) and pass it to the rest structure of TPP. 2_Prompt Tuning_ (Pro-T): which concurrently prepend the prompts to query, key, and value, introduced at the end of section 3.3. 3_Prefix-Tuning_ (Pre-T), proposed in the main body of section 3.3, which is the **prompting method used in PromptTPP**. In Figure 4(b), we observe that Pre-T leads to better performance on both datasets compared to those two variants. Despite its empirically better performance, the architecture of Pre-T is actually more scalable and efficient when attached to multiple layers since it results in unchanged output size: \(\mathbf{h}_{i}^{Pre-T}\in\mathbb{R}^{D}\) remains the same size as the input while \(\mathbf{h}_{i}^{Pro-T}\in\mathbb{R}^{(L_{p}*N+1)\times D}\) increases the size along the prompt length dimension. 
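To make the comparison between the prompting functions in Analysis III concrete, here is a minimal single-head sketch of the Pre-T variant of equation 9: each selected prompt is split into a key half and a value half, which are prepended to the attention keys and values while the query is left untouched, so the output keeps the size of the input. The single-head simplification, the even prompt length, and the tensor shapes are assumptions for illustration; the paper itself uses the multi-head layer of equation 8.

```python
import torch
import torch.nn.functional as F

def prefix_tuning_attention(h, selected_prompts):
    """Pre-T prompting function (cf. equation 9), single-head for brevity.

    h:                (D,)         hidden state of the current event
    selected_prompts: (N, L_p, D)  top-N prompts retrieved from the pool (L_p assumed even)
    Returns a tensor of shape (D,), i.e. the same size as the input.
    """
    N, L_p, D = selected_prompts.shape
    # Split every prompt along its length into a key half and a value half.
    P_k, P_v = selected_prompts.split(L_p // 2, dim=1)   # each (N, L_p/2, D)
    P_k, P_v = P_k.reshape(-1, D), P_v.reshape(-1, D)    # (N * L_p/2, D)
    q = h.unsqueeze(0)                                   # (1, D): the query stays as-is
    k = torch.cat([P_k, h.unsqueeze(0)], dim=0)          # prepend prompt keys
    v = torch.cat([P_v, h.unsqueeze(0)], dim=0)          # prepend prompt values
    attn = F.softmax(q @ k.T / D ** 0.5, dim=-1)         # (1, N * L_p/2 + 1)
    return (attn @ v).squeeze(0)
```

A Pro-T version would instead prepend the full prompts to the query as well, which is why its output grows to size \((L_{p}*N+1)\times D\), in line with the dimension count given after equation 10.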
**Analysis IV: Efficiency of our method.** We examine the efficiency of our method in two steps: * Firstly, seen from Table 2, on both datasets, Pt-NHP / Pt-ANHP leads to a \(12\%\)\(/\)\(8\%\) total parameter increase to the base model, which in fact causes a marginal impact on training speed: Figure 6(a) shows that learning curves of Re-ANHP and Pt-ANHP(\(C=1\)) converge at almost the same speed to achieve competitive log-likelihood, respectively. * Furthermore, to accelerate the training (especially when introducing large size prompts), we introduce the asynchronous refresh mechanism (see section 3.4) with prompts updated in a frequency \(C>1\) (refresh the prompt pool less frequently). We observe in Figure 6(a) that Taobao training with \(C=2\) has a comparable performance with \(C=1\) while Amazon training with \(C=2\) improves the convergence notably. \(C=4\) leads to no advancement. Figure 7: Effect of asynchronous refresh and prompt related components of PromptTPP on dataset. Overall, PromptTPP only adds a small number of parameters so that it generally has the same convergence rate as the base model. The asynchronous prompt optimization scheme with \(C=2\) improves the convergence more remarkably on the Amazon dataset. In addition, we indeed provide a complexity analysis. See Appendix D.7. **Analysis V: Effect of prompt related components of our method.** Firstly we completely remove the CtRetroPromptPool design (_w/o CtRoPP_ in Figure 6(b)) and use a single temporal prompt to train tasks sequentially. The performance declines with a notable drop, indicating that a single prompt suffers severe catastrophic forgetting between tasks, while our design of CtRetroPromptPool encodes task-invariant and task-specific knowledge well. Secondly, we remove the learnable key associated with prompts (_w/o k-v_ in Figure 6(b)) and directly use the mean of prompts as keys. This strategy causes a moderate drop in performance. To conclude, learnable keys decouple the query and prompt learning processes and markedly contribute to the performance. **Analysis VI: Effect of hyperparameters of our method.** We evaluate how the performance of PromptTPP changes as we vary three key hyperparameters: (i) prompt length \(L_{p}\), (ii) selection size \(N\), and (iii) prompt pool size \(M\). Theoretically, \(L_{p}\) determines the capacity of a single prompt (which jointly encodes certain knowledge), \(L_{p}\times N\) is the total size used to prepend the event vector, while \(M\) sets the up limit of the capacity of learnable prompts. * _Prompt length \(L_{p}\) and selection size \(N\)._ From the results in Figure 5(a), a too small \(L_{p}\) negatively affects results as a single prompt has a too limited ability to encode the knowledge. Besides, given an optimal \(L_{p}\), an overly large \(N\) makes the total prompts excessively oversized, leading to underfitting and negatively impacting the results. We conclude that a reasonably large \(L_{p}\) and \(N\) enable the model properly encode the shared knowledge between the tasks of event sequences and substantially improve the predictive performance. * _Prompt pool size \(M\)._ Figure 5(b) illustrates that \(M\) positively contributes to the performance. This is because the larger pool size means the larger capacity of the prompts. ## 5 Conclusion In summary, this paper has proposed a groundbreaking framework, known as PromptTPP, for modeling streaming event sequences. 
By incorporating a continuous-time retrieval prompt pool, the framework effectively facilitates the learning of event streams without requiring rehearsal or task identification. Our experiments have shown that PromptTPP performs exceptionally well compared to other competitors, even under challenging and realistic conditions. ## 6 Limitations and Societal Impacts **Limitations.** Our method uses neural networks, which are typically data-hungry. Although it worked well in our experiments, it might still suffer compared to non-neural models if starved of data. **Societal Impacts.** By describing the model and releasing code, we hope to facilitate probabilistic modeling of continuous-time sequential data in many domains. However, our model may be applied to unethical ends. For example, it may be used for unwanted tracking of individual behavior.
2305.11696
Higher nearby cycles and central sheaves on affine flag varieties
In this paper we generalize and study a notion of (unipotent) nearby cycles over a higher dimensional base based on Be\u{i}linson's description of unipotent nearby cycles, following an idea of Gaitsgory. This generalization, in the setting of affine Grassmannians, is required in recent work of Bezrukavnikov-Braverman-Finkelberg-Kazhdan.
Pramod N. Achar, Simon Riche
2023-05-19T14:20:15Z
http://arxiv.org/abs/2305.11696v2
# Higher nearby cycles and central sheaves on affine flag varieties

###### Abstract.

In this paper we generalize and study a notion of (unipotent) nearby cycles over a higher dimensional base based on Beilinson's description of unipotent nearby cycles, following an idea of Gaitsgory. This generalization, in the setting of affine Grassmannians, is required in recent work of Bezrukavnikov-Braverman-Finkelberg-Kazhdan [6].

P.A. was supported by NSF Grant Nos. DMS-1802241 and DMS-2202012. This project has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (S.R., grant agreement No. 101002592).

these complexes. In general, this construction forces one to leave the constructible derived category; to avoid this problem, in our account of Gaitsgory's construction in [2] we proposed instead to restrict to complexes such that a stabilization property as above occurs. So, for us the \(2\)-dimensional nearby cycles functor is only "partially defined." The starting point of this paper is the observation that this definition can be phrased so that it makes sense for a scheme \(X\) (of finite type) over any affine space \(\mathbb{A}^{d}\).

### Composition of higher nearby cycles functors

The application of nearby cycles over a \(2\)-dimensional base in [8] uses a comparison with two related operations: (a) compute the nearby cycles of the given perverse sheaf successively along the \(2\) factors in \(\mathbb{A}^{2}\); (b) restrict the perverse sheaf to the preimage of the diagonal copy of \(\mathbb{A}^{1}\), and then compute the nearby cycles of (a shift of) this complex. Gaitsgory shows that his \(2\)-dimensional nearby cycles complex maps to each of the complexes obtained in (a) or (b), and that these maps are isomorphisms in the specific setting required in [5]. In [6] the authors consider (in a related setting) a variant of (a) where one considers a complex over \(\mathbb{A}^{d}\) and computes nearby cycles along each \(\mathbb{A}^{1}\)-factor successively, and they assert that the result does not depend on the choice of order on the coordinates. To justify this assertion, here we introduce the (partially defined) \(d\)-dimensional nearby cycles functor, and show that, in the setting of [6], each of the complexes identifies canonically with the image of the initial complex under our functor. To justify this fact, we interpret nearby cycles along each factor as the case \(d=1\) of our construction, and show roughly that a composition of \(d\)- and \(e\)-dimensional nearby cycles functors receives a canonical map from a \((d+e)\)-dimensional nearby cycles functor, which in the setting of [6] is an isomorphism. In fact, in the body of the paper we consider a more general construction of higher nearby cycles and their compositions, which also covers the construction in (b) above, and thereby (in our opinion) clarifies the general picture. See Definition 2.2 for the main definition, and Proposition 2.16 for the statement about composition of such functors.

### Compatibilities

In full generality, it is not reasonable to expect that the morphism in Proposition 2.16 is always an isomorphism. As in [8], what we show here is that the constructions of higher nearby cycles and of the morphisms considered above are compatible with smooth pullback and proper pushforward in an appropriate sense (see Lemmas 2.13, 2.14, 2.19 and 2.20 for precise statements), and then study a product-type situation (see SS2.10).
This is sufficient to show that the \(d\)-dimensional nearby cycles are well defined in the setting of [6], and that their formation is compatible with composition, which amounts to the statement these authors require. See Theorem 3.2 and SS3.7 for precise statements. ### Further comments There exists a general theory of nearby and vanishing cycles over general bases; see [9] for a brief account. We do not attempt to compare this construction with our much more elementary and ad-hoc version. Our theory should not be considered as a formalism applicable in any general setting, but only as something suitable for some specific situations encountered in the geometric Langlands program. As explained above, the construction considered here is an extension of our study in [2, SS9.4]. There is one important difference in our treatment though: in [2] we introduced a general condition of "iterated cleanness" implying that the 2-dimensional nearby cycles complex is well defined, and showed that this condition is satisfied in the setting at hand. It is not obvious (to us) how to extend this condition over a higher-dimensional base; here we bypass this question by proving a compatibility statement with proper pushforward which is stronger than its counterpart in [2] (compare Lemma 2.14 and [2, Proposition 9.4.2]) and allows us to reduce the question to the product-type setting. In this paper we work in the setting of etale sheaves on schemes over fields, in order to meet the setting of [6]. However all our statements have obvious variants for "usual" sheaves on complex algebraic varieties endowed with their analytic topology (and coefficients in a field), and all our proofs apply in both settings. After we completed a preliminary version of this paper, it was pointed out to us that a special case of our construction had already been introduced by A. Salmon, with applications to the cohomology of shtukas; see [11] and Example 2.11. ### Acknowledgements We thank Roman Bezrukavnikov and Michael Finkelberg for suggesting that our study of Gaitsgory's construction could be used to prove the statement they use in [6], which was the starting point of this work. We thank them also, as well as A. Salmon, for useful comments on a preliminary version. ## 2. Higher nearby cycles ### Pointed maps and associated linear morphisms We fix a base field \(\mathbb{F}\). Below, by a "scheme" we will mean an \(\mathbb{F}\)-scheme of finite type. If \(P\) is a finite set, we may consider the affine space \[\mathbb{A}^{P}=\operatorname{Spec}(\mathbb{F}[X_{p}:p\in P])\] with coordinates indexed by \(P\). The _generic part_ of this affine space, denoted by \(\mathbb{A}_{\eta}^{P}\), is the open subscheme where no coordinates vanish: \[\mathbb{A}_{\eta}^{P}=\{(x_{p})_{p\in P}\mid x_{p}\neq 0\text{ for all }p\}.\] (In case \(P=\varnothing\), we interpret these definitions as \(\mathbb{A}^{\varnothing}=\mathbb{A}_{\eta}^{\varnothing}=\operatorname{Spec}( \mathbb{F})\).) For any finite set \(P\) we let \(P_{*}\) denote the disjoint union \(P\amalg\{*\}\), where \(*\) denotes a new element. For finite sets \(P\) and \(Q\), a _pointed map_\(\alpha:P_{*}\to Q_{*}\) is a function that satisfies \(\alpha(*)=*\). 
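For instance, if \(P=\{1,2\}\) and \(Q=\{1\}\), then the assignment \(\alpha(1)=1\), \(\alpha(2)=*\), \(\alpha(*)=*\) defines a pointed map \(\alpha:P_{*}\to Q_{*}\); the elements of \(P\) sent to \(*\) will be the coordinates along which nearby cycles are taken below.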
If \(\alpha:P_{*}\to Q_{*}\) is a pointed map, there is an induced linear map of affine spaces \(\bar{\alpha}:\mathbb{A}^{Q}\to\mathbb{A}^{P}\) given by \[\bar{\alpha}((x_{q})_{q\in Q})=(y_{p})_{p\in P}\qquad\text{where}\qquad y_{p }=\begin{cases}x_{\alpha(p)}&\text{if }\alpha(p)\in Q,\\ 0&\text{if }\alpha(p)=*.\end{cases}\] We also set \(\mathbb{A}_{\eta,\alpha}^{P}=\{(x_{p})_{p\in P}\in\mathbb{A}^{P}\mid x_{p}\neq 0 \text{ if }\alpha(p)\neq*\}\); then the restriction of \(\bar{\alpha}\) to \(\mathbb{A}_{\eta}^{Q}\) factors through a morphism \(\bar{\alpha}_{\eta}:\mathbb{A}_{\eta}^{Q}\to\mathbb{A}_{\eta,\alpha}^{P}\). _Remark 2.1_.: From the definition we see that \(\bar{\alpha}\) is injective if \(\alpha\) is surjective, and surjective if \(\alpha\) is injective. For arbitrary \(\alpha\), one can decompose the situation into a combination of these two settings as follows: set \(R=\alpha(P)\cap Q\). Then \(\alpha\) decomposes in the obvious way as a composition of pointed maps \[P_{*}\xrightarrow{\alpha_{1}}R_{*}\xrightarrow{\alpha_{2}}Q_{*},\] and we have \(\bar{\alpha}=\bar{\alpha}_{1}\circ\bar{\alpha}_{2}\) where \(\bar{\alpha}_{1}\) is injective and \(\bar{\alpha}_{2}\) is surjective. ### Scheme morphisms associated with pointed maps Now suppose we have a scheme \(X\) equipped with a map \(f:X\to\mathbb{A}^{P}\). The _generic part_ of \(X\) is defined by \(X_{\eta}=X\times_{\mathbb{A}^{P}}\mathbb{A}^{P}_{\eta}\); the natural morphism \(X_{\eta}\to\mathbb{A}^{P}_{\eta}\) will be denoted \(f_{\eta}\), and the embedding \(X_{\eta}\to X\) will be denoted \(\mathbf{j}_{X}\) (or \(\mathbf{j}\) when no confusion is likely). If \(\alpha:P_{*}\to Q_{*}\) is a pointed map, then one can consider the schemes \[X^{\alpha}:=X\times_{\mathbb{A}^{P}}\mathbb{A}^{Q}\qquad\text{and}\qquad X^{ \alpha}_{\eta}:=X\times_{\mathbb{A}^{P}}\mathbb{A}^{Q}_{\eta}\] where the fiber products are taken with respect to \(\bar{\alpha}:\mathbb{A}^{Q}\to\mathbb{A}^{P}\) and its restriction to \(\mathbb{A}^{Q}_{\eta}\). The natural morphism \(X^{\alpha}\to X\) will be denoted \(\mathbf{i}^{\prime}_{X,\alpha}\) (or \(\mathbf{i}^{\prime}_{\alpha}\)), its restriction to \(X^{\alpha}_{\eta}\) will be denoted \(\mathbf{i}_{X,\alpha}\) (or \(\mathbf{i}_{\alpha}\)) and we will denote by \[f^{\alpha}:X^{\alpha}\to\mathbb{A}^{Q}\] the natural projection morphism. By Remark 2.1, \(\mathbf{i}^{\prime}_{\alpha}\) is a closed immersion if \(\alpha\) is surjective, and is smooth and surjective if \(\alpha\) is injective; moreover we have \(\mathbf{i}_{\alpha}=\mathbf{i}^{\prime}_{\alpha}\circ\mathbf{j}_{X^{\alpha}}\). Note also that if \(\alpha^{-1}(*)=\{*\}\) then \(\mathbf{i}_{\alpha}\) factors through a morphism \[\mathbf{i}^{\prime\prime}_{\alpha}:X^{\alpha}_{\eta}\to X_{\eta}.\] ### Setting The goal of this section is to explain the construction, for \(\mathscr{F}\) a perverse sheaf on \(X_{\eta}\), of a perverse sheaf \[\Upsilon^{\alpha}_{f}(\mathscr{F})\in\mathsf{Perv}(X^{\alpha}_{\eta},\Bbbk),\] together with a collection of commuting natural automorphisms \[\mathsf{m}^{p}_{\mathscr{F}}:\Upsilon^{\alpha}_{f}(\mathscr{F})\to\Upsilon^ {\alpha}_{f}(\mathscr{F})\qquad\text{for $p\in\alpha^{-1}(*)$}.\] We warn the reader that this construction is "partial": it will be defined only for certain perverse sheaves \(\mathscr{F}\). 
We do not have any general criterion which guarantees that this construction works, but we do have tools (see Lemmas 2.13, 2.14 and 2.22) that can be used to show that this is the case in certain settings, where it gives rise to very important objects (see Section 3). In the case where \(\#P=1\) and \(Q=\varnothing\), our construction will reduce to Beilinson's description of the unipotent nearby cycles functor, and when \(\#P=2\) and \(Q=\varnothing\) it corresponds to the theory of "nearby cycles along a 2-dimensional base" developed in [8] and studied in [2, SS9.4]; see Example 2.11. ### Definition Let \(\ell\) be a prime number which is nonzero in \(\mathbb{F}\), and let \(\Bbbk\) be either a finite extension or an algebraic closure of \(\mathbb{Q}_{\ell}\) or \(\mathbb{F}_{\ell}\), so that the bounded constructible derived category of etale \(\Bbbk\)-sheaves on schemes is defined and well behaved. For \(a\in\mathbb{Z}_{\geqslant 1}\), we consider the \(\Bbbk\)-local system \(\mathscr{L}_{a}\) constructed in [2, SS9.2.2]: it is the unique (up to isomorphism) indecomposable unipotent \(\Bbbk\)-local system on \(\mathbb{A}^{1}\smallsetminus\{0\}\) of rank \(a\). (Here, "unipotent" means that this local system is an iterated extension of copies of the trivial local system.) This local system has a canonical automorphism \(T_{a}\). As in [2, SS9.2.2], for \(a\leqslant b\) there is an embedding of local systems \[\mathscr{L}_{a}\hookrightarrow\mathscr{L}_{b} \tag{2.1}\] which intertwines \(T_{a}\) and \(T_{b}\), and whose cokernel identifies with \(\mathscr{L}_{b-a}\). In particular, \(\mathscr{L}_{a}\) admits a canonical surjection to the constant local system \(\underline{\Bbbk}=\mathscr{L}_{1}\). Now let \(\mathbf{a}:P\to\mathbb{Z}_{\geqslant 1}\) be a function. Define a local system \(\mathscr{L}_{\mathbf{a}}\) on \(\mathbb{A}^{P}_{\eta}\) by \[\mathscr{L}_{\mathbf{a}}=\big{[}\begin{matrix}\underline{\times}\\ p\in P\end{matrix}\mathscr{L}_{\mathbf{a}(p)}.\] For any \(p\in P\), we will denote by \(T^{p}_{\mathbf{a}}\) the automorphism of \(\mathscr{L}_{\mathbf{a}}\) induced by the automorphism \(T_{a(p)}\) of the factor labelled by \(p\). If \(\mathbf{a},\mathbf{b}:P\to\mathbb{Z}_{\geqslant 1}\) are two functions, we say that \(\mathbf{a}\leqslant\mathbf{b}\) if \(\mathbf{a}(p)\leqslant\mathbf{b}(p)\) for all \(p\in P\). If \(\mathbf{a}\leqslant\mathbf{b}\), then (2.1) gives rise to an embedding of local systems \(\mathscr{L}_{\mathbf{a}}\hookrightarrow\mathscr{L}_{\mathbf{b}}\) on \(\mathbb{A}^{P}_{\eta}\). In the special case where \(\mathbf{a}(p)=\mathbf{b}(p)\) for all but one element \(p_{0}\) of \(P\), the cokernel is again a local system of the same form: specifically, we have a short exact sequence \[\mathscr{L}_{\mathbf{a}}\hookrightarrow\mathscr{L}_{\mathbf{b}}\twoheadrightarrow \mathscr{L}_{\mathbf{c}}\qquad\text{if}\qquad\begin{cases}\mathbf{a}(p)= \mathbf{b}(p)=\mathbf{c}(p)&\text{for all }p\neq p_{0},\\ \mathbf{b}(p_{0})=\mathbf{a}(p_{0})+\mathbf{c}(p_{0}).\end{cases} \tag{2.2}\] Let us say that \(\mathbf{a}:P\to\mathbb{Z}_{\geqslant 1}\) is _\(\alpha\)-special_ (with respect to a given pointed map \(\alpha:P_{\mathbf{*}}\to Q_{\mathbf{*}}\)) if for each \(p\in\alpha^{-1}(Q)\) we have \(\mathbf{a}(p)=1\). **Definition 2.2**.: Let \(f:X\to\mathbb{A}^{P}\) be a morphism of schemes. Let \(\mathscr{F}\in\mathsf{Perv}(X_{\eta},\Bbbk)\), and let \(\alpha:P_{\mathbf{*}}\to Q_{\mathbf{*}}\) be a pointed map. 
If \(\mathbf{a},\mathbf{b}:P\to\mathbb{Z}_{\geqslant 1}\) satisfy \(\mathbf{a}\leqslant\mathbf{b}\), then for any \(i\in\mathbb{Z}\) there is a natural map \[{}^{p}\!\mathscr{H}^{i}(\mathbf{i}^{*}_{\alpha}\mathbf{j}_{*}(\mathscr{F} \otimes f^{*}_{\eta}\mathscr{L}_{\mathbf{a}}))\to{}^{p}\!\mathscr{H}^{i}( \mathbf{i}^{*}_{\alpha}\mathbf{j}_{*}(\mathscr{F}\otimes f^{*}_{\eta} \mathscr{L}_{\mathbf{b}})). \tag{2.3}\] We say that the _\(\alpha\)-nearby cycles of \(\mathscr{F}\) are well defined_ if * for \(i=|Q|-|P|\), there exists \(N\in\mathbb{Z}_{\geqslant 0}\) such that if \(\mathbf{a}\) is \(\alpha\)-special and satisfies \(\mathbf{a}(p)\geqslant N\) for any \(p\in\alpha^{-1}(*)\cap P\), then for any \(\mathbf{b}\geqslant\mathbf{a}\)\(\alpha\)-special the map (2.3) is an isomorphism; * for \(i\neq|Q|-|P|\), for any \(\mathbf{a}\)\(\alpha\)-special there exists \(\mathbf{b}\geqslant\mathbf{a}\)\(\alpha\)-special such that the map (2.3) vanishes. If these conditions are satisfied, we set \[\Upsilon^{\alpha}_{f}(\mathscr{F})=\lim_{\begin{subarray}{c}\mathbf{a}:P\to \mathbb{Z}_{\geqslant 1}\\ \alpha\text{-special}\end{subarray}}{}^{p}\!\mathscr{H}^{|Q|-|P|}(\mathbf{i}^{* }_{\alpha}\mathbf{j}_{*}(\mathscr{F}\otimes f^{*}_{\eta}\mathscr{L}_{\mathbf{ a}})).\] If \(p\in\alpha^{-1}(*)\), we will denote by \(\mathsf{m}^{p}_{\mathscr{F}}\) the automorphism of \(\Upsilon^{\alpha}_{f}(\mathscr{F})\) induced by \((T^{p}_{\mathbf{a}})^{-1}\); this automorphism will be called the _monodromy automorphism_ associated with \(p\). _Remark 2.3_.: It should be clear that, although we omit the adjective "unipotent" from our terminology for simplicity, what we consider in Definition 2.2 is an extension of the construction of the _unipotent part_ of the nearby cycles functor. We will sometimes write \(\Upsilon^{\alpha}_{X}(\mathscr{F})\) instead of \(\Upsilon^{\alpha}_{f}(\mathscr{F})\). This construction is obviously functorial in the sense that if \(\mathscr{F},\mathscr{G}\in\mathsf{Perv}(X_{\eta},\Bbbk)\) are such that the \(\alpha\)-nearby cycles of \(\mathscr{F}\) and \(\mathscr{G}\) are well defined and if \(u:\mathscr{F}\to\mathscr{G}\) is a morphism, then we have a natural morphism \(\Upsilon^{\alpha}_{f}(u):\Upsilon^{\alpha}_{f}(\mathscr{F})\to\Upsilon^{\alpha }_{f}(\mathscr{G})\) which intertwines the automorphisms \(\mathsf{m}^{p}_{\mathscr{F}}\) and \(\mathsf{m}^{p}_{\mathscr{G}}\) for any \(p\). It is also easily seen to be exact in the sense that if \(\mathscr{F}_{1}\hookrightarrow\mathscr{F}_{2}\twoheadrightarrow\mathscr{F}_{3}\) is a short exact sequence in \(\mathsf{Perv}(X_{\eta},\Bbbk)\) such that the \(\alpha\)-nearby cycles of \(\mathscr{F}_{1}\), \(\mathscr{F}_{2}\) and \(\mathscr{F}_{3}\) are well defined, then the induced morphisms \(\Upsilon^{\alpha}_{f}(\mathscr{F}_{2})\to\Upsilon^{\alpha}_{f}(\mathscr{F}_{2}) \to\Upsilon^{\alpha}_{f}(\mathscr{F}_{3})\) form a short exact sequence in \(\mathsf{Perv}(X^{\alpha}_{\eta},\Bbbk)\). _Remark 2.4_.: Recall the notation of SS2.1, and set \(X_{\eta,\alpha}=X\times_{\mathbb{A}^{P}}\mathbb{A}^{P}_{\eta,\alpha}\). Then the immersion \(\mathbf{j}_{X}\) factors as a composition \[X_{\eta}\xrightarrow{\mathbf{j}_{X,\alpha,1}}X_{\eta,\alpha}\xrightarrow{ \mathbf{j}_{X,\alpha,2}}X,\] and \(\mathbf{i}_{X,\alpha}\) factors through a morphism \(\mathbf{h}_{X,\alpha}:X_{\eta}^{\alpha}\to X_{\eta,\alpha}\). 
Hence for any function \(\mathbf{a}\) we have an identification \[\mathbf{i}_{\alpha}^{*}\mathbf{j}_{*}(\mathscr{F}\otimes f_{\eta}^{*}\mathscr{L }_{\mathbf{a}})=\mathbf{h}_{X,\alpha}^{*}(\mathbf{j}_{X,\alpha,1})_{*}( \mathscr{F}\otimes f_{\eta}^{*}\mathscr{L}_{\mathbf{a}}). \tag{2.4}\] _Remark 2.5_.: Consider the decomposition \(\alpha=\alpha_{2}\circ\alpha_{1}\) from Remark 2.1. Then a map \(\mathbf{a}:P\to\mathbb{Z}_{\geqslant 1}\) is \(\alpha\)-special iff it is \(\alpha_{1}\)-special, and we have \(\mathbf{i}_{X,\alpha}^{\prime}=\mathbf{i}_{X,\alpha_{1}}^{\prime}\circ \mathbf{i}_{X^{\alpha_{1}},\alpha_{2}}^{\prime}\), so for any \(\alpha\)-special \(\mathbf{a}\) we have \[\mathbf{i}_{X,\alpha}^{*}(\mathbf{j}_{X})_{*}(\mathscr{F}\otimes f_{\eta}^{*} \mathscr{L}_{\mathbf{a}})\simeq(\mathbf{i}_{X^{\alpha_{1}},\alpha_{2}})^{*}( \mathbf{i}_{X,\alpha_{1}}^{\prime})^{*}(\mathbf{j}_{X})_{*}(\mathscr{F} \otimes f_{\eta}^{*}\mathscr{L}_{\mathbf{a}}).\] Since \((\alpha_{2})^{-1}(*)=\{*\}\) we have a morphism \(\mathbf{i}_{X^{\alpha_{1}},\alpha_{2}}^{\prime\prime}\), which is easily seen to be smooth and surjective, and an identification \[\mathbf{i}_{X,\alpha}^{*}(\mathbf{j}_{X})_{*}(\mathscr{F}\otimes f_{\eta}^{*} \mathscr{L}_{\mathbf{a}})\cong(\mathbf{i}_{X^{\alpha_{1}},\alpha_{2}}^{\prime \prime})^{*}(\mathbf{i}_{X,\alpha_{1}})^{*}(\mathbf{j}_{X})_{*}(\mathscr{F} \otimes f_{\eta}^{*}\mathscr{L}_{\mathbf{a}}).\] Since \(\mathbf{i}_{X^{\alpha_{1}},\alpha_{2}}^{\prime\prime}\) is smooth of relative dimension \(|Q|-|R|\) the functor \((\mathbf{i}_{X^{\alpha_{1}},\alpha_{2}}^{\prime\prime})^{*}[|Q|-|R|]\) is exact with respect to the perverse t-structure (see e.g. [1, Proposition 3.6.1]), so for any \(i\in\mathbb{Z}\) we have \[{}^{p}\!\mathscr{H}^{i}(\mathbf{i}_{X,\alpha}^{*}(\mathbf{j}_{X})_{*}( \mathscr{F}\otimes f_{\eta}^{*}\mathscr{L}_{\mathbf{a}}))\cong\\ (\mathbf{i}_{X^{\alpha_{1}},\alpha_{2}}^{\prime\prime})^{*}[|Q|-|R |]\left({}^{p}\!\mathscr{H}^{i-|Q|+|R|}(\mathbf{i}_{X,\alpha_{1}}^{*}(\mathbf{ j}_{X})_{*}(\mathscr{F}\otimes f_{\eta}^{*}\mathscr{L}_{\mathbf{a}}))\right). \tag{2.5}\] Since \(\mathbf{i}_{X^{\alpha_{1}},\alpha_{2}}^{\prime\prime}\) is also surjective, \((\mathbf{i}_{X^{\alpha_{1}},\alpha_{2}}^{\prime\prime})^{*}[|Q|-|R|]\) is faithful on perverse sheaves (see [1, Theorem 3.6.6]) and detects isomorphisms (because invertibility of a morphism can be checked on geometric stalks), we deduce that the \(\alpha\)-nearby cycles of \(\mathscr{F}\) are well defined iff so are the \(\alpha_{1}\)-nearby cycles of \(\mathscr{F}\), and that in this case we have \[\Upsilon_{f}^{\alpha}(\mathscr{F})=(\mathbf{i}_{X^{\alpha_{1}},\alpha_{2}}^{ \prime\prime})^{*}\Upsilon_{f}^{\alpha_{1}}(\mathscr{F})[|Q|-|R|].\] ### First properties The following lemma makes more precise the perverse degrees one has to consider in Definition 2.2. **Lemma 2.6**.: _Let \(\mathscr{F}\in\mathsf{Perv}(X_{\eta},\Bbbk)\), and let \(\alpha:P_{*}\to Q_{*}\) be a pointed map. For any map \(\mathbf{a}:P\to\mathbb{Z}_{\geqslant 1}\), we have \({}^{p}\!\mathscr{H}^{i}(\mathbf{i}_{\alpha}^{*}\mathbf{j}_{*}(\mathscr{F} \otimes f_{\eta}^{*}\mathscr{L}_{\mathbf{a}}))=0\) unless \(|Q|-|P|\leqslant i\leqslant|Q|-|\mathrm{im}(\alpha)\cap Q|\)._ Proof.: Remark 2.5 reduces the proof to the case \(\alpha\) is surjective, which we assume from now on. Set \(r:=|P|-|Q|\). 
Then \(\mathbf{i}_{\alpha}^{\prime}\) is a closed immersion; more specifically, it can be written as a composition \(i_{r}\circ\cdots\circ i_{1}\) where each \(i_{j}\) is a closed immersion whose complementary open immersion is affine. (In fact it suffices to remark this for \(\bar{\alpha}\), where the decomposition is obtained by writing this map as a composition of embeddings of codimension-1 linear subspaces.) After this remark the proof is similar to that of [2, Lemma 9.4.1]. We deduce the following property. **Lemma 2.7**.: _Let \(\mathscr{F}\in\mathsf{Perv}(X_{\eta},\Bbbk)\), and let \(\alpha:P_{*}\to Q_{*}\) be a pointed map. For any two functions \(\mathbf{a},\mathbf{b}:P\to\mathbb{Z}_{\geqslant 1}\) with \(\mathbf{a}\leqslant\mathbf{b}\), the natural map_ \[{}^{p}\!\mathscr{H}^{|Q|-|P|}(\mathbf{i}_{\alpha}^{*}\mathbf{j}_{*}(\mathscr{F} \otimes f_{\eta}^{*}\mathscr{L}_{\mathbf{a}}))\to{}^{p}\!\mathscr{H}^{|Q|-|P|}( \mathbf{i}_{\alpha}^{*}\mathbf{j}_{*}(\mathscr{F}\otimes f_{\eta}^{*} \mathscr{L}_{\mathbf{b}}))\] _is injective._ Proof.: By induction, we can reduce to the case where \(\mathbf{a}(p)=\mathbf{b}(p)\) for all but one element of \(P\), say \(p_{0}\). In this case, define \(\mathbf{c}\) as in (2.2). That short exact sequence gives rise to a distinguished triangle \[\mathbf{i}_{\alpha}^{*}\mathbf{j}_{*}(\mathscr{F}\otimes f_{\eta}^{*}\mathscr{L} _{\mathbf{a}})\to\mathbf{i}_{\alpha}^{*}\mathbf{j}_{*}(\mathscr{F}\otimes f_{ \eta}^{*}\mathscr{L}_{\mathbf{b}})\to\mathbf{i}_{\alpha}^{*}\mathbf{j}_{*}( \mathscr{F}\otimes f_{\eta}^{*}\mathscr{L}_{\mathbf{c}})\xrightarrow{[1]}.\] Lemma 2.6 applies to all three terms, and then the present lemma follows by examining the long exact sequence in perverse cohomology. ### Reformulation In the following lemma we show that Definition 2.2 can be formulated in a slightly different way. **Lemma 2.8**.: _Let \(\mathscr{F}\in\mathsf{Perv}(X_{\eta},\Bbbk)\), and let \(\alpha:P_{*}\to Q_{*}\) be a pointed map. The \(\alpha\)-nearby cycles of \(\mathscr{F}\) are well defined if and only if the following conditions hold:_ * _There exists_ \(N\in\mathbb{Z}_{\geqslant 0}\) _such that if_ \(\mathbf{a}\) _is_ \(\alpha\)_-special and satisfies_ \(\mathbf{a}(p)\geqslant N\) _for any_ \(p\in\alpha^{-1}(*)\cap P\)_, then for any_ \(\mathbf{b}\geqslant\mathbf{a}\)__\(\alpha\)_-special the natural map_ \[{}^{p}\mathscr{H}^{|Q|-|P|}(\mathbf{i}_{\alpha}^{*}\mathbf{j}_{*}(\mathscr{F} \otimes f_{\eta}^{*}\mathscr{L}_{\mathbf{a}}))\to{}^{p}\mathscr{H}^{|Q|-|P|}( \mathbf{i}_{\alpha}^{*}\mathbf{j}_{*}(\mathscr{F}\otimes f_{\eta}^{*} \mathscr{L}_{\mathbf{b}}))\] _is an isomorphism._ * _For any_ \(\mathbf{a}\)__\(\alpha\)_-special there exists_ \(\mathbf{b}\geqslant\mathbf{a}\)__\(\alpha\)_-special such that the natural map_ \[{}^{p}\tau^{>|Q|-|P|}(\mathbf{i}_{\alpha}^{*}\mathbf{j}_{*}(\mathscr{F}\otimes f _{\eta}^{*}\mathscr{L}_{\mathbf{a}}))\to{}^{p}\tau^{>|Q|-|P|}(\mathbf{i}_{ \alpha}^{*}\mathbf{j}_{*}(\mathscr{F}\otimes f_{\eta}^{*}\mathscr{L}_{\mathbf{ b}}))\] _is zero._ Proof.: By Lemma 2.6, the map (2.3) automatically vanishes for \(i<|Q|-|P|\). In view of this, it is clear that the conditions in the present lemma imply those in Definition 2.2. Conversely, assume that the conditions in Definition 2.2 hold. 
Let \(\mathbf{a}:P\to\mathbb{Z}_{\geqslant 1}\) be an \(\alpha\)-special function, and define a sequence of functions \(\mathbf{a}_{1},\mathbf{a}_{2},\ldots\) inductively as follows: set \(\mathbf{a}_{1}=\mathbf{a}\), and if \(\mathbf{a}_{1},\ldots,\mathbf{a}_{n-1}\) are already defined, choose \(\mathbf{a}_{n}\geqslant\mathbf{a}_{n-1}\) such that \[{}^{p}\mathscr{H}^{i}(\mathbf{i}_{\alpha}^{*}\mathbf{j}_{*}(\mathscr{F} \otimes f_{\eta}^{*}\mathscr{L}_{\mathbf{a}_{n-1}}))\to{}^{p}\mathscr{H}^{i}( \mathbf{i}_{\alpha}^{*}\mathbf{j}_{*}(\mathscr{F}\otimes f_{\eta}^{*} \mathscr{L}_{\mathbf{a}_{n}}))\] vanishes for \(i>|Q|-|P|\). (By Lemma 2.6 again, there are only finitely many degrees \(i>|Q|-|P|\) in which the objects above are nonzero, so finding such a \(\mathbf{a}_{n}\) requires only finitely many invocations of Definition 2.2.) Set \[M_{j}={}^{p}\tau^{>|Q|-|P|}(\mathbf{i}_{\alpha}^{*}\mathbf{j}_{*}(\mathscr{F} \otimes f_{\eta}^{*}\mathscr{L}_{\mathbf{a}_{j}})),\qquad j=1,2,\ldots.\] By Lemma 2.9 below, there is an integer \(N\geqslant 1\) such that \(M_{1}\to M_{N}\) is the zero map. The second condition of the lemma is then satisfied by \(\mathbf{b}=\mathbf{a}_{N}\). **Lemma 2.9**.: _Let \(\mathscr{T}\) be a triangulated category equipped with a t-structure, and suppose we have a sequence of objects and maps_ \[M_{1}\xrightarrow{\phi_{1}}M_{2}\xrightarrow{\phi_{2}}M_{3}\to\cdots\] _such that the following conditions hold:_ 1. _there exist integers_ \(a\leqslant b\) _such that for all_ \(j\)_, the t-cohomology_ \({}^{t}\mathscr{H}^{i}(M_{j})\) _vanishes unless_ \(a\leqslant i\leqslant b\)_;_ 2. _for any_ \(j\geqslant 1\) _and_ \(i\in\mathbb{Z}\)_, the map_ \({}^{t}\mathscr{H}^{i}(\phi_{j})\) _vanishes._ _Then there is an integer \(N\) such that \(\phi_{N-1}\phi_{N-2}\cdots\phi_{1}:M_{1}\to M_{N}\) vanishes._ Proof.: The proof proceeds by induction on \(b-a\), using truncation triangles and a long exact sequence of Hom-groups. Details are left to the reader. ### Examples To illustrate Definition 2.2 we next consider some special cases. _Example 2.10_.: Assume that \(\alpha^{-1}(*)=\{*\}\). Then there exists only one \(\alpha\)-special map, namely the constant map with value \(1\), and the corresponding local system is constant (of rank \(1\)). In this case, we interpret the conditions above as requiring that \({}^{p}\!\mathscr{H}(\mathbf{i}^{*}_{\alpha}\mathbf{j}_{*}(\mathscr{F}))=0\) if \(i\neq|Q|-|P|\). Note that we have \(\mathbf{i}_{\alpha}=\mathbf{j}\circ\mathbf{i}^{\prime\prime}_{\alpha}\), hence \(\mathbf{i}^{*}_{\alpha}\mathbf{j}_{*}(\mathscr{F})=(\mathbf{i}^{\prime\prime }_{\alpha})^{*}\mathscr{F}\). It follows that the \(\alpha\)-nearby cycles of \(\mathscr{F}\) are well defined if and only if \((\mathbf{i}^{\prime\prime}_{\alpha})^{*}\mathscr{F}[|Q|-|P|]\) is perverse, and that if this is the case we have \[\Upsilon^{\alpha}_{f}(\mathscr{F})\cong(\mathbf{i}^{\prime\prime}_{\alpha})^ {*}\mathscr{F}[|Q|-|P|].\] _Example 2.11_.: Assume now that \(Q=\varnothing\). In this case, there exists a unique map \(\alpha:P_{*}\to Q_{*}=\{*\}\) (which will therefore be omitted from the notation), and any map \(\mathbf{a}:P\to\mathbb{Z}_{\geq 1}\) is special. If \(n=\#P\), we will speak of _\(n\)-dimensional nearby cycles_ instead of \(\alpha\)-nearby cycles in this case. More specifically: 1. 
In case \(n=1\), the constructions above amount to those of Beilinson [4] in his description of the unipotent nearby cycles functor and its monodromy automorphism, see [2, SS9.2] for a description as above; in particular, the \(1\)-dimensional nearby cycles of \(\mathscr{F}\) are well defined for any \(\mathscr{F}\), and compute the unipotent part of the nearby cycles \(\Psi_{f}(\mathscr{F})\). 2. In case \(n=2\), the considerations above specialize to the setting studied in [2, SS9.4] (following an idea of Gaitsgory in [8]); in particular, [2, Proposition 9.4.7] gives a sufficient condition under which the \(2\)-dimensional nearby cycles of \(\mathscr{F}\) are well defined and can be computed in terms of iterated unipotent nearby cycles. This case is also considered (for general \(n\)) in [11], where appropriate versions of Lemmas 2.13 and 2.14 are also obtained. ### Compatibilities **Lemma 2.12**.: _If \(\alpha:P_{*}\to Q_{*}\) is surjective, \(|Q|=|P|-1\), and \(|\alpha^{-1}(*)|=2\), then the \(\alpha\)-nearby cycles of \(\mathscr{F}\) are well defined._ Proof.: Our assumptions imply that there is exactly one element \(p\in P\) with \(\alpha(p)=*\). It is clear that the datum of an \(\alpha\)-special map \(\mathbf{a}:P\to\mathbb{Z}_{\geq 1}\) is equivalent to the datum of a nonnegative integer (corresponding to \(\mathbf{a}(p)\)). If \(\pi_{p}:\mathbb{A}^{P}\to\mathbb{A}^{1}\) is the projection onto the \(p\)th coordinate, then the construction of the \(\alpha\)-nearby cycles of \(\mathscr{F}\) with respect to \(f\) amounts to the construction of the \(1\)-dimensional nearby cycles of \(\mathscr{F}\) with respect to \(\pi_{p}\circ f\), which are well defined by Example 2.11. For the next statements we fix a pointed map \(\alpha:P_{*}\to Q_{*}\). Given a morphism \(g:Y\to X\), we will denote by \(g_{\eta}:Y_{\eta}\to X_{\eta}\) and \(g_{\eta}^{\alpha}:Y_{\eta}^{\alpha}\to X_{\eta}^{\alpha}\) the morphisms obtained by base change. **Lemma 2.13**.: _Let \(g:Y\to X\) be a smooth morphism of relative dimension \(d\), and let \(\mathscr{F}\in\mathsf{Perv}(X_{\eta},\Bbbk)\)._ 1. _If the_ \(\alpha\)_-nearby cycles of_ \(\mathscr{F}\) _are well defined, then so are the_ \(\alpha\)_-nearby cycles of_ \(g_{\eta}^{*}\mathscr{F}[d]\)_._ 2. _If_ \(g\) _is surjective, and if the_ \(\alpha\)_-nearby cycles of_ \(g_{\eta}^{*}\mathscr{F}[d]\) _are well defined, then so are the_ \(\alpha\)_-nearby cycles of_ \(\mathscr{F}\)_._ _In either case, there is a natural isomorphism_ \[\Upsilon^{\alpha}_{fg}(g_{\eta}^{*}\mathscr{F}[d])\cong(g_{\eta}^{\alpha})^{* }\Upsilon^{\alpha}_{f}(\mathscr{F})[d].\] Proof.: The first claim follows from the smooth base change theorem and t-exactness of shifted smooth pullbacks. The second claim follows from the fact that pullback under a smooth surjective morphism is faithful on perverse sheaves and detects isomorphisms, as in Remark 2.5. Details are left to the reader. **Lemma 2.14**.: _Let \(g:Y\to X\) be a proper morphism, and let \(\mathscr{F}\in\mathsf{Perv}(Y_{\eta},\Bbbk)\). Assume that the following conditions hold:_ 1. _the_ \(\alpha\)_-nearby cycles of_ \(\mathscr{F}\) _are well defined;_ 2. 
_both_ \((g_{\eta})_{*}\mathscr{F}\) _and_ \((g_{\eta}^{\alpha})_{*}\Upsilon^{\alpha}_{fg}(\mathscr{F})\) _are perverse._ _Then the \(\alpha\)-nearby cycles of \((g_{\eta})_{*}\mathscr{F}\) are well defined, and there is a natural isomorphism_ \[\Upsilon^{\alpha}_{f}((g_{\eta})_{*}\mathscr{F})\cong(g_{\eta}^{\alpha})_{*} \Upsilon^{\alpha}_{fg}(\mathscr{F}).\] Proof.: To simplify notation we set \(r=|Q|-|P|\) and \(h=fg\). For any \(\alpha\)-special functions \(\mathbf{a},\mathbf{b}:P\to\mathbb{Z}_{\geq 1}\) with \(\mathbf{a}\leq\mathbf{b}\), we can form the following commutative diagram, in which the columns are truncation distinguished triangles (in the top row, we use Lemma 2.6 to identify \({}^{p}\tau^{\leq r}(-)\) with \({}^{p}\mathscr{H}^{r}(-)[-r]\)): (2.6) Since the \(\alpha\)-nearby cycles of \(\mathscr{F}\) are well defined, by Lemma 2.8, we may choose \(\mathbf{a}\) such that the top horizontal map is an isomorphism for any \(\mathbf{b}\geq\mathbf{a}\) (so that these objects identify with \(\Upsilon^{\alpha}_{h}(\mathscr{F})\)), and then choose \(\mathbf{b}\) such that the bottom horizontal map is \(0\). By base change and the projection formula, we have \[g_{\eta*}^{\alpha}\sharp_{Y,\alpha}\sharp_{Y,\alpha}\sharp_{Y*}(\mathscr{F} \otimes h_{\eta}^{*}\mathscr{L}_{\mathbf{a}})\cong\sharp_{X,\alpha}^{\sharp} \sharp_{X*}g_{\eta*}(\mathscr{F}\otimes g_{\eta}^{*}f_{\eta}^{*}\mathscr{L}_{ \mathbf{a}})\cong\sharp_{X,\alpha}^{\sharp}\sharp_{X*}((g_{\eta*}\mathscr{F}) \otimes f_{\eta}^{*}\mathscr{L}_{\mathbf{a}}).\] Thus, applying \(g_{\eta*}^{\alpha}\) to (2.6), we obtain a diagram (2.7) whose columns are distinguished triangles. The objects in the top row are identified with \(g_{\eta*}^{\alpha}\Upsilon^{\alpha}_{h}(\mathscr{F})[-r]\); in particular, by assumption, they are concentrated in perverse degree \(r\). Since \(g_{\eta*}\mathscr{F}\) is assumed to be perverse, Lemma 2.6 tells us that the objects in the middle row live in perverse degrees \(\geq r\). It follows that \[{}^{p}\mathscr{H}^{i}(g_{\eta*}^{\alpha}({}^{p}\tau^{>r}\sharp_{Y,\alpha}^{*} \sharp_{Y*}(\mathscr{F}\otimes h_{\eta}^{*}\mathscr{L}_{\mathbf{a}})))=0 \qquad\text{for }i\leq r-2,\] and likewise for \(\mathscr{L}_{\mathbf{b}}\). Taking perverse cohomology, we obtain the following commutative diagram with exact columns: \[\begin{CD}{}^{p}\mathscr{H}^{r-1}(g^{\alpha}_{\eta*}({}^{p}\tau^{>r}\mathbf{i}^{ \ast}_{Y,\alpha}\mathbf{j}_{Y*}(\mathscr{F}\otimes h^{\ast}_{\eta}\mathscr{L}_{ \mathbf{a}})))\xrightarrow{0}{}^{p}\mathscr{H}^{r-1}(g^{\alpha}_{\eta*}({}^{p }\tau^{>r}\mathbf{i}^{\ast}_{Y,\alpha}\mathbf{j}_{Y*}(\mathscr{F}\otimes h^{ \ast}_{\eta}\mathscr{L}_{\mathbf{b}})))\\ @V{}V{}V@V{}V{}V\\ g^{\alpha}_{\eta*}{}{}^{p}\mathscr{H}^{r}(\mathbf{i}^{\ast}_{Y,\alpha}\mathbf{j}_ {Y*}(\mathscr{F}\otimes h^{\ast}_{\eta}\mathscr{L}_{\mathbf{a}}))\xrightarrow{ 0}{}^{p}\mathscr{H}^{r}(\mathbf{i}^{\ast}_{Y,\alpha}\mathbf{j}_{Y*}(\mathscr{F }\otimes h^{\ast}_{\eta}\mathscr{L}_{\mathbf{b}}))\\ @V{}V{}V@V{}V{}V\\ {}^{p}\mathscr{H}^{r}(\mathbf{i}^{\ast}_{X,\alpha}\mathbf{j}_{X*}(g_{\eta*} \mathscr{F}\otimes f^{\ast}_{\eta}\mathscr{L}_{\mathbf{a}}))\xrightarrow{0}{}^{p }\mathscr{H}^{r}(g^{\alpha}_{\eta*}({}^{p}\tau^{>r}\mathbf{i}^{\ast}_{Y,\alpha }\mathbf{j}_{Y*}(\mathscr{F}\otimes h^{\ast}_{\eta}\mathscr{L}_{\mathbf{b}}))). \end{CD}\] Here, the third horizontal arrow is injective by Lemma 2.7. 
An easy diagram chase shows that the topmost term in the left-hand column vanishes, and one of the four-lemmas implies that the \(0\) morphism on the fourth line is injective, so that the bottommost term in this column also vanishes. We deduce that we actually have \[{}^{p}\mathscr{H}^{i}(g^{\alpha}_{\eta*}({}^{p}\tau^{>r}\mathbf{i}^{\ast}_{Y, \alpha}\mathbf{j}_{Y*}(\mathscr{F}\otimes h^{\ast}_{\eta}\mathscr{L}_{\mathbf{a }})))=0\qquad\text{for }i\leqslant r.\] The same reasoning also applies to \(\mathbf{b}\), which implies that the columns of (2.7) can be identified with truncation distinguished triangles: that whole diagram can be rewritten as (2.8) Our argument shows that the top (resp. bottom) row of (2.8) is an isomorphism (resp. zero) whenever the corresponding row of (2.6) has the same property. By Lemma 2.8, we conclude that the \(\alpha\)-nearby cycles of \(g_{\eta*}\mathscr{F}\) are well defined. The identification of (2.7) with (2.8) shows that \(\Upsilon^{\alpha}_{f}(g_{\eta*}\mathscr{F})\cong g^{\alpha}_{\eta*}\Upsilon^{ \alpha}_{fg}(\mathscr{F})\). ### Compositions of higher nearby cycles The following is obtained by a repeated application of [2, Lemma 9.4.9]. **Lemma 2.15**.: _Let \(a_{1},\ldots,a_{k}\geqslant 1\) be integers. There is a map_ \[\mathscr{L}_{a_{1}}\otimes\cdots\otimes\mathscr{L}_{a_{k}}\to\mathscr{L}_{a_{ 1}+\cdots+a_{k}-k+1}\] _of local systems on \(\mathbb{A}^{1}\smallsetminus\{0\}\) such that the following diagram commutes, where the vertical arrows are the obvious morphisms:_ \[\begin{CD}\mathscr{L}_{a_{1}}\otimes\cdots\otimes\mathscr{L}_{a_{k}}@>{}>{} \mathscr{L}_{a_{1}+\cdots+a_{k}-k+1}\\ @V{}V{}V@V{}V{}V\\ \Bbbk\end{CD}\] **Proposition 2.16**.: _Let \(\alpha:P_{*}\to Q_{*}\) and \(\beta:Q_{*}\to R_{*}\) be pointed maps, and let \(\mathscr{F}\in\operatorname{\mathsf{Perv}}(X_{\eta},\Bbbk)\). Assume that:_ * _the_ \(\alpha\)_-nearby cycles and the_ \(\beta\alpha\)_-nearby cycles of_ \(\mathscr{F}\) _are well defined;_ * _the_ \(\beta\)_-nearby cycles of_ \(\Upsilon^{\alpha}_{f}(\mathscr{F})\) _are well defined._ _Then there is a natural map \(\Upsilon^{\beta\alpha}_{f}(\mathscr{F})\to\Upsilon^{\beta}_{f^{\alpha}}( \Upsilon^{\alpha}_{f}(\mathscr{F}))\)._ _Remark 2.17_.: Suppose that \(|P|=2\), \(|Q|=1\), \(R=\varnothing\), and \(\alpha\) is nonconstant. Then Proposition 2.16 is equivalent to [2, Lemma 9.4.3 or Lemma 9.4.11], depending on the size of \(\alpha^{-1}(*)\). Proof.: Let \(\mathbf{c}:P\to\mathbb{Z}_{\geq 1}\) be a \(\beta\alpha\)-special function. Recall that this means that \(\mathbf{c}(p)\neq 1\) implies \(\beta(\alpha(p))=*\). Define two new functions \(\mathbf{a},\mathbf{b}:P\to\mathbb{Z}_{\geq 1}\) by \[\mathbf{a}(p)=\begin{cases}\mathbf{c}(p)&\text{if }\alpha(p)=*,\\ 1&\text{otherwise,}\end{cases}\qquad\mathbf{b}(p)=\begin{cases}\mathbf{c}(p)& \text{if }\beta(\alpha(p))=*\text{ but }\alpha(p)\neq*,\\ 1&\text{otherwise.}\end{cases}\] We clearly have that \(\mathbf{a}\) is \(\alpha\)-special, and that \(\mathscr{L}_{\mathbf{c}}\cong\mathscr{L}_{\mathbf{a}}\otimes\mathscr{L}_{ \mathbf{b}}\). Next, define \(\mathbf{b}^{\prime}:Q\to\mathbb{Z}_{\geq 1}\) by \[\mathbf{b}^{\prime}(q)=-|\alpha^{-1}(q)|+1+\sum_{p\in\alpha^{-1}(q)}\mathbf{b} (p).\] We claim that \(\mathbf{b}^{\prime}\) is \(\beta\)-special. Indeed, if \(\beta(q)\neq*\), then the summation involves only elements \(p\) satisfying \(\beta(\alpha(p))\neq*\), and the claim follows from the fact that \(\mathbf{c}\) is \(\beta\alpha\)-special. 
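For instance, take \(P=\{1,2,3\}\), \(Q=\{1,2\}\), \(R=\{1\}\), with \(\alpha(1)=1\), \(\alpha(2)=2\), \(\alpha(3)=*\) and \(\beta(1)=1\), \(\beta(2)=*\). A \(\beta\alpha\)-special function must satisfy \(\mathbf{c}(1)=1\), say \(\mathbf{c}=(1,m,n)\); then \(\mathbf{a}=(1,1,n)\) and \(\mathbf{b}=(1,m,1)\), so that \(\mathscr{L}_{\mathbf{c}}\cong\mathscr{L}_{\mathbf{a}}\otimes\mathscr{L}_{\mathbf{b}}\), and \(\mathbf{b}^{\prime}=(1,m)\), which is indeed \(\beta\)-special.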
Note also that if \(\beta(q)=*\) we have \[\mathbf{b}^{\prime}(q)=-|\alpha^{-1}(q)|+1+\sum_{p\in\alpha^{-1}(q)}\mathbf{c}(p). \tag{2.9}\] Recall the open subscheme \(\mathbb{A}_{\eta,\alpha}^{P}\) from SS2.1 and the morphism \(\bar{\alpha}_{\eta}:\mathbb{A}_{\eta}^{Q}\to\mathbb{A}_{\eta,\alpha}^{P}\). Note that the local system \(\mathscr{L}_{\mathbf{b}}\) on \(\mathbb{A}_{\eta}^{P}\) extends (uniquely) to a local system \(\mathscr{L}_{\mathbf{b},\alpha}\) on \(\mathbb{A}_{\eta,\alpha}^{P}\). We claim that there exists a natural morphism \[\bar{\alpha}_{\eta}^{*}\mathscr{L}_{\mathbf{b},\alpha}\to\mathscr{L}_{\mathbf{b}^{\prime}} \tag{2.10}\] of local systems on \(\mathbb{A}_{\eta}^{Q}\). Indeed, for \(q\in Q\), the \(q\)th copy of \(\mathbb{A}^{1}\) in \(\mathbb{A}^{Q}\) is mapped under \(\bar{\alpha}\) to the diagonal copy of \(\mathbb{A}^{1}\) inside \(\mathbb{A}^{\alpha^{-1}(q)}\). It follows that \[\bar{\alpha}_{\eta}^{*}\mathscr{L}_{\mathbf{b},\alpha}\cong\underset{q\in Q}{\boxtimes}\Big(\bigotimes_{p\in\alpha^{-1}(q)}\mathscr{L}_{\mathbf{b}(p)}\Big).\] Lemma 2.15 gives us a map \[\bigotimes_{p\in\alpha^{-1}(q)}\mathscr{L}_{\mathbf{b}(p)}\to\mathscr{L}_{\mathbf{b}^{\prime}(q)}\] for each \(q\); taking the external tensor product over \(q\), we obtain (2.10). Note now that the restriction of \(\bar{\alpha}\) to \(\mathbb{A}_{\eta,\beta}^{Q}\) factors through \(\mathbb{A}_{\eta,\beta\alpha}^{P}\), which allows us to define the morphism \(\mathbf{i}_{X,\alpha,\beta}:X_{\eta,\beta}^{\alpha}\to X_{\eta,\beta\alpha}\) by base change. We consider the following commutative diagram, where the unlabelled arrow is the obvious open immersion: \[\begin{CD} X^{\beta\alpha}_{\eta} @>{\mathbf{h}_{X^{\alpha},\beta}}>> X^{\alpha}_{\eta,\beta} @>{\mathbf{i}_{X,\alpha,\beta}}>> X_{\eta,\beta\alpha}\\ @. @A{\mathbf{j}_{X^{\alpha},\beta,1}}AA @AAA\\ @. X^{\alpha}_{\eta} @>{\mathbf{h}_{X,\alpha}}>> X_{\eta,\alpha}\\ @. @. @A{\mathbf{j}_{X,\alpha,1}}AA\\ @. @. X_{\eta} \end{CD}\]
We have a sequence of natural maps or isomorphisms as follows: \[\mathbf{i}^{*}_{X,\beta\alpha}(\mathbf{j}_{X})_{*}(\mathscr{F}\otimes f^{*}_{\eta}\mathscr{L}_{\mathbf{c}})\cong\mathbf{h}^{*}_{X^{\alpha},\beta}\mathbf{i}^{*}_{X,\alpha,\beta}(\mathbf{j}_{X,\beta\alpha,1})_{*}(\mathscr{F}\otimes f^{*}_{\eta}\mathscr{L}_{\mathbf{c}})\\ \xrightarrow{\text{adjunction}}\mathbf{h}^{*}_{X^{\alpha},\beta}(\mathbf{j}_{X^{\alpha},\beta,1})_{*}(\mathbf{j}_{X^{\alpha},\beta,1})^{*}\mathbf{i}^{*}_{X,\alpha,\beta}(\mathbf{j}_{X,\beta\alpha,1})_{*}(\mathscr{F}\otimes f^{*}_{\eta}\mathscr{L}_{\mathbf{c}})\\ \cong\mathbf{h}^{*}_{X^{\alpha},\beta}(\mathbf{j}_{X^{\alpha},\beta,1})_{*}\mathbf{h}^{*}_{X,\alpha}(\mathbf{j}_{X,\alpha,1})_{*}(\mathscr{F}\otimes f^{*}_{\eta}\mathscr{L}_{\mathbf{a}}\otimes f^{*}_{\eta}\mathscr{L}_{\mathbf{b}}).\] Now, recall the local system \(\mathscr{L}_{\mathbf{b},\alpha}\), and denote by \(f_{\eta,\alpha}:X_{\eta,\alpha}\to\mathbb{A}^{P}_{\eta,\alpha}\) the morphism induced by \(f\). By adjunction there exists a canonical morphism \[(\mathbf{j}_{X,\alpha,1})_{*}(\mathscr{F}\otimes f^{*}_{\eta}\mathscr{L}_{\mathbf{a}})\otimes f^{*}_{\eta,\alpha}\mathscr{L}_{\mathbf{b},\alpha}\to(\mathbf{j}_{X,\alpha,1})_{*}(\mathscr{F}\otimes f^{*}_{\eta}\mathscr{L}_{\mathbf{a}}\otimes f^{*}_{\eta}\mathscr{L}_{\mathbf{b}}).\] This morphism becomes an isomorphism if \(\mathscr{L}_{\mathbf{b},\alpha}\) is replaced by the constant local system; since \(\mathscr{L}_{\mathbf{b},\alpha}\) is an extension of copies of this constant sheaf, we deduce that it is an isomorphism too.
We deduce identifications \[\mathbf{h}^{*}_{X^{\alpha},\beta}(\mathbf{j}_{X^{\alpha},\beta,1} )_{*}\mathbf{h}^{*}_{X,\alpha}(\mathbf{j}_{X,\alpha,1})_{*}(\mathscr{F} \otimes f^{*}_{\eta}\mathscr{L}_{\mathbf{a}}\otimes f^{*}_{\eta}\mathscr{L}_{ \mathbf{b}})\\ \cong\mathbf{h}^{*}_{X^{\alpha},\beta}(\mathbf{j}_{X^{\alpha}, \beta,1})_{*}\mathbf{h}^{*}_{X,\alpha}\left((\mathbf{j}_{X,\alpha,1})_{*}( \mathscr{F}\otimes f^{*}_{\eta}\mathscr{L}_{\mathbf{a}})\otimes f^{*}_{\eta, \alpha}\mathscr{L}_{\mathbf{b},\alpha}\right)\\ \cong\mathbf{h}^{*}_{X^{\alpha},\beta}(\mathbf{j}_{X^{\alpha}, \beta,1})_{*}\left((\mathbf{h}^{*}_{X,\alpha}(\mathbf{j}_{X,\alpha,1})_{*}( \mathscr{F}\otimes f^{*}_{\eta}\mathscr{L}_{\mathbf{a}})\right)\otimes\left( \mathbf{h}^{*}_{X^{\alpha},\beta}f^{*}_{\eta,\alpha}\mathscr{L}_{\mathbf{b}, \alpha}\right)\right)\\ \cong\mathbf{h}^{*}_{X^{\alpha},\beta}(\mathbf{j}_{X^{\alpha}, \beta,1})_{*}\left((\mathbf{h}^{*}_{X,\alpha}(\mathbf{j}_{X,\alpha,1})_{*}( \mathscr{F}\otimes f^{*}_{\eta}\mathscr{L}_{\mathbf{a}})\right)\otimes\left( (f^{\alpha}_{\eta})^{*}\bar{\alpha}^{*}_{\eta}\mathscr{L}_{\mathbf{b},\alpha }\right)\right).\] Using (2.10) we deduce a canonical morphism \[\mathbf{h}^{*}_{X^{\alpha},\beta}(\mathbf{j}_{X^{\alpha},\beta,1} )_{*}\mathbf{h}^{*}_{X,\alpha}(\mathbf{j}_{X,\alpha,1})_{*}(\mathscr{F} \otimes f^{*}_{\eta}\mathscr{L}_{\mathbf{a}}\otimes f^{*}_{\eta}\mathscr{L}_{ \mathbf{b}})\\ \to\mathbf{h}^{*}_{X^{\alpha},\beta}(\mathbf{j}_{X^{\alpha}, \beta,1})_{*}\left((\mathbf{h}^{*}_{X,\alpha}(\mathbf{j}_{X,\alpha,1})_{*}( \mathscr{F}\otimes f^{*}_{\eta}\mathscr{L}_{\mathbf{a}})\right)\otimes(f^{ \alpha}_{\eta})^{*}\mathscr{L}_{\mathbf{b}^{\prime}})\,.\] Using (2.4), Lemma 2.6, and the fact that tensoring with \((f^{\alpha}_{\eta})^{*}\mathscr{L}_{\mathbf{b}^{\prime}}\) is exact for the perverse t-structure, applying perverse cohomology in degree \(|R|-|P|\) to the composition of the maps above we deduce a canonical morphism \[{}^{p}\!\!\mathscr{H}^{|R|-|P|}(\mathbf{i}^{*}_{X,\beta\alpha}( \mathbf{j}_{X})_{*}(\mathscr{F}\otimes f^{*}_{\eta}\mathscr{L}_{\mathbf{c}})) \to\\ {}^{p}\!\!\mathscr{H}^{|Q|-|R|}(\mathbf{h}^{*}_{X^{\alpha},\beta}( \mathbf{j}_{X^{\alpha},\beta,1})_{*}({}^{p}\!\!\mathscr{H}^{|P|-|Q|}((\mathbf{h }^{*}_{X,\alpha}(\mathbf{j}_{X,\alpha,1})_{*}(\mathscr{F}\otimes f^{*}_{\eta} \mathscr{L}_{\mathbf{a}}))\otimes(f^{\alpha}_{\eta})^{*}\mathscr{L}_{\mathbf{b} ^{\prime}}))). \tag{2.11}\] When \(\mathbf{c}\) is large (among \(\beta\alpha\)-special maps) then \(\mathbf{a}\) is large (among \(\alpha\)-special maps) and \(\mathbf{b}^{\prime}\) is large (among \(\beta\)-special maps) in view of (2.9). Hence in this case (2.11) provides the morphism we were looking for. The following three statements give compatibility properties of the construction of Proposition 2.16. Each of them is easily checked on definitions. **Lemma 2.18**.: _Suppose we have three pointed maps \(\alpha:P_{*}\to Q_{*}\), \(\beta:Q_{*}\to R_{*}\), and \(\gamma:R_{*}\to S_{*}\). If all the objects in the diagram below are defined, then the diagram commutes:_ _where each arrow is given by an application of Proposition 2.16. _ **Lemma 2.19**.: _Let \(g:Y\to X\) be a smooth morphism of relative dimension \(d\), let \(\alpha:P_{*}\to Q_{*}\), \(\beta:Q_{*}\to R_{*}\) be pointed maps, let \(\mathscr{F}\in\operatorname{\mathsf{Perv}}(X_{\eta},\Bbbk)\). 
Assume that:_ * _the_ \(\alpha\)_-nearby cycles and the_ \(\beta\alpha\)_-nearby cycles of_ \(\mathscr{F}\) _are well defined;_ _ * _the_ \(\beta\)_-nearby cycles of_ \(\Upsilon_{f}^{\alpha}(\mathscr{F})\) _are well defined._ _Then the \(\alpha\)-nearby cycles and the \(\beta\alpha\)-nearby cycles of \(g_{\eta}^{*}\mathscr{F}[d]\), and the \(\beta\)-nearby cycles of \(\Upsilon_{fg}^{\alpha}(g_{\eta}^{*}\mathscr{F}[d])\), are all well defined, and the morphism_ \[\Upsilon_{fg}^{\beta\alpha}(g_{\eta}^{*}\mathscr{F}[d])\to\Upsilon_{(fg)^{ \alpha}}^{\beta}(\Upsilon_{fg}^{\alpha}(g_{\eta}^{*}\mathscr{F}[d]))\] _of Proposition 2.16 is, taking into account the identifications of Lemma 2.13, the image under \((g_{\eta}^{\alpha})^{*}[d]\) of the corresponding morphism \(\Upsilon_{f}^{\beta\alpha}(\mathscr{F})\to\Upsilon_{f^{\alpha}}^{\beta}( \Upsilon_{f}^{\alpha}(\mathscr{F}))\). _ **Lemma 2.20**.: _Let \(g:Y\to X\) be a proper morphism, let \(\alpha:P_{*}\to Q_{*}\), \(\beta:Q_{*}\to R_{*}\) be pointed maps, and let \(\mathscr{F}\in\operatorname{\mathsf{Perv}}(Y_{\eta},\Bbbk)\). Assume that:_ * _the_ \(\alpha\)_-nearby cycles and the_ \(\beta\alpha\)_-nearby cycles of_ \(\mathscr{F}\) _are well defined;_ * _the_ \(\beta\)_-nearby cycles of_ \(\Upsilon_{fg}^{\alpha}(\mathscr{F})\) _are well defined;_ * _the complexes_ \[(g_{\eta})_{*}\mathscr{F},\quad(g_{\eta}^{\alpha})_{*}\Upsilon_{fg}^{\alpha} (\mathscr{F}),\quad(g_{\eta}^{\beta\alpha})_{*}\Upsilon_{(fg)^{\alpha}}^{ \beta}(\Upsilon_{fg}^{\alpha}(\mathscr{F}))\quad\text{and}\quad(g_{\eta}^{ \beta\alpha})_{*}\Upsilon_{fg}^{\beta\alpha}(\mathscr{F})\] _are perverse._ _Then the \(\alpha\)-nearby cycles and the \(\beta\alpha\)-nearby cycles of \((g_{\eta})_{*}\mathscr{F}\), and the \(\beta\)-nearby cycles of \(\Upsilon_{f}^{\alpha}((g_{\eta})_{*}\mathscr{F})\), are all well defined, and the morphism_ \[\Upsilon_{f}^{\beta\alpha}((g_{\eta})_{*}\mathscr{F})\to\Upsilon_{f^{\alpha}} ^{\beta}(\Upsilon_{f}^{\alpha}((g_{\eta})_{*}\mathscr{F}))\] _of Proposition 2.16 is, taking into account the identifications of Lemma 2.14, the image under \((g_{\eta}^{\beta\alpha})_{*}\) of the morphism \(\Upsilon_{fg}^{\beta\alpha}(\mathscr{F})\to\Upsilon_{(fg)^{\alpha}}^{\beta}( \Upsilon_{fg}^{\alpha}(\mathscr{F}))\). _ ### Product-type situations Let \(P\) be a finite set, and suppose we have a collection of maps \((f_{p}:X_{p}\to\mathbb{A}^{1})_{p\in P}\). For each \(p\), let \[X_{p,\eta}=f_{p}^{-1}(\mathbb{A}^{1}\smallsetminus\{0\})\qquad\text{and}\qquad X _{p,0}=f_{p}^{-1}(\{0\}).\] Denote the inclusion maps by \(j_{p}:X_{p,\eta}\to X_{p}\) and \(i_{p}:X_{p,0}\to X_{p}\). Set \[X=\prod_{p\in P}X_{p}\qquad\text{and}\qquad f=\prod_{p\in P}f_{p}:X\to\mathbb{ A}^{P}.\] We obviously have \(X_{\eta}=\prod_{p\in P}X_{p,\eta}\). More generally, for any pointed map \(\alpha:P_{*}\to Q_{*}\), we can describe \(X_{\eta}^{\alpha}\) as follows: \[X_{\eta}^{\alpha}\cong\prod_{q\in Q}X_{q,\eta}^{\alpha}\times\prod_{p\in \alpha^{-1}(*)\cap P}X_{p,0}\quad\text{where}\quad X_{q,\eta}^{\alpha}=\prod_{ p\in\alpha^{-1}(q)}X_{p,\eta}. \tag{2.12}\] Here, the right-hand side is a fiber product over \(\mathbb{A}^{1}\). If \(\alpha^{-1}(q)=\varnothing\), the right-hand side should be understood to be \(\mathbb{A}^{1}\). The following lemma is immediate from the definitions. **Lemma 2.21**.: _Let \((f_{p}:X_{p}\to\mathbb{A}^{1})_{p\in P}\) be as above. 
Suppose we have a collection of objects \(\mathscr{F}_{p}\in D_{\mathrm{c}}^{\mathrm{b}}(X_{p,\eta},\Bbbk)\), and set_ \[\mathscr{F}=\big{\llbracket}\underline{\times}\big{\rvert}_{p\in P}\mathscr{F}_ {p}\in D_{\mathrm{c}}^{\mathrm{b}}(X,\Bbbk).\] _Then the object \(\mathbf{i}_{\alpha}^{*}\mathbf{j}_{*}\mathscr{F}\in D_{\mathrm{c}}^{\mathrm{b}} (X_{\eta}^{\alpha},\Bbbk)\) is given by_ \[\mathbf{i}_{\alpha}^{*}\mathbf{j}_{*}\mathscr{F}\cong\big{\llbracket}\underline{ \times}\big{\rvert}_{q\in Q}\left(\big{\lvert}\underline{\times}\big{\rvert} _{A_{1}^{\alpha}}\mathscr{F}_{p}\right)\boxed{\underline{\times}}\big{\rvert} _{p\in\alpha^{-1}(*)}i_{p}^{*}j_{p*}\mathscr{F}_{p}.\] Here, the notation "\(\boxtimes_{\mathbb{A}_{\eta}^{1}}\)" is a relative external tensor product: it is the pullback of the usual external tensor product \(\underset{p\in\alpha^{-1}(q)}{\bigtriangledown}_{p\in\alpha^{-1}(q)}\mathscr{F}_ {p}\) along the map \[\prod_{p\in\alpha^{-1}(q)}X_{p,\eta}\hookrightarrow\prod_{p\in\alpha^{-1}(q)}X_ {p,\eta}.\] (When \(\alpha^{-1}(q)=\varnothing\), this is the map \(\mathbb{A}^{1}\to\operatorname{Spec}(\mathbb{F})\), and \(\underset{p\in\alpha^{-1}(q)}{\bigtriangledown}_{p\in\alpha^{-1}(q)}\mathscr{F }_{p}\) should be understood to be the constant sheaf \(\underline{\Bbbk}\) on \(\operatorname{Spec}(\mathbb{F})\).) **Lemma 2.22**.: _Let \((f_{p}:X_{p}\to\mathbb{A}^{1})_{p\in P}\) and \(\alpha:P_{*}\to Q_{*}\) be as above. Suppose we have a collection of perverse sheaves \(\mathscr{F}_{p}\in\operatorname{\mathsf{Perv}}(X_{p,\eta},\Bbbk)\) that satisfy the following condition: for each \(q\in Q\), the object_ \[\left(\underset{p\in\alpha^{-1}(q)}{\bigtriangledown}_{\mathbb{A}_{\eta}^{1 }}\mathscr{F}_{p}\right)[1-|\alpha^{-1}(q)|]\in D^{\mathrm{b}}_{\mathrm{c}}(X _{q,\eta}^{\alpha},\Bbbk)\] _is perverse. Then the \(\alpha\)-nearly cycles of \(\mathscr{F}\) are well defined, and we have_ \[\Upsilon_{f}^{\alpha}(\mathscr{F})\cong\underset{q\in Q}{\bigtriangledown} \left(\underset{p\in\alpha^{-1}(q)}{\bigtriangledown}_{\mathbb{A}_{\eta}^{1 }}\mathscr{F}_{p}\right)[1-|\alpha^{-1}(q)|]\boxtimes\underset{p\in\alpha^{-1 }(*)}{\bigtriangledown}_{p\in\alpha^{-1}(*)}\Psi_{f_{p}}(\mathscr{F}_{p}).\] Proof.: Consider two \(\alpha\)-special functions \(\mathbf{a}\leq\mathbf{b}\). By Lemma 2.21, the map \[\mathbf{i}_{\alpha}^{*}\mathbf{j}_{*}(\mathscr{F}\otimes f_{\eta}^{*} \mathscr{L}_{\mathbf{a}})\to\mathbf{i}_{\alpha}^{*}\mathbf{j}_{*}(\mathscr{F} \otimes f_{\eta}^{*}\mathscr{L}_{\mathbf{b}})\] is the external tensor product of the following two kinds of maps: \[\text{for }q\in Q: \underset{p\in\alpha^{-1}(q)}{\bigtriangledown}_{\mathbb{A}_{ \eta}^{1}}(\mathscr{F}_{p}\otimes f_{p,\eta}^{*}\mathscr{L}_{\mathbf{a}(p)} \to\mathscr{F}_{p}\otimes f_{p,\eta}^{*}\mathscr{L}_{\mathbf{b}(p)}); \tag{2.14}\] \[\text{for }p\in\alpha^{-1}(*): i_{p}^{*}j_{p*}(\mathscr{F}_{p}\otimes f_{p,\eta}^{*}\mathscr{L}_{ \mathbf{a}(p)}\to\mathscr{F}_{p}\otimes f_{p,\eta}^{*}\mathscr{L}_{\mathbf{b}(p )}). \tag{2.13}\] In (2.13), because \(\mathbf{a}\) and \(\mathbf{b}\) are \(\alpha\)-special, we have \(\mathbf{a}(p)=\mathbf{b}(p)=1\) for each \(p\) that appears. That is, (2.13) is just the identity map of the object \(\underset{p\in\alpha^{-1}(q)}{\bigtriangledown}_{\mathbb{A}_{\eta}^{1}}\mathscr{ F}_{p}\), which is a shifted perverse sheaf by assumption. The perverse cohomology of (2.14) is precisely Beilinson's description of the unipotent nearby cycles of \(\mathscr{F}_{p}\). 
More precisely, for \(\mathbf{a}\) and \(\mathbf{b}\) large enough, the \(i\)-th perverse cohomology of the map in (2.14) is an isomorphism if \(i=-1\), and is \(0\) otherwise. We conclude that, for \(\mathbf{a}\) and \(\mathbf{b}\) large enough, the perverse cohomology \({}^{p}\!\mathscr{H}^{i}\) of the external tensor product of all the maps (2.13) and (2.14) is an isomorphism when \[i=\sum_{q\in Q}(1-|\alpha^{-1}(q)|)+\sum_{p\in\alpha^{-1}(*)}(-1)=|Q|-|P|,\] and zero otherwise. **Lemma 2.23**.: _Let \((f_{p}:X_{p}\to\mathbb{A}^{1})_{p\in P}\) be as above, and let \(\alpha:P_{*}\to Q_{*}\) and \(\beta:Q_{*}\to R_{*}\) be pointed maps. Suppose we have a collection of perverse sheaves \(\mathscr{F}_{p}\in\operatorname{\mathsf{Perv}}(X_{p,\eta},\Bbbk)\) satisfying the following two conditions:_ 1. _for each_ \(q\in Q\)_, the following object is perverse:_ \[\begin{pmatrix}\boxedown_{\mathbb{A}^{1}_{\eta}}&\mathscr{F}_{p}\\ p\in\alpha^{-1}(q)&\end{pmatrix}[1-|\alpha^{-1}(q)|]\in D^{\mathrm{b}}_{\mathrm{ c}}(X^{\alpha}_{q,\eta},\Bbbk);\] 2. _for each_ \(r\in R\)_, the following object is perverse:_ \[\begin{pmatrix}\boxedown_{\mathbb{A}^{1}_{\eta}}&\mathscr{F}_{p}\\ p\in\alpha^{-1}(\beta^{-1}(r))&\end{pmatrix}[1-|\alpha^{-1}(\beta^{-1}(r))|] \in D^{\mathrm{b}}_{\mathrm{c}}(X^{\beta\alpha}_{r,\eta},\Bbbk).\] _Then the \(\alpha\)-nearby cycles and the \(\beta\alpha\)-nearby cycles of \(\mathscr{F}\) are well defined, as well as the \(\beta\)-nearby cycles of \(\Upsilon^{\alpha}_{f}(\mathscr{F})\), and the map \(\Upsilon^{\beta\alpha}_{f}(\mathscr{F})\to\Upsilon^{\beta}_{f^{\alpha}}( \Upsilon^{\alpha}_{f}(\mathscr{F}))\) from Proposition 2.16 is an isomorphism._ Proof.: Our assumptions together with Lemma 2.22 imply that the \(\alpha\)-nearby cycles and the \(\beta\alpha\)-nearby cycles of \(\mathscr{F}\) are well defined. To study the \(\beta\)-nearby cycles of \(\Upsilon^{\alpha}_{f}(\mathscr{F})\), let us introduce the notation \(Z=\prod_{p\in\alpha^{-1}(\ast)\cap P}X_{p,0}\), so that \(X^{\alpha}_{\eta}=\prod_{q\in Q}X^{\alpha}_{q,\eta}\times Z\). We also let \[\mathscr{G}_{q}=\begin{pmatrix}\boxedown_{\mathbb{A}^{1}_{\eta}}&\mathscr{F }_{p}\\ p\in\alpha^{-1}(q)&\end{pmatrix}[1-|\alpha^{-1}(q)|]\qquad\text{and}\qquad \mathscr{G}_{Z}=\underset{p\in\alpha^{-1}(\ast)}{\bigotimes}\Psi_{f_{p}}( \mathscr{F}_{p}),\] so that if we set \(\mathscr{G}=\Upsilon^{\alpha}_{f}(\mathscr{F})\), then by Lemma 2.22 we have \[\mathscr{G}\cong\begin{pmatrix}\boxedown_{q\in Q}\mathscr{G}_{q}\\ \end{pmatrix}\boxtimes\mathscr{G}_{Z}.\] The diagram \[X^{\alpha}_{\eta}\xrightarrow{\mathbf{j}X^{\alpha}}X^{\alpha}\xleftarrow{ \mathbf{i}X^{\alpha,\beta}}X^{\beta\alpha}_{\eta}\] can be redrawn as \[\prod_{q\in Q}X^{\alpha}_{q,\eta}\times Z\xrightarrow{\mathbf{j}X^{\alpha}} \prod_{q\in Q}X^{\alpha}_{q}\times Z\xleftarrow{\mathbf{i}_{X^{\alpha,\beta}} }\prod_{r\in R}\left(\prod_{r\in\beta^{-1}(q)}X^{\alpha}_{q,\eta}\right)\times \prod_{q\in\beta^{-1}(\ast)\cap Q}X^{\alpha}_{q,0}\times Z.\] This almost matches the general set-up at the beginning of this subsection, except for the extra factor of \(Z\). A minor variant of Lemma 2.22 says that a sufficient condition for the \(\beta\)-nearby cycles of \(\mathscr{G}\) to be well defined is that for each \(r\in R\) the object \[\begin{pmatrix}\boxedown_{\mathbb{A}^{1}_{\eta}}&\mathscr{G}_{q}\\ \end{pmatrix}[1-|\beta^{-1}(r)|] \tag{2.15}\] be perverse. 
If this holds, then we have
\[\Upsilon^{\beta}_{f^{\alpha}}(\mathscr{G})\cong\boxtimes_{r\in R}\Bigl(\bigl(\boxtimes_{\mathbb{A}^{1}_{\eta},\,q\in\beta^{-1}(r)}\mathscr{G}_{q}\bigr)[1-|\beta^{-1}(r)|]\Bigr)\;\boxtimes\;\boxtimes_{q\in\beta^{-1}(\ast)\cap Q}\Psi_{f^{\alpha}_{q}}(\mathscr{G}_{q})\;\boxtimes\;\mathscr{G}_{Z}. \tag{2.16}\]
Using the definition of \(\mathscr{G}_{q}\), we rewrite the object in (2.15) as
\[\Bigl(\boxtimes_{\mathbb{A}^{1}_{\eta},\,p\in\alpha^{-1}(\beta^{-1}(r))}\mathscr{F}_{p}\Bigr)\Bigl[\textstyle\sum_{q\in\beta^{-1}(r)}(1-|\alpha^{-1}(q)|)+1-|\beta^{-1}(r)|\Bigr]=\Bigl(\boxtimes_{\mathbb{A}^{1}_{\eta},\,p\in\alpha^{-1}(\beta^{-1}(r))}\mathscr{F}_{p}\Bigr)[1-|\alpha^{-1}(\beta^{-1}(r))|],\]
which is perverse by our second assumption. Hence the \(\beta\)-nearby cycles of \(\mathscr{G}=\Upsilon^{\alpha}_{f}(\mathscr{F})\) are well defined and, comparing (2.16) with the description of \(\Upsilon^{\beta\alpha}_{f}(\mathscr{F})\) provided by Lemma 2.22, one checks that the map \(\Upsilon^{\beta\alpha}_{f}(\mathscr{F})\to\Upsilon^{\beta}_{f^{\alpha}}(\Upsilon^{\alpha}_{f}(\mathscr{F}))\) from Proposition 2.16 is an isomorphism.

### Satake category and central sheaves

Let \(G\) be a connected reductive algebraic group over \(\mathbb{F}\). To \(G\) and a choice of Borel subgroup \(B\subset G\) we can associate in the usual way the loop group \(LG\), the positive loop group \(L^{+}G\), the Iwahori subgroup \(I\subset L^{+}G\), the affine Grassmannian \(\mathrm{Gr}_{G}=LG/L^{+}G\) and the affine flag variety \(\mathrm{Fl}_{G}=LG/I\). Here the quotients are the fppf quotients, and they are represented by ind-projective ind-schemes over \(\mathbb{F}\); for all of this, see [2] for details. Recall that the \(L^{+}G\)-equivariant derived category \(D^{\mathrm{b}}_{L^{+}G}(\mathrm{Gr}_{G},\Bbbk)\) of \(\Bbbk\)-sheaves on \(\mathrm{Gr}_{G}\), resp. the \(I\)-equivariant derived category \(D^{\mathrm{b}}_{I}(\mathrm{Fl}_{G},\Bbbk)\) of \(\Bbbk\)-sheaves on \(\mathrm{Fl}_{G}\), is endowed with a canonical unital and associative convolution product \(\star^{L^{+}G}\), resp. \(\star^{I}\).

The Satake category is the category \(\mathsf{Perv}_{L^{+}G}(\mathrm{Gr}_{G},\Bbbk)\) of \(L^{+}G\)-equivariant \(\Bbbk\)-perverse sheaves on \(\mathrm{Gr}_{G}\). It is a standard fact (see [10, 3]) that the product \(\star^{L^{+}G}\) is t-exact on both sides, hence restricts to a bifunctor on the Satake category, and moreover that this restriction admits a canonical commutativity constraint. For a finite collection \((\mathscr{A}_{p})_{p\in P}\) of objects in \(\mathsf{Perv}_{L^{+}G}(\mathrm{Gr}_{G},\Bbbk)\), it therefore makes sense to consider the convolution product \(\star^{L^{+}G}_{p\in P}\mathscr{A}_{p}\).

We will denote by \(\mathcal{G}\) the smooth affine group scheme over \(C\) constructed (following X. Zhu) in [2, §2.2.3.1]: its restriction to \(C\smallsetminus\{0\}\), resp. to the formal neighborhood of \(0\), identifies with \(G\times(C\smallsetminus\{0\})\), resp. with the Iwahori group scheme of \(LG\) attached to \(B\). For any scheme \(X\) over \(C\), we will denote by \(\mathcal{E}^{0}_{X}=X\times_{C}\mathcal{G}\) the trivial principal \(\mathcal{G}\)-bundle over \(X\).
Recall the ind-scheme \(\mathbf{Gr}^{\mathrm{Cen}}_{\mathcal{G}}\) over \(C\) defined in [2, §2.2.3.2]; it represents the functor sending \(R\in\mathsf{Alg}_{\mathbb{F}}\) to the set of equivalence classes of triples \((y,\mathcal{E},\beta)\) where:
* \(y\in C(R)\);
* \(\mathcal{E}\) is a principal \(\mathcal{G}\)-bundle over \(\widehat{\Gamma}_{y}\);
* \(\beta:\mathcal{E}_{|\widehat{\Gamma}_{y}^{\circ}}\xrightarrow{\sim}\mathcal{E}^{0}_{\widehat{\Gamma}_{y}^{\circ}}\) is an isomorphism.
We have canonical identifications
\[\mathbf{Gr}^{\mathrm{Cen}}_{\mathcal{G}}|_{\{0\}}\cong\mathrm{Fl}_{G},\quad\mathbf{Gr}^{\mathrm{Cen}}_{\mathcal{G}}|_{C\smallsetminus\{0\}}\cong\mathrm{Gr}_{G}\times(C\smallsetminus\{0\}). \tag{3.1}\]
Following Gaitsgory [7], we consider the functor
\[\mathsf{Z}:\mathsf{Perv}_{L^{+}G}(\mathrm{Gr}_{G},\Bbbk)\to\mathsf{Perv}_{I}(\mathrm{Fl}_{G},\Bbbk)\]
defined by \(\mathsf{Z}(\mathscr{A})=\Upsilon_{\mathbf{Gr}^{\mathrm{Cen}}_{\mathcal{G}}}(\mathscr{A}\boxtimes\underline{\Bbbk}_{C\smallsetminus\{0\}}[1])\). In fact, in this setting it is known that the nearby cycles of \(\mathscr{A}\boxtimes\underline{\Bbbk}_{C\smallsetminus\{0\}}[1]\) are unipotent (see [2, §2.4.5]), so that \(\mathsf{Z}(\mathscr{A})\) coincides with the full nearby cycles, see Example 2.11. It is known that for any \(\mathscr{F}\) in \(\mathsf{Perv}_{I}(\mathrm{Fl}_{G},\Bbbk)\) and \(\mathscr{A}\) in \(\mathsf{Perv}_{L^{+}G}(\mathrm{Gr}_{G},\Bbbk)\) the convolution \(\mathscr{F}\star^{I}\mathsf{Z}(\mathscr{A})\) is perverse (see [2, Corollary 3.2.5]), and that \(\mathsf{Z}\) is a central functor; in particular, for \(\mathscr{F}\), \(\mathscr{A}\) as above there exists a canonical isomorphism \(\mathscr{F}\star^{I}\mathsf{Z}(\mathscr{A})\cong\mathsf{Z}(\mathscr{A})\star^{I}\mathscr{F}\), see [2, Theorem 3.2.3 and §3.5.1]. In particular, for a finite collection \((\mathscr{A}_{p})_{p\in P}\) of objects in \(\mathsf{Perv}_{L^{+}G}(\mathrm{Gr}_{G},\Bbbk)\), it makes sense to consider the convolution product \(\star^{I}_{p\in P}\mathsf{Z}(\mathscr{A}_{p})\).

### Iterated affine Grassmannians

Let \(P\) be a finite set. Define a functor \(\mathbf{Gr}_{P}\) on \(\mathsf{Alg}_{\mathbb{F}}\) as follows: for \(R\in\mathsf{Alg}_{\mathbb{F}}\), \(\mathbf{Gr}_{P}(R)\) is the set of equivalence classes of the following data:
* a point \((y_{p})_{p\in P}\) in \(C^{P}(R)\);
* a principal \(\mathcal{G}\)-bundle \(\mathcal{E}\) over \(\widehat{\Gamma}_{\{0\}\cup\{y_{p}:p\in P\}}\);
* an isomorphism \(\beta:\mathcal{E}_{|\widehat{\Gamma}^{\circ}_{\{0\}\cup\{y_{p}:p\in P\}}}\xrightarrow{\sim}\mathcal{E}^{0}_{\widehat{\Gamma}^{\circ}_{\{0\}\cup\{y_{p}:p\in P\}}}\).
This functor is represented by an ind-proper ind-scheme over \(C^{P}\). It is also easily seen that if \(Q\) is another finite set and \(\alpha:P_{\ast}\to Q_{\ast}\) is a surjective pointed map, there is a canonical identification \(\mathbb{A}^{Q}\times_{\mathbb{A}^{P}}\mathbf{Gr}_{P}=\mathbf{Gr}_{Q}\).

_Example 3.1_.: For \(n\in\mathbb{Z}_{\geqslant 1}\) and \(P=\{1,\ldots,n\}\), the ind-scheme \(\mathbf{Gr}_{\{1,\ldots,n\}}\) coincides with the ind-scheme \(\mathbf{Gr}_{n}\) of [6, §5.1]. If \(P=\varnothing\) we have \(\mathbf{Gr}_{\varnothing}=\mathrm{Fl}_{G}\).

Denote by
\[C^{P,\dagger}\subset C^{P}\]
the open subscheme consisting of the points \((y_{p})_{p\in P}\) such that \(y_{p}\neq 0\) for any \(p\) and \(y_{p}\neq y_{p^{\prime}}\) for any \(p\neq p^{\prime}\).
By standard arguments we have a canonical identification
\[(\mathbf{Gr}_{P})_{|C^{P,\dagger}}\cong\mathrm{Fl}_{G}\times\prod_{p\in P}\mathrm{Gr}_{G}\times C^{P,\dagger}. \tag{3.2}\]
Denote by \(\jmath_{P}\) the open embedding
\[(\mathbf{Gr}_{P})_{|C^{P,\dagger}}\to(\mathbf{Gr}_{P})_{|(C\smallsetminus 0)^{P}}=(\mathbf{Gr}_{P})_{\eta}.\]
Below we will consider collections of perverse sheaves \(\mathscr{A}_{\ast}\in\mathsf{Perv}_{I}(\mathrm{Fl}_{G},\Bbbk)\) and \(\mathscr{A}_{p}\in\mathsf{Perv}_{L^{+}G}(\mathrm{Gr}_{G},\Bbbk)\) for each \(p\in P\). For brevity, we denote this collection by \((\mathscr{A}_{i})_{i\in P_{\ast}}\). We consider the functor
\[\mathsf{C}_{P}:\mathsf{Perv}_{I}(\mathrm{Fl}_{G},\Bbbk)\times\prod_{p\in P}\mathsf{Perv}_{L^{+}G}(\mathrm{Gr}_{G},\Bbbk)\to\mathsf{Perv}((\mathbf{Gr}_{P})_{\eta})\]
defined, via the identification (3.2), by
\[\mathsf{C}_{P}((\mathscr{A}_{i})_{i\in P_{\ast}})=(\jmath_{P})_{!\ast}\Bigl(\mathscr{A}_{\ast}\boxtimes\Bigl(\boxtimes_{p\in P}\mathscr{A}_{p}\Bigr)\boxtimes\underline{\Bbbk}_{C^{P,\dagger}}[|P|]\Bigr).\]
**Theorem 3.2**.: _Let \(\alpha:P_{\ast}\to Q_{\ast}\) be a surjective pointed map, and let \((\mathscr{A}_{i})_{i\in P_{\ast}}\) be a collection of perverse sheaves as above. Then the \(\alpha\)-nearby cycles of \(\mathsf{C}_{P}((\mathscr{A}_{i})_{i\in P_{\ast}})\) are well defined, and there is a canonical isomorphism_
\[\Upsilon^{\alpha}_{\mathbf{Gr}_{P}}(\mathsf{C}_{P}((\mathscr{A}_{i})_{i\in P_{\ast}}))\cong\mathsf{C}_{Q}((\mathscr{B}_{j})_{j\in Q_{\ast}}),\]
_where_
\[\mathscr{B}_{\ast}=\mathscr{A}_{\ast}\star^{I}\Bigl(\bigstar^{I}_{p\in\alpha^{-1}(\ast)\cap P}\mathsf{Z}(\mathscr{A}_{p})\Bigr)\qquad\text{and}\qquad\mathscr{B}_{q}=\bigstar^{L^{+}G}_{p\in\alpha^{-1}(q)}\mathscr{A}_{p}\quad\text{for }q\in Q.\]
_Moreover, if \(\beta:Q_{\ast}\to R_{\ast}\) is another surjective pointed map, then the \(\beta\)-nearby cycles of \(\Upsilon^{\alpha}_{\mathbf{Gr}_{P}}(\mathsf{C}_{P}((\mathscr{A}_{i})_{i\in P_{\ast}}))\) are well defined, and the natural map_
\[\Upsilon^{\beta\alpha}_{\mathbf{Gr}_{P}}(\mathsf{C}_{P}((\mathscr{A}_{i})_{i\in P_{\ast}}))\to\Upsilon^{\beta}_{\mathbf{Gr}_{Q}}(\Upsilon^{\alpha}_{\mathbf{Gr}_{P}}(\mathsf{C}_{P}((\mathscr{A}_{i})_{i\in P_{\ast}})))\]
_from Proposition 2.16 is an isomorphism._
_Remark 3.4_.: In this remark we assume that \(P=\{1,\ldots,n\}\) for some \(n\in\mathbb{Z}_{\geqslant 1}\) and \(Q=\varnothing\). In this case there is a unique choice for \(\alpha\), we have \(\mathbf{Gr}_{Q}=\mathrm{Fl}_{G}\), and Theorem 3.2 says that
\[\Upsilon^{\alpha}_{\mathbf{Gr}_{P}}(\mathscr{A}_{\ast},\mathscr{A}_{1},\ldots,\mathscr{A}_{n})\cong\mathscr{A}_{\ast}\star^{I}\mathsf{Z}(\mathscr{A}_{1})\star^{I}\cdots\star^{I}\mathsf{Z}(\mathscr{A}_{n}). \tag{3.3}\]
1. If \(n=1\), by Example 2.11 the fact that the nearby cycles are well defined is automatic; the isomorphism (3.3) is the content of [2, Proposition 3.2.1]. In case \(n=2\), this statement is closely related to the results of [2, §3.5].
2. There exists a natural action of the symmetric group \(\mathfrak{S}_{n}\) on \(\mathbf{Gr}_{P}\) by permutation of the points \(y_{i}\). This action preserves the preimage of \(C^{P,\dagger}\) and, under the identification (3.2), its restriction to this open subset identifies with the diagonal action by permutation of the factors in \((\mathrm{Gr}_{G})^{n}\) and \(C^{P,\dagger}\). It also preserves the preimage of \((0,\ldots,0)\) and restricts to the identity on this preimage. For any \(\sigma\in\mathfrak{S}_{n}\) we deduce a canonical isomorphism between \(\Upsilon^{\alpha}_{\mathbf{Gr}_{P}}(\mathsf{C}_{P}(\mathscr{A}_{\ast},\mathscr{A}_{1},\ldots,\mathscr{A}_{n}))\) and the similar object obtained by permutation of the \(\mathscr{A}_{i}\)'s. Using the same techniques as in [2, §3.5.8] one can check that, under (3.3), this isomorphism is induced by the "centrality" isomorphism for the functor \(\mathsf{Z}\) (see [2, Theorem 3.2.3]) or, equivalently (by [8], see [2, Theorem 3.5.1]), by the commutativity constraint on the Satake category.

### Convolution-torsor affine Grassmannians

We now introduce some auxiliary ind-schemes needed for the proof of Theorem 3.2. In this section, we assume that \(P_{\ast}\) and \(Q_{\ast}\) are equipped with total orders such that \(\ast\) is the smallest element, and that \(\alpha:P_{\ast}\to Q_{\ast}\) is a surjective, order-preserving pointed map. We set
\[\min(\alpha^{-1}):=\{i\in P\mid i=\min(\alpha^{-1}(\alpha(i)))\},\quad\overline{\min}(\alpha^{-1}):=P_{\ast}\smallsetminus\min(\alpha^{-1}).\]
For \(i\) in \(P\) or \(Q\), we will denote by \(i-1\) the predecessor of \(i\). Let \(\mathbf{c}\) and \(\mathbf{t}\) be two subsets of \(P\) such that \(\mathbf{c}\cap\mathbf{t}=\varnothing\). We will call \(\mathbf{c}\) the "convolution locus," and \(\mathbf{t}\) the "torsor locus." (These terms will be justified below.)
Define a functor \(\widehat{\mathbf{Gr}}_{\alpha}^{\mathbf{c},\mathbf{t}}\) as follows: for \(R\in\mathsf{Alg}_{\mathbb{F}}\), \(\widehat{\mathbf{Gr}}_{\alpha}^{\mathbf{c},\mathbf{t}}(R)\) is the set of equivalence classes of the following data:
* a point \((y_{q})_{q\in Q}\) in \(C^{Q}(R)\);
* for \(i\in P_{\ast}\), a principal \(\mathcal{G}\)-bundle \(\mathcal{E}^{i}\) over \(\widehat{\Gamma}_{\{0\}\cup\{y_{q}:q\in Q\}}\);
* for \(i\in P_{\ast}\smallsetminus\mathbf{c}\), an isomorphism
\[\beta^{i}:\mathcal{E}^{i}_{|\widehat{\Gamma}_{\{0\}\cup\{y_{q}:q\in Q\}}\smallsetminus\Gamma_{y_{\alpha(i)}}}\xrightarrow{\sim}\mathcal{E}^{0}_{\widehat{\Gamma}_{\{0\}\cup\{y_{q}:q\in Q\}}\smallsetminus\Gamma_{y_{\alpha(i)}}};\]
* for \(i\in\mathbf{c}\), an isomorphism \(\beta^{i}:\mathcal{E}^{i}_{|\widehat{\Gamma}_{\{0\}\cup\{y_{q}:q\in Q\}}\smallsetminus\Gamma_{y_{\alpha(i)}}}\xrightarrow{\sim}\mathcal{E}^{i-1}_{|\widehat{\Gamma}_{\{0\}\cup\{y_{q}:q\in Q\}}\smallsetminus\Gamma_{y_{\alpha(i)}}}\);
* for \(i\in\mathbf{t}\), an isomorphism \(\gamma^{i}:\mathcal{E}^{i-1}_{|\widehat{\Gamma}_{\{0\}\cup\{y_{q}:q\in Q\}}}\xrightarrow{\sim}\mathcal{E}^{0}_{\widehat{\Gamma}_{\{0\}\cup\{y_{q}:q\in Q\}}}\).
In this definition, if \(\alpha(i)=\ast\), then "\(y_{\alpha(i)}\)" should be taken to mean the point \(0\in C(R)\). In the special case where \(\alpha\) is the identity map, we may write \(\widehat{\mathbf{Gr}}_{P}^{\mathbf{c},\mathbf{t}}\) instead of \(\widehat{\mathbf{Gr}}_{\alpha}^{\mathbf{c},\mathbf{t}}\). Using standard arguments (see e.g. [2, Proposition 2.3.11]) one can show that \(\widehat{\mathbf{Gr}}_{\alpha}^{\mathbf{c},\mathbf{t}}\) is represented by an ind-scheme over \(C^{Q}\), which is moreover ind-proper if \(\mathbf{t}=\varnothing\).

_Example 3.5_.: For \(n\in\mathbb{Z}_{\geqslant 1}\) and \(P=\{1,\ldots,n\}\), the ind-scheme \(\widehat{\mathbf{Gr}}_{\{1,\ldots,n\}}^{\{1,\ldots,n\},\varnothing}\) coincides with the ind-scheme \(\widetilde{\mathbf{Gr}}_{n}\) of [6, §5.1].

If \(\mathbf{t}^{\prime}\subset\mathbf{t}\), there is an obvious map
\[q:\widehat{\mathbf{Gr}}_{\alpha}^{\mathbf{c},\mathbf{t}}\to\widehat{\mathbf{Gr}}_{\alpha}^{\mathbf{c},\mathbf{t}\smallsetminus\mathbf{t}^{\prime}} \tag{3.4}\]
given by forgetting the \(\gamma^{i}\)'s with \(i\in\mathbf{t}^{\prime}\). There is also a "twisting map"
\[p:\widehat{\mathbf{Gr}}_{\alpha}^{\mathbf{c},\mathbf{t}}\to\widehat{\mathbf{Gr}}_{\alpha}^{\mathbf{c}\cup\mathbf{t}^{\prime},\mathbf{t}\smallsetminus\mathbf{t}^{\prime}} \tag{3.5}\]
that is defined on \(R\)-points as follows: for each \(j\in\mathbf{t}^{\prime}\), replace \(\beta^{j}\) by the composition
\[\mathcal{E}^{j}_{|\widehat{\Gamma}_{\{0\}\cup\{y_{q}:q\in Q\}}\smallsetminus\Gamma_{y_{\alpha(j)}}}\xrightarrow{\beta^{j}}\mathcal{E}^{0}_{\widehat{\Gamma}_{\{0\}\cup\{y_{q}:q\in Q\}}\smallsetminus\Gamma_{y_{\alpha(j)}}}\xrightarrow{(\gamma^{j})^{-1}}\mathcal{E}^{j-1}_{|\widehat{\Gamma}_{\{0\}\cup\{y_{q}:q\in Q\}}\smallsetminus\Gamma_{y_{\alpha(j)}}},\]
and then forget \(\gamma^{j}\). Let us describe this ind-scheme (or its generic part) in some special cases.
First, when \(\mathbf{c}=\mathbf{t}=\varnothing\), we have
\[\widehat{\mathbf{Gr}}_{\alpha}^{\varnothing,\varnothing}\cong\underbrace{\mathrm{Fl}_{G}\times\cdots\times\mathrm{Fl}_{G}}_{|\alpha^{-1}(\ast)|\text{ copies}}\times\prod_{j\in Q}(\underbrace{\mathbf{Gr}_{\mathcal{G}}^{\mathrm{Cen}}\times_{C}\cdots\times_{C}\mathbf{Gr}_{\mathcal{G}}^{\mathrm{Cen}}}_{|\alpha^{-1}(j)|\text{ copies}}). \tag{3.6}\]
In particular, its generic part is
\[(\widehat{\mathbf{Gr}}_{\alpha}^{\varnothing,\varnothing})_{\eta}\cong\underbrace{\mathrm{Fl}_{G}\times\cdots\times\mathrm{Fl}_{G}}_{|\alpha^{-1}(\ast)|\text{ copies}}\times\underbrace{\mathrm{Gr}_{G}\times\cdots\times\mathrm{Gr}_{G}}_{|\alpha^{-1}(Q)|\text{ copies}}\times(C\smallsetminus\{0\})^{Q}. \tag{3.7}\]
Next, suppose \(\mathbf{c}=\overline{\min}(\alpha^{-1})\). We have
\[(\widehat{\mathbf{Gr}}_{\alpha}^{\overline{\min}(\alpha^{-1}),\varnothing})_{\eta}\cong\underbrace{LG\times^{I}\cdots\times^{I}\mathrm{Fl}_{G}}_{|\alpha^{-1}(\ast)|\text{ factors}}\times\prod_{j\in Q}\underbrace{LG\times^{L^{+}G}\cdots\times^{L^{+}G}\mathrm{Gr}_{G}}_{|\alpha^{-1}(j)|\text{ factors}}\times(C\smallsetminus\{0\})^{Q}. \tag{3.8}\]
More generally, the previous description remains valid over \(C^{Q,\dagger}\) for any \(\mathbf{c}\) containing \(\overline{\min}(\alpha^{-1})\):
\[(\widehat{\mathbf{Gr}}_{\alpha}^{\mathbf{c},\varnothing})_{|C^{Q,\dagger}}\cong(\widehat{\mathbf{Gr}}_{\alpha}^{\overline{\min}(\alpha^{-1}),\varnothing})_{|C^{Q,\dagger}}\qquad\text{if }\mathbf{c}\supset\overline{\min}(\alpha^{-1}). \tag{3.9}\]
However, over a point \((y_{q})_{q\in Q}\notin C^{Q,\dagger}\), the fiber of \(\widehat{\mathbf{Gr}}_{\alpha}^{\mathbf{c},\varnothing}\) may differ from (3.8) in the following way: some instances of "\(\mathrm{Gr}_{G}\times(-)\)" are replaced by "\(LG\times^{L^{+}G}(-)\)," depending on \(\mathbf{c}\) and on the coincidences among the \(y_{j}\)'s.

We now explain why \(\mathbf{t}\) is called the "torsor locus." Define the pro-smooth group scheme \(\mathcal{L}_{Q}^{+}\mathcal{G}\) over \(C^{Q}\) which represents the functor on \(\mathsf{Alg}_{\mathbb{F}}\) such that \((\mathcal{L}_{Q}^{+}\mathcal{G})(R)\) consists of the tuples \(((y_{q})_{q\in Q},g)\) with \((y_{q})_{q\in Q}\in C^{Q}(R)\) and \(g\in\mathcal{G}(\widehat{\Gamma}_{\{0\}\cup\{y_{q}:q\in Q\}})\). (The representability of this group scheme can be proved as in [2, §3.5.2].) The following lemma follows from standard arguments (see e.g. [2, Lemma 2.3.9]).

**Lemma 3.6**.: _The maps (3.4) and (3.5) are both principal bundles (with respect to different actions) for the group scheme_
\[\prod_{i\in\mathbf{t}^{\prime}}\mathcal{L}_{Q}^{+}\mathcal{G}.\]

Suppose we have a collection of perverse sheaves \((\mathscr{A}_{i})_{i\in P_{\ast}}\), where
\[\mathscr{A}_{i}\in\mathsf{Perv}_{I}(\mathrm{Fl}_{G},\Bbbk)\text{ if }\alpha(i)=\ast,\qquad\mathscr{A}_{i}\in\mathsf{Perv}_{L^{+}G}(\mathrm{Gr}_{G},\Bbbk)\text{ if }\alpha(i)\in Q. \tag{3.10}\]
Via (3.7), regard \(\boxtimes_{i\in P_{\ast}}\mathscr{A}_{i}\boxtimes\underline{\Bbbk}_{(C\smallsetminus\{0\})^{Q}}[|Q|]\) as an object of \(\mathsf{Perv}((\widehat{\mathbf{Gr}}_{\alpha}^{\varnothing,\varnothing})_{\eta},\Bbbk)\). Using the torsor structures of Lemma 3.6, this object pulls back under \(q\) and descends along \(p\) in the diagram \(\widehat{\mathbf{Gr}}_{\alpha}^{\varnothing,\varnothing}\xleftarrow{q}\widehat{\mathbf{Gr}}_{\alpha}^{\varnothing,\mathbf{c}}\xrightarrow{p}\widehat{\mathbf{Gr}}_{\alpha}^{\mathbf{c},\varnothing}\), yielding a twisted external product \(\widetilde{\mathsf{C}}_{\alpha}^{\mathbf{c}}((\mathscr{A}_{i})_{i\in P_{\ast}})\) in \(\mathsf{Perv}((\widehat{\mathbf{Gr}}_{\alpha}^{\mathbf{c},\varnothing})_{\eta},\Bbbk)\).
**Lemma 3.7**.: _Let \((\mathscr{A}_{i})_{i\in P_{\ast}}\) be as in (3.10). There is a canonical isomorphism_
\[(m_{\eta})_{*}\widetilde{\mathsf{C}}_{\alpha}^{\overline{\min}(\alpha^{-1})}((\mathscr{A}_{i})_{i\in P_{\ast}})\cong(\mathscr{A}_{\ast}\star^{I}\cdots\star^{I}\mathscr{A}_{\max(\alpha^{-1}(\ast))})\boxtimes\Bigl(\boxtimes_{j\in Q}(\mathscr{A}_{\min(\alpha^{-1}(j))}\star^{L^{+}G}\cdots\star^{L^{+}G}\mathscr{A}_{\max(\alpha^{-1}(j))})\Bigr)\boxtimes\underline{\Bbbk}_{(C\smallsetminus\{0\})^{Q}}[|Q|].\]

Next, we define a map
\[\mu=\mu_{\alpha}:\widehat{\mathbf{Gr}}_{\alpha}^{P,\varnothing}\to\mathbf{Gr}_{Q}\]
that sends an \(R\)-point \(((y_{j}),(\mathcal{E}^{i}),(\beta^{i}))\) to \(((y_{j}),\mathcal{E}^{\max(P)},\hat{\beta})\) where
\[\hat{\beta}=\beta^{\ast}_{|\widehat{\Gamma}^{\circ}_{\{0\}\cup\{y_{q}:q\in Q\}}}\circ\cdots\circ\beta^{\max(P)-1}_{|\widehat{\Gamma}^{\circ}_{\{0\}\cup\{y_{q}:q\in Q\}}}\circ\beta^{\max(P)}_{|\widehat{\Gamma}^{\circ}_{\{0\}\cup\{y_{q}:q\in Q\}}}.\]
We combine this with (3.11) to obtain a second convolution diagram
\[\widehat{\mathbf{Gr}}_{\alpha}^{\varnothing,\varnothing}\xleftarrow{\ q\ }\widehat{\mathbf{Gr}}_{\alpha}^{\varnothing,P}\xrightarrow{\ p\ }\widehat{\mathbf{Gr}}_{\alpha}^{P,\varnothing}\xrightarrow{\ \mu_{\alpha}\ }\mathbf{Gr}_{Q}. \tag{3.15}\]

**Lemma 3.8**.: _Let \((\mathscr{A}_{i})_{i\in P_{\ast}}\) be as in (3.10). There is a canonical isomorphism_
\[(\mu_{\alpha,\eta})_{*}\widetilde{\mathsf{C}}_{\alpha}^{P}((\mathscr{A}_{i})_{i\in P_{\ast}})\cong\mathsf{C}_{Q}((\mathscr{B}_{j})_{j\in Q_{\ast}})\]
_where the \(\mathscr{B}_{j}\)'s are as in Theorem 3.2._

Proof.: Let us first treat the special case where \(P=Q\) and \(\alpha\) is the identity map. In this case, the statement of the lemma simplifies to
\[(\mu_{\mathrm{id}_{P},\eta})_{*}\widetilde{\mathsf{C}}_{\mathrm{id}_{P}}^{P}((\mathscr{A}_{i})_{i\in P_{\ast}})\cong\mathsf{C}_{P}((\mathscr{A}_{i})_{i\in P_{\ast}}). \tag{3.16}\]
The proof in this case is similar to that of [3, Lemma 1.7.10] (the crucial step in the comparison of fusion and convolution in the Satake category).
As a first step, we deduce from (3.13) that
\[\Bigl((\mu_{\mathrm{id}_{P},\eta})_{*}\widetilde{\mathsf{C}}_{\mathrm{id}_{P}}^{P}((\mathscr{A}_{i})_{i\in P_{\ast}})\Bigr)_{|C^{P,\dagger}}\cong\mathscr{A}_{\ast}\boxtimes\Bigl(\boxtimes_{p\in P}\mathscr{A}_{p}\Bigr)\boxtimes\underline{\Bbbk}_{C^{P,\dagger}}[|P|].\]
We wish to prove that \((\mu_{\mathrm{id}_{P},\eta})_{*}\widetilde{\mathsf{C}}_{\mathrm{id}_{P}}^{P}((\mathscr{A}_{i})_{i\in P_{\ast}})\) is the intermediate extension of the object above. To do this, we use the standard characterization of the intermediate extension recalled e.g. in [1, Lemma 3.3.4]: namely, it suffices to prove that the restriction, resp. corestriction, of \((\mu_{\mathrm{id}_{P},\eta})_{*}\widetilde{\mathsf{C}}_{\mathrm{id}_{P}}^{P}((\mathscr{A}_{i})_{i\in P_{\ast}})\) to the complement of \(C^{P,\dagger}\) in \((C\smallsetminus\{0\})^{P}\) lies in perverse degrees \(\leq-1\), resp. \(\geq 1\).

One can stratify \((C\smallsetminus\{0\})^{P}\) in terms of coincidences between points, with strata indexed by partitions of \(P\). Given a partition \(\tau\) into \(m\) subsets, the preimage of the stratum \(X_{\tau}\) attached to \(\tau\) (of dimension \(m\)) in \(\mathbf{Gr}_{P}\) identifies with \(\mathrm{Fl}_{G}\times(\mathrm{Gr}_{G})^{m}\times X_{\tau}\), and the restriction of \((\mu_{\mathrm{id}_{P},\eta})_{*}\widetilde{\mathsf{C}}_{\mathrm{id}_{P}}^{P}((\mathscr{A}_{i})_{i\in P_{\ast}})\) identifies with the external product of \(\mathscr{A}_{\ast}\) with some convolution products of the \(\mathscr{A}_{i}\)'s and with \(\underline{\Bbbk}_{X_{\tau}}[|P|]\). Using the fact that convolution of \(L^{+}G\)-equivariant perverse sheaves on \(\mathrm{Gr}_{G}\) is t-exact (see §3.2) we see that if \(m<|P|\) this restriction is in negative perverse degrees, proving the desired claim about restriction. The claim about corestrictions can be checked similarly, or deduced using Verdier duality. This completes the proof of (3.16).

To prove the lemma in general, we use the commutative diagram in Figure 1. Our problem lies along the diagonal of this diagram. Across the top of the diagram is an instance of (3.14), and down the right-hand side of the diagram is an instance of (3.15). The squares involving maps labeled "\(m\)" are all cartesian. We have
\[(\mu_{\alpha,\eta})_{*}\widetilde{\mathsf{C}}^{P}_{\alpha}((\mathscr{A}_{i})_{i\in P_{\ast}})\cong(\mu_{\mathrm{id}_{Q},\eta})_{*}((m^{P,\varnothing}_{\alpha})_{\eta})_{*}\widetilde{\mathsf{C}}^{P}_{\alpha}((\mathscr{A}_{i})_{i\in P_{\ast}}).\]
By proper base change, we have
\[p_{\eta}^{*}((m^{P,\varnothing}_{\alpha})_{\eta})_{*}\widetilde{\mathsf{C}}^{P}_{\alpha}((\mathscr{A}_{i})_{i\in P_{\ast}})\cong q_{\eta}^{*}((m^{\overline{\min}(\alpha^{-1}),\varnothing}_{\alpha})_{\eta})_{*}\widetilde{\mathsf{C}}^{\overline{\min}(\alpha^{-1})}_{\alpha}((\mathscr{A}_{i})_{i\in P_{\ast}}),\]
and then by Lemma 3.7 we have
\[((m^{P,\varnothing}_{\alpha})_{\eta})_{*}\widetilde{\mathsf{C}}^{P}_{\alpha}((\mathscr{A}_{i})_{i\in P_{\ast}})\cong\widetilde{\mathsf{C}}^{Q}_{\mathrm{id}_{Q}}((\mathscr{B}_{j})_{j\in Q_{\ast}}).\]
Now apply \((\mu_{\mathrm{id}_{Q},\eta})_{*}\) to this equation. The result follows by the special case (3.16) considered above.

Figure 1: Diagram for the proof of Lemma 3.8.
### Proof of Theorem 3.2

We will first establish the existence of and formula for \(\Upsilon^{\alpha}_{\mathbf{Gr}_{P}}(\mathsf{C}_{P}((\mathscr{A}_{i})_{i\in P_{\ast}}))\). Choose total orders on \(P_{\ast}\) and \(Q_{\ast}\) as in §3.4, so that \(\ast\) is the smallest element in both sets, and such that \(\alpha:P_{\ast}\to Q_{\ast}\) is order-preserving. Consider the diagram
\[\widehat{\mathbf{Gr}}^{\varnothing,\varnothing}_{P}\xleftarrow{\ q\ }\widehat{\mathbf{Gr}}^{\varnothing,P}_{P}\xrightarrow{\ p\ }\widehat{\mathbf{Gr}}^{P,\varnothing}_{P}\xrightarrow{\ \mu_{\mathrm{id}_{P}}\ }\mathbf{Gr}_{P}.\]
Its base change along \(\bar{\alpha}:\mathbb{A}^{Q}\to\mathbb{A}^{P}\) is
\[\widehat{\mathbf{Gr}}^{\varnothing,\varnothing}_{\alpha}\xleftarrow{\ q\ }\widehat{\mathbf{Gr}}^{\varnothing,P}_{\alpha}\xrightarrow{\ p\ }\widehat{\mathbf{Gr}}^{P,\varnothing}_{\alpha}\xrightarrow{\ \mu_{\alpha}\ }\mathbf{Gr}_{Q}.\]
To start, in view of (3.6), Lemma 2.22, and Remark 3.4(1),
\[\Upsilon^{\alpha}_{\widehat{\mathbf{Gr}}^{\varnothing,\varnothing}_{P}}\Bigl(\boxtimes_{i\in P_{\ast}}\mathscr{A}_{i}\boxtimes\underline{\Bbbk}_{(C\smallsetminus\{0\})^{P}}[|P|]\Bigr)\]
is well defined, and isomorphic to
\[\Bigl(\boxtimes_{i\in P_{\ast}}\mathscr{A}^{\prime}_{i}\Bigr)\boxtimes\underline{\Bbbk}_{(C\smallsetminus\{0\})^{Q}}[|Q|]\quad\text{where}\quad\mathscr{A}^{\prime}_{i}=\begin{cases}\mathsf{Z}(\mathscr{A}_{i})&\text{if }i\in P\cap\alpha^{-1}(\ast),\\ \mathscr{A}_{i}&\text{otherwise}.\end{cases}\]
Next, by two applications of Lemma 2.13, we obtain that
\[\Upsilon^{\alpha}_{\widehat{\mathbf{Gr}}^{P,\varnothing}_{P}}(\widetilde{\mathsf{C}}^{P}_{\mathrm{id}_{P}}((\mathscr{A}_{i})_{i\in P_{\ast}}))\]
is well defined, and isomorphic to
\[\widetilde{\mathsf{C}}^{P}_{\alpha}((\mathscr{A}^{\prime}_{i})_{i\in P_{\ast}}). \tag{3.17}\]
Applying Lemma 3.8 twice, we have
\[(\mu_{\mathrm{id}_{P},\eta})_{\ast}\widetilde{\mathsf{C}}^{P}_{\mathrm{id}_{P}}((\mathscr{A}_{i})_{i\in P_{\ast}})=\mathsf{C}_{P}((\mathscr{A}_{i})_{i\in P_{\ast}}),\qquad(\mu_{\alpha,\eta})_{\ast}\widetilde{\mathsf{C}}^{P}_{\alpha}((\mathscr{A}^{\prime}_{i})_{i\in P_{\ast}})=\mathsf{C}_{Q}((\mathscr{B}_{j})_{j\in Q_{\ast}}),\]
where the \(\mathscr{B}_{j}\)'s are as in Theorem 3.2. By Lemma 2.14, we conclude that the \(\alpha\)-nearby cycles of \(\mathsf{C}_{P}((\mathscr{A}_{i})_{i\in P_{\ast}})\) are well defined, and that
\[\Upsilon^{\alpha}_{\mathbf{Gr}_{P}}(\mathsf{C}_{P}((\mathscr{A}_{i})_{i\in P_{\ast}}))\cong\mathsf{C}_{Q}((\mathscr{B}_{j})_{j\in Q_{\ast}}),\]
as desired. This completes the proof of the first part of Theorem 3.2.

Next, by Lemma 2.23, the natural map
\[\Upsilon^{\beta\alpha}_{\widehat{\mathbf{Gr}}_{P}^{\varnothing,\varnothing}}\Bigl(\boxtimes_{i\in P_{\ast}}\mathscr{A}_{i}\boxtimes\underline{\Bbbk}_{(C\smallsetminus\{0\})^{P}}[|P|]\Bigr)\to\Upsilon^{\beta}_{\widehat{\mathbf{Gr}}_{\alpha}^{\varnothing,\varnothing}}\Upsilon^{\alpha}_{\widehat{\mathbf{Gr}}_{P}^{\varnothing,\varnothing}}\Bigl(\boxtimes_{i\in P_{\ast}}\mathscr{A}_{i}\boxtimes\underline{\Bbbk}_{(C\smallsetminus\{0\})^{P}}[|P|]\Bigr)\]
is an isomorphism.
By two applications of Lemma 2.19, we find that the map
\[\Upsilon^{\beta\alpha}_{\widehat{\mathbf{Gr}}_{P}^{P,\varnothing}}(\widetilde{\mathsf{C}}^{P}_{\mathrm{id}_{P}}((\mathscr{A}_{i})_{i\in P_{\ast}}))\to\Upsilon^{\beta}_{\widehat{\mathbf{Gr}}_{\alpha}^{P,\varnothing}}\Upsilon^{\alpha}_{\widehat{\mathbf{Gr}}_{P}^{P,\varnothing}}(\widetilde{\mathsf{C}}^{P}_{\mathrm{id}_{P}}((\mathscr{A}_{i})_{i\in P_{\ast}}))\]
is an isomorphism, and then Lemma 2.20 implies that so is the map
\[\Upsilon^{\beta\alpha}_{\mathbf{Gr}_{P}}(\mathsf{C}_{P}((\mathscr{A}_{i})_{i\in P_{\ast}}))\to\Upsilon^{\beta}_{\mathbf{Gr}_{Q}}(\Upsilon^{\alpha}_{\mathbf{Gr}_{P}}(\mathsf{C}_{P}((\mathscr{A}_{i})_{i\in P_{\ast}}))).\]
This completes the proof of Theorem 3.2.

### Groupoid perspective

Let \(P\) be a finite set. Let \(K=K_{P}\) be the set whose elements are sequences of surjective pointed maps
\[\gamma=(P_{\ast}\stackrel{{\alpha_{1}}}{{\longrightarrow}}P_{1\ast}\stackrel{{\alpha_{2}}}{{\longrightarrow}}\cdots\stackrel{{\alpha_{k-1}}}{{\longrightarrow}}P_{k-1,\ast}\stackrel{{\alpha_{k}}}{{\longrightarrow}}\varnothing_{\ast}). \tag{3.18}\]
Given such a sequence \(\gamma\), an _elementary refinement_ of \(\gamma\) is a new sequence \(\gamma^{\prime}\) obtained by decomposing some \(\alpha_{i}\) into a composition of two surjective maps: say
\[\gamma^{\prime}=(P_{\ast}\stackrel{{\alpha_{1}}}{{\longrightarrow}}\cdots\stackrel{{\alpha_{i-1}}}{{\longrightarrow}}P_{i-1,\ast}\stackrel{{\alpha^{\prime}_{i}}}{{\longrightarrow}}Q_{\ast}\stackrel{{\alpha^{\prime\prime}_{i}}}{{\longrightarrow}}P_{i,\ast}\stackrel{{\alpha_{i+1}}}{{\longrightarrow}}\cdots\stackrel{{\alpha_{k}}}{{\longrightarrow}}\varnothing_{\ast})\]
where \(\alpha_{i}=\alpha^{\prime\prime}_{i}\circ\alpha^{\prime}_{i}\). Make \(K\) into a poset by declaring that \(\gamma\preceq\gamma^{\prime}\) if \(\gamma^{\prime}\) can be obtained from \(\gamma\) by a (possibly empty) sequence of elementary refinements. Of course, this poset can be regarded as a category in the usual way: there is a morphism \(\gamma\to\gamma^{\prime}\) if \(\gamma\preceq\gamma^{\prime}\). This poset (resp. category) has a unique minimal element (resp. initial object): namely, the unique pointed map \(P_{\ast}\to\varnothing_{\ast}\).

**Lemma 3.9**.: _Let \(K^{\Xi}\) be the groupoid obtained from \(K\) by formally inverting all morphisms. For any two objects \(\gamma,\gamma^{\prime}\in K^{\Xi}\), there is a unique morphism \(\gamma\to\gamma^{\prime}\). As a consequence, the nerve of \(K^{\Xi}\) is a contractible Kan complex._

This is a standard lemma that holds for any poset with a unique minimal (or maximal) element.

Proof.: The initial object of \(K\) remains an initial object in \(K^{\Xi}\). It is easy to check that in a groupoid with an initial (or final) object, there is exactly one morphism \(\gamma\to\gamma^{\prime}\) for any two objects \(\gamma,\gamma^{\prime}\). In particular, \(K^{\Xi}\) is equivalent to the category with one object and one morphism. Therefore, its nerve is homotopy-equivalent to a singleton simplicial set, i.e., it is contractible.

Let \(\gamma\) be as in (3.18).
For brevity, we introduce the notation
\[\underline{\Upsilon}^{\gamma}(\mathsf{C}_{P}((\mathscr{A}_{i})_{i\in P_{\ast}})):=\Upsilon^{\alpha_{k}}_{\mathbf{Gr}_{P_{k-1}}}\circ\cdots\circ\Upsilon^{\alpha_{2}}_{\mathbf{Gr}_{P_{1}}}\circ\Upsilon^{\alpha_{1}}_{\mathbf{Gr}_{P}}(\mathsf{C}_{P}((\mathscr{A}_{i})_{i\in P_{\ast}})).\]
(Here, all the functors are well defined thanks to Theorem 3.2.)

**Proposition 3.10**.: _For any object \(\mathscr{A}_{\ast}\) in \(\mathsf{Perv}_{I}(\mathrm{Fl}_{G},\Bbbk)\), and any collection of objects \(\{\mathscr{A}_{p}\}_{p\in P}\) in \(\mathsf{Perv}_{L^{+}G}(\mathrm{Gr}_{G},\Bbbk)\), there is a contractible \(\infty\)-groupoid whose objects are of the form \(\underline{\Upsilon}^{\gamma}(\mathsf{C}_{P}((\mathscr{A}_{i})_{i\in P_{\ast}}))\)._

Proof.: Define a functor \(F:K\to\mathsf{Perv}_{I}(\mathrm{Fl}_{G},\Bbbk)\) as follows: on objects, we set
\[F(\gamma)=\underline{\Upsilon}^{\gamma}(\mathsf{C}_{P}((\mathscr{A}_{i})_{i\in P_{\ast}})).\]
If \(\gamma\to\gamma^{\prime}\) is an elementary refinement, then Theorem 3.2 gives us an isomorphism
\[F(\gamma\to\gamma^{\prime}):F(\gamma)\xrightarrow{\sim}F(\gamma^{\prime}).\]
By Lemma 2.18, this rule extends to arbitrary morphisms in \(K\), so \(F\) is a well-defined functor. Since \(F\) sends every morphism in \(K\) to an isomorphism, it extends uniquely to a faithful functor \(F^{\Xi}:K^{\Xi}\to\mathsf{Perv}_{I}(\mathrm{Fl}_{G},\Bbbk)\). Its image is a (non-full) subcategory of \(\mathsf{Perv}_{I}(\mathrm{Fl}_{G},\Bbbk)\) whose nerve is contractible by Lemma 3.9.

_Remark 3.11_.: Here are some examples of vertices in the \(\infty\)-groupoid from Proposition 3.10 in the case where \(P=\{1,\ldots,n\}\) (cf. Remark 3.4). Choose an enumeration \(\{\chi_{1},\ldots,\chi_{n}\}\) of \(\{1,\ldots,n\}\), and let \(\gamma_{\chi}\) be the sequence
\[P_{\ast}=\{\chi_{1},\ldots,\chi_{n}\}_{\ast}\xrightarrow{\alpha_{1}}\{\chi_{2},\ldots,\chi_{n}\}_{\ast}\to\cdots\to\{\chi_{n-1},\chi_{n}\}_{\ast}\xrightarrow{\alpha_{n-1}}\{\chi_{n}\}_{\ast}\xrightarrow{\alpha_{n}}\varnothing_{\ast},\]
where \(\alpha_{i}(\chi_{i})=\ast\) and \(\alpha_{i}(\chi_{j})=\chi_{j}\) for \(j>i\). Let \(f_{\chi_{i}}\) denote the composition
\[\mathbf{Gr}_{\{1,\ldots,n\}}\times_{\mathbb{A}^{\{1,\ldots,n\}}}\mathbb{A}^{\{\chi_{i},\chi_{i+1},\ldots,\chi_{n}\}}\to\mathbb{A}^{\{\chi_{i},\chi_{i+1},\ldots,\chi_{n}\}}\to\mathbb{A}^{\{\chi_{i}\}}.\]
By Example 2.11 and Lemma 2.12, we have
\[\underline{\Upsilon}^{\gamma_{\chi}}(\mathsf{C}_{P}((\mathscr{A}_{i})_{i\in P_{\ast}}))\cong\Psi_{f_{\chi_{n}}}\cdots\Psi_{f_{\chi_{1}}}(\mathsf{C}_{P}((\mathscr{A}_{i})_{i\in P_{\ast}})).\]
On the other hand, if we let \(\gamma_{\min}\) denote the unique map \(P_{\ast}\to\varnothing_{\ast}\), then Theorem 3.2 says that
\[\underline{\Upsilon}^{\gamma_{\min}}(\mathsf{C}_{P}((\mathscr{A}_{i})_{i\in P_{\ast}}))\cong\mathscr{A}_{\ast}\star^{I}\Bigl(\bigstar_{p\in P}^{I}\mathsf{Z}(\mathscr{A}_{p})\Bigr).\]
Our considerations therefore fully justify [6, Proposition 5.2.1].
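For instance, in the smallest nontrivial case \(n=2\) (spelled out here only as an illustration, by applying Theorem 3.2 step by step along the two possible enumerations of \(\{1,2\}\)), one finds
\[\Psi_{f_{2}}\Psi_{f_{1}}(\mathsf{C}_{P}(\mathscr{A}_{\ast},\mathscr{A}_{1},\mathscr{A}_{2}))\cong\mathscr{A}_{\ast}\star^{I}\mathsf{Z}(\mathscr{A}_{1})\star^{I}\mathsf{Z}(\mathscr{A}_{2}),\qquad\Psi_{f_{1}}\Psi_{f_{2}}(\mathsf{C}_{P}(\mathscr{A}_{\ast},\mathscr{A}_{1},\mathscr{A}_{2}))\cong\mathscr{A}_{\ast}\star^{I}\mathsf{Z}(\mathscr{A}_{2})\star^{I}\mathsf{Z}(\mathscr{A}_{1}),\]
and Proposition 3.10 identifies both sides canonically with \(\underline{\Upsilon}^{\gamma_{\min}}(\mathsf{C}_{P}((\mathscr{A}_{i})_{i\in P_{\ast}}))\). The resulting comparison of the two right-hand sides is closely related (cf. Remark 3.4(2)) to the centrality isomorphism \(\mathsf{Z}(\mathscr{A}_{1})\star^{I}\mathsf{Z}(\mathscr{A}_{2})\cong\mathsf{Z}(\mathscr{A}_{2})\star^{I}\mathsf{Z}(\mathscr{A}_{1})\).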
2308.00767
Membrane-in-the-middle optomechanics with a soft-clamped membrane at milliKelvin temperatures
Soft-clamped silicon nitride membrane resonators reach coherence times tau in excess of 100 ms at milliKelvin bath temperatures. However, harnessing strong optomechanical coupling in dry dilution refrigerators remains challenging due to vibration issues and heating by optical absorption. Here, we propose to address these issues with an actuator-free optical cavity and mechanical resonator design, in which the cavity is mounted on a simple vibration-isolation platform. We observe dynamical backaction when the cavity is driven with a free-space optical beam stabilized close to the red sideband using a two-beam locking scheme. Finally, we characterize the effect of absorption heating on the coherence time, and find a scaling with the intracavity power P as tau proportional to P to the power of -(0.34+/-0.04).
Eric Planz, Xiang Xi, Thibault Capelle, Eric C. Langman, Albert Schliesser
2023-08-01T18:10:14Z
http://arxiv.org/abs/2308.00767v2
# Membrane-in-the-middle optomechanics with a soft-clamped membrane at milliKelvin temperatures ###### Abstract Soft-clamped silicon nitride membrane resonators are capable of coherence times \(\tau\) exceeding 100 ms at millikelvin bath temperatures. However, harnessing strong optomechanical coupling in dry dilution refrigerators remains a challenge due to vibration issues and heating by optical absorption. Here, we address these issues with an actuator-free optical cavity and mechanical resonator design, with the cavity mounted on a simple vibration-isolation platform. We observe dynamical backaction when the cavity is driven with a free-space optical beam stabilized close to the red sideband using a two-beam locking scheme. Finally, we characterize the effect of absorption heating on coherence time, finding it scales with the intracavity power \(P\) as \(\tau\propto P^{-(0.34\pm 0.04)}\). ## I Introduction Cavity optomechanics has emerged as a dynamic field over the past few decades, fuelled by great progress in the fabrication of integrated, low-loss mechanical resonators [1]. Coupling light to mechanical motion through radiation pressure effects in optomechanical systems has led to advances both in fundamental physics and technological applications. Membrane-in-the-middle (MIM) systems, which utilize a partially reflective membrane resonator inside an optical cavity, have been of particular significance [2]. They have been used in a wide variety of experiments, ranging from investigations of the tenets of continuous quantum measurement [3; 4; 5; 6; 7], to topological [8] and parametric [9] energy transfer, and for applications in quantum information processing [10], gravitational wave detection [11], and force sensing [12; 13]. When operating MIM systems in the regime where motion of the mechanical oscillator is dominated by quantum uncertainties, it is necessary for the optomechanical coupling rates to exceed the thermal decoherence rate of the system. Towards this goal, a large focus has been placed on developing mechanical resonators that demonstrate ultralow decoherence. _Soft-clamped membrane resonators_[14], which comprise a phononic crystal pattern with an isolated defect, were used to reach the quantum regime at moderate (\(T\sim 10\)K) cryogenic temperatures [5], as well as approach the quantum regime at room temperature [15]. The operation of such membranes in a dry dilution refrigerator is critical for applications such as electro-optic transduction[16; 10] and realizing long-lived quantum memories[17]. However, this introduces several challenges associated with (i) maintaining stability within the high-finesse cavity, and (ii) optical absorption heating of the membrane resonator at high intracavity fields. Challenge (i) involves aligning to and locking the high-finesse cavity. Various approaches have been explored, including misaligning fiber-coupled cavities at room temperature to achieve high coupling efficiencies at low temperatures [18]. To realize optical lock, actuators in fiber-coupled cavities [19] and free-space coupling to optomechanical cavities [10] have been used. However, dry dilution refrigerators tend to possess significant vibrations due to the use of a pulse tube system to maintain the Helium 4 at sufficiently low temperatures [20]. These vibrations often result in large excursions that complicate locking to high-finesse cavities. Challenge (ii) arises from the optical absorption induced heating of membrane resonators at high intracavity fields. 
This phenomenon, which has been observed in numerous optomechanical and electro-optic experiments [21; 22], is problematic as it can lead to higher mechanical bath temperatures. Studies have shown that patterned SiN membranes demonstrate low thermal conductivity [23] and that absorption heating effects can become significant at millikelvin temperatures, especially within the near-infrared regime [11]. Plain membranes with sub-millimeter dimensions have been employed in (wet) dilution refrigerators before, in which no significant heating was observed when driven with laser radiation at \(\sim 1\)-µm wavelength. ([24] made this observation by exposing a 100 nm thick membrane to 7.4 mW intracavity power, while [10] report such a finding while shining 200 mW of intracavity power on a 40 nm thick membrane.) Here, we investigate soft-clamped membrane resonators, which, while offering higher quality factors (\(Q>10^{9}\)) and coherence times \(\tau>140\) ms [17], are expected to suffer from increased heating due to their patterned structure and larger dimensions (\(\sim\)cm). In response to these issues, we present a design for a sideband-resolved optomechanical assembly that enables efficient coupling and locking of a laser to the cavity within a dilution refrigerator. Figure 1 illustrates the soft-clamped membrane design with a mechanical resonance of interest at 1.32 MHz. The chip design incorporates additional coupling to a microwave cavity, although the details of this feature are outside the scope of this manuscript. We also investigate the absorption heating effect of the membrane from 805 nm wavelength laser light.

## II Mechanical Design

The cavity design shown in Fig. 2(a) employs a plano-convex, over-coupled Fabry-Perot cavity with highly reflective mirrors. The wavelength-dependent reflectivities of these mirrors allow tuning both the finesse of the cavity and the over-coupling ratio by adjusting the wavelength. These features enable us to achieve a cavity finesse over 30,000, a cavity linewidth below 300 kHz, and substantial over-coupling greater than 95%. The cavity assembly is made of oxygen-free high-conductivity copper. The individual parts are clamped together tightly using short stainless steel screws to reduce fluctuations within the cavity due to differences in thermal contraction. Within the assembly, the membrane frame lies parallel to the flat cavity mirror, separated by a 500 µm silicon spacer. With an equally thick membrane chip, the membrane is located approximately 1 mm from the flat mirror's surface [25]. The total cavity length is \(\sim 24\) mm. With the convex mirror's radius of curvature around 25 mm, the waist of the optical mode at the position of the membrane is \(\sim 43\) µm. The light is coupled to the cavity by aiming a free-space laser beam through windows in the cryostat, onto the more transmissive cavity mirror. Movements of the cavity assembly in directions orthogonal to the longitudinal cavity axis can induce rather dramatic fluctuations in the intracavity field. Likewise, movements of the cavity in the axial direction can lead to a motion of the mechanical resonator that is not limited by thermal noise. To address those two issues, the cavity assembly is affixed to the simple home-built vibration isolation platform shown in Fig. 2(b) and (c).
It consists of a heavy (1.9 kg) rectangular (264 mm \(\times\) 130 mm \(\times\) 6.25 mm) copper plate that is suspended from the mixing chamber plate of a dry dilution refrigerator (LD250 by Bluefors) via thin copper sheets (24 cm \(\times\) 5.1 cm \(\times\) 0.6 mm), anchored at four points forming an inner rectangle of 170 mm \(\times\) 104 mm. This construction is 'soft' for oscillations along the cavity axis, allowing the platform to swing at a low eigenfrequency of 2.3 Hz, with the aim of mitigating MHz mechanical noise due to non-thermal, external vibrations. For oscillations orthogonal to the cavity axis, it is much stiffer, with the first eigenfrequency appearing at 145 Hz in the horizontal direction. Here, the goal is to avoid large-amplitude, low-frequency excursions that lead to cavity axis pointing noise. This platform allowed us to lock the laser to cavities with finesses up to \(\sim 31,000\), despite an active pulse tube disturbing the system (albeit without an intracavity membrane).

Figure 1: (a) Photograph of the ‘Lotus’ membrane design used in the noise thermometry experiments. The defect diameter is designed to be 230 µm, corresponding to clipping losses low enough to achieve sideband-resolved optomechanics. (b) Room-temperature spectrum of the membrane showing a bandgap between the dashed lines and the defect mode at 1.32 MHz that we address. (c) Simulated displacement of the mechanical mode localized at the defect in the phononic crystal patterned into a silicon nitride (SiN) membrane. Color map corresponds to displacement amplitude, from negative (blue) to positive (red).

Figure 2: (a) Schematic of the cavity assembly that shows the cavity mirrors (blue), the SiN membrane chip (red) and a silicon spacer chip (yellow), held in place by the copper parts of the assembly (gray). In particular, two copper “mirror holders” press the two mirrors against the bulk cavity spacer using nitrile O-rings. The curved mirror’s holder can be moved orthogonal to the cavity axis, which allows centering the cavity field on the membrane defect. (b) Photograph of the vibration isolation platform mounted on the cryostat, with the cavity and the in/out coupling lenses. (c) top (bottom): Displacement pattern of the lowest (second lowest)-frequency mode of the vibration isolation platform along the cavity axis (y-direction) (orthogonal to the cavity axis (x-direction)) at 2.3 Hz (145 Hz).

### Resonator design

This work utilizes a variant of Lotus-class soft-clamped membranes, which have demonstrated quality factors exceeding 1 billion in electromechanical experiments [17]. Our experiment specifically employs a phononic dimer membrane containing two coupled defects, leading to a pair of hybridized mechanical modes, one symmetric and the other antisymmetric [13]. The released membrane has a rectangular extent of \(5.4\ \mathrm{mm}\times 4.8\ \mathrm{mm}\). In our case, only one mechanical mode, at a frequency of \(\Omega_{m}/2\pi\sim 1.32\ \mathrm{MHz}\), will be considered. Fig. 3(a) shows a zoom-in of the mechanical defect of a 'Lotus' membrane described in Fig. 1. If the cavity mode diameter at the membrane position exceeds the defect size of a patterned membrane, additional optical losses occur. These 'clipping losses' originate from the phase difference between the light field that travels through the material and that outside the defect. This alters the cavity wavefront and leads to coupling into higher-order modes.
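To see numerically why such extra losses matter, the short sketch below relates the cavity parameters quoted in the Mechanical Design section (length \(\sim 24\) mm, finesse above 30,000) to the linewidth and to the sideband-resolution condition for the 1.32 MHz defect mode. The numbers are the nominal values stated in the text, used only as an order-of-magnitude check, not fitted results.

```python
# Rough consistency check of the empty-cavity parameters quoted in the text.
c = 299_792_458.0        # speed of light (m/s)

L = 24e-3                # cavity length (m), from the text
finesse = 30_000         # nominal finesse quoted for the empty cavity
omega_m = 1.32e6         # defect-mode frequency (Hz), from Fig. 1(b)

fsr = c / (2 * L)        # free spectral range (Hz)
kappa = fsr / finesse    # cavity linewidth kappa/2pi (Hz)

print(f"FSR        = {fsr/1e9:.2f} GHz")        # ~6.2 GHz
print(f"kappa/2pi  = {kappa/1e3:.0f} kHz")      # ~210 kHz, consistent with '< 300 kHz'
print(f"sideband resolved (kappa < Omega_m)? {kappa < omega_m}")
```

Any clipping-induced drop in finesse raises this linewidth and can push the cavity out of the sideband-resolved regime, which motivates the loss tests described next.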
Minimizing these additional cavity losses is crucial to attain sideband resolution, a prerequisite for sideband cooling of a mechanical resonator to the quantum regime. To gain insights into this cavity loss effect, we carried out a series of tests using optomechanical cavity assemblies with varied defect sizes. Figure 3(b) shows the hexagonal hole pattern etched into 200 nm thick SiN membranes, which we used in these tests to simulate the defect of a soft-clamped membrane. In the assembly procedure for these tests, we first clamp the membrane and flat cavity mirror together. Then, we scan a laser beam across the membrane, and measure the back-reflected power. These scans typically show interference between the light reflected directly from the membrane and off the flat cavity mirror behind the membrane (Fig. 3(c)). With 2D scans of the membrane plane, we can calculate the tilt between the membrane and cavity mirror from the interference fringe. In the example shown in Fig. 3(c), this leads to an angle of approx. 1 mrad for one interference fringe across 0.8 mm with an 830 nm laser beam. For all test assemblies, the angle is kept \(<\)1 mrad to ensure tilt is not a dominant source of cavity loss. The same scan method is then used to find the center of the membrane defect with the laser beam. In the example shown in Fig. 3(d), it occurs at a position around 7.93 mm. Once we have found the center position in the membrane plane, we position and fix the curved mirror in the assembly, forming the membrane-in-the-middle cavity. The longitudinal position of the membrane with respect to the standing wave of the optical cavity determines the optomechanical coupling, among other factors. To identify positions of high coupling, we measure a series of subsequent fundamental cavity resonance frequencies. The position-dependent perturbation of the intracavity field through the dielectric SiN material causes a deviation of the cavity spectrum from equidistant modes separated by a constant free spectral range (FSR). This deviation \(\Delta\omega_{\mathrm{FSR}}\) can be converted into a coupling point using the model provided in [26]. It shows a periodicity of \(2kz=2\pi Nz/L\), where \(N\) is the mode number, \(z\) the distance to the closest cavity mirror (here 1 mm), and \(L\) is the cavity length (here 24 mm). We therefore record the resonance frequency of 24 subsequent cavity modes, in order to observe all coupling points and obtain the graph in Fig. 3(e). Then we perform cavity loss measurements via cavity ringdowns at various coupling points, yielding the data presented in Fig. 3(f). The findings demonstrate that the defect size does impact the cavity loss. We find that sufficiently small defects lead to significant enough cavity losses that sideband resolution becomes unattainable. Based on these results, we use Lotus-class defects with an innermost diameter of 230 µm for relevant optical cavities.

## III Noise thermometry

For the optomechanical measurements at millikelvin temperatures, we clamp the cavity assembly to the vibration isolation platform described in Fig. 2(b). We shine laser light towards the cavity, through the cryostat's windows, from fiber couplers mounted on a breadboard, which is itself clamped to the outer shield of the dilution refrigerator. Figure 4 shows a simplified version of the optical setup, which uses two orthogonally polarized beams derived from the same laser via two acousto-optic modulators (AOMs) [10].
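As a brief aside on the coupling-point measurement described in the assembly procedure above: the quoted periodicity \(2kz=2\pi Nz/L\) directly fixes how many consecutive longitudinal modes must be recorded to sweep through one full period of the membrane-induced FSR deviation. A minimal sketch, using only the values \(z=1\) mm and \(L=24\) mm stated in the text:

```python
# Number of consecutive longitudinal modes covering one full period of the
# membrane-induced FSR deviation (values taken from the text).
z = 1e-3     # membrane distance to the flat mirror (m)
L = 24e-3    # cavity length (m)

# The phase 2kz = 2*pi*N*z/L advances by 2*pi when N increases by L/z,
# so one period of the coupling-point pattern spans L/z consecutive modes.
modes_per_period = L / z
print(f"modes per period: {modes_per_period:.0f}")   # -> 24, matching the 24 recorded resonances
```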
In our experiments we use a widely tuneable, low-noise Ti:sapphire laser SolsTiS from MSquared at a wavelength around 800 nm. The first, weak "lock" beam is parked at resonance with the cavity and its reflection is used to derive a Pound-Drever-Hall (PDH) error signal to lock the laser frequency to the cavity. The second, stronger, red-detuned "science" beam probes (and cools) the mechanical resonance. Its detuning from the cavity resonance is set by the frequencies at which the two free-space AOMs from CSRayzer are driven. We record mechanical noise spectra by direct detection of the science beam using avalanche photodiodes of type APD410A2 from Thorlabs. We initially measure the thermal noise spectrum at room temperature, revealing the fundamental mode of the soft-clamped membrane at approximately 1.32 MHz (see Fig. 1(b)). As the setup cools, thermal contraction of the silicon frame triggers an approx. 2% downshift in mechanical frequency. At millikelvin temperatures we observe the same resonance close to 1.30 MHz. As we increase the science beam power, we observe dynamical backaction damping as a broadening of the mechanical linewidth [1], as shown in Fig. 5(a). For a systematic laser cooling series, we maintain the dilution refrigerator temperature at 20 mK and perform measurements at the science beam detunings [-1.0, -1.5, -2.0] MHz. The science beam's input power is swept up to 10 µW, at an estimated mode-matching efficiency of 0.8. A fit to the PDH error signal gives a cavity linewidth \(\kappa/2\pi=(2.0\pm 0.2)\) MHz, corresponding to a finesse of \((3.1\pm 0.3)\times 10^{3}\). Figures 5(b-d) show the results of Lorentzian fits to these spectra. We note resonance frequency shifts of different signs for distinct detunings, attributed to the anticipated optical spring effect. Moreover, the linewidth increase directly relates to the dynamical backaction effect, from which we infer a vacuum optomechanical coupling rate [1] of \(g_{0}/2\pi\approx 1.2\) Hz. For comparison, the maximum achievable coupling rate in a membrane-in-the-middle configuration can be approximated as \(g_{0}^{\text{max}}\approx 2(\omega_{c}/L)|r|x_{\text{zpf}}\xi\), where \(\omega_{c}/2\pi\) is the cavity resonance frequency, \(L\) is the cavity length, \(r\) the optical field reflectivity of the membrane, \(x_{\text{zpf}}\) is the zero-point fluctuation, and \(\xi\) is the mode overlap between membrane displacement and optical field [15]. With the 50 nm thick membrane and 24 mm long optical cavity, we anticipate \(g_{0}^{\text{max}}/2\pi\approx 8\) Hz at perfect mode-overlap. We attribute the discrepancy with the measured \(g_{0}\) to the unoptimized positioning of the membrane along the cavity axis and a potentially imperfect transverse overlap (\(\xi<1\)). Finally, in Fig. 5(e) we plot the area \(A\) of the mechanical peak over the intracavity power \(P\) squared. In a direct-detection measurement, as in this setup, \(A/P^{2}\) is proportional to the steady-state phonon occupation number \(\bar{n}_{\text{f}}\). In our experiment's regime, quantum backaction is negligible and the occupation is approximately given by
\[\bar{n}_{\text{f}}=\frac{\Gamma_{\text{m}}\bar{n}_{\text{th}}}{\Gamma_{\text{opt}}+\Gamma_{\text{m}}}\approx\frac{\Gamma_{\text{m}}\bar{n}_{\text{th}}}{\Gamma_{\text{opt}}}\,, \tag{1}\]
where \(\Gamma_{\text{m}}\) is the mechanical damping rate, \(\bar{n}_{\text{th}}\) is the occupation of the mechanical bath, and \(\Gamma_{\text{opt}}\) is the optomechanical damping rate [1].
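To make explicit how Eq. (1) enters the power sweeps analyzed next, here is a small numerical sketch: assuming \(\Gamma_{\text{opt}}\propto P\) and a heating law \(\Gamma_{\text{m}}\bar{n}_{\text{th}}\propto P^{\alpha}\), the quantity \(A/P^{2}\propto\bar{n}_{\text{f}}\) follows a power law with log-log slope \(s=\alpha-1\). The prefactors below are arbitrary illustrative constants, not values from this experiment; only the exponent \(\alpha=0.33\) is taken from the text.

```python
import numpy as np

# Illustration of Eq. (1) combined with Gamma_opt ∝ P and Gamma_m * n_th ∝ P^alpha.
# All prefactors are arbitrary illustrative constants (not experimental values).
alpha = 0.33                        # heating exponent quoted in the text
P = np.logspace(-1, 1, 50)          # optical power, arbitrary units

gamma_m_nth = 1.0 * P**alpha        # decoherence rate Gamma_m * n_th (heating law)
gamma_m = 1e-3 * np.ones_like(P)    # bare mechanical damping (small, arbitrary)
gamma_opt = 10.0 * P                # optomechanical damping, proportional to P

n_f = gamma_m_nth / (gamma_opt + gamma_m)   # Eq. (1)

# Fit the log-log slope of n_f (proportional to A/P^2) versus P:
slope = np.polyfit(np.log(P), np.log(n_f), 1)[0]
print(f"fitted slope s = {slope:.2f}  (expected s = alpha - 1 = {alpha - 1:.2f})")
```

With these assumptions the fitted slope comes out near \(-0.67\), i.e. the value measured in Fig. 5(e), which is how the heating exponent is inferred in the following paragraph.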
As \(\Gamma_{\text{opt}}\propto P\) [1], we would expect a power law \(A/P^{2}\propto\bar{n}_{\text{f}}\propto P^{s}\propto n_{\text{cav}}^{s}\) with slope \(s=-1\) to emerge on the log-log scale in Fig. 5(e)--were there no additional heating effects. However, we obtain a slope of \(s=-0.67\pm 0.04\). Combined with Eq. (1), this suggests a power scaling of the decoherence rate \(\Gamma_{\text{m}}\bar{n}_{\text{th}}\propto P^{\alpha}\) with \(\alpha=(-1-s)=0.33\pm 0.04\), and consequently a (heating-limited) coherence time \(\tau\propto P^{-\alpha}\). This is in rough agreement with published results on the optical absorption heating and mechanical linewidth broadening of soft-clamped membranes. Indeed, reference [11] (Fig. S2) suggests that the mechanical temperature scales approximately as: \[\bar{n}_{\text{th}}\propto P^{0.33}, \tag{2}\] whereas the articles [17] and [28] find a scaling of the damping with the mechanical bath temperature close to: \[\Gamma_{\text{m}}\propto\bar{n}_{\text{th}}^{0.66}. \tag{3}\] By combining Eqs. (2) and (3), a crude approximation would then suggest a scaling: \[\Gamma_{\text{m}}\cdot\bar{n}_{\text{th}}\propto(P^{0.33})^{0.66}\cdot P^{0.33}\propto P^{0.55}, \tag{4}\] that is \(\alpha_{\text{lit.}}\approx 0.55\), which has to be compared to our finding of \(\alpha\approx 0.33\). We attribute the discrepancy between these values to the questionable extrapolation used to combine Eqs. (2) and (3), which were measured in very different experimental situations.

Figure 3: (a) Zoom-in to the geometry of a 'Lotus' defect. (b) Microscope picture of a 200 nm thick clipping test membrane used in the test assemblies. The hexagonal pattern of holes simulates the hole structure in soft-clamped membranes. (c) Scan of the in-coupling laser beam along the x-axis outside the hole structure with one cavity mirror removed, for a test membrane such as the one shown in (b). The back-reflected power shows interference between light reflected off the membrane and the cavity mirror behind it. (d) Scan of the in-coupling laser beam after minimizing the tilt, for a test membrane such as the one shown in (b). The scan shows a clear feature stemming from the hole structure around the defect between 0.5 and 0.8 mm. These graphs are used to align the in-coupling laser beam to the center of the defect. (e) Optomechanical coupling at various wavelengths, as determined by measuring the frequencies of 24 subsequent optical resonances. (f) Cavity loss seen in optomechanical cavity assemblies with various defect sizes. The data point at \(\infty\) was taken with a plain membrane, i.e., without a hole structure.

## IV Conclusion We have presented a detailed experimental procedure to construct and analyze an optomechanical setup with a Lotus-class soft-clamped membrane within an overcoupled Fabry-Perot cavity. The setup operates in a dry dilution refrigerator thanks to a stable mechanical design and a vibration isolation platform. We have explored how the finite size of the membrane defect can introduce additional cavity losses. We have developed an efficient procedure for assembling MIM cavities and derived design restrictions on soft-clamped membranes, here a defect size \(>200\) um. The thermal noise spectra observed during the noise thermometry experiments show clear dynamical backaction cooling.
From the noise thermometry measurement, we derived the power-law scaling of the decoherence rate with the optical power with an exponent of \(\alpha=0.33\pm 0.04\), in reasonable agreement with existing literature. This suggests that soft-clamped membranes are subject to optical absorption heating and mechanical linewidth broadening. Our findings provide insights for researchers in quantum optomechanics to optimize their experimental procedures. The techniques and approaches developed here could facilitate the realization of new optomechanical systems that operate in the quantum regime. In the future, coupling the second defect of the phononic dimer to a microwave resonator could enable quantum transduction experiments, with long-term intermediate storage of the quantum state, opening new possibilities in quantum communication and computation. Other applications include compact setups for manipulating optical quantum noise [11, 29], as well as the search for unconventional decoherence [30] and dark matter [31].

Figure 5: Noise Thermometry. (a) Mechanical spectrum PSD normalized to the input optical power squared for different optical input powers, showing cooling of the mechanical resonance. For example, \(P_{\mathrm{in}}=5\) μW at \(-1.5\) MHz detuning corresponds to an intracavity photon number of \(2.2\times 10^{6}\). (b) Shift in the mechanical frequency at various optical powers. The different symbols, for Figs. 5(b) to 5(e), correspond to different detunings, as specified in the legend of Fig. 5(c). (c) Noise spectrum background scaling linearly, thereby indicating a shot-noise-limited measurement. (d) Mechanical linewidth at various powers showing dynamical backaction. (e) Phonon occupation during cooling power sweeps indicating power-law scaling. The normalization is obtained from the so-called “Gorodetsky method” [27] during a separate measurement on the same sample, first at high (\(\approx\) 4 K) baseplate temperature to obtain an estimate of \(g_{0}\), then at low (\(\approx 15\) mK) baseplate temperature using this estimate of \(g_{0}\) to retrieve an estimate of \(T_{\mathrm{bath}}=643\pm 108\) mK. The small dynamical backaction at the low powers of the shown data set is taken into account, but not the small amount of optical heating expected here, which is why the real mechanical occupancy is expected to be slightly higher than shown.

Figure 4: Experimental Setup. Optical lock and science beams are derived from the same laser, frequency-offset by two acousto-optic modulators. The two beams are prepared in orthogonal polarization with linear polarizers (Pol.), and subsequently combined using a polarizing beam splitter (PBS). In combination with a set of waveplates (\(\lambda/4\), \(\lambda/2\)), the same PBS separates the beams reflected from the cavity again. Non-polarizing beam splitters (BS) reroute fractions of the lock and science input powers towards photodetectors PD4 and PD2, respectively. Their signals are used to stabilize the two beams’ powers by feeding back to an electro-optic amplitude modulator (EOM/AM) and to the power of the radio-frequency signal driving the AOMs, respectively. The laser frequency is locked to the cavity via an error signal generated by demodulating the photocurrent detected on photodetector PD3, with feedback to a piezoelectric actuator that adjusts the frequency of the laser. The optomechanical cavity is mounted on the vibration isolation platform inside the dilution refrigerator together with two in- and out-coupling lenses. The photodetector PD1 records the intensity of the detuned science beam. The photocurrent spectra, as recorded with an R&S FSW26 spectrum analyser, therefore also contain the thermomechanical noise of the membrane resonances. Additionally, the cavity mode can be monitored in transmission via a camera and PD5.
## Acknowledgements This work was supported by the European Research Council project PHOQS (grant no. 101002179), the Novo Nordisk Foundation (grant no. NNF20OC0061866) and the Danish National Research Foundation (Center of Excellence "Hy-Q"). This project has furthermore received funding from the European Union's Horizon 2020 research and innovation programme under the Marie Sklodowska-Curie grant agreements No. 801199 and 101107341. **Disclosures.** The authors declare no conflicts of interest. **Data availability.** Data underlying the results presented in this paper are available on [https://doi.org/10.5281/zenodo.8207950](https://doi.org/10.5281/zenodo.8207950).
2303.02202
Using a DSL to read ROOT TTrees faster in Uproot
Uproot reads ROOT TTrees using pure Python. For numerical and (singly) jagged arrays, this is fast because a whole block of data can be interpreted as an array without modifying the data. For other cases, such as arrays of std::vector<std::vector<float>>, numerical data are interleaved with structure, and the only way to deserialize them is with a sequential algorithm. When written in Python, such algorithms are very slow. We solve this problem by writing the same logic in a language that can be executed quickly. AwkwardForth is a Domain Specific Language (DSL), based on Standard Forth with I/O extensions for making Awkward Arrays, and it can be interpreted as a fast virtual machine without requiring LLVM as a dependency. We generate code as late as possible to take advantage of optimization opportunities. All ROOT types previously implemented with Python have been converted to AwkwardForth. Double and triple-jagged arrays, for example, are 400x faster in AwkwardForth than in Python, with multithreaded scaling up to 1 second/GB because AwkwardForth releases the Python GIL. We also investigate the possibility of JIT-compiling the generated AwkwardForth code using LLVM to increase the performance gains. In this paper, we describe design aspects, performance studies, and future directions in accelerating Uproot with AwkwardForth.
Aryan Roy, Jim Pivarski
2023-03-03T20:24:34Z
http://arxiv.org/abs/2303.02202v1
# Using a DSL to read ROOT TTrees faster in Uproot ###### Abstract Uproot reads ROOT TTrees using pure Python. For numerical and (singly) jagged arrays, this is fast because a whole block of data can be interpreted as an array without modifying the data. For other cases, such as arrays of std::vector<std::vector<float>>, numerical data are interleaved with structure, and the only way to deserialize them is with a sequential algorithm. When written in Python, such algorithms are very slow. We solve this problem by writing the same logic in a language that can be executed quickly. AwkwardForth is a Domain Specific Language (DSL), based on Standard Forth with I/O extensions for making Awkward Arrays, and it can be interpreted as a fast virtual machine without requiring LLVM as a dependency. We generate code as late as possible to take advantage of optimization opportunities. All ROOT types previously implemented with Python have been converted to AwkwardForth. Double and triple-jagged arrays, for example, are \(400\times\) faster in AwkwardForth than in Python, with multithreaded scaling up to 1 second/GB because AwkwardForth releases the Python GIL. We also investigate the possibility of JIT-compiling the generated AwkwardForth code using LLVM to increase the performance gains. In this paper, we describe design aspects, performance studies, and future directions in accelerating Uproot with AwkwardForth. ## 1 Introduction A majority of the particle physics data today is stored as ROOT files[1]. However, there is considerable variation in how the ROOT files are serialized. The newer versions (such as RNTuple) are exclusively stored in a columnar format. This means that that the data for each column is stored as a contiguous block. This is not entirely true for the older format, TTree, which stores simple data types in a columnar fashion and complex data types in a record-oriented format, which means that each row is stored as a contiguous block with bytes between rows. This introduces a need to iterate over objects when deserialising the data. Only a handful of libraries in Python allow users to read ROOT files, and only one allows them to do so without any compiled code dependency, that library is Uproot [2]. Uproot is a part of the Scikit-HEP ecosystem with a ROOT file reader implemented in pure Python. When it comes to TTrees, Uproot is much slower when deserialising record-oriented data, such as data types of nested depth greater than one. This is due to the fact that data with nested depth less than or equal to one (numerical and singly jagged) can be read by casting whole blocks of data as NumPy arrays, making it a constant time operation. For data types of nested depth greater than one (doubly jagged and more), the Pythonic deserialiser has to alternate between reading list contents and list lengths. This slows down the Pythonic deserialiser by orders of magnitude. In this paper, we present a solution to this problem by updating the Uproot library to generate specialised deserialisation code in a Domain Specific Language (DSL) to read each unique data type. The new DSL based deserialiser is the default option starting from Uproot version 5. ## 2 Design of the DSL The design of the DSL informs the majority of the subsequent design decisions of the new deserialiser. The biggest choice to be made is between a compiled language and an interpreted one. 
While a compiled language could deliver higher performance, it would also introduce the need for a runtime compiler, which would be hard to install as a dependency. On the other hand, an interpreted language, while easy to install and use, could suffer from lower speed when compared to the compiled alternative. Another important restriction governing the choice of DSL is our inability to simply pre-compile the deserialisation code for the specific data types. This is because the data types in the ROOT file format are discovered at runtime by reading the TStreamerInfo part of the file. This means that the code for each type of data will necessarily have to be generated at runtime. Given the fact that even developers do not see the code generated in the DSL, it does not need to be highly human-readable. This opens us up to the possibility of choosing a DSL that is instead easy to generate. To summarise, the restrictions for the choice of DSL are as follows: * It should be lightweight, i.e. should not depend on compilation tool chains like LLVM [3]. * It should be considerably faster than Python. * It does not need to be easy to read but it does need to be easy to generate. For example, the syntactic indentation of Python is not necessary and hard to generate consistently. ## 3 AwkwardForth DSL AwkwardForth [4] is a DSL designed to satisfy the requirements listed in the last section. It is based on standard Forth with some additional built-in commands for parsing files. It is shipped with Awkward Array [5], a Python library for handling nested lists of arbitrary lengths, already required by Uproot to represent the complex data types serialised in ROOT files, since NumPy arrays are limited to purely numeric data. AwkwardForth is considerably faster than Python. It takes about 5-10 ns per instruction to execute compared to about 1000-2000 ns per instruction for Python. This higher speed can be primarily attributed to two factors: * Python folows object pointers at runtime, AwkwardForth, like all Forths, only has one data structure, a stack of integers. * Python checks types at runtime, AwkwardForth has only one type, either all 32-bit integers or all 64-bit integers. Forth has a minimal syntax, consisting only of a stream tokens seperated by whitespace. All of these properties are retained by AwkwardForth, making it well suited for our task. The Awkward Array library ships with a Virtual Machine (VM) for running AwkwardForth code. This VM requires no extra installation and is pip installed along with the rest of the library, so there is no extra dependency requirement. While not having a compiler makes the new reader more accessible, we are aware of the fact that some of the potential users could already have a JIT-compiler like Numba [6] installed. In principle, we could use Numba if it is available in the environment to JIT-compile the AwkwardForth code. This would give an option for those users to have an even faster reader than the interpreted one. This possibility is discussed in a later section. ## 4 Data Types Covered Many primitive and complex data types have been encountered by Uproot users in its 6-year history. The new deserialiser must be capable of reading all of these data types to be the default option for Uproot version 5. To realise this, we implemented support for a large number of frequently used data types, including all of the STL types (vectors, maps, sets etc), arrays, a number of built-in classes (TString, Tarray, TDatime, TRefArray etc.), and user-defined classes. 
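From the user's point of view, these types are read transparently. The snippet below is a usage sketch with hypothetical file, tree, and branch names (only `uproot.open` and the standard `TBranch.array` call are real API); in Uproot 5 the AwkwardForth deserialiser is selected automatically for such record-oriented types:

```python
# Usage sketch (hypothetical file/tree/branch names): reading a doubly-jagged
# branch such as std::vector<std::vector<float>> into an Awkward Array.
import uproot

with uproot.open("events.root") as f:          # "events.root" is a placeholder
    tree = f["Events"]                          # "Events" is a placeholder TTree name
    jagged = tree["hits_xyz"].array(library="ak")   # doubly-jagged branch -> Awkward Array
    print(jagged.type)                          # e.g. N * var * var * float32
```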
The only data types not implemented in AwkwardForth are those that can't be represented as Awkward Arrays and class features that have never been reported in Uproot's history. These cases fall back on the old Pythonic reader. ## 5 The Implementation Implementing the reader required making a number of design choices to ensure that it covers all the corner cases in a complex file format like ROOT. Reading a ROOT TTree file requires discovering a lot of information at runtime. The same Awkward data type may be serialized as different C++ classes, for example, strings can be represented as std::string, char* and TString. This variety within the same data type is due to historical reasons and the richness of the C++ type system. But sometimes, even the same C++ data type can be serialized in different ways. For example, sometimes object headers are skipped, and the decision to skip some headers are encoded in other headers in the data stream, not in external metadata. If all code is generated before deserializing, decisions about which headers to skip would have to be made repeatedly for all objects in the data stream. While that would have some impact on fully JIT-compiled code (preventing vectorization), it has more impact on interpreted code like AwkwardForth. To be most effective, we need to generate the AwkwardForth code as late as possible, after we know which object headers exist. Object headers are represented by an AwkwardForth "skip" command and missing headers involve no code at all. Neither case involves expensive code branching ("if" statements). We implement this late AwkwardForth generation as a new feature of the old Pythonic deserialiser. The Pythonic deserialiser runs over the first data entry, generating code as a side-effect. Then the AwkwardForth runs over the whole dataset (repeating the first entry). Due to the GIL in Python, the old Pythonic deserialiser could not make effective use of multi-threading. This was a huge disadvantage as multi-threading can scale up the reading of large files. The new AwkwardForth based deserialiser is not limited by the GIL as the VM is written in C++. This allows multi-threaded execution of the AwkwardForth code, enabling the deserialisation of large files at a much higher rate. While the execution of the AwkwardForth code can be multi-threaded, the code generation itself is done in a single thread, to avoid redundantly generating the same code in multiple threads. When code-generation is done, the completely formed VM is distributed to all the threads, then they can each start reading the file from different points in the bytestream. ## 6 Performance The new deserialiser performed as well as expected. Figure 1 shows the deserialisation rate of large samples of these four data types: float, std::vector<float>, std::vector<std::vector<float>>, that is, doubly nested list, as well as triply nested lists. The first two of these are columnar, and therefore unaffected by the update. The last two, however, are examples of record-oriented data, which the new AwkwardForth deserialiser reads approximately \(400\times\) faster than the old Pythonic deserialiser. In Figure 2, we can see that the deserialisation rate also scales very well with the number of threads (up to about 1000 MiB/sec). After which the serial part dominates the end to end workflow, in accordance with Amdahl's law. 
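As a rough illustration of this saturation, the sketch below evaluates Amdahl's law for a hypothetical workload; the single-thread rate and serial fraction are made-up parameters (chosen so the plateau lands near the observed ~1000 MiB/s), not measurements from Uproot:

```python
# Illustrative Amdahl's-law sketch (not Uproot code): throughput saturation
# when a fixed serial fraction of the work cannot be parallelised.
def speedup(n_threads: int, serial_fraction: float) -> float:
    """Amdahl's law: overall speedup with n_threads for a given serial fraction."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / n_threads)

single_thread_rate_mib_s = 100.0   # hypothetical 1-thread deserialisation rate
serial_fraction = 0.1              # hypothetical 10% serial work

for n in (1, 2, 4, 8, 16, 32):
    rate = single_thread_rate_mib_s * speedup(n, serial_fraction)
    print(f"{n:2d} threads: ~{rate:6.0f} MiB/s")
# The rate approaches single_thread_rate / serial_fraction (here ~1000 MiB/s)
# no matter how many threads are added, which is the flattening seen in Fig. 2.
```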
Before and after the parallel part of the workflow, data need to be copied into the Python process, the AwkwardForth code needs to be generated, finalized arrays need to be concatenated, etc. We could identify about 80% of the flat part of the curve as serial steps. All of these tests were carried out with uncompressed data that was already loaded from disk into the operating system's virtual memory (RAM). Thus, the rates measured do not include decompression (typically slower) or disk-reading (depends on the speed of the disk).

Figure 1: Deserialization rate of the new deserializer, the old Pythonic deserializer, and C++ ROOT (C++ for loop calling TTree::GetEntry).

Figure 2: Scaling of the new deserialiser with the number of threads. Left: vertical axis is rate. Right: vertical axis is time.

## 7 Future Directions: JIT-compiling AwkwardForth The VM used to run the AwkwardForth code consists of a minimal interpreter written in C++, which interprets virtual bytecode in a loop. This was sufficient for this task as most of the AwkwardForth code generated to deserialise ROOT TTrees consists of simple stack manipulations, type conversions, and endian swapping. The lightweight VM, without any code optimisation features, is able to execute the AwkwardForth code very efficiently and generate the favourable results mentioned in the last section. ### JIT-compiling AwkwardForth One of AwkwardForth's design goals was to not need a compilation toolchain like LLVM, but since Numba is a popular JIT-compiler in the scientific Python world, it may be available anyway. In principle, we could check to see if Numba is installed, and if it is, fully JIT-compile the AwkwardForth code. This way, the same AwkwardForth code could be used with and without JIT-compilation; JIT-compilation would only provide an optional speed boost. In this section, we investigate the extent of performance gains that could be achieved in such a case. As a proof of concept, we implemented a bare-bones compiler, written in Python, to compile AwkwardForth bytecode to LLVM. This new experimental compiler was designed to fit into the existing deserialisation pipeline of Uproot. We reuse the tokenizer and the bytecode generator of the existing interpreter and translate this bytecode into Python code that Numba can compile (which we'll be calling "Numba code"). This allows us to use code that is already tested and deployed as a first step in the compilation workflow. The new compiler works by stepping through the list of bytecodes provided by the VM and generating the equivalent Numba code. The Numba code is then put into a function with a Numba @numba.jit decorator to compile it. The functions for pushing and popping from the stack are predefined Numba functions available in the Python environment. The stack is implemented as a NumPy array with a fixed length and an integer cursor to keep track of the head of the stack. These are both passed into the generated function as arguments. This allows us to manipulate the stack from outside, just as we can with the current AwkwardForth implementation. The task of deserialising typically requires execution of simple commands (with minimal branching) inside a loop. This same set of commands is repeated for each entry in the input dataset, which can be arbitrarily large. Since LLVM optimizes the code it is given, we can write naive code.
However, it does help if the Numba code has fewer checks for stack underflow and overflow, and we can usually identify such situations while generating the Numba code by counting pushes and pops. In some cases, it's not possible to know the number of elements that could be be pushed or popped, and runtime error checks are left in for only these cases. ### The Performance of the new compiler We implemented a few operations from the VM and ran identical pieces of AwkwardForth code in both the interpreted VM and the new experimental compiler. The code that we tested does not include any data parsing, so that we can measure the rate of computation independently of fetching data from RAM. Thus, these tests represent the best possible speedup from JIT-compilation, for code that is arithmetically intense. Our test computation is \[x_{i+1}=(x_{i}+1)*(x_{i}-2)+3\] (1) AwkwardForth: dup 1 + swap 2 - * 3 + Including the do...loop itself, this is 10 AwkwardForth instructions, which we expect to be 5-10 ns per instruction. When JIT-compiled (at -O3) in x86, the expression becomes \[\begin{array}{l}\mbox{lea}\quad\mbox{eax, [rdi+1]}\\ \mbox{sub}\quad\mbox{edi, 2}\\ \mbox{imul}\quad\mbox{eax, edi}\\ \mbox{add}\quad\mbox{eax, 3}\end{array} \tag{3}\] The expression only uses the fastest hardware commands (addition, subtraction, and multiplication, each about 1 clock tick) and in a way that they cannot be optimized away. Figure 3 shows the results of this comparative study. JIT-compilation adds a constant-sized overhead of about 400 ms for the compilation itself, such that it only begins to scale with \(\sim\)10\({}^{7}\) iterations or more. However, in the asymptotic limit of many iterations, the JIT-compiled version has a \(\sim\)30\(\times\) faster rate. The above test excluded the cost of data transfer, which is highly variable. Different computers can vary from 1 GB/sec to 10 GB/sec, and the number of bytes per iteration depends on the type of data structure and data values (lists with many items versus few items). Record-oriented data has at least tens of bytes per entry, so the bottleneck due to data transfer would be at least 1 second for 10\({}^{9}\) entries (assuming 10 bytes per entry and 10 GB/sec), which is about the same rate as the experimental compiler itself. If data transfer is slower than this (more bytes per entry or slower memory transfer rate), then data transfer would dominate over the gains from JIT-compilation. Thus, the benefit of JIT-compilation would depend strongly on use-case. From this study, we have learned that JIT-compilation of the AwkwardForth would be at least sometimes beneficial, by as much as \(30\times\) in the best case. However, data transfer rates can mask this difference in some cases and not others. ## 8 Conclusion In this paper, we presented the implementation details of a DSL based ROOT file deserialiser. The new deserialiser uses the AwkwardForth DSL to generate type-specific code to read ROOT TTrees files. The new deserialiser was implemented to overcome the orders-of-magnitude slowdown of using Python to deserialise data in sequential loops for record-oriented data. The new deserialiser showed a \(400\times\) gain in speed for this type of data. We also studied the possibility of optionally JIT-compiling AwkwardForth code when Numba is in the Python environment. A proof of principle implementation showed a \(30\times\) speedup, in an ideal case of arithmetically intensive instructions. 
Data transfer rates, however, can mask this difference when more than tens of bytes per entry need to be transferred. Starting in Uproot version 5 (already released), AwkwardForth became the default deserialiser. All up-to-date users of the Uproot package with complex data to read are now enjoying its benefits. ## 9 Acknowledgements This work was supported by the National Science Foundation under Cooperative Agreement OAC-1836650 (IRIS-HEP).
2305.01485
A Heath-Jarrow-Morton framework for energy markets: a pragmatic approach
In this article we discuss the application of the Heath-Jarrow-Morton framework Heath et al. [26] to energy markets. The goal of the article is to give a detailed overview of the topic, focusing on practical aspects rather than on theory, which has been widely studied in literature. This work aims to be a guide for practitioners and for all those who deal with the practical issues of this approach to energy market. In particular, we focus on the markets' structure, model calibration by dimension reduction with Principal Component Analysis (PCA), Monte Carlo simulations and derivatives pricing. As application, we focus on European power and gas markets: we calibrate the model on historical futures quotations, we perform futures and spot simulations and we analyze the results.
Matteo Gardini, Edoardo Santilli
2023-05-02T15:09:42Z
http://arxiv.org/abs/2305.01485v3
# A Heath-Jarrow-Morton framework for energy markets: a pragmatic approach ###### Abstract In this article we discuss the application of the Heath-Jarrow-Morton framework Heath et al. [26] to energy markets. The goal of the article is to give a detailed overview of the topic, focusing on practical aspects rather than on theory, which has been widely studied in literature. This work aims to be a guide for practitioners and for all those who deal with the practical issues of this approach to energy market. In particular, we focus on the markets' structure, model calibration by dimension reduction with Principal Component Analysis (PCA), Monte Carlo simulations and derivatives pricing. As application, we focus on European power and gas markets: we calibrate the model on historical futures quotations, we perform futures and spot simulations and we analyze the results. **Keywords**: Stochastic processes, Heath-Jarrow-Morton, Energy Markets, Monte Carlo, Principal Components Analysis, Calibration, Pricing. ## 1 Introduction Electricity markets across the world differ for many factors, both fundamental, such as demand and generation mix, and regulatory. In the most modern countries one goal in deregulating energy markets is to allow them to respond to supply and demand variation in a more efficient way. Focusing on US electricity market, Park et al. [39] has shown that as a result of deregulation, more competitive and interrelated environments are developing in the electricity and natural gas markets. In a previous paper Emery and Liu [22] have discovered that the daily settlement prices of New York Mercantile Exchange's (NYMEX's) California-Oregon Border (COB) and Palo Verde (PV) electricity futures contracts are cointegrated with the prices of its natural-gas futures contracts. Such a result has been confirmed by Mjelde and Bessler [36] which have shown how electricity prices mainly response to shocks in coal market, whereas Bachmeier and Griffin [3] deeply investigated the level of market integration between crude oil, coal, and natural gas prices. Moving to European countries several authors discussed how electricity prices react to variations in fuel prices. Testing for market integration between natural gas, electricity and oil prices in the UK in the period in which the natural gas market was deregulated but not yet linked to the continental European gas market, Asche et al. [2] have highlighted evidences for an integrated energy market. Panagiotidis and Rutledge [37] analyzed the relationship between UK wholesale gas prices and Brent oil price finding co-integration over the entire sample period(1996-2003). Likewise, using daily price data for Brent crude oil, NBP UK natural gas and EEX electricity Bencivenga et al. [7] have shown that gas, oil and electricity markets are integrated. On the other hand as a result of a robust multivariate long-run dynamic analysis Bosco et al. [13] have revealed the presence of four highly integrated central European electricity markets (France, Germany, the Netherlands and Austria). The trend shared by these four electricity markets appears to be common also to gas prices, but not to oil prices. The recent invasion of Ukraine by Russia, and the fear of a possible shortage in gas supply for Europe, led to an increase in gas and electricity prices1 which has never seen before as shown in Figure 1. 
Figure 1: German power market and TTF natural gas daily prices (January 2020 to January 2023).

This is easy to explain from an economic point of view, since in Europe natural gas is used to produce 19.2% of the electricity and natural gas power plants usually play the role of the marginal technology in the electricity supply curve. At this point it should be evident that the integration between energy markets must be taken into account if one is interested in energy commodities modeling, risk management or derivatives pricing. Footnote 1: Such an increase began before the invasion, after the Covid-19 pandemic, and many complex factors contributed to it. Nevertheless, the effect of the war on European energy commodity prices has been evident. Over the years many approaches have been proposed to model energy markets in a univariate setting. Pioneering papers in this field date back to the beginning of the century: Schwartz and Smith [47], Lucia and Schwartz [32] and Schwartz [46] focused mainly on purely Gaussian frameworks, whereas Cartea and Figueroa [19] proposed a mean-reverting model with jumps and a deterministic seasonality component for electricity spot prices. Saifert and Uhrig-Homburg [42] compared different modeling approaches in power markets, whereas a good summary of energy markets modeling is contained in Benth et al. [9]. Many non-Gaussian models based on Levy processes have been proposed for equities, such as the variance gamma (Madan et al. [34]), the jump-diffusion model (Merton [35]) and the normal-inverse Gaussian process (Barndorff-Nielsen [5]): in particular, a two-factor version of this process has recently been applied to the energy context by Piccirilli et al. [41]. Furthermore, many stochastic volatility models, like the ones proposed by Heston [27] and Bates [6], can be adapted to model commodity price behavior. All these powerful tools can be used to properly consider many stylized facts such as jumps in price trajectories, skewness and fat tails in the log-return distribution, and volatility smiles. A review of financial modeling with jump processes can be found in Cont and Tankov [20]. Financial modeling in a univariate setting has been deeply investigated, but challenging issues arise when we scale to a multi-commodity market. Within this context, the former modeling techniques become harder to apply in practice and the literature is not as rich as in the one-dimensional framework. Petroni and Sabino [40] have shown how some standard models, such as the ones proposed by Black and Scholes [11], Schwartz and Smith [47] and Cartea and Figueroa [19], can be extended to a multivariate context by adding dependent jumps which are modeled using self-decomposable laws, whereas Kiesel and Kusterman [30] introduced a structural model to properly consider the market coupling effect in electricity spot markets, in the spirit of what has been proposed by Carmona and Coulon [18]. A widely recognized approach for energy market modeling has been proposed by Benth and Saltyte-Benth [8], who adapted the framework introduced by Heath et al. [26] to the energy futures market. This modeling technique, together with the calibration of the underlying model, has been studied by many authors such as Sclavounos and Ellefsen [48], Hinderks et al.
[28], Benth et al. [10], Broszkiewicz-Suwaj and Aleksander [15], Edoli et al. [21] and Feron and Gruet [23]. Despite their mathematical completeness and accurateness, these articles seems hard to be used in practice since many practicalities are not covered. The data preparation, a clear explanation on how the model should be implemented in practice, a "practitioner" interpretation of the results, the management of typical issues arising during the implementation are often missing. This works aims at filling this gap, by collecting all the results presented so far concerning the application of the Heath-Jarrow-Morton (HJM) framework to energy markets. By focusing on European power and gas futures markets, we present a very general approach for data analysis and preparation, for model calibration and, finally, for simulation. Furthermore, we discuss some model limitations and we suggest some possible extensions, trying to preserve both the numerical and the mathematical tractability of the framework. The article is organized as follows: Section 2, focuses on a slice of European power and gas futures markets, and shows how the HJM approach might be the appropriate modeling framework. Section 3 introduces the model and briefly explain its behavior with the support of some "toy-examples". In this section we also show how the PCA can be used in order to calibrate the market parameters. Section 4 considers real-market data, deals with different approaches to data preparation and shows how to properly calibrate the model on power and gas European futures prices. In Section 5 we show simulation results of the whole market considered in Section 4: we analyze the outcomes and we briefly discuss model's strengths and limitations. Section 6 concludes the work and discusses some possible model extensions. ## 2 Market structure and analysis In this section we focus on the European power and gas futures markets. The results we obtain are valid within this framework, but the approach can be easily adapted to any market. In particular, we consider data from the European Energy Exchange (EEX) for the power markets, whereas data regarding the natural gas markets comes from the Intercontinental Exchange (ICE). Recently, the power and gas futures markets in Europe has experimented an era of expansion and increasing interest. Within this market many contracts are present but the most traded ones are those with monthly, quarterly and yearly delivery. For example, a power future 2024 calendar is a contract between two counterparts to buy or sell a specific volume of energy in MWh at fixed price, decided at time \(t\), for all the hours of the year. In this case \(T^{s}=1/1/2024\) and \(T^{e}=31/12/2024\). Moreover, at time \(T^{s}\) the product expires and hence \(T^{s}\) plays the role of maturity2. Futures contracts with monthly or quarterly delivery are defined similarly. We will denote the price of such a contract at time \(t\) by \(F(t,T^{s},T^{e})\). Concerning the gas market, the situation is slightly different, but for the aim of this paper we can consider it to be similar to power one. In particular, we can assume that the same contract \(F(t,T^{s},T^{e})\) written on gas, will delivery the gas you need to produce 1 MWh of electricity for the whole delivery period \([T^{s},T^{e}]\). 
On the other hand, we denote by \(F(t,T)\), the so called _fixed delivery_ futures contract: \(F(t,T)\) is the price at time \(t\) of a contract with delivery \(T\), for example one year,one month, three months and so on from now. These latter products are not quoted in the market but can be easily obtaining by rearranging \(F(t,T^{s},T^{e})\) as we will show in Section 4.1. From a modeling point of view great simplification arises by defining the dynamic of \(F(t,T)\) and hence obtaining that of \(F(t,T^{s},T^{e})\). Footnote 2: Actually the product with delivery starting at \(T^{s}\) is traded until the day before the beginning of the delivery period, but in order to simplify the notation we consider \(T^{s}\) as the maturity. Without loss of generality, in this paper we focus on four power markets and on two natural gas markets. In particular we consider the German (DE), Italian (IT), French (F7) and Swiss (CH) power futures markets and the TTF and PSV which are the Dutch and the Italian hubs for the natural gas, respectively. Of course, the analysis can be extended to an arbitrary number of markets. The final goal is to model the dynamic of the whole forward curve for the aforementioned markets under a HJM framework. We can assume that each of traded contract acts as a source of uncertainty (which we call a "random factor") for the determination of the forward curve dynamic. On the other hand, as we stated before, all these markets are co-integrated and hence the hope is that we can use a small number of random factors to successfully model the forward curves dynamic. In oder to verify this assumption, we consider daily futures prices between the \(1^{st}\) of January 2020 to the \(31^{st}\) of December 2022 and we compute the correlation of the log-returns, following the approach proposed by Sclavounos and Ellefsen [48]. In Figure 2 we plot the correlation surfaces between daily log-returns of several futures products with fixed delivery3: we observe that the linear correlation coefficient is significant and hence this bodes well that only few stochastic factors drive the whole structure of the forward curve. Footnote 3: As mentioned before, in energy markets it is a common practice to distinguish between general future contracts and those ones with fixed delivery. Fixed delivery products refer to delivery occurring after a given amount of time with respect to the present date. For example, if today is the 19th of November 2023, \(F(t,T_{1})\) with \(T_{1}=\frac{1}{12}\) refers to the to product with delivery on the next month, namely December 2023: it is customary to refer to it as \(M1\) (a short hand for month plus one). An example will make the distinction clearer. \(F(t,T^{s},T^{e})\) refers to an absolute delivery, for example December 2023 and hence, \(T^{s}=1/12/2023\) and \(T^{e}=31/12/2023\). If today is the 19th October 2023, this product will be the \(M2\) and will be denoted by \(F(t,T_{2})\), with \(T_{2}=\frac{2}{12}\). As time goes on if we are on the 19th November 2023 we observe the same product \(F(t,T^{s},T^{e})\) but this time this product correspond to the \(M1\) since its delivery is only one month far from now. We denote this product by \(F(t,T_{1})\) with \(T_{1}=\frac{1}{12}\). Most financial models relies upon stochastic processes in continuous time and, mainly, on Levy processes: the well known Brownian motion is just the simplest of them. 
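The correlation analysis described above can be reproduced in a few lines; the sketch below uses simulated placeholder price series and made-up product names, not the EEX/ICE data:

```python
# Minimal sketch of the log-return correlation analysis (placeholder data only).
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
dates = pd.bdate_range("2020-01-01", "2022-12-31")

# Hypothetical fixed-delivery futures prices built from correlated Gaussian shocks,
# just to have something to correlate.
shocks = rng.multivariate_normal(
    mean=np.zeros(3),
    cov=[[1.0, 0.8, 0.6], [0.8, 1.0, 0.7], [0.6, 0.7, 1.0]],
    size=len(dates),
) * 0.02
prices = pd.DataFrame(
    100 * np.exp(np.cumsum(shocks, axis=0)),
    index=dates,
    columns=["DE_M1", "TTF_M1", "IT_M1"],   # placeholder product names
)

log_returns = np.log(prices).diff().dropna()
print(log_returns.corr())    # sample linear correlation of daily log-returns
```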
Working in a Levy framework leads to the development models characterized by a very reach structure which can be used to efficiently include many stylized facts. The interested reader can refer to Sato [44] and Applebaum [1] for an overview on Levy processes and to Cont and Tankov [20] for applications to financial markets. One of the assumptions Figure 2: Log-returns correlation surfaces for different commodities. of Levy processes is the independence of increments. For this reason, before using Levy processes for modeling purposes, one has to check that the increments are independent. Following the approach proposed in Brigo et al. [14], we compute the auto-correlation function (ACF) on six different time series, one for each market, on the calendar product with delivery the year 2023. We consider the series of daily log-returns \(x_{1},x_{2},\ldots,x_{n}\), with \[x_{i}=\ln\frac{S_{t_{i+1}}}{S_{t_{i}}},\] where \(S_{t_{i}}\) denotes the price of a given risky asset at time \(t_{i}\) and we compute the ACF with lag \(k\) as: \[ACF(k)=\frac{1}{(n-k)\hat{v}}\sum_{i=1}^{n-k}\left(x_{i}-\hat{m}\right)\left(x _{i+k}-\hat{m}\right),\quad k=1,\ldots,20,\] where \(\hat{m}\) and \(\hat{v}\) are the sample mean and variance. Roughly speaking we can consider the ACF as an estimate of the correlation between the random variables \(X(t_{i})\) and \(X(t_{i+k})\). ACF for the six selected products above are shown in the charts in Figure 3. For the all of them we do not observe any significant lags in the historical return time series, which means the independence assumption is acceptable in this case. Changing the delivery period of the product we get similar results. Therefore Levy processes can be used in order to properly model the futures prices. Within the classical HJM framework log-returns are assumed to be normally distributed. As observed by many authors in most financial markets log-returns are not normally distributed, but their distribution is often skewed and presents heavy tails or fat tails effect. Furthermore, the log-returns volatility is not constant but often clusters appear. In Figure 4 we plot the empirical probability density function of log-returns compared with the normal one fitted on the same set of data as above. In all cases we observe that the real log-returns distribution is peaked and presents tails which are heavier than the normal distribution ones. Furthermore, all distributions result to be skewed. In light of this results, by choosing a normal distribution we might lose some market peculiarities. On the other hand, in order to get a simple and stable calibration methodology the hypothesis of normality in log-returns is commonly accepted by practitioners. Gaussianity hypothesis in log-returns might be relaxed by using Levy processes or Levy copulas, as proposed by Panov and Samarin [38] and Cont and Tankov [20], but this approach is very hard to handle from a practical point of view. Indeed, the calibration step is hard to tackle, especially if the number of underlying asset is large. Several authors, such as Luciano and Semeraro [33], Schoutens [45], Ballotta and Bonfiglioli [4], Buchmann et al. [16] and Gardini et al. [24], investigated other techniques in order to consider dependence in log-returns remaining in a Levy framework. 
Most of them work quite well if the number of the risky assets to remains small, but complications arise when one deals with many risky underlying assets since the number of parameters rapidly grows as the number of underlying assets increases. Consequently calibration becomes hard to perform in practice and its results might be unreliable. For these reasons, the Gaussian framework remains a milestone among practitioners in multi commodity energy markets. On the other hand, if a focus on a single product is required, a more general model among the ones we listed should be considered. Figure 3: Sample ACF computed on log-returns. Figure 4: Daily futures log-returns densities for calendar 23 products. In the next sections we apply the HJM framework to energy markets, with a particular focus on European power and natural gas markets. Nevertheless, the approach is very general and could be easily adapted to other commodity markets such as oil, precious metals and agricultural products. The model In this section we discuss in detail the HJM framework applied to energy markets. In particular we start from some "toy-models" which are useful to fix the main modeling concepts. Once that the dynamics and the calibration procedures for these simpler models are clear, we introduce the most general framework which turns out to be a simple extension of the previous setting. ### A single factor toy-model Consider the price of the fixed delivery future contract, signed at \(t_{0}\), \(F(t,T)\) at time \(t\in[t_{0},T]\) for a fixed delivery \(T\). Assume that we have only a single source of uncertainty and, assuming to work under the risk neutral measure \(\mathbb{Q}\), consider a dynamic of the following type: \[\frac{dF(t,T)}{F(t,T)}=\sigma(t,T)dW(t),\quad F(t_{0},T)=F(0,T)\;a.s., \tag{1}\] where the volatility \(\sigma(t,T)\) is assumed to be a deterministic time dependent function \(\sigma(t,T):[t_{0},T]\mapsto\mathbb{R}^{+}\) and \(F(0,T)\in\mathbb{R}^{+}\) is the value of the future contract at time \(t_{0}\). According to Samuelson [43] the term structure of commodity forward price volatility typically declines with contract horizon: this is what is commonly known as Samuelson's effect. Therefore, it is customary to assume a volatility function which depends on the time to maturity \(T-t\), namely \(\sigma\left(t,T\right)=\sigma(T-t)\). In particular, \(\sigma(T-t)\) will be decreasing in \(T-t\), reflecting the fact that contract with longer maturities are less volatile than contracts with a shorter one. Using very basic Ito's calculus we can solve the Equation (1), obtaining \[F(t,T)=F(t_{0},T)\exp\left\{-\frac{1}{2}\int_{t_{0}}^{t}\sigma\left(T-s\right) ^{2}ds+\int_{t_{0}}^{t}\sigma(T-s)dW(s)\right\}.\] The existence of an explicit solution for the stochastic differential Equation (1) is extremely important in order to simulate the trajectories of process \(F(T)=\left\{F(t,T);t_{0}\leq t\leq T\right\}\) exactly. Some possible realizations of the process over one year time horizon for a volatility function of the form: \[\sigma(T-t)=0.8e^{-2(T-t)},\] are shown in Figure 6. Observe that, at the beginning the volatility is low, since the time to maturity is large, whereas as \(T-t\to 0\) the volatility increases, according to the Samuelson's effect. When one performs simulations, "sanity checks" are important. By "sanity check" we mean the comparison of a numerical computation with a theoretical one. 
For example, by Ito's isometry, it is very easy to check that: \[Var\left[\ln F(t,T)\right]=Var\left[\int_{t_{0}}^{t}\sigma(T-s)dW(s)\right]= \int_{t_{0}}^{t}\sigma\left(T-s\right)^{2}ds,\;t\in\left[t_{0},T\right]. \tag{2}\] Hence, one can estimate numerically the variance of \(F(t,T)\) for each \(t\in[t_{0},T]\) and compare this result with the one given by Equation (2). From Figure 7 we observe that the numerical quantities and the theoretical ones are very close and hence we are guaranteed that the numerical simulation scheme is correctly implemented. The knowledge of and explicit expression for variance as the one in Equation (2) is important also for option pricing. Following Boerger et al. [17], if we assume the following dynamic for the forward price: \[\frac{dF(t,T)}{F(t,T)}=\boldsymbol{\sigma}(t,T)\cdot d\boldsymbol{W}(t),\] where \(\boldsymbol{\sigma}(t,T)=(\sigma_{1}(t,T),\ldots,\sigma_{n}(t,T))\) and \(\boldsymbol{W}=(W_{1},\ldots,W_{n})\) is a \(n\)-dimensional standard Brownian motion with independent components, the price of a call option with maturity \(T_{0}\leq T\) and strike price \(K\) is given by the standard Black formula: \[C(t_{0},T_{0},K)=e^{-r(T_{0}-t_{0})}\left(F(t_{0},T)\mathcal{N}\left(d_{1} \right)-K\mathcal{N}\left(d_{2}\right)\right), \tag{3}\] where \(r\geq 0\) is the risk-free rate, \(\mathcal{N}\left(\cdot\right)\) is the cumulative distribution function of a standard normal random variable and: \[d_{1} =\frac{\ln\frac{F(T_{0},T)}{K}+\frac{1}{2}Var\left[\log F(T_{0},T )\right]}{\sqrt{Var\left[\log F(T_{0},T)\right]}}\] \[d_{2} =d_{1}-\sqrt{Var\left[\log F(T_{0},T)\right]},\] where \(Var\left[\log F(T_{0},T)\right]\) can be easily computed by Ito's isometry and depends on the form we chose for \(\boldsymbol{\sigma}(T-t)\). For example, if we consider a bi-dimensional standard Brownian motion and we assume Figure 5: Annualized volatility of the toy model including the Samuelson’s effect. that: \[\sigma_{1}(t,T) =0.8e^{-2(T-t)},\] \[\sigma_{2}(t,T) =0.2,\] we obtain: \[Var\left[\log F(T_{0},T)\right]=\frac{0.64}{4}\left[e^{-4(T-T_{0})}-e^{-4(T-t_{0} )}\right]+0.04\left(T_{0}-t_{0}\right).\] The price of European call option with different strike prices \(K\) are shown in Figure 8 Of course, any form can be assumed for \(\sigma(T-t)\), but, from a practical point of view a step-wise volatility structure is usually considered. Indeed, if we consider monthly fixed delivery, such as \(M0,M1,\ldots,M36\), with \(M=36\) delivery dates \(\{T_{j}\}_{j=1}^{M}\), the volatility function assumes the following form: \[\sigma(T-t)=\sum_{j=1}^{M}\sigma_{j}1_{I_{j}}(T-t),\] where \(\sigma_{j}\in\mathbb{R}^{+}\) and \(I_{j}=(T_{j-1},T_{j}]\) and where \(T_{j}\) is the delivery of the \(j\)-th contract. As mentioned before, we typically do not observe monthly products but quarters, season and yearly contract. On the other hand it is possible to extract from the available market products the monthly quotations. The data preparation methodology and the calibration algorithm we use to estimate \(\sigma_{j}\) is discussed in Section 4. ### A model for a multi-commodity market Now we extend the "toy-model" we presented in the previous section to a multi-commodity framework. Assume we have \(k=1,\ldots,K\) markets (DE, F7, IT, TTF and so on), each Figure 6: Possible simulations of the forward prices \(F(t,T)\). of them consist in the same number \(M_{k}=M\) for \(k=1,\ldots,K\) of fixed delivery futures contracts \(F^{k}(t,T_{m})\). 
Recall that a contract which delivers at the beginning of the next month, independently of the trading date, is denoted by \(M1\), which means that the delivery is the next month after the current date. As mentioned above, these products are not quoted on the market, where only futures contracts \(F(t,T^{s},T^{e})\) can be observed, but they can be easily computed from the quoted ones. As a first step of our modeling framework we start by defining the dynamic of the fixed delivery futures contracts. Assume that each of the contracts is a "random factor" which might have an impact on the whole market dynamic. Hence we have a total number of \(\tilde{N}=M\cdot K\) random factors. The dynamic of a future product \(k=1\) with fixed delivery \(T_{m}\) is given by: \[\frac{dF^{k}(t,T_{m})}{F^{k}(t,T_{m})}=\sum_{j=1}^{\tilde{N}}\sigma_{kj}dW_{j}(t),\quad F^{k}(t_{0},T_{m})=F^{k}(0,T_{m})\ a.s.,\ t\geq 0, \tag{4}\] where \(W_{j}=\left\{W_{j}(t);t\geq 0\right\},\ j=1,\ldots,\tilde{N}\), are independent Brownian motions. From Equation (4) observe that the dynamic of the single monthly product \(F^{k}(t,T_{m})\) potentially depends on those of all other monthly futures products. From empirical evidence, energy markets are strongly co-integrated and hence it is clear that considering \(\tilde{N}\) independent Brownian motions appears to be unreasonable. A possible methodology, based on the dimensional reduction inherited from the Principal Component Analysis (PCA), to select a lower number of stochastic factors, will be presented in Section 4.

Figure 7: Sanity check of the volatility of log-prices in the toy-model by using 5000 simulations.

Observe that Equation (4) can be written in a matrix form as: \[\frac{d\mathbf{F}(t,T)}{\mathbf{F}(t,T)}=\boldsymbol{\sigma}\cdot d\mathbf{W}(t),\] where \(\mathbf{W}=\left(W_{1},\ldots,W_{\tilde{N}}\right)\) is a standard Brownian motion with independent components and \(\boldsymbol{\sigma}\) is an \(\tilde{N}\times\tilde{N}\) matrix obtained by stacking the market blocks \(\boldsymbol{\sigma}^{1},\ldots,\boldsymbol{\sigma}^{K}\): \[\boldsymbol{\sigma}=\begin{pmatrix}\boldsymbol{\sigma}^{1}\\ \boldsymbol{\sigma}^{2}\\ \vdots\\ \boldsymbol{\sigma}^{K}\end{pmatrix}=\left(\begin{array}{cccc}\sigma_{1,1}&\sigma_{1,2}&\cdots&\sigma_{1,\tilde{N}}\\ \vdots&\vdots&&\vdots\\ \sigma_{M,1}&\sigma_{M,2}&\cdots&\sigma_{M,\tilde{N}}\\ \hline\sigma_{M+1,1}&\sigma_{M+1,2}&\cdots&\sigma_{M+1,\tilde{N}}\\ \vdots&\vdots&&\vdots\\ \sigma_{2M,1}&\sigma_{2M,2}&\cdots&\sigma_{2M,\tilde{N}}\\ \hline\vdots&\vdots&&\vdots\\ \hline\sigma_{M(K-1)+1,1}&\sigma_{M(K-1)+1,2}&\cdots&\sigma_{M(K-1)+1,\tilde{N}}\\ \vdots&\vdots&&\vdots\\ \sigma_{MK,1}&\sigma_{MK,2}&\cdots&\sigma_{MK,\tilde{N}}\end{array}\right),\] where \(\boldsymbol{\sigma}^{k}\in\mathbb{R}^{M\times\tilde{N}}\) denotes the block associated with the random factors of the \(k\)-th market's products.

Figure 8: Pricing of European call options with different strike prices. Exact formula is compared to a Monte Carlo pricing with \(10^{7}\) simulations.

Once we have defined the dynamic of the fixed delivery products, that of \(F(t,T^{s},T^{e})\) is a direct consequence. Assume now that \(T^{s}\) is an absolute date, for example 31/12/2023.
Then, the dynamic of \(F^{k}(t,T^{s},T^{e})\) for \(t\in[t_{0},T^{s}]\) is given by 4: Footnote 4: Observe that we are assuming that \(\forall T^{s}\) there exists \(m_{0}\in[1,\ldots,M]\) such that \(T_{s}=T_{m_{0}}\). For example today \(t_{0}=19/2/2023\) we observe \(F(t,T^{s},T^{e})\) with \(T^{s}=1/5/2023\) and \(T^{e}=31/5/2023\) we are assuming modeling the product \(F(t_{0},T_{m_{0}})\) in such a way that \(T^{s}-t_{0}=T_{m_{0}}-t_{0}\). \[\frac{dF^{k}(t,T^{s},T^{e})}{F^{k}(t,T^{s},T^{e})}=\sum_{j=1}^{\tilde{N}}\sum_ {i=1}^{M}\hat{\sigma}_{ij}^{k}\mathbbm{1}_{I_{i}}(T-t)dW_{j}(t). \tag{5}\] The explicit solution of the Equation (5) for \(t\in[t_{0},T^{s}]\) is given by: \[\begin{split} F^{k}(t,T^{s},T^{e})=F^{k}(t_{0},T^{s},T^{e})\exp \left\{-\frac{1}{2}\sum_{j=1}^{\tilde{N}}\sum_{i=1}^{M}\left(\sigma_{ij}^{k} \right)^{2}\int_{t_{0}}^{t}\mathbbm{1}_{I_{i}}\left(T^{s}-s\right)ds\right.\\ \left.+\sum_{j=1}^{\tilde{N}}\sum_{i=1}^{M}\sigma_{ij}^{k}\int_{t _{0}}^{t}\mathbbm{1}_{I_{i}}\left(T^{s}-s\right)dW_{j}(s)\right\}.\end{split} \tag{6}\] The existence of a close form solution allows us to simulate the paths of the process in an exact way, without using any discretization method, such as the Euler's of Millstein's one (see Seydel [49]). Once again, observe that if \(t\approx t_{0}\) then \(T^{s}-t\) is "large" and hence we are considering the volatility of the products with a large maturity. As soon as time \(t\) goes on, the time to delivery \(T^{s}-t\) gets smaller and smaller hence we are taking into account the volatility of the fixed delivery products with a short time to maturity. Also in this case a sanity check can be performed. Assume we simulate the process following Equation (6) for a very short period, namely \(T^{s}-t\approx T^{s}-t_{0}\). Hence Equation (6) simplifies to: \[F^{k}(t,T_{m_{0}})=F^{k}(t_{0},T_{m_{0}})\exp\left\{-\frac{1}{2}\Delta t\sum_ {j=1}^{\tilde{N}}\left(\sigma_{m_{0}j}^{k}\right)^{2}+\sum_{j=1}^{\tilde{N}} \sigma_{m_{0}j}^{k}W_{j}(t)\right\}, \tag{7}\] where \(\Delta t=t-t_{0}\) and \(m_{0}\) is such that \(T_{m_{0}}=T^{s}\). By computing the log-return \(x^{k}(t)=\ln F(t,T_{m_{0}})-\ln F(t_{0},T_{m_{0}})\) we get: \[x^{k}(t)=-\frac{1}{2}\Delta t\sum_{j=1}^{\tilde{N}}\left(\sigma_{m_{0}j}^{k} \right)^{2}+\sum_{j=1}^{\tilde{N}}\sigma_{m_{0}j}^{k}dW_{j}(t),\] and if we compute its variance we get: \[Var\left[x^{k}(t)\right]=\Delta t\sum_{j=1}^{\tilde{N}}\left(\sigma_{m_{0}j}^ {k}\right)^{2}.\] Hence, we have a simple way to check that the variance of the simulated product is correct, by summing up the squares of the entries of the matrix \(\boldsymbol{\sigma}^{k}\) and multiplying by \(\Delta t\). Clearly if the approximation \(T_{m_{0}}-t\approx T-t_{0_{0}}\) does not hold, namely for larger \(t\) such expression is no longer valid. On the other hand, simple computations show that: \[Var\left[\log F^{k}(t,T^{s},T^{e})\right]=\sum_{j=1}^{\tilde{N}}\sum_{i=1}^{M} \left(\sigma_{ij}^{k}\right)^{2}\int_{t_{0}}^{t}\mathbbm{1}_{I_{i}}\left(T^{s }-s\right)ds, \tag{8}\] which can be easily used to check the correctness of the simulations. As stated in Section 3.1 once that the expression of \(Var\left[\log F^{k}(t,T^{s},T^{e})\right]\) is known, a Black's style pricing formula of the form shown for European call options as the one shown in Equation 3 can be easily derived. 
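As a concrete illustration of Equations (6) and (8), the following Python sketch simulates a delivery-period product exactly on a time grid with step-wise volatility loadings. The function name, the bucket convention and the mid-step evaluation of the time to delivery are our own choices made for illustration; this is not the authors' implementation.

```python
import numpy as np

def simulate_forward(F0, sigma_k, bucket_bounds, t_grid, Ts, n_sims=1000, seed=0):
    """Exact simulation of Equation (6) for one delivery-period product.

    sigma_k       : (M, N) array; row i holds the loadings active when the
                    time to delivery T^s - t lies in I_i = (tau_{i-1}, tau_i].
    bucket_bounds : array tau_0 < ... < tau_M of time-to-delivery boundaries.
    t_grid        : increasing simulation times, t_grid[0] = t0, t_grid[-1] <= Ts.
    """
    rng = np.random.default_rng(seed)
    M, N = sigma_k.shape
    logF = np.full(n_sims, np.log(F0))
    paths = [np.full(n_sims, F0)]
    for t_prev, t_next in zip(t_grid[:-1], t_grid[1:]):
        h = t_next - t_prev
        ttd = Ts - 0.5 * (t_prev + t_next)              # time to delivery mid-step
        i = min(max(np.searchsorted(bucket_bounds, ttd) - 1, 0), M - 1)
        sig = sigma_k[i]                                # piecewise-constant loadings
        # Ito drift correction plus the Gaussian increment of the active bucket.
        logF += -0.5 * np.dot(sig, sig) * h \
                + rng.standard_normal((n_sims, N)) @ sig * np.sqrt(h)
        paths.append(np.exp(logF))
    return np.array(paths)                              # shape (len(t_grid), n_sims)

# Sanity check in the spirit of Equation (8): np.var(np.log(paths[-1])) should
# match the sum over steps of |sigma_i|^2 * dt up to Monte Carlo error.
```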
### The spot dynamic Starting from Equation (6) we can retrieve the dynamic for the spot prices by defining the spot prices \(S(t)\) as: \[S(t)=\lim_{\begin{subarray}{c}T^{s}\to t\\ T^{e}\to t\end{subarray}}F(t,T^{s},T^{e}).\] Passing to the limit in Equation (6) we have that: \[\begin{split} S^{k}(t)=F^{k}(t_{0},t)\exp&\left\{ -\frac{1}{2}\sum_{j=1}^{\tilde{N}}\sum_{i=1}^{M}\left(\sigma_{ij}^{k}\right)^ {2}\int_{t_{0}}^{t}\mathbbm{1}_{I_{i}}\left(t-s\right)ds\right.\\ &\left.+\sum_{j=1}^{\tilde{N}}\sum_{i=1}^{M}\sigma_{ij}^{k}\int_ {t_{0}}^{t}\mathbbm{1}_{I_{i}}\left(t-s\right)dW_{j}(s)\right\}.\end{split} \tag{9}\] Observe that it is important to understand how volatility behaves as time \(t\) increases. If \(t\approx t_{0}\) then the volatility of the spot price \(S^{k}(t)\) the one of the fixed delivery futures products with the shorter time to maturity. As time \(t\) increases, the spot price \(S^{k}(t)\) somehow includes all the volatility effects from the products with shorter time to maturity to the ones with a longer one. This is reasonable from an economical point of view. Indeed, if we want to simulate the process \(S^{k}(t)\) in a year it would have been the \(M12\), the \(M11\) and so on up to \(M0\) and hence it includes all their volatility. Moreover, an expression for \(Var\left[S^{k}(t)\right]\) similar to the one in Equation (8) can be easily obtained. From a practical point of view, especially for power prices, it is customary to consider a hourly granularity. \(F^{k}(t_{0},t)\) represents the power forward price today for hour \(t\) and hence the function \(F^{k}(t_{0},t)\) for \(t\in[t_{0},T]\) for a fixed \(T\), represents the hourly forward curve for the product \(k\). In simulations routines, in order to simplify the simulation, we assume the the "shocks" with respect to the hourly forward curve are daily. For commodities with daily granularity, such as the natural gas, everything is performed on a daily basis. The hourly or daily forward curve should be inferred from futures market and can be obtained by using different methodologies, such as the one proposed by Benth et al. [9, Chapter 7]. ### A multidimensional toy example In order to make things clearer, in this section we present a toy version of the model we presented above. Assume we have only four products with different deliveries \(T_{m},\ m=1,\ldots,M\) with \(M=4\). Assume we are focusing on a single market (for example the power futures market in Germany) and hence \(K=1\). The fixed delivery products have the following dynamic: \[\frac{dF\left(t,T_{m}\right)}{F(t,T_{m})}=\sigma_{m1}dW_{1}(t)+\sigma_{m2}dW_{ 2}(t)+\sigma_{m3}dW_{3}(t)+\sigma_{m4}dW_{4}(t)=\sum_{j=1}^{\tilde{N}}\sigma_{ mj}dW_{j}(t).\] Observe that the components of the multidimensional standard Brownian motion \(\mathbf{W}=\left\{\left(W_{1}(t),W_{2}(t),W_{3}(t),W_{4}(t)\right);t\in[t_{0},T]\right\}\), can be both correlated and independent. 
The dynamic in Equation (3.4) can be written in a matrix form as: \[\frac{d\mathbf{F}(t,T)}{\mathbf{F}\left(t,T\right)}=\mathbf{\sigma}\cdot d\mathbf{W}(t),\quad t \in[t_{0},T]\] where : \[\mathbf{F}(t,T)=\begin{bmatrix}F\left(t,T_{1}\right)\\ F\left(t,T_{2}\right)\\ F\left(t,T_{3}\right)\\ F\left(t,T_{4}\right)\end{bmatrix},\qquad T=\min_{m\in[1,M]}T_{m},\qquad\mathbf{ \sigma}=\begin{bmatrix}\sigma_{11}&\sigma_{12}&\sigma_{13}&\sigma_{14}\\ \sigma_{21}&\sigma_{22}&\sigma_{23}&\sigma_{24}\\ \sigma_{31}&\sigma_{32}&\sigma_{33}&\sigma_{34}\\ \sigma_{41}&\sigma_{42}&\sigma_{43}&\sigma_{44}\end{bmatrix}\] In the market you do not directly observe the matrix \(\mathbf{\sigma}\) but it can be efficiently estimated from real market data. In order to do so, we assume that \(N\) equally spaced daily observation at times \(t_{0}\leq t_{1}\leq\cdots\leq t_{N}\) of the process \(\mathbf{F}\) are given and let \(\Delta t=t_{i+1}-t_{i}\) defined in fraction of years, for example \(\Delta t=\frac{1}{260}\). Define the log-return for the product with delivery \(T_{m}\) as: \[X_{i}^{m}=\ln\frac{F(t_{i+1},T_{m})}{F\left(t_{i},T_{m}\right)}\] and assume that the vector \(\mathbf{X}=\left[X^{1},X^{2},X^{3},X^{4}\right]\) is normally distributed with mean \(\mathbf{\mu}\) and covariance \(\mathbf{\Sigma}\). Since \(\mathbf{\Sigma}\) is a covariance matrix it is symmetric and positive-definite matrix and hence it factorizes as: \[\mathbf{\Sigma}=\mathbf{C}\mathbf{\Gamma}\mathbf{C}^{T}=\mathbf{C}\mathbf{\Gamma}^{\frac{1}{2}}\left( \mathbf{C}\mathbf{\Gamma}^{\frac{1}{2}}\right)^{T}\] for \(\mathbf{\Gamma}\in\mathbb{R}^{M\times M}\) diagonal matrix and \(\mathbf{C}\in\mathbb{R}^{M\times M}\) is an orthogonal matrix such that \(\mathbf{C}^{T}\mathbf{C}=\mathbf{I}\), where \(\mathbf{I}\) is the identity matrix. On the other hand we have that \(\mathbf{\Sigma}=\mathbf{X}^{T}\mathbf{X}\) and it can be proved that: \[\mathbf{\Sigma}=\Delta t\mathbf{\sigma}\mathbf{\sigma}^{T}.\] Therefore, \(\mathbf{\sigma}\) can be estimated as: \[\mathbf{\sigma}=\frac{\mathbf{C}\mathbf{\Gamma}^{\frac{1}{2}}}{\Delta t^{\frac{1}{2}}}.\] For this reason we can easily estimate the matrix \(\mathbf{\sigma}\) from the observed data using Algorithm 1. _Remark 1_.: Observe that the matrix \(\hat{\mathbf{\sigma}}\) defined in Algorithm 1 is not unique, but it is unique up to unitary transformation. Indeed assume that \(\mathbf{U}\in\mathbb{R}^{M\times M}\) is such that \(\mathbf{U}\mathbf{U}^{T}=\mathbf{I}\). Then: \[\mathbf{\Sigma}=\Delta t\mathbf{\sigma}\mathbf{\sigma}=\Delta t\mathbf{\sigma}\mathbf{U}\mathbf{U}^{T} \mathbf{\sigma}^{T}=\Delta t\mathbf{\sigma}\mathbf{U}\left(\mathbf{\sigma}\mathbf{U}\right)^{T}\] and hence we can define \(\bar{\mathbf{\sigma}}=\mathbf{\sigma}\mathbf{U}\) and this is a \(M\times M\) matrix with the property that: \[\mathbf{\Sigma}=\Delta t\bar{\mathbf{\sigma}}\bar{\mathbf{\sigma}}^{T}.\] Within our toy example assume that: \[\mathbf{\sigma}=\begin{bmatrix}0.15&0.019&-0.13&0.018\\ 0.25&0.014&-0.19&0.015\\ 0.185&0.012&-0.13&0.018\\ 0.125&0.044&-0.131&0.043\end{bmatrix},\] and hence simulate the trajectories of the process \(\mathbf{F}\) for a given time horizon \([t_{0},T]\). If we plot the trajectories obtained by simulating from in Equation (1) we obtain the results in Figure 9. 
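In code, the estimation step of Algorithm 1 is only a few lines. The sketch below is our own Python rendering (function name and the use of numpy are assumptions, not the paper's code); as noted in Remark 1, the resulting \(\boldsymbol{\sigma}\) is only unique up to an orthogonal transformation.

```python
import numpy as np

def estimate_sigma(F, dt=1.0 / 260.0):
    """Estimate the volatility loading matrix from observed forward prices.

    F : (n_obs, M) array of daily prices F(t_i, T_m), one column per product.
    Returns sigma such that Sigma_hat ≈ dt * sigma @ sigma.T.
    """
    X = np.diff(np.log(F), axis=0)           # daily log-returns
    Sigma_hat = np.cov(X, rowvar=False)      # empirical covariance matrix
    eigval, eigvec = np.linalg.eigh(Sigma_hat)
    eigval = np.clip(eigval, 0.0, None)      # guard against tiny negative values
    order = np.argsort(eigval)[::-1]         # decreasing eigenvalues
    C = eigvec[:, order]
    G = np.diag(np.sqrt(eigval[order]))
    sigma = C @ G / np.sqrt(dt)
    return sigma, Sigma_hat

# Usage: sigma, Sigma_hat = estimate_sigma(prices); then
# np.allclose(dt * sigma @ sigma.T, Sigma_hat) holds up to rounding.
```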
Hence compute the log-returns and get \(\hat{\mathbf{\sigma}}\) following Algorithm 1, which gives: \[\hat{\mathbf{\sigma}}=\begin{bmatrix}-0.0004&-0.0056&-0.0054&0.1964\\ 0.0003&-0.0037&0.0203&0.3090\\ -0.0002&0.0087&0.0141&0.2228\\ 0.0001&0.0017&-0.0462&0.1809\end{bmatrix}.\] Depending on the number of time steps and on the seed of the random number generator, one could obtain a different matrix \(\hat{\mathbf{\sigma}}\). From Figure 9 it appears clear that the whole market can be properly described using fewer than four risk factors. We reach the same conclusion by looking at the correlation matrix \(\rho\) of the log-returns: \[\rho=\begin{bmatrix}1.00&0.99&0.99&0.98\\ 0.99&1.00&0.99&0.95\\ 0.99&0.99&1.00&0.95\\ 0.98&0.95&0.95&1.00\end{bmatrix}, \tag{10}\] If we perform a PCA, we find that the eigenvalues of the covariance matrix \(\hat{\mathbf{\Sigma}}\) are: \[\lambda=[0.8325,0.0107,0.0005,0.0000]\cdot 10^{-3}.\] It turns out that the first two principal components (eigenvectors), associated with the first two eigenvalues, explain more than \(99\%\) of the variance. For this reason we can use only two random factors (i.e. Brownian motions) to properly model the whole market, instead of the original four. Define: \[\boldsymbol{\sigma}^{*}=\boldsymbol{C}^{*}\left(\boldsymbol{\Gamma}^{*}\right)^{1/2}\Delta t^{-1/2},\] then simulate the processes using only two stochastic factors and \(\boldsymbol{\sigma}^{*}\), and compute the empirical correlation matrix of the log-returns, obtaining: \[\rho^{*}=\begin{bmatrix}1.00&0.99&0.99&0.97\\ 0.99&1.00&0.99&0.95\\ 0.99&0.99&1.00&0.95\\ 0.97&0.95&0.95&1.00\end{bmatrix}, \tag{11}\] which is extremely close to the one shown in Equation (10). If we compute the original covariance matrix \(\hat{\boldsymbol{\Sigma}}\) and \(\boldsymbol{\Sigma}^{*}\) given by: \[\boldsymbol{\Sigma}^{*}=\Delta t\boldsymbol{\sigma}^{*}\left(\boldsymbol{\sigma}^{*}\right)^{T}\] we get: \[\hat{\mathbf{\Sigma}} =\begin{bmatrix}1.4859&2.3309&1.6781&1.3756\\ 2.3309&3.6897&2.6580&2.1138\\ 1.6781&2.6580&1.9197&1.5256\\ 1.3756&2.1138&1.5256&1.3405\\ \end{bmatrix}\cdot 10^{-4},\] \[\mathbf{\Sigma}^{*} =\begin{bmatrix}1.4846&2.3302&1.6800&1.3760\\ 2.3302&3.6892&2.6592&2.1140\\ 1.6800&2.6592&1.9169&1.5250\\ 1.3760&2.1140&1.5250&1.3404\\ \end{bmatrix}\cdot 10^{-4},\] which confirms that only two Brownian motions are enough to explain most of the variance of the log-returns and hence to model the market.

Figure 9: Possible simulation of the forward prices process \(\boldsymbol{F}\).

## 4 Calibration

In the previous section, we have shown how to calibrate the model on a toy example. In this section, we move to a real market application, showing how to calibrate the model on futures power and gas market quotations. In most power markets both peak-load and base-load products are present. The difference between them is that peak-load contracts deliver electricity only in some specific hours (typically from 8 to 20) during the working days, whereas a base-load contract delivers power for all the hours of the delivery period. An off-peak contract, which is not typically traded, delivers energy in non peak-load periods and can be obtained from peak-load and base-load quotations.

Figure 10: Possible simulation of the forward prices process \(\mathbf{F}\).

Usually traders and risk managers are interested in considering both types of products. So far, we have not specified the difference between base-load and peak-load contracts, but we observe that they can easily be included.
In particular, in spot simulations both peak-load and base-load products' volatility might be included. A possible strategy is to simulate peak-load and base-load spot prices separately and then merge them together. On the other hand, in many power futures markets peak-load contracts are less liquid than the corresponding base-load ones and hence an "expertise adjustment" has to be made to properly retrieve peak-load quotations when they are missing. This leads to the introduction of arbitrary choices and hence to questionable spot simulation behavior. For these reasons, and in order to simplify the exposition, we focus only on base-load quotations. Before calibrating the model we need some data preparation, which is crucial to properly fit the parameters. This part is the most delicate one and great attention should be paid to it: if the data are not properly prepared, calibration might lead to wrong parameter estimates and the output results might be misleading. This section is split into two parts. First we show how to prepare the data, then we use the PCA technique to calibrate the model.

### Data preparation

As discussed in Section 3 we have assumed that the volatility \(\mathbf{\sigma}(t,T)\) is a function of the time to maturity \(T-t\). By the previous discussion, the volatility function must be fitted on fixed maturity products \(F(t,T)\) but, unfortunately, in the market we observe \(F\left(t,T^{s},T^{e}\right)\), namely the price at time \(t\) of a futures contract with maturity \(T^{s}\) and delivery period \([T^{s},T^{e}]\) (see Table 1). Before proceeding further, we need some notation. If today is the 16th February 2023, we name the product which delivers 1 MWh for all the hours of the next year \(Y1\), namely the closest available calendar traded at the current date. The same reasoning can be applied to monthly, quarterly and seasonal products. For example, \(Q1\) is the product which delivers in the following quarter. In our model we are interested in fitting the volatility of the \(M0,M1,\dots,Q1,\dots,Y1,Y2,\dots\). Observe that such products are exactly the fixed delivery futures contracts \(F(t,T)\). Therefore, the first step is to switch from \(F(t,T^{s},T^{e})\) in Table 1 to \(F(t,T)\) listed in Table 2. Observe that if we are in January 2020, \(M1\) is February 2020, which corresponds to the product \(F(t,T^{s},T^{e})\) with \(T^{s}=1/2/2020\) and \(T^{e}=28/2/2020\) (in red in Tables 1 and 2). When we move to February 2020, by referring to \(M1\) we need to consider the product \(F\left(t,T^{s},T^{e}\right)\) with delivery dates in March 2020, and so on. By proceeding in this way we construct the time series of the fixed delivery products \(F(t,T)\), namely of \(M0,M1,\dots,Q1,\dots,Y1,\dots\). An example of this structure is shown in Table 2. As stated before, observe that at the end of March 2020 (green quotation), \(Q3\) is the fourth quarter of 2020 whereas at the beginning of April 2020 the product \(Q3\) is the first quarter of 2021. \begin{table} \begin{tabular}{c c c c c c c c c} Trading date & Jan-20 & Feb-20 & Mar-20 & Q2-20 & Q3-20 & Q4-20 & 2021 & 2022 \\ \hline 2020-01-02 & 36.05 & 39.76 & 37.15 & 35.50 & 39.05 & 45.30 & 43.85 & 46.55 \\ \end{tabular} \end{table} Table 1: Some of the products \(F(t,T^{s},T^{e})\) available on the German futures market for the trading date 4/1/2020. This is what is sometimes known as the "rolling effect". The same happens for the \(M1\).
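The relabelling of absolute quotations as relative products, and the removal of the artificial jumps that appear in the returns when a product rolls (discussed further below), are mostly bookkeeping steps. A minimal pandas sketch is given below; the input column names, the restriction to monthly contracts and the roll-detection rule are our own assumptions made for illustration, not the authors' code.

```python
import numpy as np
import pandas as pd

def to_relative_products(quotes):
    """Relabel absolute monthly quotations as relative products (M0, M1, ...).
    Assumed input columns: 'trade_date', 'delivery_start' (datetimes), 'price';
    quarters and calendars (Q1, Y1, ...) would be relabelled in the same way.
    """
    q = quotes.copy()
    months_ahead = ((q["delivery_start"].dt.year - q["trade_date"].dt.year) * 12
                    + (q["delivery_start"].dt.month - q["trade_date"].dt.month))
    q["product"] = "M" + months_ahead.astype(str)
    return q.pivot_table(index="trade_date", columns="product",
                         values="price", aggfunc="first")

def roll_filtered_returns(table):
    """Log-returns of the relative-product panel with the spurious roll-date
    returns removed. Here a roll is detected as a change of calendar month
    between consecutive trading dates -- one possible convention only.
    """
    table = table.sort_index()
    rets = np.log(table).diff()
    rolled = pd.Series(table.index.month, index=table.index).diff().ne(0)
    return rets[~rolled].dropna(how="all")
```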
In computing the log-returns of fixed delivery products from Table 2 attention should be payed, since when products roll, fake spikes in log-returns might be created: such spikes as not due to the market structure but simply on the way we look at it and hence must be removed. At this point one might be tempted to compute the log-returns (opportunely filtered) of the prices in Table 2 and apply the PCA to identify the number of risky factors. Of course this would be possible if one is interested in the simulation of the fixed delivery products \(M0,M1,\dots,Q1,\dots\) but the situation is slightly different if one is interested in the simulation of \(F(t,T^{s},T^{e})\), for example Jan-21, on the period starting from the 1st January 2020 to its maturity which occurs at the 31st December 2020. Observing Figure 11 and assuming that only \(M1\), \(M2\), \(M3\), \(Q1\), \(Q2\), \(Q3\) and \(Y1\) are available, the right volatility of this product is that of the \(Y1\) until the end of March 2020, then we have to consider that of the \(Q3\) up to the end of July and so on until December in which the proper volatility is that of the \(M1\). This observation, which is trivial from a theoretical point of view, might bring to some thorny problems when we move to the model implementation since the right volatility must be considered. In order to consider the right volatility for each time step of the simulation many approaches are possible. Observing Figure 11 the simplest thing is to fit the volatility structure on raw data in Table 2, compute for each time step how long does it last to the delivery and use the proper corresponding volatility. For example, if we are simulating Jan-21, in November 2020 we will use the volatility of the \(M2\) product. This approach is the fastest to implement but it is not free of pitfalls. For example, if we want to introduce the Brent quotations, we observe that for such a product nor quarter neither \begin{table} \begin{tabular}{l c c c c c c c} Trading date & \(M0\) & \(M1\) & \(M2\) & \(Q1\) & \(Q2\) & \(Q3\) & \(Y1\) & \(Y2\) \\ \hline 2020-01-02 & 36.05 & 39.76 & 37.15 & 35.5 & 39.05 & 45.3 & 43.85 & 46.55 \\ 2020-01-03 & 38.06 & 40.4 & 37.8 & 36.55 & 39.85 & 45.97 & 44.85 & 47.08 \\ 2020-01-06 & 36.93 & 39.46 & 37.29 & 36 & 39.36 & 45.5 & 44.55 & 46.89 \\ \(\dots\) & \(\dots\) & \(\dots\) & \(\dots\) & \(\dots\) & \(\dots\) & \(\dots\) & \(\dots\) \\ 2020-03-30 & 22.49 & 17.6 & 19.08 & 26.74 & 26.84 & 33.74 & 34.72 & 38.3 \\ 2020-03-31 & 15.74 & 17.06 & 19.79 & 26.74 & 27.63 & 34.45 & 35.65 & 39.05 \\ 2020-04-01 & 15.74 & 19.04 & 23.24 & 26.74 & 33.76 & 36.24 & 34.95 & 38.56 \\ 2020-04-02 & 16.39 & 19.09 & 23.26 & 26.86 & 33.98 & 36.57 & 35.35 & 39.04 \\ \(\dots\) & \(\dots\) & \(\dots\) & \(\dots\) & \(\dots\) & \(\dots\) & \(\dots\) & \(\dots\) \\ 2023-02-13 & 129.69 & 131.22 & 126.49 & 131.05 & 149.58 & 177.10 & 158.18 & 128.11 \\ 2023-02-14 & 129.95 & 131.01 & 126.46 & 131.33 & 150.25 & 179.23 & 159.34 & 128.50 \\ \end{tabular} \end{table} Table 2: Relative products for the German forward markets. Figure 11: Volatility to use for the simulation of the product Jan-21 over the period 1/1/2020 to 31/12/2020. calendar products are quoted, but only months. For this reason, it would not be very clear how to deal with Brent monthly quotations with with the coarser granularity ones in power markets insisting on the same delivery period. 
A possible solution is to reduce Brent monthly quotations to quarterly or yearly but in this case we would lose some information. On the other hand, another approach is to create monthly quotations starting from the products with a coarser granularity, namely quarter and yearly and hence fit the volatility parameters on these processed data. In order to do so, first we have to compute a _flat monthly forward curve_ by obtaining the value of each monthly futures contract in a market coherent way, guarantying that no arbitrage opportunities arise. A well known approach for the construction of a smooth forward curve with a seasonal effect which is coherent with the marketd observed futures quotations has been proposed by Benth et al. [9]. A similar procedure can be adopted and simplified by substituting the smooth curve with a step-wise one. The algorithm, which reduce to the solution of a linear system, is detailed in the following session. #### 4.1.1 Construction of the monthly forward curve In this section we show how to construct the monthly forward curve for a given market starting from the futures products available on the market. Details can be found in Benth et al. [9, chapter 7]. Consider the example in Figure 12 where overlapped products are allowed and suppose we want to define a step-wise forward curve on the intervals \(\left[\tau_{i-1},\tau_{i}\right],\ i=1,\ldots,n\) with \(n=7\) of the form: \[\epsilon\left(u\right)=\sum_{i=1}^{n}a_{i}\mathbb{1}_{\left[\tau_{i-1},\tau_{ i}\right]}(u),\] where \(\left\{a_{i}\right\}_{i=1}^{n}\) are the values we have to fit. Assuming \(t_{0}=0\) and denoting the price of the futures product at \(t_{0}\) by \(F\left(T_{i}^{s},T_{i}^{e}\right)=F\left(t_{0},T_{i}^{s},T_{i}^{e}\right)\) we have to guarantee that non arbitrage constraints are satisfied. For each quoted product \(i=1,\ldots,n\), we must satisfy the following relation: \[F\left(T_{i}^{s},T_{i}^{e}\right)=\frac{1}{T_{i}^{e}-T_{i}^{s}}\int_{T_{i}^{s }}^{T_{i}^{e}}\epsilon(u)du.\] It is easy to show that finding the values \(\left\{a_{i}\right\}_{i=1}^{n}\) reduces to the solution of a linear system which can be done numerically in a very efficient way. Once that the values of \(\left\{a_{i}\right\}_{i=1}^{n}\) are available \(\epsilon(u)\) is determined. For each trading date and for each market such a monthly forward curve must be computed. If we consider as trading date 4/1/2020 the resulting monthly forward curve is shown in Figure 13. Of course, if during the construction of the forward curve some products are completely overlapped they must be removed preserving those with the finer granularity. As useful remark, we observe that no bootstrapping has been done on the monthly forward curve. The bootstrapping of the forward curve is very common in energy markets since seasonal effects are present in many countries. Nevertheless, our final goal is to find the volatility of forward products and hence a bootstrapping method could introduce some deformation in quotations which might have an impact on the estimation of the volatility structure leading to biased results. Finally, we check if the latter methodology has an impact on the PCA we use to detect the number of random factors which drive the market. Since we are only obtaining monthly products form those with a coarser granularity, we are not expected to introduce more information and variability with respect the case in which we work with row date of Table 2. 
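Returning to the construction of Section 4.1.1, the linear system can be solved directly by least squares. The sketch below uses our own variable names and represents each quoted product by its delivery window; each quotation is constrained to equal the average of the step-wise values over its delivery period, and fully overlapped products should be removed beforehand, as noted above.

```python
import numpy as np

def monthly_flat_curve(month_bounds, quotes):
    """Fit a step-wise curve eps(u) = sum_i a_i 1_{(tau_{i-1}, tau_i]}(u)
    to quoted delivery-period futures (a least-squares sketch of the linear
    system described in the text).

    month_bounds : array of length n+1 with the month boundaries tau_0,...,tau_n.
    quotes       : list of (Ts, Te, price) with Ts, Te aligned to boundaries.
    """
    n = len(month_bounds) - 1
    A = np.zeros((len(quotes), n))
    b = np.zeros(len(quotes))
    for r, (Ts, Te, price) in enumerate(quotes):
        # Length of the overlap between month i and the delivery window [Ts, Te].
        overlap = np.clip(np.minimum(month_bounds[1:], Te)
                          - np.maximum(month_bounds[:-1], Ts), 0.0, None)
        A[r] = overlap / (Te - Ts)      # each quote is an average of the a_i
        b[r] = price
    a, *_ = np.linalg.lstsq(A, b, rcond=None)
    return a                            # the fitted monthly values a_1,...,a_n
```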
In order to verify the claim that constructing monthly quotations in this way does not artificially add information, we compare the number of factors needed to explain 95% of the variance using both raw and monthly data from the German power futures market over the period from 1/1/2020 to 31/12/2021.

Figure 12: Quoted futures product \(F\left(t,T_{i},T_{j}\right)\).

Figure 13: Reconstructed monthly forward curve (black) compared with the quoted forward prices for the trading date 2020-04-01 (green). Observe that the non-arbitrage constraints are satisfied on the year 2021.

From Figure 14 we observe that in both cases fourteen principal components are needed to explain around 95% of the total variance. In the case of monthly data the original total number of random factors is much larger than that of the raw data, since many quotations have been replicated on a finer granularity scale. Nevertheless, since many of them are just replications of existing ones, they are not needed to explain a significant amount of the whole market variance, as confirmed by Figure 15. In light of the preceding results we can claim that the calibration of the volatility parameters using the PCA procedure sketched in Section 4.2 can be done both on raw data and on processed monthly data. The latter approach is preferable when commodities with different granularities are present: in this case, working with products having the same granularity leads to cleaner analysis and easier coding.

### PCA for dimension reduction

In this section we briefly formalize the example in Section 3.4 and we show how PCA can be used in order to identify a relatively small number of stochastic factors which drive the whole market. Assume we have \(K\) markets and for each of them we have the same number of contracts with given maturities and call them \(F^{k}(t,T_{m})\), with \(k=1,\ldots,K\) and \(m=1,\ldots,M\). Consider a matrix consisting of \(n_{obs}\) rows and \(\tilde{N}=M\cdot K\) columns: each row contains the monthly forward curves constructed as shown in Section 4.1. We compute the matrix of log-returns \(\mathbf{X}\in\mathbb{R}^{(n_{obs}-1)\times MK}\). Now we apply the PCA and we compute the eigenvalues and the associated eigenvectors. We have the following result.

Figure 14: Percentage of the total variance explained by principal components on the raw and on the monthly data.

**Theorem 4.1**.: _[Johnson and Wichern [29, Result 8.2]] Let \(\mathbf{X}=\left(X_{1},\ldots,X_{d}\right)\) be a random vector with covariance matrix \(\Sigma\) with eigenvalue-eigenvector pairs \(\left(\lambda_{1},v_{1}\right),\ldots,\left(\lambda_{d},v_{d}\right)\), where \(\lambda_{1}\geq\lambda_{2}\geq\cdots\geq\lambda_{d}\geq 0\). Let \(Y_{i}=v_{i}^{T}\mathbf{X}\) for \(i=1,\ldots,d\) be the principal components. Then:_ \[\sigma_{11}+\cdots+\sigma_{dd}=\sum_{i=1}^{d}var\left(X_{i}\right)=\lambda_{1}+\cdots+\lambda_{d}=\sum_{i=1}^{d}var(Y_{i}).\] This result states that the total population variance is given by the sum of the eigenvalues \(\lambda_{i}\), for \(i=1,\ldots,d\). Hence the percentage of the variance explained by the principal component \(Y_{k}\) is given by: \[\frac{\lambda_{k}}{\sum_{i=1}^{d}\lambda_{i}},\quad k=1,\ldots,d.\] The number of principal components \(k\) is chosen such that they are enough to explain a sufficiently large percentage of the variance. Usually this choice is made heuristically, namely there is not a rigorous way to choose the number \(k\). See Johnson and Wichern [29] for details.
Once that we have chosen how many principal components to consider, say \(N\), we can define the matrix \(\mathbf{\sigma}^{*}\) as: \[\mathbf{\sigma}^{*}=\mathbf{C}^{*}\left(\mathbf{\Gamma}^{*}\right)^{1/2}\Delta t^{-1/2},\] where \(\mathbf{C}^{*}\) is the \(\mathbb{R}^{\tilde{N}\times N}\) matrix consisting on the first \(N\) eigenvector associated to the eigenvalues \(\lambda_{1},\ldots,\lambda_{N}\), \(\mathbf{\Gamma}^{*}\) is the diagonal \(\mathbb{R}^{N\times N}\) matrix containing the eigenvalues and \(\Delta t\) is the time interval between quotations (for example \(\Delta t=1/252\) in case of daily Figure 15: Variance explained from the principal components of the monthly data. One can easily observe that only the first principal components are required to explain a sufficiently large amount of the total variance. quotations). Once we have fitted the matrix \(\mathbf{\sigma}^{*}\) we can use Equation (6) to simulate the market by using \(N\) independent stochastic factors. ## 5 Numerical results In this section we consider a real market application. We calibrate the model on futures market quotations over the period from \(1/1/2020\) to \(31/12/2020\). By considering monthly fixed delivery futures products, obtained as shown in Section 4.1, in Table 2 for six different markets: four European power futures markets, Germany (DE), France (F7), Italy (IT) and Switzerland (CH) and two natural gas markets, PSV and TTF. We consider deliveries up to \(M=24\) months ahead, but larger maturities can be considered if products are available. As observed in Section 2 we expect a significant level of co-integration between markets. In Figure 19 we plot log-returns correlation matrices: the level of historical correlation is high for all maturities and it is higher for the long-term ones. Moreover, we observe the highest level of correlation across markets of the same commodity type, namely power and natural gas. Furthermore, the correlation between power and natural gas is significant too, since the natural gas is commonly used as fuel for electricity production in many European countries. Starting from this quotations we compute the log-returns matrix \(\mathbf{X}\). As discussed in the previous section, "fake" spikes in log-returns might be created and they must be removed. Moreover, since we are assuming that log-returns are Gaussian, we filter out the outliers by removing everything which lies more that three standard deviation away from the mean. This is a very rough way of filtering outliers out but it is a method commonly used by practitioners. Otherwise a similar approach to the one proposed by Cartea and Figueroa [19] can be adopted. Once matrix \(\mathbf{X}\) has been prepared, we perform a PCA analysis and we select a sufficient bulk of stochastic factors which are enough to explain a sufficiently large part of the total market variance. As observed by Feron and Gruet [23], Geman [25] and Koekebakker and Fridthjof [31] 5-10 factors in power markets are enough to explain a large part of the variance, whereas for other markets, such as metals, 4 are enough. In this example we start from 144 stochastic factors and, after a PCA, analysis we get that 10 factors are enough to explain the 90% of the total market variance. Following the procedure we illustrated in Section 3.4, by PCA we calibrate the model by fitting the matrix \(\mathbf{\sigma}^{*}\). Once this has been done, we are ready to simulate the structure of the forward curve. 
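For reference, the dimension-reduction step just described can be written compactly. The sketch below is our own Python rendering; the 90% explained-variance threshold is chosen purely for illustration and is not prescribed by the text.

```python
import numpy as np

def reduced_sigma(X, dt, var_threshold=0.90):
    """PCA-reduced loading matrix sigma* = C* (Gamma*)^{1/2} dt^{-1/2}.

    X : (n_obs - 1, M*K) matrix of log-returns of the monthly forward curves.
    Keeps the smallest number N of components whose eigenvalues explain at
    least `var_threshold` of the total variance.
    """
    Sigma = np.cov(X, rowvar=False)
    eigval, eigvec = np.linalg.eigh(Sigma)
    order = np.argsort(eigval)[::-1]              # decreasing eigenvalues
    eigval, eigvec = eigval[order], eigvec[:, order]
    explained = np.cumsum(eigval) / np.sum(eigval)
    N = int(np.searchsorted(explained, var_threshold) + 1)
    C_star = eigvec[:, :N]                        # (M*K, N) leading eigenvectors
    Gamma_star = np.diag(np.sqrt(eigval[:N]))     # (N, N)
    sigma_star = C_star @ Gamma_star / np.sqrt(dt)
    return sigma_star, N, explained
```

The returned `sigma_star` can then be plugged into the exact simulation of Equation (6) with only \(N\) independent Brownian motions.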
Considering as \(t_{0}=4/1/2021\), by using Equation (6) we simulate futures price \(F^{k}(t,T^{s},T^{e})\) with delivery period \(T^{s}=1/1/2022\), \(T^{e}=31/12/2022\) form \(t_{0}\) up to the \(31^{st}\) December 2021. The result is shown in Figure 17. The dynamic of the prices exhibits an extremely high level of dependence: in particular, both power and natural gas prices tends to rise or fall together over the whole simulation period. In Figure 18 we show some possible path realizations on a time interval \([t_{0},T]\), together with the fifth and the ninety-fifty percentile, of the single product German power futures calendar \(F(t,T^{s},T^{e})\) with delivery dates \(T^{s}=1/1/2022\) and \(T^{e}=31/12/2022\). Since the dynamic introduced by Equation (4) is diffusive we get, as expected, a time increasing variance of the process at time, for \(t\in[t_{0},T]\). Furthermore, in Figure 20 we show the log-returns matrices correlation computed Figure 16: Simulated, market and theoretical annualized volatility. on simulated fixed delivery products \(F^{k}(t,T_{m}),\ k=1,\ldots,K\ m\in[1,M]\) on a given time interval. A comparison between Figure 19 and Figure 20 shows that the correlation structure is properly replicated from the proposed model. Of course, since we used only 10 stochastic instead of the 144 original ones, the correlation surface computed from the simulation appears to be less varied than the original one. In particular, the correlation surface computed from the simulations is flat for long maturities. This happens because PCA selects a single stochastic factor to move all the long term structure. This is in accordance with empirical evidence since products with long time to maturity are strongly correlated and tend to oscillate in the same way. In many cases, practitioners are interested in computing the value at risk of a given portfolio. In order to achieve this task, many approaches, historical, parametric and Monte Carlo among the others, are available. If a Monte Carlo approach is chosen the simulation of the forward curve must be performed. In order to do so, we use Equation (7) to simulate the forward curve after a few days, typically one or two. Results are shown in Figure 21. In particular, we observe that the expected value of the simulated forward curves \(\mu\), converges at today's forward curve \(F(t_{0},T)\) as one should expect by construction. The fifth and the ninety-fifth percentiles in red, give an idea of the amplitude of the simulations: this is a useful visual check in order to show if the simulation routine performed well and it is largely used by practitioners. As final issue, we investigate the spot prices produced by the HJM framework. In this step, for the sake of conciseness, we focus only of two markets: German and Italian Figure 17: Sample paths for the product calendar 2022 with delivery date from 2022-01-01 to 2022-12-31. power spot markets. In Figure 22 we show a single realization of the stochastic process \(S^{k}=\left\{S^{k}(t);t\in[t_{0},T]\right\}\) for the two different commodities, according to the dynamic presented by Equation (9). As we can observed the spot prices tends to move together since they are driven by the same stochastic factors. In this case the correlation level in daily spot log-returns is high, approximately \(\rho=0.98\) and this is confirmed by Figure 23, where we displayed the contour plot of a bi-variate Gaussian distribution fitted on daily log-returns. 
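This kind of check on the simulated output is straightforward to reproduce. The sketch below (a Python illustration with our own function and array names) computes the daily log-return correlation between two simulated spot series and the moments of the bivariate Gaussian of the type displayed in Figure 23.

```python
import numpy as np

def spot_return_correlation(spot_de, spot_it):
    """Daily log-return correlation between two simulated spot price series,
    together with the mean vector and covariance matrix of the bivariate
    Gaussian fitted to the returns."""
    x = np.diff(np.log(spot_de))
    y = np.diff(np.log(spot_it))
    mu = np.array([x.mean(), y.mean()])
    cov = np.cov(np.vstack([x, y]))
    rho = cov[0, 1] / np.sqrt(cov[0, 0] * cov[1, 1])
    return mu, cov, rho
```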
It is worth remembering that many European power spot markets are coupled together: market coupling optimizes the allocation of cross-border capacities between countries. One of the possible effects on the price dynamics is that, at the same hour, the electricity price in two different countries is the same. The proposed model does not take this behavior into account: in order to include such an effect a fundamental component must be introduced, as proposed by Carmona and Coulon [18] and Kiesel and Kusterman [30]. Another limitation of the approach we proposed is that the correlation we introduce in the spot simulations is the one we derive from futures prices, which is typically higher than the one we observe historically in the spot market. Despite this limitation, the methodology we introduced is a good way to produce a coherent framework for both spot and futures energy markets. Finally, we stress that, since Monte Carlo simulations are available for both spot and futures products, the pricing of complex energy financial claims, such as virtual power plants or storages, can easily be performed following the algorithms presented by Tseng and Barz [50] and Boogert and de Jong [12] respectively.

Figure 18: Sample paths of the product German power futures calendar 2022. In red the fifth and the ninety-fifth percentile.

Figure 19: Historical daily log-returns correlation surfaces for different commodities.

Figure 20: Simulated daily log-returns correlation surfaces for different commodities.

Figure 21: Simulated two-day forward curves: \(n_{sim}=400\).

Figure 22: German and Italian power spot prices simulations.

Figure 23: Contour plot of a bi-variate normal distribution fitted on daily log-returns of German and Italian power spot price simulations. Correlation in log-returns \(\rho=0.98\).

Figure 24: Sample paths for the German power spot price.

## Conclusions

In this article we discussed in detail the implementation and a possible application of the Heath-Jarrow-Morton framework to energy markets from a very pragmatic point of view. In particular we have focused on the European power and natural gas markets. We introduced a Black-style dynamic for the so-called _fixed-maturity_ products and we derived those of the spot and futures prices. By showing that all the power and natural gas futures markets we considered are strongly dependent, we selected via the PCA algorithm only a few stochastic factors which explain a large part of the variance. Furthermore, following Boerger et al. [17] we have shown that a closed form solution for European vanilla options is available. Finally, we have applied the model to real market data from European power and natural gas markets. We discussed the daily log-returns correlation structures and we have shown that the model fits the market closely. Moreover, we analyzed the futures and spot simulations produced by the model. The HJM model is easy to implement, the calibration step is not difficult, due to the hypothesis of normality of log-returns, and simulations are fast to perform without introducing a discretization error, since an exact solution for the stochastic differential equation is available. Unfortunately, the model presents some drawbacks which must be considered when the outputs are used for risk-metric computation or for derivative pricing.
First of all, log-returns are assumed to be normally distributed, and this is not the case in almost all financial markets: jumps, volatility smiles and clustering are often present and are not considered by the proposed framework. In order to capture such stylized facts one can introduce stochastic volatility or jumps in the price dynamics, possibly using subordination techniques. On the other hand, even if such models work well in a univariate setting, they are hard to calibrate in a multivariate framework, especially when the number of underlying assets is higher than three. In this latter case, a Gaussian approach could be preferred in terms of computational and calibration complexity. On the other hand, if one needs to focus on the pricing of a derivative written on a single underlying asset, more complex models based on Levy processes or on stochastic volatility can be used in order to include many market stylized facts. Other drawbacks follow from the fact that the calibration has been performed on historical futures prices. First of all, it is not guaranteed that the model replicates the options quoted in the market: in order to guarantee that, a risk-neutral calibration should be performed. Unfortunately, power option markets are not very liquid and hence, in many situations, a historical calibration is the only possible way. The second problem is that, since we did not consider spot quotations, the correlation and the volatility of the simulated log-returns are directly inherited from the forward ones instead of the spot ones. On the other hand, the proposed framework appears to be one of the easiest techniques to obtain dependent futures and spot price simulations in a multi-commodity setting in a market-coherent way and, for this reason, it is widely used by practitioners. Nevertheless, it would be worthwhile to include jumps in the price dynamics, or to consider a stochastic volatility approach in the multi-dimensional framework, in order to better model the price dynamics while preserving both mathematical and numerical tractability. This could be the direction of future research.
2307.09878
Amortised Experimental Design and Parameter Estimation for User Models of Pointing
User models play an important role in interaction design, supporting automation of interaction design choices. In order to do so, model parameters must be estimated from user data. While very large amounts of user data are sometimes required, recent research has shown how experiments can be designed so as to gather data and infer parameters as efficiently as possible, thereby minimising the data requirement. In the current article, we investigate a variant of these methods that amortises the computational cost of designing experiments by training a policy for choosing experimental designs with simulated participants. Our solution learns which experiments provide the most useful data for parameter estimation by interacting with in-silico agents sampled from the model space thereby using synthetic data rather than vast amounts of human data. The approach is demonstrated for three progressively complex models of pointing.
Antti Keurulainen, Isak Westerlund, Oskar Keurulainen, Andrew Howes
2023-07-19T10:17:35Z
http://arxiv.org/abs/2307.09878v1
# Amortised Experimental Design and Parameter Estimation for User Models of Pointing ###### Abstract User models play an important role in interaction design, supporting automation of interaction design choices. In order to do so, model parameters must be estimated from user data. While very large amounts of user data are sometimes required, recent research has shown how experiments can be designed so as to gather data and infer parameters as efficiently as possible, thereby minimising the data requirement. In the current article, we investigate a variant of these methods that amorises the computational cost of designing experiments by training a policy for choosing experimental designs with simulated participants. Our solution learns which experiments provide the most useful data for parameter estimation by interacting with in-silico agents sampled from the model space thereby using synthetic data rather than vast amounts of human data. The approach is demonstrated for three progressively complex models of pointing. ## CCS Concepts * **Human-centered computing \(\rightarrow\) HCI theory, concepts and models; User models.** ## Keywords user models, adaptive experiment design, parameter estimation, active inference, computational rationality ## 1. Introduction User models take many forms in HCI, from simple lists of 'psychological factors' including, perhaps, personality variables and/or product preferences to cognitive models that simulate the processing of information in the mind. Some of the latter have focused on constraints imposed by the human perceptual and motor systems, others on the structure of human memory and others on control (what to do next). While these user models provide one of the theoretical anchors for the discipline, they are difficult to construct and difficult to fit to behaviour - sometimes requiring vast amounts of data from an expanding range of sensors (Figure 1). In the current article, we explore one particular approach to addressing this latter problem by automating model parameter estimation for three pointing tasks. The approach involves the design of an optimal sequence of experimental trials (or designs) that maximise the relevance of the available information. In the statistics literature 'optimal experimental design' (OED) is the problem of choosing which experimental trial to do next so as to maximize some objective. For example, an HCI researcher may want to measure the effect of a new pointing device on a user's movement accuracy, but how far away and how big should the movement target be on each successive trial so as to maximize the information gained from observing the user performing the task? We present an approach to efficiently solving this problem for estimating user model parameters in HCI. We argue that the presented approach has the potential to enhance the contribution that user models make to HCI by providing interactive systems with the means to automatically and rapidly fit user models to individual users and thereby personalise interaction so as to best fit the requirements of the individual. This capability is also important to cooperative/collaborative Artificial Intelligence (AI), that is the problem of how to get machines to work with people. Personalisation and collaboration are important objectives for HCI, in part, because they directly address the desire to design interaction for diverse users. 
While we do not investigate personalisation per se in the current article, we believe that user modeling is crucial to the future of personalisation and that this potential will only be fulfilled if the parameter estimation problem can be solved. Fitting a user model to an individual was difficult in the early days of cognitive modeling, in part because models such as GOMS (Goldsmith, 2002), were constructed manually. Production rules that mapped goals into actions were written by an analyst. Model constructing consisted of a painstaking and iterative process of protocol analysis, and production rule writing. Since then significant advances have been made on automatic construction of models. Rather than hand-coding production rules, modern modelling techniques now automatically learn a _control policy_ (task knowledge) using deep reinforcement learning. In particular, machine learning can be used to learn a model's control policy through exploration of simulated interaction.1 This approach has been successfully applied to the automatic construction of user models of menu search, decision making, gaze-based interaction, hierarchical control, and touchscreen typing (Kennedy et al., 2015; Kennedy et al., 2016; Kennedy et al., 2017; Kennedy et al., 2018; Kennedy et al., 2019; Kennedy et al., 2020; Kennedy et al., 2021). However, fitting the quantitative parameters of these models to individual humans still requires a significant contribution from the analyst. Footnote 1: The control policy is a function that maps observations into actions. It can be learned with a number of machine learning algorithms. One of the difficulties with fitting learning-based user models to human data is that the control policy must be trained to make predictions for each set of possible parameters. For example, in the gaze-based interaction model reported in (Kennedy et al., 2016) oculomotor noise and perceptual noise parameters are properties of an individual user. Both parameters introduce uncertainty in aimed movements to a target. The optimal control policy chooses an aim point for a target in accordance with these uncertainties. For example, when oculomotor noise is high then it makes sense to deliberately understood the target and then make a corrective submovement. This is because on average this results in a lower overall movement time than overshooting (which takes longer) and correcting. Unfortunately then, while automatically learning human-like control policies solves part of the user modelling problem, it does not solve the parameter estimation problem. User models typically have many parameters and the control policy, and therefore behaviour, is adapted to these parameter values. For example, it is known that the human control policy for pointing is adapted to noise both in the motor system and the visual system. In general, the parameter estimation problem is to find a set of values for model parameters such that the predicted behaviour of an individual user is as close as possible to the observed (measured) behaviour. It is quite often formalised as an optimisation problem; how to generate the best estimate of the parameters from the available data. Typically, the objective will be to find parameters that minimise the difference between the model behaviour and human behaviour, that is maximising fit. Estimating parameters so as minimise the discrepancy between model and human requires high quality data; a fact that, as we have said, has given rise to work on optimal experimental design (OED) [35]. 
This problem has been conceptualised as a problem of how to maximise expected information gain (EIG). In other words, the purpose of an experiment is to maximise the information that is gained about the parameter values that best fit the model to the human. Various approaches have been proposed to this problem. In one class of approaches, Bayesian Optimal Experimental Design (BOED), the idea is to choose experiments that maximise EIG between the prior probabilities of all possible parameters values and a posterior distribution that is conditioned on the expected observations [43]. BOED does not only give a point estimate of the best fitting parameter values but also a posterior probability that these (and all other parameter values) are the best fit. In some approaches, the choice of experiment is 'amortised' meaning that a near-optimal policy for choosing experiments is computed before the deployment of the policy for parameter estimation. In HCI, the advantage of amortisation is that design of experiments for estimating user models can be computed without slowing interaction with the user. Another advantage of amortisation is that experimental design can be non-myopic. This means that, rather than maximising EIG for each individual experiment, instead it can be maximised for a whole sequence of experiments. One approach to amortisation involves defining the optimal experimental design problem as a reinforcement learning problem. In the current article we propose a new approach to user model parameter estimation in HCI that takes advantage of the recent advances described above. The contribution is in offering a novel and practical method for estimating user model parameters through amortising the cost of choosing experiments. While the proposal is for a general method, we demonstrate its viability in this paper for pointing tasks. An overview of the approach is illustrated in Figure 2. The approach estimates the parameters of a user model for an individual human user (Phase 3), having previously computed an 'ensemble' user model for all possible parameter combinations (Phase 1) and then an optimal sequential experimental design policy (Phase 2). In phase 1, the ensemble model of the space of possible users is trained to perform the task for the distribution of possible parameter values and the distribution of possible task environments. The ensemble approach is an important recent advance in user modelling that is described in more detail below [27, 34]. In phase 2, the Analyst is trained to conduct the best sequence of experiments for determining the model parameters. Simulated users are randomly sampled given the parameter distribution and the Analyst learns to fit the model parameters to the simulated user. In phase 3, the trained Analyst conducts experiments on users and generates parameter fits. Phase 1 amortises the cost of computing the implications of different parameter values for model behaviour. Phase 2 amortises the cost of computing a non-myopic sequential policy for choosing experiments and Phase 3 takes advantage of the computation conducted in Phases 1 and 2 in order to optimally gather data and estimate user model parameters without any user-noticeable interaction latency. One confusion that arises about our approach is that once we have trained a simulator model for all possible parameter values (the 'ensemble' user model) then there is nothing else to learn. 
The confusion is resolved once it is realised that the 'ensemble' is a model of the distribution of all possible users (technically, a distribution over the parameters of the user model) and all possible tasks within the defined space. The assumption is that if we know the exact parameter values for a specific real-world user, then this model will accurately describe their behaviour, but the challenge is that, given a real-world user, we do not know the best parameters. We need the Analyst to design experiments and gather the data from the user in order to find the best parameters in the ensemble model. For training the Analyst, we generated hypothetical users by sampling parameters from the ensemble distribution, and exposed these known parameter values to the Analyst as a training signal, but there was no expectation that any individual real-world user will have the same parameter values. At test time, the Analyst had no knowledge of the user parameters (and the sets of training-time and test-time users are disjoint), and these parameters were inferred from the designed experiments. Another confusion is that it might seem that there is no point in learning about an individual user when Analyst already has a model of the distribution of all possible users. But, this model can only simulate an _specific_ user if it knows the correct parameters for that user. The point of Analyst is to infer the parameters of a user with unknown parameters (e.g. a new human user whose behaviour has not been observed before). Analyst needs to infer the best values of those parameters, and only then can the user model simulate that particular user. Further, it is the capacity to simulate a specific user that promises a role for Analyst in interaction personalisation and cooperative Artificial Intelligence. In what follows, we review the existing literature, formally define our approach, test it on two abstract tasks chosen so as to demonstrate the generality of the approach, and then report three studies of parameter estimation for user models of pointing. In the first of the studies, we demonstrate the effectiveness of the approach for estimating the parameters of a pointing user model from mouse click data. The user model has a single parameter for movement noise that gives rise to Fitts's Law like behaviour through a learned policy that generates multiple submovements to achieve point and click goals. In the second study, we extend the approach to simulated eyecmovement data for gaze-based pointing. These data consist of variable length sequences of interleaved saccades and fixations. In this gaze-based model there are two parameters, one for oculomotor noise and the other for perceptual noise. This gives rise to a potential identifiability problem. In the third study, we explore the capacity of the approach to not only identify perceptual/motor noise parameters but to also determine user preferences. Here the gaze-based model is applied to an interface in which pointing is achieved by looking at targets and pressing a button (a key on the keyboard). Because an experimental trial can be terminated at any time by pressing the button it gives rise to a speed/accuracy trade-off and the user's policy is optimised for their preference, or otherwise, for accuracy over speed. In this study both performance time and errors are used in the estimation of model parameters. ## 2. 
Background ### User models User models - computable representations of psychological constraints on interaction - have been influential in HCI since its inception. GOMs, a rule-based model for representing hierarchical task knowledge, provides a formalism for HCI researchers to conduct detailed task analyses (Han et al., 2017). The Model Human Processor (MHP), a theory of the temporal properties of cognitive resources, supported the prediction of task performance time. ACT-R and EPIC provide means to simulate cognition and action; ACT-R with a particular focus on human memory and EPIC on constraints imposed by perceptual/motor systems (Han et al., 2017; Wang et al., 2018; Wang et al., 2019). Fitts's Law, a mathematical formulation of the relationship between task difficulty and movement time, became a particularly influential model of pointing (Wang et al., 2018; Wang et al., 2019). Computationally rational models, based on machine learning problems but with cognitive bounds, provided a means to automatically learn control policies. Rather than hand crafted production rules, computationally rational models derive predictions by learning a control policy that is bounded only by human like resource constraints (Wang et al., 2018; Wang et al., 2019; Wang et al., 2019). Most of the approaches to user modelling described above remain actively and productively investigated but our focus here is on computationally rational models (Wang et al., 2019). While it has a distinctive approach to the automation of a control policy, it shares a need for new approaches to parameter estimation. In what follows we look in detail at three computationally rational models in order to further understand the parameter estimation problem. One approach to computational rationality involves defining an interactive cognitive task as a Markovian problem and solving it using machine learning, usually reinforcement learning. For example, multi-attribute decision making can be defined as a Partially Observable Markov Decision Problem (POMDP) in which information about relevant attributes is gathered using saccadic eye movements. The reward function specifies a trade-off between speed and accuracy such that information is only gathered if the benefits to decision accuracy outweigh the temporal cost (Han et al., 2017). As a consequence, once an optimal control policy has been learned, it generates attribute-wise, rather than option-wise information gathering - much like humans. While automatically acquired optimal control policies address a major part of the user modelling problem, they leave open the question of how to set model parameters. In the case of multi-attribute decision making the predictions are only as good as the attribute weights that define the user preference function. Similarly, gaze-based interaction can be defined as a POMDP in which partial observations are constrained by foveated vision and saccadic eye-movements by oculomotor noise (Han et al., 2017). The model of foveated vision imposes increasing localisation error with eccentricity of the target from the fovea. The solution to this POMDP is an optimal control policy that - again like humans - undershoots the Figure 2. Our approach takes a user model as input. This user model has prior distributions over parameters \(\theta\) but has not been fitted to individual human behaviour. In Phase 1, an Ensemble Cognitive Model (ECM) is trained to perform the task for the distribution of possible parameter values. 
In Phase 2, the cognitive model Analyst is trained to conduct the best sequence of experiments for determining the model parameters. In Phase 3, the trained Analyst is deployed with users and a fitted model it generated as output. target in order to minimise movement time (because undershoots take less time than overshoots). Again, the capacity to automatically generate a control policy is an important advance but it leaves open the question of how the parameter values are set so as to model individual users. In the case of gaze-based interaction parameters include the perceptual and oculomotor noise weights as well as the saccade duration intercept and slope parameters. An important aspect of user modeling is determining the prior distribution of possible parameter values. This distribution can be thought of as an hypothesis space that covers all of the possible behaviours of the population of users. It is constructed using knowledge that is available before performing experiments on individual humans. This prior parameterisation of the model induces an ensemble of possible user models, each of which can be expressed in the form of a particular POMDP. Importantly, this modelling framework is agnostic to the amount of prior information available, since the experimenter can specify arbitrary prior distributions for the parameters of the model. In the case of limited available prior knowledge, the simulator implementing the model can be initialised with non-informative priors, thus describing a diverse distribution of users with highly varying cognitive bounds and preferences. Once the prior distribution of possible user models is established then experiments on humans can be used to determine which user model (i.e. which parameter settings) best fits. ### Adaptive Experimental Design in ML In Bayesian approaches to experimental design the starting assumption is that the posterior probability of a parameter value given an experiment is proportional to the likelihood of the data times the prior \(p(\theta|y,d)\propto p(y|\theta,d)\times p(\theta)\). The key question is how to choose experiments that maximise the utility of the data. In Bayesian Optimal Experimental Design (BOED) it is assumed that the objective is to choose experiments that maximise Expected Information Gain (EIG). BOED usually relies on a likelihood model \(p(y|\theta,d)\) for predicting the probability of data \(y\) (the outcome of an experiment) given an experimental design \(d\) and parameter values \(\theta\). The objective is then to optimise EIG. EIG can be thought of as the _mutual information_ between \(\theta\) and \(y\). BOED methods have been successfully applied to the design optimisation problem in various settings (Blei et al., 2017; Goyal et al., 2017). However, BOED can require computationally expensive calculations, such as updating the posterior or estimating the mutual information between the model parameters and experiment outcomes. As these calculations are needed between the time steps of the experiment, this approach becomes impractical for many real-life settings. More recent work has amortised the cost of experiment selection using pre-trained deep neural networks. As an example, Foster et. al (Foster et al., 2018) suggest a policy network, parameterised by a deep neural network, to produce informative experimental design values. In their approach, the loss function is based on calculating a lower bound of the mutual information instead of costly exact values. 
This work was extended in (Krishnan et al., 2019), in which likelihood functions can be unknown, thus expanding this approach to implicit models. Another line of work is based on using the mutual information as the main criteria for selecting design values, but using neural estimators on the mutual information or its lower bounds (Blei et al., 2017; Goyal et al., 2017; Goyal et al., 2017; Goyal et al., 2017). Blau et al (Blau et al., 2019) present a Reinforcement Learning formulation for design optimisation. They defined sequential experimental design as a Markov decision process (MDP), highlighting the strong exploration capability of RL-based methods. In their approach, the optimisation target is the lower bound of expected information gain (EIG), comparable to the DAD method in (Foster et al., 2018). ## 3. Theory We present the theory in two parts. In the first part, we describe a user model with parameters that must be estimated from data. The user model is an example of a class of simulation-based reinforcement learning model that has recently become influential in HCI (Hau et al., 2017; Goyal et al., 2017; Goyal et al., 2017) but which because of their complexity currently lack adequate parameter estimation methods (though see (Goyal et al., 2017; Goyal et al., 2017)). In the second part of the current section we introduce our proposed _Analyst_. Like the user model, Analyst is also an RL agent and care is needed not to cause confusion. Where for the user model, RL learns a policy that models human cognitive control knowledge, for the analyst, RL learns an policy for choosing experiments and inferring parameters. ### User model We extended a reinforcement learning model of gaze based target selection previously reported in (Foster et al., 2018). The key assumption in the model is that the control of movement is computationally rational: that is, the saccade path and fixations are determined by an attempt to optimise some objective function (e.g. to minimise selection time) given the bounds imposed by the perceptual/motor system. The predicted eye movement strategies are therefore an adaptive consequence of the following constraints: (1) target eccentricity, (2) target size, (3) oculomotor saccade noise, (4) a target detection threshold, and (5) location and target size estimation noise in peripheral vision. The interaction between the target size, target eccentricity, signal-dependent oculomotor noise, eccentricity-dependent estimation noise and size- and eccentricity-dependent target detection results in a multi-step gaze-based selection process. Two-step selections are typical in humans but under some circumstances either one-step (for large targets) or 3-plus-steps (very small targets) can be observed (Goyal et al., 2017). The user model architecture is illustrated in Figure 3. The blue box contains processes (represented as white rectangles) that constitute a theory of human cognition in interaction with a 'world'. Each trial begins with the controller choosing an 'intent' - a motor movement to an aim point (a location in the world). The chosen intent is implemented via a noisy'motor' process. The motor process results in an 'action' which is the actual end point of the motor movement in the world. Additionally, the controller can perform a keypress that terminates the episode. Subsequently, a new stimulus is generated by the world which is the target location and width viewed from the new fixation. 
This stimulus is perceived by a foveated vision process that generates a noisy estimate of the target location and size (the 'observation'). The observation provides evidence as to the location and size of the target. This evidence is optimally integrated in a Bayesian 'memory' process which outputs a 'belief'. The memory is observed by the 'control' process and by a 'utility' function. The utility function generates a reward signal that is used to train the controller. The controller's 'intents' are thereby conditioned on the belief. 2 The architecture implements an action-observation-reward cycle which repeats until the target is selected or the maximum step count for the episode is reached. Learning adjusts the mapping between the observation and the action so as to maximise the cumulative discounted rewards. The architecture can be described formally as a POMDP. Footnote 2: Note that the first action is selected before any observations are made. Therefore, while it is not conditioned on an observation, it may through training with the reward be informed by a prior expectation of the distribution of target locations. * **State space \(S\)**: At each time step \(t\), the environment is in a state \(s_{t}\in S\). A state represents a possible target position and width, and is denoted \(s=(f_{x},f_{y},t_{x},t_{y},w)\), where \((f_{x},f_{y})\) is the fixation location and \((t_{x},t_{y})\) is the target location. For both, the \(x\) and \(y\) coordinates lie in \([-1,1]\), with \(-1\) and \(1\) being the edges of the display, and \(w\in[0,1]\). * **Action space \(A\)**: An action, \(a_{t}\), is taken at each time step \(t\). On each of these steps the controller decides where to attempt to fixate next (the aim point). An aim point is denoted as a coordinate \(a=(a_{x},a_{y})\) where \(a_{x},a_{y}\in[-1,1]\). * **Reward function \(r(s,a)\)**: At each time step \(t\), a reward is generated by a utility function that models the preference utilities of a user. We assumed that users can trade speed for accuracy: faster speeds are accompanied by more errors. The reward at time \(t\) is based on a linear gaze duration model \(r(s_{t},a_{t})=-(\theta_{a}\times Amplitude(t)+\theta_{b})\), where the slope \(\theta_{a}\) and intercept \(\theta_{b}\) are parameters of the user. If the user performs a keypress, the episode is terminated and a value \(r_{max}\times\theta_{preference}\) is added to the final reward if the target is fixated (gaze is within the target radius); otherwise, if the target is not fixated (an error), the reward is \(-r_{max}\times\theta_{preference}\). 3 The parameter \(\theta_{preference}\in[0,1]\) describes the speed-accuracy preference of the user. Additionally, if the maximum number of steps is reached without a keypress, a termination penalty is added to the final reward. Footnote 3: Here the utility is conditioned directly on the state (cf. Figure 3). * **Transition function \(T(s_{t+1}|s_{t},a_{t})\)**: The environment switches to a new state according to a stochastic transition function. The target location remains unchanged, but the fixation location changes according to the outcome of the aim-point action. Aim points are corrupted by noise. Therefore, \(T(s_{t+1}|s_{t},a_{t})=N((f_{x},f_{y})|(a_{x},a_{y}),\sigma_{ocular}(t))\). The oculomotor noise is linearly dependent on the saccade distance (the amplitude): \(\sigma_{ocular}(t)=\rho_{ocular}\times Amplitude(t)\).
* **Observation space \(O\) and observation function \(o=f(s,a)\)**: After taking the action (i.e., a saccade to, and fixation at, a new position on the display), a new observation is received, which is a function of state and action, \(o_{t}=f(s_{t},a_{t})\). The observation of the target position is dependent on the true target location and width (state), and the current fixation location (action). Specifically, the spatial uncertainty of the target position (standard deviation) in peripheral vision is linearly dependent on the distance between the target and the current fixation position, i.e., eccentricity. We similarly assume a linear dependency between the uncertainty and the target size. Therefore, the perceived target position is \(\tilde{t}_{x,t}\sim N(t_{x,t},\sigma_{o}(t))\), \(\tilde{t}_{y,t}\sim N(t_{y,t},\sigma_{o}(t))\), where \(\sigma_{o}(t)=\rho_{spatial}\times eccentricity(t)-\rho_{w}\times w+\rho_{b}\), and \(\rho_{spatial}\), \(\rho_{w}\) and \(\rho_{b}\) are parameters of the model. We also assume that the observed target width is corrupted by a Gaussian noise source \(\tilde{w}_{t}\sim N(w_{t},\sigma_{w})\). Finally, a binary random variable \(z_{t}\sim p(z|s_{t},a_{t})\), \(z\in\{0,1\}\), indicates whether the user detects the target. Thus the full observation at time \(t\) is the tuple \(o_{t}=(\tilde{t}_{x,t},\tilde{t}_{y,t},\tilde{w}_{t},a_{t},z_{t})\). * **Discount rate \(0\leq\gamma<1\)**: The model receives a scalar reward at each time step, \(r(s_{t},a_{t})\). The optimal strategy is the one that maximises the expected long-term sum of rewards, \(E\big\{\sum_{t=0}^{T}\gamma^{t}r(s_{t},a_{t})\big\}\), given the constraints on the model defined above. Figure 3. User model architecture. Human cognition is modelled as five processes in interaction with a World. The Motor process models movement noise. Perception models increasing visual noise with eccentricity from the fovea. Memory integrates multiple observations using Bayesian inference and Utility generates a reward signal that captures the trade-off between factors such as speed and accuracy. #### 3.1.1. Belief update If the target is detected by the user (i.e. if \(z_{t}=1\) in the observation), the memory is updated by integrating the current belief \(b_{t-1}\) and the new observation \(o_{t}\) using Bayes rule (a Kalman filter) (King and Ba, 2014; Kempner, 2015). After taking an action (fixating at a location), the model receives noisy observations of the target location and width, which are sampled from a Gaussian distribution, \(o_{t,k}\sim N(s_{k},\sigma_{o,k}(t))\), with \(k\in\{1,2,3\}\) (see the Observation function above). We omit the index subscript \(k\) for clarity. At time step \(1\), \(b_{1}=o_{1}\) and \(\sigma_{b}^{2}(t=1)=\sigma_{o}^{2}(t=1)\). The belief update from \(t\) to \(t+1\) is shown in Equation (1) below. \[\begin{split} b_{t+1}&=b_{t}+K_{t+1}[o_{t+1}-b_{t}]\\ \sigma_{b}^{2}(t+1)&=\sigma_{b}^{2}(t)-K_{t+1}\sigma_{b}^{2}(t)\\ K_{t+1}&=\frac{\sigma_{b}(t)^{2}}{\sigma_{b}(t)^{2}+\sigma_{o}(t+1)^{2}}\end{split} \tag{1}\] #### 3.1.2. Training For each study, an ensemble model was trained, meaning that the parameters to be estimated were sampled from their prior distributions and given as additional inputs to the controller, thus distilling a population of user models covering the desired space of behaviours into a single set of neural network weights. The controller was implemented as a simple fully-connected feed-forward neural network and optimised with the proximal policy optimisation (PPO) algorithm (King and Ba, 2014).
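As a concrete illustration, the observation noise model and the belief update in Equation (1) above amount to only a few lines of code. The sketch below is a minimal scalar NumPy version with placeholder parameter values and variable names of our own; it is not the model's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Placeholder perceptual-noise parameters (rho_spatial, rho_w, rho_b in the text).
rho_spatial, rho_w, rho_b = 0.15, 0.05, 0.01

def observation_noise(eccentricity, width):
    """Spatial std of the perceived target location, linear in eccentricity and width."""
    return rho_spatial * eccentricity - rho_w * width + rho_b

def observe(true_pos, eccentricity, width):
    """Noisy observation of one coordinate of the target position."""
    sigma_o = observation_noise(eccentricity, width)
    return rng.normal(true_pos, sigma_o), sigma_o

def belief_update(b, var_b, o, sigma_o):
    """Kalman update of Equation (1): integrate observation o into belief b."""
    K = var_b / (var_b + sigma_o ** 2)     # Kalman gain K_{t+1}
    b_new = b + K * (o - b)                # b_{t+1} = b_t + K [o_{t+1} - b_t]
    var_new = var_b - K * var_b            # sigma_b^2(t+1) = (1 - K) sigma_b^2(t)
    return b_new, var_new

# One coordinate of a target at 0.6, seen from fixations of decreasing eccentricity.
o, sigma_o = observe(0.6, eccentricity=0.6, width=0.1)
b, var_b = o, sigma_o ** 2                 # b_1 = o_1, sigma_b^2(1) = sigma_o^2(1)
for ecc in (0.3, 0.1):
    o, sigma_o = observe(0.6, ecc, width=0.1)
    b, var_b = belief_update(b, var_b, o, sigma_o)
    print(f"belief={b:.3f}, var={var_b:.4f}")
```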
The POMDP was solved for the restricted class of policies that conforms to the task bounds outlined above. ### Analyst As was the case for the user model, the environment of the Analyst is also formulated as a POMDP. The Analyst architecture is illustrated in Figure 4. The blue boxes form the reinforcement learning agent, and the green boxes represent the environment. The 'user model' represents the simulator of a synthetic user. By sampling user parameters from a specified prior distribution, and conditioning the user model on these parameters, a range of different behaviours can be simulated. The user model is also conditioned, and its behaviour affected, by the design values produced by the controller. The parameterised user model generates data given a specific experimental design. The Analyst tries various experimental designs to learn about the user model and thereby estimate its parameters. The'memory', which is considered as an internal process of the agent, stores the history of experimental designs and outcomes. The 'control' process includes the policy function that effectively maps the contents of the memory to a probability distribution over actions. The action contains the design for the next experiment as well as estimations of the user parameters with data collected from past experiments. The 'discrepancy' unit is used during the training of the Analyst to produce the reward signal for the RL agent. More formally, the Analyst POMDP is specified as follows: * **State space \(S\)**: At each time step \(t\), the environment is in a state \(s_{t}\in S\). A state \(s_{t}=(d_{t},y_{t},\theta_{p})\) represents the tuple consisting of the design value \(d_{t}\) that was used to run the experiment at that time step, the experiment outcome \(y_{t}\), and the user parameter vector \(\theta_{p}\). * **Action space \(A\)**: An action, \(a_{t}\), is taken at each time step \(t\). The action \(a_{t}=(d_{t+1},\theta_{e})\) tuple of the analyst includes designs \(d_{t+1}\) for the next experiment and parameter predictions \(\theta_{e}\) based on the information gathered so far. * **Reward function \(r(s,a)\)**: At each time step \(t\), a reward is generated by a discrepancy function, which measures the similarity of predicted and true parameters of the user model as the negative L1 error \(r(s_{t},a_{t})=-||\theta_{p}-\theta_{e}||_{1}\). The reward is directly influenced by the ability of the analyst to estimate parameters, but it is crucially also indirectly influenced by the analyst's ability to design informative experiments. * **Transition function \(T(s_{t+1}|s_{t},a_{t})\)**: The environment switches to a new state according to the transition function. The user parameter vector \(\theta_{p}\) is sampled from the prior \(p(\theta)\) at the beginning of the episode, and remains fixed until the end of the episode. At each time step \(t\), the user parameters \(\theta_{p}\) and the design \(d_{t+1}\) chosen by the analyst are used to run the simulator and produce an outcome \(y_{t+1}\sim p(y|\theta_{p},d_{t+1})\), which gives the new state \(s_{t+1}=(d_{t+1},y_{t+1},\theta_{p})\). * **Observation space \(O\) and observation function \(o=f(s,a)\)**: At each time step \(t\), the state is passed through an observation function \(o_{t}=f(s_{t},a_{t})=(d_{t},\tilde{y}_{t})\) before given to the analyst, where \(\tilde{y}_{t}\) is a corrupted measurement of the true experiment outcome \(y_{t}\). 
The user parameters \(\theta_{p}\) are treated as a latent variable, they are included in the state but not the observation. The partial observability of the environment motivates a policy that is conditioned on the full history of observations \(o_{\leq t}=o_{1}o_{2}...o_{t}\). The analyst policy is a stochastic function that samples parameter predictions and designs for the next experiment conditioned on the observation history as \(a_{t}\sim\pi^{analyst}(a_{t}|o_{\leq t})\). The objective of the analyst is to maximise the expectation of discounted return \(E[\sum\limits_{t=0}^{M}\gamma^{t}r(s_{t},a_{t})]\) where \(M\) is the number of experiments performed for a specific user and \(\gamma\) is the exponential discount rate. This objective is non-myopic since credit assignment for a particular experiment is performed based on the quality of all future parameter estimations. #### 3.2.1. Policy network The studies reported below use two different architectures for representing the policy network. In our first study, the observation includes information of the movement time and final fixation, target location and target width. In this case, the policy network implementation is a multilayer perceptron (MLP) network followed by mean pooling across experiments, output layer and heads for action distribution and value estimation (see figure 5, left). In studies 2 and 3, the observation includes eye movement data and information about the target location and width. In these studies, the policy network architecture uses an inductive bias to support relational inference (Bishop et al., 2014; Kempner, 2015) (see figure 5, right). Considering the Studies 2 and 3 policy architecture, the output of the local pooling is an embedding of the observations over one episode, \[e^{l}=\sum\limits_{t=1}^{T}f_{enc}(c^{target},c_{t}^{fixation},c_{t+1}^{fixation}) \tag{2}\] where \(f_{enc}\) denotes a relation network encoder (in our case an MLP network), conditioned on information of the target location \(c^{target}\) and information of the locations of two subsequent fixations \(c^{fisation}_{t}\) and \(c^{fisation}_{t+1}\) at time steps \(t\) and \(t+1\). The output of the global pooling is an embedding over \(M\) experiments \[e^{\theta}=\sum_{i=1}^{M}g(e^{I}_{i}) \tag{3}\] where \(M\) is the number of experiments and \(g(.)\) is an MLP network. The embedding \(e^{\theta}\) is fed to an fully connected output layer and to policy and value heads. User models formulated as POMDPs, instead of closed form models, offer the possibility of capturing and simulating complicated and realistic human-like behaviour, but at the same time they raise some challenges. With such user models, we are often only able to draw samples from the simulator, without the possibility of evaluating the likelihood or differentiating through the simulator. Because of this, new methods are needed that fulfill this requirement. In addition to the requirement for non-differentiable user-models, other desired properties are amortisation, non-myopic designs, and adaptation to the experiment outcomes. Table 1 summarises the capabilities of various recent approaches for optimal experimental design. Our method is the most flexible as it is able to generate amortized, non-myopic and adaptive solutions in likelihood-free environments without the need to differentiate through the simulator. ### Training The policy network is trained by using both score-based and path-wise gradient estimators (Srivastava et al., 2017). 
Since the design values are sampled Figure 4. Analyst architecture. The problem for the reinforcement learner is to learn a sequential policy for the ‘control’ process by acting in an environment that consists of a sequence of user models with stochastically sampled parameters. Actions consist of a \((d,\theta_{e})\) pair where \(d\) is a specification of the experiment and \(\theta_{e}\) is an estimate of the model parameters. The ‘discrepancy’ function calculates a reward by comparing the estimated parameters to the true parameters \(\theta_{p}\). The ‘memory’ stores the sequence of experiment designs \(d\) and data \(y\) that have been conducted for the latest sampled parameters. Figure 5. Two different policy network architectures. Study 1 uses a simpler policy network, whereas Studies 2 and 3 use a policy network architecture which implements relational reasoning. from the action distribution, it uses a score-based gradient estimator implemented with PPO (Ponon et al., 2017). Since the parameter estimations can be directly optimised with a loss function, extra parameter updates can be conducted with the pathwise gradient estimations thereby reducing variance. The training method uses an exponential moving average (EMA) method in order to regularise the training process (Ponon et al., 2018). The parameters of a separate EMA network are updated based on the parameters from the policy network, and the previous values of the EMA network: \[\phi^{{}^{\prime}}_{t}=\alpha\phi^{{}^{\prime}}_{t-1}+(1-\alpha)\phi_{t} \tag{4}\] where \(\phi^{{}^{\prime}}_{t}\) is the EMA network parameter vector at time step \(t\), \(\phi_{t}\) is the parameter vector of the policy network, and \(\alpha\) is the smoothing coefficient hyperparameter. The rewards for the policy network parameter updates are calculated by using parameter predictions from the EMA network, instead of the actual policy network. As a result, the policy parameter updates are less noisy and training is more efficient. The training was conducted by using the Stable Baselines 3 (SB3) library (Shen et al., 2017) by implementing custom policy networks for the relation network, and by using the SB3 callback system for EMA implementation. ## 4. Non-myopic and Adaptivity Demonstrations Our method produces amortised design strategies and parameter estimations in likelihood free settings without the need to differentiate through the simulator. As presented in Table 1, the existing methods are not applicable to our setting. To gain confidence for our method, its performance and range of applicability, we ran experiments on abstract tasks requiring non-myopic design strategies and adaptivity. ### Non-myopic demonstration We demonstrate first the capability of our method to produce non-myopic design selection strategies with an abstract task. We use a 1-dimensional Gaussian process to sample functions with a specified kernel. Depending on the selected kernel, nearby points are correlated. As a baseline for comparison to our approach, we use a one step lookahead algorithm which minimises expected uncertainty reduction over the design space. This myopic algorithm will converge to non-optimal design strategy with two data points as the algorithm selects designs that are less uniformly distributed. In contrast, our non-myopic algorithm is able to take into account the total number of trials to be conducted and can design a whole series of experiments that lead to higher information gain overall. 
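For reference, the myopic baseline described above can be sketched as a greedy variance-reduction rule on a Gaussian-process posterior. The snippet below is our own illustrative sketch (RBF kernel, placeholder lengthscale and noise), not the exact experimental code; because the GP posterior variance does not depend on the observed outcomes, a one-step lookahead reduces to picking the design that minimises the resulting average variance.

```python
import numpy as np

def rbf(a, b, length=0.2):
    """RBF kernel matrix between 1-D design arrays a and b (placeholder lengthscale)."""
    a, b = np.atleast_1d(a), np.atleast_1d(b)
    return np.exp(-0.5 * ((a[:, None] - b[None, :]) / length) ** 2)

def mean_posterior_variance(designs, grid, noise=1e-2):
    """Average GP posterior variance over `grid` after observing at `designs`."""
    X = np.asarray(designs, dtype=float)
    K = rbf(X, X) + noise * np.eye(len(X))
    k_star = rbf(grid, X)
    var = 1.0 - np.einsum("ij,jk,ik->i", k_star, np.linalg.inv(K), k_star)
    return float(var.mean())

grid = np.linspace(0.0, 1.0, 101)        # the 1-D design space
candidates = np.linspace(0.0, 1.0, 101)

# Myopic (one-step lookahead) baseline: greedily pick the design that most
# reduces the current average posterior variance, ignoring later experiments.
designs = []
for _ in range(2):                       # two experiments, as in Figure 6
    scores = [mean_posterior_variance(designs + [d], grid) for d in candidates]
    designs.append(float(candidates[int(np.argmin(scores))]))
print("myopic designs:", designs)        # the first pick lands near the mid-point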
The figure 6 illustrates examples of design selections of both algorithms. In the left panel, the design is calculated with one-step lookahead so as to optimally reduce uncertainty. The mid-point design is chosen first in 'ignorance' of the fact that another experiment is to be conducted and as a consequence the overall distribution of experiments is not optimal. In the right panel, our non-myopic algorithm has successfully learned to choose designs that more evenly cover the design space, thereby gaining more information over the whole series of experiments and optimally reducing uncertainty overall. The discrepancy function for the Analyst is the L2 distance between the true and estimated function. We measured the quality of the designs with two metrics, namely the L2 distance between the estimated and ground truth functions, and the reduction of variance as calculated with the integrated mean-squared error (IMSE) method as used in Chen et al. 2019 (Chen et al., 2019). As the myopic algorithm is analytically calculated to reduce variance, the latter metric offers a reliable comparison to the baseline. The results are shown in table 2, indicating a clear benefit of the non-myopic method. \begin{table} \begin{tabular}{|l|c|c|c|c|c|} \hline Algorithm & Amortized & differentiable & non-myopic & Adaptive & likelihood-free \\ & & simulator not & & & \\ \hline DAD (Dang et al., 2018) & ✔ & ✔ & ✔ & ✔ & ✗ \\ \hline iDAD (Dang et al., 2018) & ✔ & ✗ & ✔ & ✔ & ✔ \\ \hline Blau et al (Biau et al., 2018) & ✔ & ✔ & ✔ & ✔ & ✗ \\ \hline MINEBED (Ponon et al., 2018) & ✔ & ✔ & ✔ & ✔ & ✔ \\ \hline Valentin et al (Ponon et al., 2018) & ✔ & ✔ & ✔ & ✔ & ✔ \\ (Ponon et al., 2018) & & & & & \\ \hline Our method & ✔ & ✔ & ✔ & ✔ & ✔ \\ \hline \end{tabular} \end{table} Table 1. Comparison of the capabilities of recent methods for optimal experimental design. * In MINEBED, a backup method is also described based on Bayesian optimisation, which does not require differentiable simulator. \begin{table} \begin{tabular}{||c c c||} \hline Metrics & mean & standard error \\ \hline \hline Analyst discrepancy & 0.0812 & 0.0089 \\ Baseline discrepancy & 0.1580 & 0.0204 \\ Analyst IMSE & 0.3909 & 0.1194 \\ Baseline IMSE & 0.7087 & 0.0 \\ \hline \end{tabular} \end{table} Table 2. Results of the non-myopic experiment. The Analyst learns a non-myopic design strategy that outperforms the optimal myopic design strategy. The mean and standard error was computed over an evaluation batch with 100 functions sampled from the prior. ### Adaptivity demonstration The adaptivity is demonstrated in a setting where the task is to estimate a parameter that affects the positioning of a logistic sigmoid function on the x-axis. The data is generated by conducting Bernoulli trials where the probability \(P(y_{i}=1)\) is defined by a logistic function: \[P(y_{i}=1|\theta,d)=\frac{1}{1+e^{-(d+\theta)}}\] where \(y_{i}\) is the outcome of the \(i\):th trial in the experiment, \(\theta\) is the parameter to be estimated and \(d\) is the design value. To train a non-adaptive baseline, we masked the outcomes from the Analyst until the last time step of the episode. In contrast, when the outcomes of the previous trials during the episode are available, the Analyst can generate an adaptive strategy where design choices are affected by the previous outcomes during the episode. To compare Analyst to the baseline, we use the MSE of the estimated parameter. The table 3 indicates a clear benefit of the adaptive design strategy over the non-adaptive strategy. 
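The data-generating process of this demonstration, and the reason the most informative designs sit on the slope of the sigmoid, can be illustrated directly from the logistic model above. This is a minimal NumPy sketch with variable names of our own, not the training code:

```python
import numpy as np

rng = np.random.default_rng(1)

def p_success(theta, d):
    """P(y=1 | theta, d) for the logistic model of the adaptivity demonstration."""
    return 1.0 / (1.0 + np.exp(-(d + theta)))

def run_trial(theta, d):
    """One Bernoulli trial with design value d."""
    return rng.random() < p_success(theta, d)

# Grid posterior over theta after a few trials, starting from a uniform prior.
theta_true = 1.5
grid = np.linspace(-5.0, 5.0, 1001)
log_post = np.zeros_like(grid)
for d in (0.0, -1.0, -2.0, -1.5):        # example design values
    y = bool(run_trial(theta_true, d))
    p = p_success(grid, d)
    log_post += np.log(p if y else 1.0 - p)
post = np.exp(log_post - log_post.max())
post /= post.sum()
print("posterior mean of theta:", float((grid * post).sum()))
# Designs with d close to -theta put p near 0.5 (the mid-point of the slope),
# which is where a single Bernoulli outcome carries the most information.
```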
The adaptivity of the designs is illustrated in Figure 7, which shows how the Analyst design selections converge on the mid-point of the slope in the sigmoid function. ## 5. Study 1: Estimation of Submovement Noise from Summary Data Following Meyer's law, we assume that people make a series of submovements to achieve a goal (Meyer, 2017). In the first study, we apply our approach to a scenario where only summary statistics are provided to the analyst at the end of each episode. The summary statistics include the movement time from the beginning of the episode until the end of the episode, the target location, the target width and the location of the final submovement. Although individual submovements are not observed, the goal is to infer the noise parameter affecting these movements. The movement time is calculated with the formula \(mt=\sum_{i=1}^{I}(\theta_{a}x_{i}+\theta_{b})\), where \(I\) is the number of user steps within the episode, \(x_{i}\) is the distance of the submovement during the current user step \(i\), \(\theta_{a}\) is the slope parameter, and \(\theta_{b}\) is the intercept parameter of the movement time model. In Study 1 (though not in subsequent studies), both of these movement time parameters are fixed. The goal in Study 1 is to estimate the movement noise parameter. ### Results The larger the movement noise, the further the actual movements can be from the intended aim points. As a result, with large noise values the user model requires more steps to reach the target. In order to verify that the user model has adapted to the various movement noise values, we plot the number of steps that the user model needs to reach the target on average. Figure 8a shows that the fully trained user model requires more steps to reach the target when the noise levels are higher. As the user model's behaviour is impacted by the parameter value, it should be possible to train the Analyst to infer the movement noise from behavioural observations. We tested the ability of the Analyst to estimate parameters and the results are reported in Figure 9. \begin{table} \begin{tabular}{||c c c||} \hline Method & MSE mean & MSE standard error \\ \hline Adaptive Analyst & 2.018 & 0.034 \\ Non-adaptive baseline & 6.265 & 0.085 \\ Random design baseline & 14.434 & 0.234 \\ \hline \end{tabular} \end{table} Table 3. Results of the adaptivity demonstration. To provide a baseline, the Analyst trained without information of the previous outcomes during the episode learns a non-adaptive strategy. In contrast, when previous outcome information is available, the Analyst produces an adaptive strategy which beats both the non-adaptive baseline and a random baseline. The MSE mean and MSE standard error were computed over an evaluation batch with 10000 models sampled from the prior. Figure 6. Examples of design selections by myopic and our non-myopic methods. Left panel: A myopic method has selected a design to reduce variance with optimal one-step lookahead. Right panel: Our method has learned a non-myopic design strategy and can select more informative design values as the designs are chosen to more evenly cover the whole space. In each of the panels (a), (b) and (c), the true value of the movement noise is plotted against the estimated value. A linear regression fit was computed for each of the scatter plots (\(a=0.99,b=0.91,R^{2}=0.80\) for panel (a); \(a=0.83,b=0.01,R^{2}=0.77\) for panel (b); \(a=1.00,b=0.03,R^{2}=0.54\) for panel (c)).
The panels differ in the level of the perceptual noise, with low perceptual noise in panel (a), higher perceptual noise in panel (b) and highest perceptual noise in panel (c). Across all three panels it is clear that the Analyst is able to provide some level of estimate of the oculomotor noise, however, it is also clear that the estimates are much better when the perceptual noise is lower (panel (a)). Figure 9 also illustrates the distribution of experimental design choices made for each level of perceptual noise. Panels (d) and (e), which correspond to panel (a), show that for low perceptual noise, experimental designs tend toward higher eccentricities and large targets (though there is some variance). However, as perceptual noise increases, the analyst is incentivised to select targets closer to the origin, in order to avoid corrupting the data with very noisy observations. This claim is supported by panels (f) and (h) which show more experiments with smaller eccentricities at higher perceptual noise values. Finally, the quality of the design optimisation can be verified by training the analyst with random design values, and comparing the accuracy of the parameter estimations with an analyst trained to optimise design selections. The right panel of Figure 8 is a plot of the error in the estimate against the experiment number. With zero experiments there is no data and the parameter estimate is simply the learned prior of the parameter distribution. After the data from each experiment are incorporated into a new parameter estimate, the error decreases, and it decreases more quickly with Analyst designed experiments. It demonstrates a clear improvement in the accuracy of the parameter estimations with optimised designs. ## 6. Study 2: Inference of perceptual and motor noise from gaze movements In Study 2, we applied the Analyst to a user model of eye-movements. Instead of using summary statistics as in Study 1, we allowed Analyst to observe the gaze fixations of each step during the episode. We also extended the model with a target detection requirement. As a consequence, whether or not the user model observes the target is probabilistic. The probability of not observing the target increases when the target size is far away and target width is small. In this study we report the results of the Analyst inferring three parameters: oculomotor noise, perceptual noise and movement time intercept. The observations include fixations at each time step, duration of each gaze, information about the target location and target width. A key difference between Study 1 and Study 2 is that, where there is a fixed amount of data per experiment in Study 1, in Study 2, experiments that with more distance and smaller targets can generate more data. As we will see, this fact impacts the selected designs. ### Results As with Study 1, we first measured the effect of changes in the user model parameters on the behaviour. When considering oculomotor and perceptual noise values, increasing one of these noise values while keeping another fixed should cause the user model to require more gaze fixations to reach the target. This is clearly visible in the panel (a) in Figure 10, where the perceptual noise value versus required gaze steps to reach the target is plotted. The near linear increase in number of steps with increasing noise suggests that the parameter should be readily recoverable. The oculomotor noise is equivalent to movement noise in study 1, the effect of which is shown in Figure 8 of Study 1. 
In Study 2, the goal of Analyst is to select the most informative experiments for inferring the oculomotor noise, perceptual noise and the intercept parameters for the movement time model. Analyst performance is illustrated in panels (e) to (g) of Figure 10. A linear regression was performed to assess the quality of the fits for each parameter (\(a=0.85,b=0.01,R^{2}=0.76\) for panel (e), \(a=0.87,b=-0.01,R^{2}=0.84\) for panel (f), \(a=1.00,b=0.04,R^{2}=0.99\) for panel (g)). The selected design values are illustrated in panels (b) and (c) in Figure 10. In this case, the analyst has learned to design experiments with small targets and large eccentricities. Finally, the performance of the analyst is compared against an analyst trained by using random experimental designs. Panel (d) Figure 7. The Analyst learns to adapt to the data collected so far during the episode. Left panel: When the Analyst has information about the previous trials, it adapts to them and converges to the midpoint of the sigmoid function slope, which provides most information. Right panel: When information about the previous trial outcomes are not available, the Analyst learns a non-adaptive strategy in which the design values are sparsely distributed rather than focused where they are needed. in Figure 10 shows a clear benefit of the optimised designs across experiments. In summary, the results of Study 2 extend Study 1 by showing that the analyst can choose experimental designs and accurately infer multiple parameters at the same time. In addition, it can do so when each experiment returns a sequence of data (fixation locations and movement times) and not just point values. Finally, estimating parameters with selected designs outperforms doing so with random designs. ## 7. Study 3: Inference of Preferences In Study 3, we test to see whether Analyst can discover user model preferences. Preferences are important to HCI as they capture personal and sometimes discretionary aspects of how a person wants to interact with a computer. Preferences include preferences for music genre and/or movie directors, for example, but here we focus on speed-accuracy trade-offs. As we have said, the speed-accuracy trade-off is a significant determinant of how people choose to interact with computers (Sandel et al., 2018) and is readily detectable in behaviour. Figure 8. Study 1 results. Panel (a), The number of steps required by the user model to reach the target against increasing movement noise values. Panel (b), The Analyst makes better parameter estimations the more it makes experiments. The analyst also performs better across stages compared to a baseline that samples designs uniformly. In panel (a), the shaded area represents \(\pm\) 1 standard error over an evaluation batch with 1000 user episodes. In panel (b), the shaded area represents \(\pm\) 1 standard error over an evaluation batch with 1000 models sampled from the prior. Figure 9. Study 1 results. Panels a-c illustrate the accuracy of the movement noise estimations when the user model has low (panel a), medium (panel b) or high (panel c) perceptual noise. Below the scatter plots, the corresponding design selections (target distance from origin and width) by the analyst are illustrated as histograms (panels d-i). The speed-accuracy trade-off determines the error rate which is a key property of interaction. The faster users attempt to perform a pointing task then the more errors that they make. 
In Study 3 we tested the extent to which Analyst was capable of estimating these preference parameters. To do so, we simulated a task in which the user model is able to end an episode by pressing a key on a keyboard. The observation space is extended to include information about whether the user has pressed a key or not. ### Results As with the previous studies, we first test whether the parameters, particularly the preference parameter, make an identifiable difference to the behaviour of the user model. In this case a useful metric for measuring the response of the user agent's behaviour to the parameters is the error rate, which describes how often the user model ends the episode when the gaze is not in the true target, or the maximum number of gazes is reached without a keypress by the user. Figure 11, panels (a) to (d), show the error rate for four of the model parameters. Panel (c) shows how the error rate is larger when the preference is biased towards speed, and approaches zero when the preference is biased towards accuracy. The lower row, panels (e)-(h) of Figure 11, shows the capability of the analyst to simultaneously infer all four parameters. As the Study 3 task is made more difficult by the increased number of parameters, the error of the estimates is greater when compared to Study 2. However, there is a good correlation between ground truth and parameter estimate for all four parameters. Having said that, the worst of the four estimates is for the preference parameter, which has a noticeable skew. Figure 12 shows the Analyst's distribution of experimental design choices in two histograms. It always picks the smallest target (panel b) but shows a broader distribution of choice of distances. It is instructive to compare this distribution to that for Study 2. In Study 2, a good strategy for the analyst was to select a small target, as far away from the fovea as possible. In contrast, less extreme distances are selected in Study 3. One reason for this difference may be that, because of the speed-accuracy trade-off, there is a risk that the smallest, most distant targets are not selected at all and therefore there is little evidence gathered to inform the speed-accuracy trade-off preference parameter. Therefore, in Study 3 it makes sense to select experimental designs in which the user model stands some chance of high accuracy. Lastly, the Analyst's optimal experiments outperform an analysis conducted with data from random experimental designs (Figure 12 panel (c)). Figure 10. Results of Study 2. Panel (a) shows the effect of the user model's perceptual noise parameter on number of steps taken. Panels (b) and (c) show the distribution of the experimental designs chosen by the Analyst. Panel (d) shows the improvement in parameter estimates with Analyst designed experiments versus with random experiments. The panels on the lower row show the accuracy of the parameter estimations for perceptual noise (e), oculomotor noise (f), and the movement time intercept parameter (g). In panel (a), the shaded area represents \(\pm\) 1 standard error over an evaluation batch with 1000 user episodes. In panel (d), the shaded area represents \(\pm\) 1 standard error over an evaluation batch with 1000 models sampled from the prior. In summary, Study 3 demonstrates that Analyst can simultaneously estimate multiple parameters, and importantly, it can estimate both capacity parameters (e.g. oculomotor and perceptual noise) and preference parameters (e.g. speed-accuracy trade-off) - from
the same experiments. While there is a noticeable decrease in the quality of the parameter estimates with the increase in the number of parameters under consideration, compelling correlations are still generated. As with the other two studies the analyst selected experimental designs outperformed the random designs. ## 8. Discussion We have explored the properties of a new method of user model parameter estimation and shown that, for three progressively complex pointing tasks, it can make rapid estimates of parameter values for individual simulated users. It does so by learning a near-optimal policy for choosing the experiments that are most likely to generate informative data. All three of the studies showed that the learned policies lead to more accurate parameter estimates than random experiments. Figure 11. Results of Study 3. The upper row illustrates the adaptation of the user model to parameters for the perceptual noise (a), oculomotor noise (b), speed-accuracy preference (c) and movement time intercept (d). The lower row illustrates the corresponding accuracy of the parameter estimates. In panels (a) - (d)T, the shaded area represents \(\pm\) 1 standard error over an evaluation batch with 1000 user episodes. Figure 12. Results for Study 3. Panels (a) and (b) show the histograms of Analyst selected experimental designs. Panel (c) shows the performance gain that follows from inferring parameters on the basis of the designs selected by the analyst compared to random designs. In panel (c), the shaded area represents \(\pm\) 1 standard error over an evaluation batch with 1000 models sampled from the prior. Parameters could be estimated both with data that summarised a sequence of user submovements (Study 1), as well as with data that included multiple steps in a single experimental observation (Studies 2 and 3) - two important types of user data found in HCI. Further, in Study 1, 2 and 3, each of four successive designed experiment led to an improvement in the parameter estimates and a concomitant reduction in the prediction error. This improvement was more rapid for designed experiments than for random experiments. The estimated parameter values were highly correlated with the true value after only four experiments. While we have demonstrated the viability of the amortised approach for pointing tasks, more work is needed to verify that it generalises to other HCI tasks. While we believe that the approach is structured so as to provide design of experiments and inference of parameters for any complex simulation-based user models, further empirical work is needed. If generalisation is possible then, while very high computational costs are paid during training (between 3 and 9 hours of wall time on a laptop for the simulations reported above), the result is a very fast (milliseconds), data-lean (four observations) deployment. In other words, the approach trades the high cost of training an ensemble model and training the analyst, for a subsequent reduction in the time required to infer user model parameters. Post-training, not only can the best next experiment be determined in milliseconds but in addition the analyst can minimise the total number of experiments required to determine best fitting parameter values. However, further empirical work is required to see whether this promise is delivered for a broad range of HCI tasks. 
Additional properties of our approach include that it provides adaptive, non-myopic designs (See the experiments in Section 4) and that it can be trained with arbitrary non-differentiable simulators with intractable likelihoods. As we see in the next Section, these properties compare well with other work in this area. #### 8.0.1. Comparison to related approaches The work reported above was inspired and informed by recent work in both HCI and machine learning. Of particular importance was work on ensemble user models (Zhou et al., 2017; Wang et al., 2018) and work on reinforcement learning based experimental design and inference (Beng et al., 2018). Also of importance is the work on Bayesian approaches to optimal experimental design (Beng et al., 2018; Wang et al., 2018; Wang et al., 2018; Wang et al., 2018; Wang et al., 2018) and work on relation nets (Wang et al., 2018) which was crucial to tractable reinforcement learning. Bayesian experimental design has the clear potential advantage of mathematical rigour, as well as estimates of the posterior distribution of parameter values, rather than point estimates. The Bayesian framework, reviewed briefly above, tackles the problem of experimental design by optimising the expected information gain e.g. how much more certain we will become about the values of the parameters we are fitting. EIG is equivalent to maximising the Mutual Information (MI) between the parameters and data when performing the experiment design. However, EIG does not account for inaccuracies resulting from the amortisation in the parameter estimation. Thus, it is not clear that optimising EIG leads to good designs in our setting. Instead of optimising EIG, in the RL approach it is possible to directly optimise the designs for amortised parameter estimation through a joint objective. Also, estimating the MI in the conventional BOED framework is doubly intractable (Zhou et al., 2017; Wang et al., 2018). Due to this computational complexity, estimating Bayes optimal designs is not feasible when doing experiments. This has led to approaches that amortise the cost to a pre-trained deep network by using a tractable approximation to the MI (Beng et al., 2018; Wang et al., 2018; Wang et al., 2018). Tractable computation of the MI objective for implicit models is further complicated by the fact that the likelihood function of the parameter is not known (Ivanova et al., 2018; Wang et al., 2018). Ivanova et al. (2018) tackle this problem by introducing a separate critic network. Their approach however requires the use of a differentiable simulator. Using RL alleviates this need, as the score function gradient estimator directly calculates gradients from the specified reward. ### Future work The results reported above represent a preliminary investigation of the potential of RL-based experimental design and inference in HCI. They suggest a number of future studies. Perhaps most significantly, the effectiveness of the method must be tested with human participants. While the simulated participants used to test the approach above are sampled from the distribution of real users and previous studies have demonstrated the human-like behaviour of these simulations (Kumar et al., 2018), further work is needed. While human studies are beyond the scope of the current article, the software that we have built makes it very easy to deploy the Analyst learned policies in interactive software with an eye tracker. 
This software would - without lag - choose the best experimental design (target distance and width), observe a user's saccades and fixations, update model parameters, update the observation history and repeat. The approach must also be tested on a broader range of tasks so as to empirically establish its generality. While we have formalised the method in terms that we believe to be fully general, in this paper, we have only tested it on abstract problems and pointing tasks. A broader range of HCI-related tasks would include mean-search tasks (Kumar et al., 2018), decision making tasks (Kumar et al., 2018), and biomechanical control tasks (Wang et al., 2018), to name but three. In the future, user models with rapid parameter estimation could help enhance interaction for each individual user. Mouse gain functions, text completion, icon sizes, colour pallet, etc. are almost always never tuned to an individual's preferences and capacities. Instead, interfaces provide settings by which the interaction can be 'adapted' manually requiring the user to actively choose configurations in accordance with their beliefs about what is good for them. We believe that this process could be complemented with automatic personalisation methods based upon the RL-based Analyst reported above. In addition, further work is needed on picking good hyperparameters for the Analyst. As the reported studies became more complex, training of the neural networks became more challenging and some amount of hyperparameter tuning was involved in generating the results reported above. Finding the very best possible performance with extensive hyperparameter search was not within the scope of the current article. Therefore, the performance for both optimised Analyst and Analyst using random designs could be improved. Lastly, further work is needed to explore the implications of RL-based amortised parameter estimation for a range of HCI-related problems. A/B testing, for example, can be enhanced by first fitting a user model and then selecting an interaction design accordingly. With amortised methods, it may be possible to do this in
2302.05644
Partial k-means to avoid outliers, mathematical programming formulations, complexity results
A well-known bottleneck of Min-Sum-of-Square Clustering (MSSC, the celebrated $k$-means problem) is to tackle the presence of outliers. In this paper, we propose a Partial clustering variant termed PMSSC which considers a fixed number of outliers to remove. We solve PMSSC by Integer Programming formulations and complexity results extending the ones from MSSC are studied. PMSSC is NP-hard in Euclidean space when the dimension or the number of clusters is greater than $2$. Finally, one-dimensional cases are studied: Unweighted PMSSC is polynomial in that case and solved with a dynamic programming algorithm, extending the optimality property of MSSC with interval clustering. This result holds also for unweighted $k$-medoids with outliers. A weaker optimality property holds for weighted PMSSC, but NP-hardness or not remains an open question in dimension one.
Nicolas Dupin, Frank Nielsen
2023-02-11T10:13:15Z
http://arxiv.org/abs/2302.05644v3
# Partial k-means to avoid outliers, mathematical programming formulations, complexity results ###### Abstract A well-known bottleneck of Min-Sum-of-Square Clustering (MSSC, the celebrated \(k\)-means problem) is to tackle the presence of outliers. In this paper, we propose a Partial clustering variant termed PMSSC which considers a fixed number of outliers to remove. We solve PMSSC by Integer Programming formulations, and complexity results extending the ones from MSSC are studied. PMSSC is NP-hard in Euclidean space when the dimension or the number of clusters is greater than 2. Finally, one-dimensional cases are studied: Unweighted PMSSC is polynomial in that case and solved with a dynamic programming algorithm, extending the optimality property of MSSC with interval clustering. This result holds also for unweighted \(k\)-medoids with outliers. A weaker optimality property holds for weighted PMSSC, but NP-hardness or not remains an open question in dimension one. Keywords: Optimization; Min-Sum-of-Square; Clustering; \(K\)-means; outliers; Integer Programming; Dynamic Programming; Complexity ## 1 Introduction The \(K\)-means clustering of \(n\) \(d\)-dimensional points, also called Min Sum of Square Clustering (MSSC) in the operations research community, is one of the most famous unsupervised learning problems, and has been extensively studied in the literature. MSSC is known to be NP-hard [4] when \(d>1\) and \(k>1\). Special cases of MSSC are also NP-hard in a general Euclidean space: the problem is still NP-hard when the number of clusters is 2 [1], or in dimension 2 [15]. The case \(K=1\) is trivially polynomial. The 1-dimensional (1D) case is polynomially solvable with a Dynamic Programming (DP) algorithm [19], with a time complexity in \(O(KN^{2})\) where \(N\) and \(K\) are respectively the number of points and clusters. This last algorithm was improved in [9], for a complexity in \(O(KN)\) time using memory space in \(O(N)\). A famous iterative heuristic to solve MSSC was reported by Lloyd in [14], and a local search heuristic is proposed in [12]. Many improvements have been made since then: see [11] for a review. A famous drawback of MSSC clustering is that it is not robust to noise or outliers [11]. The \(K\)-medoid problem, the discrete variant of the \(K\)-means problem, addresses this weakness of MSSC by computing the cluster costs with a cluster representative chosen amongst the input points rather than a computed centroid. Although \(K\)-medoids is more robust to noise and outliers, it induces more time-consuming computations than MSSC [5, 10]. In this paper, we define Partial MSSC (PMSSC for short) by considering a fixed number of outliers to remove, as in partial versions of facility location problems like \(K\)-centers [7] and \(K\)-median [3], and we study extensions of exact algorithms of MSSC and report complexity results. Note that the \(K\)-means problem with outliers, studied in [13, 20], has some similarities with PMSSC; we make the difference with PMSSC precise below. To our knowledge, PMSSC is studied for the first time in this paper. The remainder of this paper is structured as follows. In Section 2, we introduce the notation and formally describe the problem. In Section 3, Integer Programming formulations are proposed. In Section 4, we give first complexity results and analyze optimality properties. In Section 5, a polynomial DP algorithm is presented for unweighted MSSC in 1D.
In Section 6, relations with the state of the art and extensions of these results are discussed. In Section 7, our contributions are summarized, also discussing future directions of research. To ease the readability, the proofs are gathered in an Appendix. ## 2 Problem statement and notation Let \(E=\{x_{1},\ldots,x_{N}\}\) be a set of \(N\) distinct elements of \(\mathbb{R}^{L}\), with \(L\in\mathbb{N}^{*}\). We note discrete intervals \(\llbracket a,b\rrbracket=[a,b]\cap\mathbb{Z}\), so that we can use the notation of discrete index sets and write \(E=\{x_{i}\}_{i\in\llbracket 1,N\rrbracket}\). We define \(\Pi_{K}(E)\) as the set of all the possible partitions of \(E\) into \(K\) subsets: \[\Pi_{K}(E)=\left\{P\subset\mathcal{P}(E)\,\middle|\,\forall p\neq p^{\prime}\in P,\ p\cap p^{\prime}=\emptyset\ \text{and}\ \bigcup_{p\in P}p=E\ \text{and}\ \text{card}(P)=K\right\}\] MSSC is a special case of \(K\)-sum clustering problems. Defining a cost function \(f\) for each subset of \(E\) to measure the dissimilarity, \(K\)-sum clustering problems are combinatorial optimization problems indexed by \(\Pi_{K}(E)\), minimizing the sum of the measure \(f\) over all the \(K\) clusters partitioning \(E\): \[\min_{\pi\in\Pi_{K}(E)}\sum_{P\in\pi}f(P) \tag{1}\] Unweighted MSSC minimizes the sum, over all the \(K\) clusters, of the _squared distances_ from the points of each cluster to its centroid. Denoting with \(d\) the Euclidean distance in \(\mathbb{R}^{L}\): \[\forall P\subset E,\ \ f_{\text{UMSSC}}(P)=\min_{c\in\mathbb{R}^{L}}\sum_{x\in P}d(x,c)^{2}=\sum_{x\in P}d\left(x,\frac{1}{|P|}\sum_{y\in P}y\right)^{2} \tag{2}\] The last equality can be proven using convexity and first-order optimality conditions.
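Before turning to the weighted version, note that formula (2) and objective (1) translate directly into code. The following NumPy sketch (helper names are ours, not from the paper) evaluates the unweighted MSSC objective of a given partition:

```python
import numpy as np

def unweighted_cluster_cost(P):
    """f_UMSSC(P) of formula (2): sum of squared distances to the centroid of P."""
    P = np.asarray(P, dtype=float)
    centroid = P.mean(axis=0)
    return float(((P - centroid) ** 2).sum())

def mssc_objective(partition):
    """Objective (1) for a partition given as a list of K clusters (lists of points)."""
    return sum(unweighted_cluster_cost(P) for P in partition)

# Example with N = 5 points of R^2 split into K = 2 clusters.
E = [(0.0, 0.0), (0.1, 0.2), (0.0, 0.3), (2.0, 2.0), (2.1, 1.9)]
print(mssc_objective([E[:3], E[3:]]))
```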
The crucial difference with our assumptions is that their partial version concerns a discrete clustering variant with a discrete set of possible centroids, like \(K\)-medoids, and not a partial version of MSSC where the centroid is continuous. Such a problem will be denoted the "partial \(K\)-medoids problem"; it is defined with (5), using the following \(f_{medoids}\) measure instead of \(f_{MSSC}\):

\[f_{medoids}(P)=\min_{c\in P}\sum_{x\in P}d(x,c)^{2} \tag{6}\]

## 3 Mathematical Programming Formulations

Partial MSSC can be formulated with Integer Programming formulations, extending the ones from MSSC [2, 17, 18]. For \(n\in\llbracket 1;N\rrbracket\) and \(k\in\llbracket 1;K\rrbracket\), we use binary variables \(z_{n,k}\in\{0,1\}\), defined with \(z_{n,k}=1\) if and only if point \(x_{n}\) is assigned to cluster \(k\in\llbracket 1,K\rrbracket\). Using definition (3), the weighted centroid of cluster \(k\) is defined as a continuous variable \(c_{k}\in\mathbb{R}_{+}^{L}\). This gives rise to a first quadratic formulation:

\[\min_{z_{n,k},c_{k}}\sum_{k=1}^{K}\sum_{n=1}^{N}w_{n}\,d(x_{n},c_{k})^{2}z_{n,k} \tag{7}\]
\[s.t:\qquad\sum_{n^{\prime}=1}^{N}\sum_{k=1}^{K}z_{n^{\prime},k}\geqslant N-M \tag{8}\]

Objective function (7) also encodes (1) and (3), with the \(z_{n,k}\) encoding the subsets \(P\in\pi\). If \(M=0\), constraint (8) is equivalent to \(\sum_{k=1}^{K}z_{n^{\prime},k}=1\) for each index \(n^{\prime}\): point \(x_{n^{\prime}}\) shall be assigned to exactly one cluster. Constraint (8) aggregates the requirement that at most \(M\) points are unassigned, i.e. \(\sum_{k=1}^{K}z_{n^{\prime},k}=0\) for these \(x_{n^{\prime}}\), while the other points fulfill \(\sum_{k=1}^{K}z_{n^{\prime\prime},k}=1\). As for standard MSSC, this quadratic formulation is not directly solvable by mathematical programming solvers like Cplex and Gurobi because of the non-convexity of the objective function. A compact reformulation, as for standard MSSC, allows such a straightforward resolution. We use additional continuous variables \(s_{n,k}\geqslant 0\), equal to the squared distance from point \(x_{n}\) to its cluster centroid \(c_{k}\) if \(z_{n,k}=1\) and to \(0\) otherwise. This induces the following quadratic formulation with convex quadratic constraints, using a big-M constant that can be set to \(D=\max_{i,i^{\prime}}d(x_{i},x_{i^{\prime}})^{2}\):

\[\min_{z_{n,k},s_{n,k},c_{k}}\sum_{k=1}^{K}\sum_{n=1}^{N}w_{n}s_{n,k} \tag{9}\]
\[s.t:\qquad\sum_{n^{\prime}=1}^{N}\sum_{k=1}^{K}z_{n^{\prime},k}\geqslant N-M \tag{10}\]
\[s_{n,k}\geqslant d(x_{n},c_{k})^{2}-D(1-z_{n,k})\qquad\forall n,k \tag{11}\]

The previous formulations have a common weakness: they induce symmetric solutions under permutations of the clusters, which makes Branch & Bound tree search inefficient. As in [2] for standard MSSC, an extended reformulation can improve this known bottleneck. Enumerating each subset of \(E\), \(p\in\mathcal{P}=2^{E}\), \(c_{p}\) denotes the clustering cost of \(p\) given by formula (4), and we define a binary variable \(z_{p}\in\{0,1\}\) with \(z_{p}=1\) if and only if subset \(p\) is chosen as a cluster. We also define binaries \(y_{n}\in\{0,1\}\) with \(y_{n}=1\) if and only if point \(x_{n}\) is counted as an outlier and not covered.

\[\text{PMSSC}=\min_{z,y}\sum_{p\in\mathcal{P}}c_{p}z_{p} \tag{12}\]
\[s.t:\forall n,\ \sum_{p\in\mathcal{P}}1\!\!1_{n\in p}z_{p}\geqslant 1-y_{n} \tag{13}\]
\[\sum_{n}y_{n}\leqslant M \tag{14}\]
\[\sum_{p\in\mathcal{P}}z_{p}\leqslant K \tag{15}\]

Objective function (12) is linear in the extended reformulation. A brief solver sketch of the compact formulation (9)-(11) is given below; the roles of constraints (13)-(15) are detailed next.
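To illustrate how the compact convex reformulation (9)-(11) can be handed to an off-the-shelf solver, here is a hedged sketch using gurobipy (not part of the paper; it assumes a Gurobi installation, the model and variable names are illustrative, and an analogous model could be written with Cplex's Python API).

```python
import numpy as np
import gurobipy as gp
from gurobipy import GRB

def solve_pmssc_compact(X, w, K, M):
    """Compact convex reformulation (9)-(11) of PMSSC; returns the optimal objective value."""
    X = np.asarray(X, dtype=float)
    N, L = X.shape
    # Big-M constant D: maximum pairwise squared distance
    D = max(float(np.sum((X[i] - X[j]) ** 2)) for i in range(N) for j in range(N))

    m = gp.Model("pmssc_compact")
    z = m.addVars(N, K, vtype=GRB.BINARY, name="z")
    s = m.addVars(N, K, lb=0.0, name="s")
    c = m.addVars(K, L, lb=-GRB.INFINITY, name="c")

    # Objective (9): weighted sum of squared distances of assigned points
    m.setObjective(gp.quicksum(w[n] * s[n, k] for n in range(N) for k in range(K)), GRB.MINIMIZE)
    # Constraint (10): at most M points are left unassigned
    m.addConstr(gp.quicksum(z[n, k] for n in range(N) for k in range(K)) >= N - M)
    # Constraints (11): s_{n,k} >= ||x_n - c_k||^2 - D (1 - z_{n,k})  (convex quadratic)
    for n in range(N):
        for k in range(K):
            sq_dist = gp.quicksum((X[n, l] - c[k, l]) * (X[n, l] - c[k, l]) for l in range(L))
            m.addConstr(sq_dist - D * (1 - z[n, k]) <= s[n, k])

    m.optimize()
    return m.objVal
```

Symmetric solutions under cluster permutations remain an issue for this compact model, which is precisely the motivation for the extended formulation (12)-(15).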
Constraint (14) bounds the maximal budget of uncovered points. Constraint (15) bounds the maximal number of clusters; having more clusters can only decrease the objective function. Constraints (13) express that either a point \(x_{n}\) is uncovered, when \(y_{n}=1\), and there is no need to select a subset containing \(x_{n}\), or at least one selected subset contains \(x_{n}\). Note that \(1\!\!1_{n\in p}\) equals one if and only if subset \(p\) contains point \(x_{n}\). These constraints are written with inequalities; equalities would also be valid and yield the same optimal solutions, but inequalities are preferred for numerical stability with a Column Generation (CG) algorithm. The variables \(z_{p}\), contrary to the variables \(y_{n}\), are exponentially many and cannot all be enumerated. A CG algorithm generates only a subset of the \(z_{p}\) variables to compute the continuous (LP) relaxation of (12)-(15). We consider the Restricted Master Problem (RMP) of the LP relaxation, restricted to a subset \(\mathcal{P}^{\prime}\subset\mathcal{P}\) of the \(z_{p}\) variables, so that dual variables are defined for each constraint:

\[\begin{array}{ll}\text{RMP}(\mathcal{P}^{\prime})=\min_{z,y\geqslant 0}&\sum_{p\in\mathcal{P}^{\prime}}c_{p}z_{p}\\ s.t:\ \forall n,&y_{n}+\sum_{p\in\mathcal{P}^{\prime}}1\!\!1_{n\in p}z_{p}\geqslant 1\ \ (\pi_{n})\\ &-\sum_{n}y_{n}\geqslant-M\\ &-\sum_{p\in\mathcal{P}^{\prime}}z_{p}\geqslant-K\ \ (\sigma)\end{array}\]

where \(\pi_{n}\) and \(\sigma\) denote the dual variables of the covering constraints and of the cardinality constraint. A subset \(p\in\mathcal{P}\), i.e. a new column \(z_{p}\), is added in the RMP if \(-\sigma+\sum_{n}1\!\!1_{n\in p}\pi_{n}>c_{p}\). This defines the CG sub-problems:

\[\mathrm{SP}=\min_{p\in\mathcal{P}}c_{p}-\sum_{n}1\!\!1_{n\in p}\pi_{n} \tag{18}\]

The CG algorithm iterates, adding subsets \(p\) such that \(c_{p}-\sum_{n}1\!\!1_{n\in p}\pi_{n}<-\sigma\). Once \(\mathrm{SP}\geqslant-\sigma\), the RMP is optimal for the full extended formulation. As constraint (14) is always present in the RMP, partial clustering induces the same pricing problem as in [2]. The primal variables \(y_{n}\) influence the numerical values of the RMP, and thus the values of the dual variables \(\pi_{n},\sigma\) that are given to the pricing problem, but not the nature of the sub-problems. The sub-problems SP can be solved with Cplex or Gurobi, using the same reformulation technique as in (9)-(11). Defining binaries \(z_{n}\in\{0,1\}\) such that \(z_{n}=1\) if and only if point \(x_{n}\) is assigned to the current cluster, and denoting by \(p\) the subset encoded by \(z\), sub-problem SP is written as:

\[\mathrm{SP}=\min_{z_{n}\in\{0,1\}}c_{p}-\sum_{n}\pi_{n}z_{n} \tag{19}\]

Considering continuous variables \(c\in\mathbb{R}^{L}\) for the centroid of the optimal cluster, and \(s_{n}\geqslant 0\) the squared distance from point \(x_{n}\) to centroid \(c\) if \(z_{n}=1\) and \(0\) otherwise, this gives rise to the following convex quadratic formulation:

\[\mathrm{SP}=\min_{z_{n},s_{n},c}\ \sum_{n=1}^{N}s_{n}-\sum_{n}\pi_{n}z_{n} \tag{20}\]
\[s.t:\ \forall n,\ \ s_{n}\geqslant d(x_{n},c)^{2}-D(1-z_{n})\]

The CG algorithm can thus be implemented using Cplex or Gurobi for the LP computations of the RMP and for the computations of SP. This gives a lower bound on the integer optimum. Integer optimality can be obtained using Branch & Price.

## 4 First complexity results, interval clustering properties

MSSC polynomially reduces to PMSSC: if any instance of PMSSC (or a subset of instances) is polynomially solvable, this is also the case for the corresponding instances of MSSC, considering the same points, the value \(M=0\), and the same algorithm.
Hence, the NP-hardness results from [1, 4, 15] hold for PMSSC:

Theorem 4.1: _The following NP-hardness results hold for PMSSC:_

* _PMSSC is NP-hard for general instances._
* _PMSSC is NP-hard in a general Euclidean space._
* _PMSSC is NP-hard for instances with a fixed value of_ \(K\geqslant 2\)_._
* _PMSSC is NP-hard for instances with a fixed value of_ \(L\geqslant 2\)_._

After Theorem 4.1, it remains to study the cases \(K=1\) and \(L=1\), where MSSC is polynomial. In the remainder of this paper, we suppose that \(L=1\), i.e. we consider the 1D case. Without loss of generality in 1D, we consider \(d(x,y)=|x-y|\). We suppose that \(E=\{x_{1}<\cdots<x_{N}\}\), as a sorting procedure running in \(O(N\log N)\) time may be applied beforehand. A key element for the polynomial complexity of MSSC is the interval clustering property [16]:

Lemma 1: _Having \(L=1\) and \(M=0\), each global minimum of MSSC is only composed of clusters \(\mathcal{C}_{i,i^{\prime}}=\{x_{j}\}_{j\in\llbracket i,i^{\prime}\rrbracket}=\{x\in E\,|\,\exists j\in\llbracket i,i^{\prime}\rrbracket,\,x=x_{j}\}\)._

The question here is how to extend this property to PMSSC. Considering an optimal solution of PMSSC, its restriction to the non-outlier points is an optimal solution of MSSC on these points, and an interval clustering property holds:

Proposition 1: _Having \(L=1\), an optimal solution of PMSSC induces an optimal solution of MSSC after removing the outliers. In this subset of points, the optimality property of interval clustering holds._

Proposition 1 is weaker than Lemma 1: the selected points do not necessarily form an interval clustering with respect to the indexes of \(E\). This stronger property is false in general for weighted PMSSC; one can have optimal solutions with outliers to remove inside the natural interval cluster, as in the following example with \(M=1\), \(L=1\) and \(K=2\):

* \(x_{1}=1\), \(w_{1}=10\)
* \(x_{2}=2\), \(w_{2}=1000\)
* \(x_{3}=3\), \(w_{3}=1\)
* \(x_{4}=100\), \(w_{4}=100\)
* \(x_{5}=101\), \(w_{5}=1\)

The optimal PMSSC solution considers \(x_{2}\) as the outlier, with \(\{x_{1};x_{3}\}\) and \(\{x_{4};x_{5}\}\) as the two clusters. For \(K=1\), changing the example so that \(x_{4}=3.001\) and \(x_{5}=3.002\) also gives a counter-example with \(K=1\), with \(\{x_{1};x_{3};x_{4};x_{5}\}\) being the unique optimal solution. These counter-examples use a significant difference in the weights. For unweighted PMSSC, the interval property holds as in Lemma 1, with outliers (or holes) between the interval clusters:

Proposition 2: _Having \(L=1\), each global minimum of unweighted PMSSC is only composed of clusters \(\mathcal{C}_{i,i^{\prime}}=\{x_{j}\}_{j\in\llbracket i,i^{\prime}\rrbracket}\). In other words, the \(K\) clusters may be indexed \(\mathcal{C}_{i_{1},j_{1}},\ldots,\mathcal{C}_{i_{K},j_{K}}\) with \(1\leqslant i_{1}\leqslant j_{1}<i_{2}\leqslant j_{2}<\cdots<i_{K}\leqslant j_{K}\leqslant N\) and \(\sum_{k=1}^{K}(j_{k}-i_{k})\geqslant N-M-K\)._

As in [5], the efficient computation of cluster costs is a crucial element to obtain the polynomial complexity. Cluster costs can be computed from scratch, leading to a polynomial algorithm. Efficient cost computations use inductive relations for amortized computations in \(O(1)\) time, extending the relations in [19]. We define, for \(i,i^{\prime}\) such that \(1\leqslant i\leqslant i^{\prime}\leqslant N\):

* \(b_{i,i^{\prime}}=\sum_{k=i}^{i^{\prime}}\frac{w_{k}}{\sum_{l=i}^{i^{\prime}}w_{l}}x_{k}\) the weighted centroid of \(\mathcal{C}_{i,i^{\prime}}\).
* \(c_{i,i^{\prime}}=\sum_{j=i}^{i^{\prime}}w_{j}d(x_{j},b_{i,i^{\prime}})^{2}\) the weighted cost of cluster \(\mathcal{C}_{i,i^{\prime}}\).
* \(v_{i,i^{\prime}}=\sum_{j=i}^{i^{\prime}}w_{j}\) the total weight of cluster \(\mathcal{C}_{i,i^{\prime}}\).

Proposition 3: _The following induction relations hold, allowing \(b_{i,i^{\prime}}\) and \(v_{i,i^{\prime}}\) to be computed efficiently with amortized \(O(1)\) computations:_

\[v_{i,i^{\prime}+1}=w_{i^{\prime}+1}+v_{i,i^{\prime}},\qquad\forall 1\leqslant i\leqslant i^{\prime}<N \tag{21}\]
\[v_{i-1,i^{\prime}}=w_{i-1}+v_{i,i^{\prime}},\qquad\forall 1<i\leqslant i^{\prime}\leqslant N \tag{22}\]
\[b_{i,i^{\prime}+1}=\frac{w_{i^{\prime}+1}x_{i^{\prime}+1}+b_{i,i^{\prime}}v_{i,i^{\prime}}}{v_{i,i^{\prime}+1}},\qquad\forall 1\leqslant i\leqslant i^{\prime}<N \tag{23}\]
\[b_{i-1,i^{\prime}}=\frac{w_{i-1}x_{i-1}+b_{i,i^{\prime}}v_{i,i^{\prime}}}{v_{i-1,i^{\prime}}},\qquad\forall 1<i\leqslant i^{\prime}\leqslant N \tag{24}\]

_Cluster costs are then computable with amortized \(O(1)\) computations:_

\[c_{i,i^{\prime}+1}=c_{i,i^{\prime}}+w_{i^{\prime}+1}(x_{i^{\prime}+1}-b_{i,i^{\prime}+1})^{2}+v_{i,i^{\prime}}(b_{i,i^{\prime}+1}-b_{i,i^{\prime}})^{2} \tag{25}\]
\[c_{i-1,i^{\prime}}=c_{i,i^{\prime}}+w_{i-1}(x_{i-1}-b_{i-1,i^{\prime}})^{2}+v_{i,i^{\prime}}(b_{i-1,i^{\prime}}-b_{i,i^{\prime}})^{2} \tag{26}\]

_The trivial relations \(v_{i,i}=w_{i}\), \(b_{i,i}=x_{i}\) and \(c_{i,i}=0\) are the terminal cases._

Proposition 3 allows us to prove Propositions 4 and 5, which compute cluster costs efficiently. Proposition 3 is also a key element for the first complexity results with \(K=1\) and \(M\leqslant 1\) stated in Propositions 6 and 7.

Proposition 4: _Cluster costs \(c_{1,i}\) for all \(i\in\llbracket 1;N\rrbracket\) can be computed in \(O(N)\) time using \(O(N)\) memory space._

Proposition 5: _For each \(j\in\llbracket 1;N\rrbracket\), cluster costs \(c_{i,j}\) for all \(i\in\llbracket 1;j\rrbracket\) can be computed in \(O(j)\) time using \(O(j)\) memory space._

Proposition 6: _Having \(L=1\) and \(K=1\), unweighted PMSSC is solvable in \(O(N)\) time using \(O(1)\) additional memory space._

Proposition 7: _Having \(L=1\), \(M=1\) and \(K=1\), weighted PMSSC is solvable in \(O(N)\) time using \(O(N)\) memory space._

## 5 DP polynomial algorithm for 1D unweighted PMSSC

Proposition 2 allows us to design a DP algorithm for unweighted PMSSC, extending the one from [19]. We define \(O_{i,k,m}\) as the optimal cost of unweighted PMSSC with \(k\) clusters among the points \(\llbracket 1,i\rrbracket\) with a budget of \(m\) outliers, for all \(i\in\llbracket 1,N\rrbracket\), \(k\in\llbracket 1,K\rrbracket\) and \(m\in\llbracket 0,M\rrbracket\).
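Before stating the Bellman equations, the following short sketch (not part of the paper; names are illustrative) implements the amortized "extend to the right" updates of Proposition 3 and checks them against a direct evaluation of the weighted cluster cost.

```python
import numpy as np

def extend_right(b, v, c, x_new, w_new):
    """O(1) update of centroid b, total weight v and cost c when appending a point
    to the right of an interval cluster, following Eqs. (21), (23) and (25)."""
    v_new = v + w_new                                                 # Eq. (21)
    b_new = (w_new * x_new + b * v) / v_new                           # Eq. (23)
    c_new = c + w_new * (x_new - b_new) ** 2 + v * (b_new - b) ** 2   # Eq. (25)
    return b_new, v_new, c_new

# Consistency check against a direct evaluation of the cluster cost
x = np.array([0.0, 1.0, 4.0, 9.0])
w = np.array([1.0, 2.0, 1.0, 3.0])
b, v, c = x[0], w[0], 0.0
for j in range(1, len(x)):
    b, v, c = extend_right(b, v, c, x[j], w[j])
    centroid = np.average(x[:j + 1], weights=w[:j + 1])
    assert np.isclose(c, np.sum(w[:j + 1] * (x[:j + 1] - centroid) ** 2))
```

The symmetric "extend to the left" updates (22), (24) and (26) follow the same pattern.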
Proposition 8 sets induction relations allowing to compute all the \(O_{i,k,m}\), and in particular \(O_{N,K,M}\): Proposition 8 (Bellman equations): _Defining \(O_{i,k,m}\) as the optimal cost of unweighted MSSC among points \(\llbracket 1,i\rrbracket\) for all \(i\in\llbracket 1,N\rrbracket\), \(k\in\llbracket 1,K\rrbracket\) and \(m\in\llbracket 0,M\rrbracket\), we have the following induction relations_ \[\forall i\in\llbracket 1,N\rrbracket,\ \ O_{i,1,0}=c_{1,i} \tag{27}\] \[\forall m\in\llbracket 1,M\rrbracket,\,\forall k\in\llbracket 1,K\rrbracket,\, \forall i\in\llbracket 1,m+k\rrbracket,\,\,\,O_{i,k,m}=0 \tag{28}\] \[\forall m\in\llbracket 1,M\rrbracket,\,\forall i\in\llbracket m+2,N\rrbracket,\,\, \,O_{i,1,m}=\min\left(O_{i-1,1,m-1},c_{1+m,i}\right) \tag{29}\] \[\forall k\in\llbracket 2,K\rrbracket,\,\forall i\in\llbracket k+1,N\rrbracket,\, \,\,O_{i,k,0}=\min_{j\in\llbracket k,i\rrbracket}\left(O_{j-1,k-1,0}+c_{j,i}\right) \tag{30}\] \[\forall m\in\llbracket 1,M\rrbracket,\,\forall k\in\llbracket 2,K\rrbracket,\, \forall i\in\llbracket k+m+1,N\rrbracket,\] \[O_{i,k,m}=\min\left(O_{i-1,k,m-1},\min_{j\in\llbracket k+m,i\rrbracket}\left(O_ {j-1,k-1,m}+c_{j,i}\right)\right) \tag{31}\] Using Proposition 8, a recursive and memoized DP algorithm can be implemented to solve unweighted PMSSC in 1D. Algorithm 1 presents a sequential implementation, iterating with index \(i\) increasing. The complexity analysis of Algorithm 1 induces Theorem 2, unweighted PMSSC is polynomial in 1D. ``` sort \(E\) in the increasing order initialize \(O_{i,k,m}:=0\) for all \(m\in\llbracket 0;M\rrbracket,k\in\llbracket 1;K-1\rrbracket,i\in\llbracket k;N-K+k\rrbracket\) compute \(c_{1,i}\) for all \(i\in\llbracket 1;N-K+1\rrbracket\) and store in \(O_{i,1,0}:=c_{1,i}\) for\(i:=2\) to \(N\) compute and store \(c_{i^{\prime},i}\) for all \(i^{\prime}\in\llbracket 1;i\rrbracket\) compute \(O_{i,k,0}:=\min_{j\in\llbracket k,i\rrbracket}\left(O_{j-1,k-1,0}+c_{j,i}\right)\) for all \(k\in\llbracket 2;\min(K,i)\rrbracket\) for\(m=1\) to \(\min(M,i-2)\) compute \(O_{i,1,m}:=\min\left(O_{i-1,1,m-1},c_{1+m,i}\right)\) for\(k=2\) to \(\min(K,i-m)\) compute \(O_{i,k,m}:=\min\left(O_{i-1,k,m-1},\min_{j\in\llbracket k+m,i\rrbracket}\left(O_ {j-1,k-1,m}+c_{j,i}\right)\right)\) endfor endfor delete the stored \(c_{i^{\prime},i}\) for all \(i^{\prime}\in\llbracket 1;i\rrbracket\) endfor initialize \(\mathcal{P}=\emptyset\), \(\underline{i}=\overline{i}=N\), \(m=M\) for\(k=K\) to \(1\) with increment \(k\gets k-1\) compute \(\overline{i}:=\min\{i\in\llbracket\underline{i}-m;\underline{i}\rrbracket|O _{\underline{i},k,m}:=O_{\underline{i}-i,k,m-i+\underline{i}}\}\) \(m:=m-\overline{i}+\underline{i}\) compute and store \(c_{i^{\prime},\overline{i}}\) for all \(i^{\prime}\in\llbracket 1;\overline{i}\rrbracket\) find \(\underline{i}\in\llbracket 1;\overline{i}\rrbracket\) such that \(\underline{i}:=\arg\min_{j\in\llbracket k+m,i\rrbracket}\left(O_{j-1,k-1,m}+ c_{j,\overline{i}}\right)\) add \(\left[x_{\underline{i}},x_{\overline{i}}\right]\) in \(\mathcal{P}\) delete the stored \(c_{i^{\prime},\overline{i}}\) for all \(i^{\prime}\in\llbracket 1;\overline{i}\rrbracket\) endfor return\(O_{N,K,M}\) the optimal cost and the selected clusters \(\mathcal{P}\) ``` **Algorithm 1**DP algorithm for unweighted PMSSC in 1D Theorem 2: _Unweighted PMSSC is polynomially solvable in 1D, Algorithm 1 runs in \(O(KN^{2}(1+M))\) time and use \(O(KN(1+M))\) memory space to solve unweighted 1D instances of PMSSC._ ## 6 Discussions ### Relations with 
state of the art results for 1D instances Considering the 1D standard MSSC with \(M=0\), the complexity of Algorithm 1 is identical with the one from [19], it is even the same DP algorithm in this sub-case written using weights. The partial clustering extension implied using a \(M+1\) time bigger DP matrix, multiplying by \(M\) the time and space complexities. This had the same implication in the complexity for p-center problems [6, 7]. Seeing Algorithm 1 as an extension of [19], it is a perspective to analyze if some improvement techniques for time and space complexity are valid for PMSSC. As in [7], a question is to define a proper value of \(M\) in PMSSC. Algorithm 1 can give all the optimal \(O_{N,K,m}\) for \(m\leqslant M\), for a good trade-off decision. From a statistical standpoint, a given percentage of outliers may be considered. If we consider that \(1\%\) (resp \(5\%\)) of the original points may be outliers, it induces \(M=0,01\times N\) (resp \(M=0,05\times N\)). In these cases, we have \(M=O(N)\) and the asymptotic complexity of Algorithm 1 is in \(O(KN^{3})\) time and using \(O(KN^{2})\) memory space. If this remains polynomial, this cubic complexity becomes a bottleneck for large vales of \(N\) in practice. In [7], partial min-sum-k radii has exactly the same complexity when \(\alpha=2\), which is quite comparable to PMSSC but considering only the extreme points of clusters with squared distances. PMSSC is more precise with a weighted sum than considering only the extreme points, having equal complexities induce to prefer partial MSSC for the application discussed in [7]. A reason is that the \(O(N^{2})\) time computations of cluster costs are amortized in the DP algorithm. Partial min-sum-k radii has remaining advantages over PMSSC: cases \(\alpha=1\) are solvable in \(O(N\log N)\) time and the extension is more general than 1D instances and also valid in a planar Pareto Front (2D PF). It is a perspective to study PMSSC for 2D PFs, Figure 1 shows in that case that it makes sense to consider an extended interval optimality as in [5, 7]. ### Definition of weighted PMSSC Counter-example of Proposition 1 page 1 shows that considering both (diverse) weights and partial clustering as defined in (5) may not remove outliers, which was the motivating property. This has algorithmic consequences, Algorithm 1 and the optimality property are specific to unweighted cases. One can wonder the sense of weighted and partial clustering after such counter-example, and if alternative definitions exist. Weighted MSSC can be implied by an aggregation of very similar points, the weight to the aggregated point being the number of original points aggregated in this new one. This can speed-up heuristics for MSSC algorithms. In this case, one should consider a budget of outliers \(M\), which is weighted also by the points. Let \(m_{n}\) the contribution of a point \(x_{n}\) in the budget of outliers. (33) would be the definition of partial MSSC with budget instead of (5): \[X=\left\{E^{\prime}\subset E:\sum_{x_{n}\in E\setminus E^{\prime}}m_{n}x_{n}| \leqslant M\right\} \tag{32}\] \[\min_{x\in X}\ \min_{\pi\in\Pi_{K}(x)}\sum_{P\in\pi}f(P) \tag{33}\] (5) is a special case of (33) considering \(m_{n}=1\) for each \(n\in\llbracket 1;N\rrbracket\). Note that this extension is compatible with the developments of Section 3, replacing respectively constraints (8) and (14) by linear constraints (34) and (35). 
These new constraints are still linear, there are also compatible with the convex quadratic program and the CG algorithm for the extended formulation: \[\sum_{n^{\prime}=1}^{N}\left(1-m_{n^{\prime}}\sum_{k=1}^{K}z_{n^{ \prime},k}\right)\geqslant M \tag{34}\] \[\sum_{n=1}^{N}m_{n}y_{n}\leqslant M \tag{35}\] For the DP algorithm of section 5, we have to suppose \(m_{n}\in\mathbb{N}\). Note that it is the case with aggregation of points, fractional or decimal \(m_{n}\) are equivalent to this hypothesis, it is not restrictive. Bellman equations can be adapted in that goal: (28), (29) and (31) should be replaced by: \[\forall m\in\llbracket 1,M\rrbracket,\ \forall k\in\llbracket 1,K\rrbracket,\ \forall i,\ \ \sum_{j=1}^{i}m_{i}\leqslant m \Longrightarrow O_{i,k,m}=0 \tag{36}\] \[\forall m\in\llbracket 1,M\rrbracket,\ \forall i,\ \ m_{i}>m\Longrightarrow O_{i,1,m}=c_{ \alpha_{m},i} \tag{37}\] \[\forall m\in\llbracket 1,M\rrbracket,\ \forall i,\ \ m_{i}\leqslant m \Longrightarrow O_{i,1,m}=\min\left(O_{i-1,1,m-m_{i}},c_{\alpha_{m},i}\right) \tag{38}\] where \(\alpha_{m}\) is the minimal index such that \(\sum_{j=1}^{\alpha_{m}}m_{j}>m\). \[m_{i}\leqslant m\Longrightarrow O_{i,k,m}=\min\left(O_{i-1,k,m-m_{i}},\min_{j\in \llbracket 1,i\rrbracket}\left(O_{j-1,k-1,m}+c_{j,i}\right)\right) \tag{39}\] \[m_{i}>m\Longrightarrow O_{i,k,m}=\min_{j\in\llbracket 1,i\rrbracket}\left(O_{j-1,k- 1,m}+c_{j,i}\right) \tag{40}\] This does not change the complexity of the DP algorithm. However, we do not have necessarily the property \(M<N\) anymore. In this case, DP algorithm in 1D is pseudo-polynomial. ### From exact 1D DP to DP heuristics? If hypotheses \(L=1\) and unweighted PMSSC are restrictive, Algorithm 1 can be used in a DP heuristic with more general hypotheses. In dimensions \(L\geqslant 2\), a projection like Johnson-Lindenstrauss or linear regression in 1D, as in [10], reduces heuristically the original problem, solving it with Algorithm 1 provides a heuristic clustering solution by re-computing the cost in the original space. This may be efficient for 2D PFs, extending results from [10]. Algorithm 1 can be used with weights. For the cost computations, Propositions 4 and 5 make no difference in complexity. Algorithm 1 is not necessarily optimal in 1D in the unweighted case, it gives the best solution with interval clustering, and no outliers inside clusters. It is a primal heuristic, it furnishes feasible solutions. One can refine this heuristic considering also the possibility of having at most one outlier inside a cluster. Let \(c_{i,i^{\prime}}^{(0)}\) be the cost of cluster \(x_{i},\ldots,x_{i^{\prime}}\) as previously and also \(c_{i,i^{\prime}}^{(1)}\) the best cost of clustering \(x_{i},\ldots,x_{i^{\prime}}\) with one outlier inside that can be computed as in Proposition 7. 
The only adaptation of Bellman equations that would be required is to replace (29, (31) by: \[\forall m\in\llbracket 1,M\rrbracket,\,\forall i\in\llbracket m+2,N \rrbracket,\,\,O_{i,1,m}=\min\left(O_{i-1,1,m-1},c_{1+m,i}^{(0)},c_{1+m,i}^{(1 )}\right) \tag{41}\] \[\forall m\in\llbracket 1,M\rrbracket,\,\forall k\in\llbracket 2,K\rrbracket,\, \forall i\in\llbracket k+m+1,N\rrbracket,\] \[O_{i,k,m}=\min\left(O_{i-1,k,m-1},\min_{j\in\llbracket k+m,i\rrbracket,l\in \{0,1\}}\left(O_{j-1,k-1,m-l}+c_{j,i}^{(l)}\right)\right) \tag{42}\] Note that if case \(L=1\) and \(K=1\) is proven polynomial, one may compute in polynomial time \(c_{j,i}^{(m)}\) values of optimal clustering with \(m\) outliers with points indexed in \(\llbracket j,i\rrbracket\) and solve weighted PMSSC in 1D with similar Bellman equations. This is still an open question after this study. ### Extension to partial \(K\)-medoids In this section, we consider the partial \(K\)-medoids problem with \(M\) outliers defined by (5) and (6), as in [13, 20]. To our knowledge, the 1D sub-case was not studied, a minor adaptation of our results and proofs allows to prove this sub-case is polynomially solvable. Indeed, Lemma 1 holds with \(K\)-medoids as proven in [5]. Propositions 4 and 5 have their equivalent in [8], complexity of such operations being in \(O(N^{2})\) time instead of \(O(N)\) for MSSC. Propositions 1 and 2 still hold with the same proof for \(K\)-medoids. Proposition 8 and Algorithm 1 are still valid with the same proofs, the only difference being the different computation of cluster costs. In Theorem 4.1 this only changes the time complexity: computing the cluster costs \(c_{i,i^{\prime}}\) is in \(O(N^{3})\) time instead of \(O(N^{2})\), it is not bounded by the \(O(KN^{2}(1+M))\) time to compute the DP matrix. This results in the theorem: Theorem 6.1: _Unweighted partial \(K\)-medoids problem with \(M\) outliers is polynomially solvable in 1D, 1D instances are solvable in \(O(N^{3}+KN^{2}(1+M))\) time and using \(O(KN(1+M))\) memory space._ ## 7 Conclusions and perspectives To handle the problem of MSSC clusters with outliers, we introduced in this paper partial clustering variants for unweighted and weighted MSSC. This problem differs from the "robust \(K\)-means problem" (also noted "\(K\)-means problem with outliers"), which consider discrete and enumerated centroids unlike MSSC. Optimal solution of weighted PMSSC may differ from intuition of outliers: We discuss about this problem and present another similar variant. For these extensions of MSSC, mathematical programming formulations for solving exactly MSSC can be generalized. Solvers like Gurobi or Cplex can be used for a compact and an extended reformulation of the problem. NP-hardness results of these generalized MSSC problems holds. Unweighted PMSSC is polynomial in 1D and solved with a dynamic programming algorithm which relies on the optimality property of interval clustering. With small adaptations, "\(K\)-means problem with outliers" defined as the unweighted partial \(K\)-medoids problem with \(M\) outliers is also polynomial in 1D and solved with a similar algorithm. We show that a weaker optimality property holds for weighted PMSSC. The relations with similar state-of-the-art results and adaptation of the DP algorithm to DP heuristics are also discussed. This work opens perspectives to solve this new PMSSC problem. The NP-hardness complexity of weighted PMSSC for 1D instances is still an open question. 
Another perspective is to extend the 1D polynomial DP algorithms for PMSSC to 2D PFs, as in [5, 7]. Approximation results may also be studied for PMSSC, trying to generalize results from [13, 20]. Using only quick and efficient heuristics without any guarantee would be sufficient for an application to evolutionary algorithms to detect isolated points in PFs, as in [7]. Adapting local search heuristics for PMSSC is another perspective [10]. If \(K\)-medoids variants with or without outliers are used to obtain clustering that is more robust to noise and outliers, the use of PMSSC is promising to retain this property without the slower cluster cost calculations of \(K\)-medoids. Finally, using PMSSC as a heuristic for \(K\)-medoids is also a promising avenue for future research.
2306.06862
Saltation Matrices: The Essential Tool for Linearizing Hybrid Dynamical Systems
Hybrid dynamical systems, i.e. systems that have both continuous and discrete states, are ubiquitous in engineering, but are difficult to work with due to their discontinuous transitions. For example, a robot leg is able to exert very little control effort while it is in the air compared to when it is on the ground. When the leg hits the ground, the penetrating velocity instantaneously collapses to zero. These instantaneous changes in dynamics and discontinuities (or jumps) in state make standard smooth tools for planning, estimation, control, and learning difficult for hybrid systems. One of the key tools for accounting for these jumps is called the saltation matrix. The saltation matrix is the sensitivity update when a hybrid jump occurs and has been used in a variety of fields including robotics, power circuits, and computational neuroscience. This paper presents an intuitive derivation of the saltation matrix and discusses what it captures, where it has been used in the past, how it is used for linear and quadratic forms, how it is computed for rigid body systems with unilateral constraints, and some of the structural properties of the saltation matrix in these cases.
Nathan J. Kong, J. Joe Payne, James Zhu, Aaron M. Johnson
2023-06-12T04:35:33Z
http://arxiv.org/abs/2306.06862v3
# Saltation Matrices: ###### Abstract Hybrid dynamical systems, i.e. systems that have both continuous and discrete states, are ubiquitous in engineering, but are difficult to work with due to their discontinuous transitions. For example, a robot leg is able to exert very little control effort while it is in the air compared to when it is on the ground. When the leg hits the ground, the penetrating velocity instantaneously collapses to zero. These instantaneous changes in dynamics and discontinuities (or jumps) in state make standard smooth tools for planning, estimation, control, and learning difficult for hybrid systems. One of the key tools for accounting for these jumps is called the saltation matrix. The saltation matrix is the sensitivity update when a hybrid jump occurs and has been used in a variety of fields including robotics, power circuits, and computational neuroscience. This paper presents an intuitive derivation of the saltation matrix and discusses what it captures, where it has been used in the past, how it is used for linear and quadratic forms, how it is computed for rigid body systems with unilateral constraints, and some of the structural properties of the saltation matrix in these cases. ## I Introduction Many interesting problems in engineering can be modeled as hybrid dynamical systems, meaning that they involve both continuous and discrete evolution in state [1, 2, 3, 4]. These systems can be hybrid, e.g. due to physical contact, a result of digital logic circuits, or they can be triggered by control - reacting to sensor feedback or switching control modes. Meanwhile, most of the tools that exist for planning, estimation, control, and learning assume continuous (if not smooth) systems. A common strategy to adapt tools that were designed for smooth systems to hybrid systems is to minimize the effect of discontinuities [5, 6] e.g. by slowing down to near zero velocity at the time of an impact event [7]. However, these strategies do not make use of the underlying dynamics of the system and only seek to mitigate them. This may work out for certain fully actuated systems, but many hybrid systems of interest are underactuated and cannot always cancel out the discontinuous dynamics. Rather than assuming continuous dynamics, we present tools that account for the effects of discrete events. Often, discrete events are called "jumps" or "resets" that map state from one continuous domain to another. The key to capturing hybrid events is to both model what occurs at the moment of reset and what happens to neighboring trajectories (variations) that reset at different times. One might think that analyzing the evolution of these variations simply requires linearization of the dynamics by taking the Jacobian of the reset map, but this only captures part of the story. It is just as important to capture the variation that arises from changes in reset timing. If the hybrid modes have different dynamics at the boundary, then trajectories that spend a different amount of time in each mode will result in changes in variation. The _saltation matrix_, sometimes referred to as the jump matrix, captures the total variation caused by both event timing and reset dynamics and is the key tool to understanding the evolution of trajectories near a hybrid event up to first order. The saltation matrix originally appeared in [8, Eq. 3.5], where it was used to analyze the stability of periodic motions. Other major works include [9, 10, 11]. 
It provides essential information about event driven hybrid systems that can be used for stability analysis as well as for creating efficient estimation and control algorithms [12, 13, 14, 15, 16, 17, 18, 19]. The word "saltation" directly translates to "leap" from Latin - which closely matches to the "jump" name for the hybrid events - and is also used to describe how sand particles "leap" along the ground when blown by wind in the desert [20]. Fig. 1: Example drop on a slanted surface with initial covariance. The saltation matrix (\(\Xi\)) correctly estimates the end distribution’s covariance where covariance in the direction of the constraint is eliminated. Using the incorrect update, only the Jacobian of the reset map (D\({}_{x}R\)) leads to retaining belief in the direction of the constraint. An illustrative example of how the saltation matrix can capture a common hybrid system, a rigid body with contact, is shown Fig. 1. Here a distribution of balls is dropped on a slanted surface. When each ball makes contact with the surface, a plastic impact law is applied which resets the system into a sliding mode on the surface by zeroing out the velocity into the surface. For this system, the distribution starts out in the full 2D space and ends up constrained to the 1D surface after all balls have made impact. However, since the reset map only changes the velocity of the ball, its Jacobian does not capture this change in the position variations. The saltation matrix captures this information and accurately predicts the resulting covariance by accounting for the difference in timing. Sec. V in this tutorial shows that a similar trend is found for general rigid body contact systems. Recently, there has been an increasing use of saltation matrices in a number of areas from robotics to computational neuroscience, discussed in Sec. II. To help researchers better understand the saltation matrix and its growing importance, this paper provides: * (Sec. II) A literature survey of where the saltation matrix is being used in a variety of application areas. * (Sec. III) A tutorial on the definition of the saltation matrix (Sec. III-A), its derivation (Sec. III-B), and how it appears in linear (Sec. III-C) and quadratic forms (Sec. III-D). * (Sec. IV) An example showing the saltation matrix calculation for a simple contact system and a discussion of the properties of saltation matrices in various cases. * (Sec. V) The calculation of saltation matrices for a common class of hybrid dynamical systems, rigid body dynamics with contact and friction, that unifies and extends prior analysis that has been scattered across different texts. This section provides more details on the properties of saltation matrices presented in (Sec. IV), including the eigenstructure of the saltation matrix for different cases. In addition to providing a survey and tutorial for the saltation matrix, this paper also presents an alternate derivation of the saltation matrix using the chain rule (App. A), a derivation of the case in which the perturbed trajectory reaches a guard condition before the nominal trajectory (App. B), and derivations for how it is used to propagate covariances (App. C) and to update the Riccati equations (App. D), all of which have not been presented previously. 
## II Survey of saltation matrix applications The saltation matrix is a valuable tool for analysis and control in a wide variety of fields such as general bifurcations theory [26, 27, 28, 29, 30], power circuits [31, 32, 33, 34, 35, 36, 37, 38, 39], rigid body systems [40, 41, 42, 43, 44, 45], chemical processing [46], and hybrid neuron models [47, 48, 49, 25, 24, 50, 25]. Fig. 2 shows a few examples that demonstrate the usage of the saltation matrix in the legged robotics, power circuits, and neural modelling literature. Often, the saltation matrix is used to assess the stability of hybrid dynamical systems, especially for periodic systems [8, 10, 11]. The most popular method for analyzing stability of periodic hybrid systems is to analyze the fundamental matrix solution (as shown in Sec. III-C) which for periodic systems, is called the monodromy matrix [40, 44, 45, 51, 52, 53, 54, 55, 56, 57, 58]. The monodromy matrix is heavily used in the circuits field specifically for determining local stability of switching power converters and determining if bifurcations occur [59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90]. See [59] for an in depth review for analyzing the stability of switching mode power converters. For more information on bifurcations in periodic systems, see [91, 92, 93, 94] which discuss Lyapunov exponents (the rate of separation of infinitesimally close trajectories) for hybrid systems. In [16], the saltation matrix components of the monodromy matrix are used to analyze known robotic stabilizing phenomena such as paddle juggling and swing leg retraction. The saltation matrix formulation reveals "shape" parameters, which are terms in the saltation matrix that are independent from the system's dynamics, but have an effect on the stability of the system. These shape parameters can be optimized to generate stable open loop trajectories for complex hybrid systems that undergo periodic orbits. A more restrictive but stronger form of stability analysis, known as contraction theory [95], can be done by analyzing the convergence of neighboring trajectories through hybrid events [96] - where global asymptotic convergence is guaranteed if both the continuous-time flow and the saltation matrix are infinitesimally contractive. Another version of stability was analyzed in [97, 98, 99, 100] as sensitivities to system parameters. Adapted saltation conditions were used to characterize sensitivities across hybrid events. These results were used to formulate and solve optimal design problems. In addition to stability analysis, saltation matrices are also useful for generating controllers. In optimal control, value functions are propagated along a trajectory to generate feedback controllers. For linear time-varying LQR, sensitivity information about a trajectory is used to schedule optimal gains along that trajectory. To implement optimal trajectory tracking for a hybrid system, [19] utilized the saltation matrix to update the sensitivity equation (as shown in Sec. III-D). Due to the sudden jump from the reset map, the optimal controller will also have a jump in the gain schedule, as first noted in [101]. Other work further expanding and improving on [19] include [17, 18, 102, 103]. 
A key concept from these works for tracking hybrid trajectories is "reference spreading" or "reference extension" which creates a new references by extending the pre-transition state through the guard and the post-transition state backwards in time. If there is a mode mismatch, the correct reference extension is selected to track. Using similar value function approximations and reference spreading, [13] proposed a contact implicit trajectory optimization method by extending these ideas to iterative LQR (iLQR). This approach is able to generate both the nominal state trajectory and the feedback controller without having to specify the mode sequence in advance, as in [104, 105, 106, 107], or depend on complementarity constraints that are difficult to solve, as in [108, 109]. Recently, this hybrid iLQR has also been used as an online Model Predictive Controller (MPC) [14]. The saltation matrix has also been used to supplement the concept of hybrid zero dynamics to design robust controllers for bipedal robots. In [21], the norm of the saltation matrix is included in the optimal controller cost function to mitigate the divergent effects of impact. State estimation uses sensitivity information in an analogous way, where the saltation matrix can be used to propagate covariance through a hybrid transition (Sec. III-D). The first paper to do this is [110], which considers covariance propagation for power-spectral density calculation in circuits. This covariance propagation law was also applied to Kalman filtering for hybrid dynamical systems [12]. This work has also been extended to covariance propagation with noisy guards and uncertainty in the reset map [15]. In [111, 112] hybrid dynamics are considered in an invariant extended Kalman filter for use on lie groups. Using covariance propagation is powerful for state estimation because it efficiently maintains the belief of a distribution through hybrid events. In [12], this "Salted Kalman Filter" runs with comparable accuracy to a hybrid particle filter, e.g. [113], at a fraction of the computation time. The main drawbacks are that it uses a Gaussian approximation, that the entire distribution is propagated instantaneously, and that it is not capable of keeping track of a split distribution that exists near a hybrid transition (whereas non-parametric filters like the particle filter can maintain a non-Gaussian and split distribution). In cases where multiple guard conditions are met at the same time such as simultaneous leg touchdown, the hybrid event must be analyzed with another tool known as the Bouligand derivative (B-derivative) [114, 115, 52, 116, 53, 117] as the saltation matrix only considers the effects of individual hybrid transition events. The B-derivative can be thought of as a set of composed saltation matrices which capture infinitesimal effects of differing transition sequences. The B-derivative has been used to analyze stability in systems with simultaneous impacts in [115]. ## III The saltation matrix and how to use it This section defines the saltation matrix and the broad class of hybrid systems where the saltation matrix applies (Sec. III-A), derives the expression of the saltation matrix using a geometric approach (Sec. III-B), and demonstrates the use of saltation matrices in linear (Sec. III-C) and quadratic forms (Sec. III-D). Table I summarizes the notation used throughout the rest of the paper. ### _Saltation matrix definition_ While there are many definitions of hybrid dynamical systems, e.g. 
[1, 2, 3, 4], this treatment of the saltation matrix is based on the definition from [13]. **Definition 1**: _A \(C^{r}\)_**hybrid dynamical system**, for continuity class \(r\in\mathbb{N}_{>0}\cup\{\infty,\omega\}\), is a tuple \(\mathcal{H}:=(\mathcal{J},\Gamma,\mathcal{D},\mathcal{F},\mathcal{G},\mathcal{ R})\) where the parts are defined as:_ 1. \(\mathcal{J}:=\{\mathrm{I},\mathrm{J},...\}\subset\mathbb{N}\) _is the finite set of discrete_ **modes**_._ 2. \(\Gamma\subseteq\mathcal{J}\times\mathcal{J}\) _is the set of discrete_ **transitions** _forming a directed graph structure over_ \(\mathcal{J}\)_._ 3. \(\mathcal{D}:=\mathrm{II}_{\mathcal{I}\in\mathcal{J}}\)__\(D_{\mathrm{I}}\) _is the collection of_ **domains**_, where_ \(D_{\mathrm{I}}\) _is a_ \(C^{r}\) _manifold and the state_ \(x\in D_{\mathrm{I}}\) _while in mode_ \(\mathrm{I}\)_._ 4. \(\mathcal{F}:=\mathrm{II}_{\mathcal{I}\in\mathcal{J}}F_{\mathrm{I}}\) _is a collection of_ \(C^{r}\) _time-varying_ **vector fields**_,_ \(F_{\mathrm{I}}:\mathbb{R}\times D_{\mathrm{I}}\to\mathcal{T}D_{\mathrm{I}}\)_._ 5. \(\mathcal{G}:=\mathrm{II}_{(\mathrm{I},\mathrm{J})\in\Gamma}\)__\(G_{\mathrm{I},\mathrm{J}}(t)\) _is the collection of_ **guard sets**_, where_ \(G_{(\mathrm{I},\mathrm{J})}(t)\subseteq D_{\mathrm{I}}\) _for each_ \((\mathrm{I},\mathrm{J})\in\Gamma\) _is defined as a regular sublevel set of a_ \(C^{r}\) _guard function, i.e._ \(G_{(\mathrm{I},\mathrm{J})}(t)=\{x\in D_{\mathrm{I}}|g_{(\mathrm{I},\mathrm{J} )}(t,x)\leq 0\}\) _and_ \(\mathrm{D}_{x}g_{(\mathrm{I},\mathrm{J})}(t,x)\neq 0\)__\(\forall\)__\(g_{(\mathrm{I},\mathrm{J})}(t,x)=0\)_._ 6. \(\mathcal{R}:\mathbb{R}\times\mathcal{G}\to\mathcal{D}\) _is a_ \(C^{r}\) _map called the_ **reset** _that restricts as_ \(R_{(\mathrm{I},\mathrm{J})}:=\mathcal{R}|_{G_{(\mathrm{I},\mathrm{J})(t)}}\)_:_ \(G_{(\mathrm{I},\mathrm{J})}(t)\to D_{\mathrm{J}}\) _for each_ \((\mathrm{I},\mathrm{J})\in\Gamma\)_._ Note that this definition incorporates the **control input**\(u(t,x)\) into the dynamics \(\mathcal{F}\) as \(\mathcal{F}(t,x,u(t,x))\to\mathcal{F}(t,x)\). Fig. 3 shows an example hybrid system with a hybrid execution consisting of a starting point \(x(0)\) in \(D_{\mathrm{I}}\) flowing with Fig. 2: The saltation matrix has been used in many different fields, including the control of legged robots in tasks such as (a) a quadrupedal backflip [14] and (b) robust bipedal walking [21]; the analysis and control of power circuits such as (c) supervising control of a buck converter [22] and (d) the bifurcation behavior of DC drives [23]; and the modelling of neural activity in the brain such as (e) the stability analysis of a Wilson-Cowan neural mass model [24] and (f) the modelling of synaptic filter behavior [25]. dynamics \(F_{\rm I}\) and reaching the guard condition \(g_{({\rm I,J})}(t,x)=0\) at time \(t\), applying the reset map \(R_{({\rm I,J})}(t,x)\) resetting into \(D_{\rm J}\) and then flowing with the new dynamics \(F_{\rm J}\). Denote \(t^{-}\) as the instant before the reset map is applied, \(t^{+}\) the instant after the reset map is applied, and \(x(t^{\pm})=x^{\pm}\) the limiting value of the signal \(x\) from the left \((-)\) or right \((+)\). The goal in this paper is to understand how perturbations about a nominal trajectory evolve over time. 
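As a minimal illustration of Definition 1, the following Python sketch (not from the paper; the class and example are illustrative only) stores the parts of the hybrid tuple for a one-dimensional bouncing ball, where the guard is the height and the reset scales the impact velocity by a coefficient of restitution.

```python
from dataclasses import dataclass
from typing import Callable, Dict, Set, Tuple
import numpy as np

Mode = int
Transition = Tuple[Mode, Mode]

@dataclass
class HybridSystem:
    modes: Set[Mode]                      # J: discrete modes
    flows: Dict[Mode, Callable]           # F_I(t, x) -> xdot
    guards: Dict[Transition, Callable]    # g_(I,J)(t, x); transition when g <= 0
    resets: Dict[Transition, Callable]    # R_(I,J)(t, x) -> state in the next domain

# Bouncing ball with state x = [height, velocity] and restitution e = 0.9
e = 0.9
ball = HybridSystem(
    modes={0},
    flows={0: lambda t, x: np.array([x[1], -9.81])},
    guards={(0, 0): lambda t, x: x[0]},                        # ground contact when height <= 0
    resets={(0, 0): lambda t, x: np.array([x[0], -e * x[1]])}  # velocity reversed and scaled
)
```

Transversality here corresponds to a strictly negative vertical velocity at impact, matching the transversality assumption discussed below.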
For smooth systems, it is well known that perturbations about a nominal trajectory can be approximated to first order using the derivative of the dynamics \(F(t,x)\) with respect to state: \[\delta\dot{x}={\rm D}_{x}F(t,x)\delta x \tag{1}\] Hybrid systems with time triggered reset maps can be similarly analyzed using the Jacobian of the reset map, \(\delta x^{+}={\rm D}_{x}R(t,x)\delta x^{-}\). However, the Jacobian of the reset map does not account for differences that are introduced from time-to-impact variations in systems with event driven resets, where the differences in dynamics in the two hybrid modes must be considered. The saltation matrix, e.g. [8, Eq. 3.5], [9, Pg. 118 Eq. 6], [10, Eq. 7.65], or [96, Prop. 2], accounts for these terms to capture how perturbations are mapped through event-driven hybrid transitions to the first order. From here on, the term hybrid transition/system refers to this event-driven class. **Definition 2**: _The **saltation matrix** for transition from mode I to mode J is the first order approximation of the variational update at hybrid transitions from mode I to J, defined as:_ \[\boxed{\Xi_{({\rm I,J})}:={\rm D}_{x}R^{-}+\frac{\left(F_{\rm J}^{+}-{\rm D}_{ x}R^{-}F_{\rm I}^{-}-{\rm D}_{t}R^{-}\right){\rm D}_{x}g^{-}}{{\rm D}_{t}g^{-}+{ \rm D}_{x}g^{-}F_{\rm I}^{-}}} \tag{2}\] Note that the matrix multiplication in (2) results in an outer-product between the terms in the parentheses and \({\rm D}_{x}g^{-}\) to get a rank-1 correction to the Jacobian of the reset map. The saltation matrix is an \(n_{\rm J}\times n_{\rm I}\) matrix, where \(n_{\rm I}\) is the dimension of the states in domain \(D_{\rm I}\) and \(n_{\rm J}\) is the dimension of the states in domain \(D_{\rm J}\). The following evaluations are made for the terms in the saltation matrix: \[F_{\rm I}^{-} :=F_{\rm I}(t^{-},x(t^{-})) \tag{3}\] \[F_{\rm J}^{+} :=F_{\rm J}(t^{+},x(t^{+}))\] (4) \[x(t^{+}) :=R_{\rm(I,J)}(t^{-},x(t^{-}))\] (5) \[{\rm D}_{x}R^{-} :={\rm D}_{x}R_{\rm(I,J)}(t^{-},x(t^{-}))\] (6) \[{\rm D}_{t}R^{-} :={\rm D}_{t}R_{\rm(I,J)}(t^{-},x(t^{-}))\] (7) \[{\rm D}_{x}g^{-} :={\rm D}_{x}g_{\rm(I,J)}(t^{-},x(t^{-}))\] (8) \[{\rm D}_{t}g^{-} :={\rm D}_{t}g_{\rm(I,J)}(t^{-},x(t^{-})) \tag{9}\] Note that \({\rm D}_{t}\) in (7) and (9) refers to the derivative with respect to the first coordinate (and not the time dependence of \(x\), which is captured by other terms). The saltation matrix maps perturbations to the first order from pre-transition \(\delta x(t^{-})\) to post-transition \(\delta x(t^{+})\) as: \[\boxed{\delta x(t^{+})=\Xi_{({\rm I,J})}\delta x(t^{-})+\mbox{h.o.t.}} \tag{10}\] where h.o.t. represents higher order terms. The saltation matrix in (2) is suitable when the following assumptions are true, as listed in [96] 1. Guards and resets are differentiable 2. Trajectories cannot undergo an infinite number of resets in finite time (no Zeno) 3. Trajectories must be transverse to the guard at an event: \[\frac{d}{dt}g_{({\rm I,J})}(t,x(t))={\rm D}_{t}g^{-}+{\rm D}_{x}g^{-}F_{\rm I }^{-}<0\] (11) \begin{table} \begin{tabular}{l|l} \(A\) & Linearized vector field matrix, (30) \\ \({\rm COV}\) & Covariance \\ \({\rm D}_{\rm v}\) & Jacobian w.r.t \(*\) \\ \({\cal D},D\) & Hybrid domain, Def. 1 \\ \({\mathbb{E}}\) & Expectation \\ \(e\) & Coefficient of restitution, (67) \\ \({\cal F},F\) & Vector field, Def. 
1 \\ \(f\) & Constraint force vector, (61) \\ \(f_{\rm n},f_{\rm t}\) & Normal and tangential constraint forces, (66) \\ \({\cal G},G,g\) & Guard sets and guard function, Def. 1 \\ \(\tilde{g}\) & Linearized guard function, (17) \\ \(H\) & Hamiltonian, (152) \\ h.o.t. & Higher order terms, (10) \\ \(I\) & Identity matrix \\ \({\rm I,J}\) & Hybrid modes, Def. 1 \\ \(i,j\) & Hybrid mode indexes, (34) \\ \(J\) & Constraint Jacobian, Sec. IV \\ \({\cal J}\) & Set of discrete modes, Def. 1 \\ \(L\) & Limit cycle, Sec. III-C \\ \(\ell\) & Loss function, (148) \\ \(M,C,N,\Upsilon\) & Mass, Coriolis, nonlinear force, and input matrices, (61) \\ \(M^{\dagger},J^{\dagger},\Lambda^{\dagger}\) & Dagger elements for rigid body systems, (62) \\ \(m,n\) & Configuration \& state space dimensions, Def. 2, Sec. V-A \\ n,t & Normal or tangential direction constraints, Sec. V-A \\ \(P\) & Co-vector quadratic matrix, (40) \\ \({\cal P}\) & Poincaré map, Sec. III-C \\ \(p\) & Costate, Appendix D \\ \(Q\) & Penalty on state, Appendix D \\ \(q\), \(\dot{q}\), \(\tilde{q}\) & Configuration, velocity, and acceleration, Sec. IV \\ \(\mathcal{R}\), \(R\) & Reset map, Def. 1 \\ \(\tilde{R}\) & Linearized reset map, (16) \\ \(\mathbb{R}\) & Set of real numbers \\ \(S\) & Poincaré section, Sec. III-C \\ \({\cal T}*\) & Tangent bundle over * \\ \(T\) & Time period, Sec. III-C \\ \(t\) & Time, Sec. III \\ \(\tilde{t}\) & Perturbed impact time, Sec. III-B \\ \({\rm U,V,S,C}\) & Rigid body modes, Secs. IV, V \\ \(V\) & Penalty on input, Appendix D \\ \(v,\lambda\) & Eigenvector and eigenvalue, Sec. IV-C \\ \(u\) & Control input, Def. 1 \\ \(X\) & Random variable, Appendix C \\ \(x\) & State, Def. 1 \\ \(x^{*}\) & Fixed point, Sec. III-C \\ \(\widetilde{x}\) & Perturbed trajectory, Sec. III-B \\ \(\delta x\) & Perturbation, (18) \\ \(Z\) & Additional terms, (83) \\ \(\Gamma\) & Set of discrete transitions, Def. 1 \\ \(\Delta\) & Discrete timestep, Sec. III-C \\ \(\theta\) & Angle of sloped surface, (41) \\ \(\mu\) & Floquet exponent III-C \\ \(\mu_{s},\mu_{k}\) & Static and kinetic friction coefficient, (66) \\ \(\Xi\) & Saltation matrix, (2) \\ \(p\) & Random variable mean, Appendix C \\ \(\Sigma\) & Covariance, Appendix C \\ \(\sigma\) & Floquet multiplier III-C \\ \(\tau\) & Time to impact map, (104) \\ \(\Phi\) & Monodromy matrix, (34) \\ \(\phi\) & Solutions of the flow, (100) \\ \(\Omega\) & Saltation block element, (52) \\ \(0\) & Zero matrix \\ \((*)^{-},(*)^{+}\) & Pre-impact and post-impact, Def. 1 \\ \end{tabular} \end{table} TABLE I: Notation used and equation, definition, or section of introduction. The saltation matrix relies on differentiating the guards and resets so they must be differentiable. Excluding Zeno conditions ensures we avoid computing infinite saltation matrices in finite time, which would clearly be unsound for analysis. Transversality ensures that neighboring trajectories impact the same guard unless the impact point lies on any other guard surface, in which case the Bouligand derivative is the appropriate analysis tool [52, 114, 115, 116, 117]. Transversality also ensures the denominator in (2) does not approach zero. In some cases, the saltation matrix for a hybrid transition can become an identity transformation. Knowing when the saltation matrix is identity is useful to simplify computation and analysis. The most common reason for a saltation matrix to become identity is if both of these conditions are true: 1. 
the reset map is an identity transformation, \(R=I_{n\times n}\), where \(n\) is the dimension of the state \(x\) in both \(D_{\text{I}}\) and \(D_{\text{J}}\) 2. the dynamics in both modes are the same before and after impact, \(F_{\text{I}}^{-}=F_{\text{J}}^{+}\): \[\boxed{\begin{array}{c}R=I_{n\times n}\\ F_{\text{I}}^{-}=F_{\text{J}}^{+}\end{array}}\implies\Xi=I_{n\times n}\end{array}\] (12) An example of such a transition is a foot lifting off from the ground. If the reset map is an identity transformation, then \(\mathrm{D}_{x}R\) is also identity and \(\mathrm{D}_{t}R\) is zero. Using these conditions to simplify the expression in (2) gives: \[\Xi_{\text{(I,J)}}=I_{n\times n}+\frac{\left(F_{\text{J}}^{+}-I_{n\times n}F_ {\text{I}}^{-}-0_{n\times n}\right)\mathrm{D}_{x}g^{-}}{\mathrm{D}_{t}g^{-}+ \mathrm{D}_{x}g^{-}F_{\text{I}}^{-}}=I_{n\times n} \tag{13}\] ### _Saltation matrix derivation_ In this section, the derivation of the saltation matrix (2) is presented, following the geometric derivation from [10] with the addition of reset maps. There are many alternate ways to derive (2): a derivation using the chain rule is included in Appendix A and a derivation using a double limit can be found in [96]. Suppose the nominal trajectory of interest is \(x(t)\) as shown in Fig. 4. The trajectory starts in mode I and goes through a hybrid transition to mode J at time \(t\). The saltation matrix is a first-order approximation, so the flow is treated as a constant in each mode, evaluated at time \(t^{\pm}\) as in (3) and (4) such that for an infinitesimal timestep \(\delta t\): \[x(t^{-})=x(t^{-}-\delta t)+F_{\text{I}}^{-}\delta t\qquad\text{ in mode I} \tag{14}\] \[x(t^{+}+\delta t)=x(t^{+})+F_{\text{J}}^{+}\delta t\qquad\text{ in mode J} \tag{15}\] The reset and guard are also linearized at \(t^{-}\) as in (6) and (8), such that \[\bar{R}(t^{-}+\delta t,x+\delta x)=R_{\text{(I,J)}}(t^{-},x(t^{-}))+\mathrm{ D}_{x}R^{-}\delta x+\mathrm{D}_{t}R^{-}\delta t \tag{16}\] \[\bar{g}(t^{-}+\delta t,x+\delta x)=g_{\text{(I,J)}}(t^{-},x(t^{- }))+\mathrm{D}_{x}g^{-}\delta x+\mathrm{D}_{t}g^{-}\delta t \tag{17}\] where \(\bar{R}\) and \(\bar{g}\) are the linear maps. Trajectories that are perturbed \(\delta x\) away are labeled as \(\widetilde{x}\). Perturbations can lead to changes in the impact time, which we describe with the infinitesimal time difference \(\delta t:=\bar{t}-t\) where \(t\) is the original impact time and \(\hat{t}\) is the perturbed impact time. If \(\delta t>0\) then the perturbed transition is late and the solution stays in the previous hybrid mode longer, while Fig. 3: An example 2 mode hybrid system where the domains are shown in black circles \(D\), the dynamics are shown with gray arrows \(F\), the guard for the current domain is shown in red dashed \(g\), and the reset from the current mode to the next mode is shown in blue \(R\). if \(\delta t<0\) then the perturbed solution transitions early. For simplicity of notation, assume the perturbed trajectory reaches the guard surface late, but the analysis also works for early transitions, resulting in the same expression (2), as shown in Appendix B. 
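Before completing the derivation, expression (2) can also be evaluated numerically. The following hedged sketch (not from the paper; the function and example are illustrative) computes the saltation matrix for the one-dimensional bouncing ball, where the guard is the height and the reset maps \([h,v]\mapsto[h,-ev]\).

```python
import numpy as np

def saltation_matrix(DxR, DtR, F_I_minus, F_J_plus, Dxg, Dtg):
    """Eq. (2): Xi = DxR + (F_J^+ - DxR F_I^- - DtR) Dxg / (Dtg + Dxg F_I^-)."""
    F_I_minus = np.reshape(F_I_minus, (-1, 1))   # column vectors
    F_J_plus = np.reshape(F_J_plus, (-1, 1))
    DtR = np.reshape(DtR, (-1, 1))
    Dxg = np.reshape(Dxg, (1, -1))               # row vector
    denom = (Dtg + Dxg @ F_I_minus).item()       # nonzero by transversality
    return DxR + (F_J_plus - DxR @ F_I_minus - DtR) @ Dxg / denom

# Bouncing ball at impact, state x = [height, velocity], guard g(x) = height
e, v_minus = 0.9, -3.0
DxR = np.array([[1.0, 0.0], [0.0, -e]])          # Jacobian of the reset [h, v] -> [h, -e v]
Xi = saltation_matrix(DxR, np.zeros(2),
                      F_I_minus=np.array([v_minus, -9.81]),     # flow just before impact
                      F_J_plus=np.array([-e * v_minus, -9.81]), # flow just after impact
                      Dxg=np.array([1.0, 0.0]), Dtg=0.0)
print(Xi)   # the (1,1) entry evaluates to -e, unlike DxR whose (1,1) entry is 1
```

In particular, the position-position entry of \(\Xi\) differs from that of \(\mathrm{D}_{x}R\), illustrating the timing correction that the Jacobian of the reset map alone misses (as in Figs. 1 and 5).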
Define the perturbation at the pre-impact time of the nominal trajectory \(t^{-}\) and the post-impact time of the perturbed trajectory \(\widetilde{t}^{+}\) as: \[\delta x(t^{-}) :=\widetilde{x}(t^{-})-x(t^{-}) \tag{18}\] \[\delta x(\widetilde{t}^{+}) :=\widetilde{x}(\widetilde{t}^{+})-x(\widetilde{t}^{+}) \tag{19}\] where \(\widetilde{x}(t^{-})\) is the perturbed trajectory following the previous mode dynamics until time \(t^{-}\). Next, we can write (19) in terms of the nominal trajectory at time of impact \(x(t^{-})\) and just after impact \(x(t^{+})\). Using (18) and (14), \(\widetilde{x}(\widetilde{t}^{-})\) can be written in terms of the flow before impact \(F_{1}^{-}\delta t\) and the perturbation before impact \(\delta x(t^{-})\): \[\widetilde{x}(\widetilde{t}^{-})=x(t^{-})+\delta x(t^{-})+F_{1}^{-}\delta t \tag{20}\] Note that we denote the expression \(\delta x(t^{-})+F_{1}^{-}\delta t\) as \(\vec{v}\) in Fig. 4 Eq. a. By using the linearized reset map (16) and the perturbation expressed in terms of the nominal trajectory (20), the reset at \(\widetilde{x}(\widetilde{t}^{-})\) can be evaluated in terms of the nominal state \(x(t^{-})\), the initial perturbation \(\delta x(t^{-})\), and the difference in impact time \(\delta t\) \[\widetilde{x}(\widetilde{t}^{+})=R(t^{-},x(t^{-}))+\mathrm{D}_{x}R^{-}\left( \delta x(t^{-})+F_{1}^{-}\delta t\right)+\mathrm{D}_{t}R^{-}\delta t \tag{21}\] The final term in (19) is obtained by using the constant flow after the reset (14) to calculate \(x(\widetilde{t}^{+})\): \[x(\widetilde{t}^{+})=R(t^{-},x(t^{-}))+F_{\mathrm{J}}^{+}\delta t \tag{22}\] By combining (19), (21), and (22), \(\delta x(\widetilde{t}^{+})\) can now be written as a linear function of \(\delta x(t^{-})\) and \(\delta t\) : \[\delta x(\widetilde{t}^{+}) =R(t^{-},x(t^{-}))+\mathrm{D}_{x}R^{-}\left(\delta x(t^{-})+F_{1} ^{-}\delta t\right) \tag{23}\] \[\quad+\mathrm{D}_{t}R^{-}\delta t-\left(R(t^{-},x(t^{-}))+F_{ \mathrm{J}}^{+}\delta t\right)\] \[=\mathrm{D}_{x}R^{-}\delta x(t^{-})+\left(\mathrm{D}_{x}R^{-}F_{ 1}^{-}+\mathrm{D}_{t}R^{-}-F_{\mathrm{J}}^{+}\right)\delta t \tag{24}\] This step is highlighted by the vector addition in Fig. 4 Eq. c. Next, we solve for \(\delta t\) as a function of \(\delta x(t^{-})\). The linear property of the guard (17) and the perturbation expressed in terms of the nominal trajectory (20) are used to rewrite the guard evaluated at \(\widetilde{x}(\widetilde{t}^{-})\) as a function of the nominal (and noting that \(g(t^{-},x(t^{-}))=0\)): \[0 =g(t^{-},x(t^{-}))+\mathrm{D}_{x}g^{-}(\delta x(t^{-})+F_{1}^{-} \delta t)+\mathrm{D}_{t}g^{-}\delta t \tag{25}\] \[=\mathrm{D}_{x}g^{-}\delta x(t^{-})+(\mathrm{D}_{x}g^{-}F_{1}^{- }+\mathrm{D}_{t}g^{-})\delta t \tag{26}\] This expansion shows up in Fig. 4 as Eq. b. Writing \(\delta t\) as a function of \(\delta x(t^{-})\) gives: \[\delta t=-\frac{\mathrm{D}_{x}g^{-}}{\mathrm{D}_{x}g^{-}F_{1}^{-}+\mathrm{D} _{t}g^{-}}\delta x(t^{-}) \tag{27}\] Substituting this \(\delta t\) into (24) and solving for \(\delta x(\widetilde{t}^{+})\) in terms of \(\delta x(t^{-})\): \[\delta x(\widetilde{t}^{+})=\mathrm{D}_{x}R^{-}\delta x(t^{-}) \tag{28}\] Fig. 4: Linearizations made about the nominal trajectory shown in black where a perturbation is shown in yellow and the perturbed trajectory is shown in blue. At \(a)\) describes \(\vec{v}=F_{1}^{-}\delta t+\delta x(t^{-})\). At \(b)\) the guard condition is \(0=\mathrm{D}_{x}g^{-}(\delta x(t^{-})+F_{1}^{-}\delta t)\). 
At \(c)\)\(\delta x(\widetilde{t}^{+})\) is \(\mathrm{D}_{x}R^{-}\vec{v}-F_{\mathrm{J}}^{+}\delta t\). Here \(\delta t\) is positive (late transition) and for the purposes of this figure it is assumed that the system is autonomous, so the \(\mathrm{D}_{t}g\) and \(\mathrm{D}_{t}R\) terms drop out. \[+\frac{\left(F_{\mathrm{J}}^{+}-\mathrm{D}_{x}R^{-}F_{\mathrm{I}}^{-}- \mathrm{D}_{t}R^{-}\right)\mathrm{D}_{x}g^{-}}{\mathrm{D}_{x}g^{-}F_{\mathrm{I} }^{-}+\mathrm{D}_{t}g^{-}}\delta x(t^{-})\] \[=\Xi_{(\mathrm{I},\mathrm{J})}\delta x(t^{-}) \tag{29}\] where \(\Xi\) is the saltation matrix, as in (10). ### _Linear forms for the saltation matrix_ Understanding how perturbed trajectories behave near a trajectory of interest is crucial for many algorithms which rely on linearizations. The sensitivity equation describes how these perturbations evolve over time. For a hybrid system, the time evolution simply applies the standard smooth sensitivity equation, based on the Jacobian of the flow (1), for the smooth dynamics and the saltation matrix equation when a hybrid transition occurs (10). For a transition from mode I to mode J at time \(t^{-}\), the sensitivity is described by: \[\delta\dot{x}(t) =A_{\mathrm{I}}\delta x(t) s.t.\ t\leq t^{-} \tag{30}\] \[\delta x(t^{+}) =\Xi_{(\mathrm{I},\mathrm{J})}\delta x(t^{-}) s.t.\ t=t^{-}\] (31) \[\delta\dot{x}(t) =A_{\mathrm{J}}\delta x(t) s.t.\ t\geq t^{+} \tag{32}\] where \(A_{\mathrm{I}}:=\mathrm{D}_{x}F_{\mathrm{I}}(t,x)\) represents the Jacobian of the dynamics with respect to state. An example is shown in Fig. 5, where the sensitivity is updated only by the saltation matrix because the flows are constant in both modes (\(A\) is identity). Instead, it is the difference in mode timing that determines the change in sensitivity from the initial to final state. If the Jacobian of the reset (which is also identity) is used instead of the saltation matrix, the prediction is incorrect. Sensitivity of hybrid systems is extensively analyzed in [31] and [19]. Many algorithms consider finite, discrete timesteps. This makes the analysis slightly different, since the hybrid transition will most likely not occur exactly at the boundary of a discrete timestep. In this case, a "sandwich" method is utilized, where 3 (or more) smaller discrete updates are applied during a timestep in which has a hybrid transition. Consider a time interval from \(t_{k}\) to \(t_{k+1}:=t_{k}+\Delta\) over which a single reset occurs at time \(t_{k}+\Delta_{1}\). The system spends \(\Delta_{1}\) time in the first mode and \(\Delta_{2}:=\Delta-\Delta_{1}\) in the second mode. Let \(A_{\mathrm{I},\Delta}\) be the Jacobian of the dynamics discretized to time duration \(\Delta\). Then a discrete approximation of the forward dynamics is: \[\boxed{\delta x(t_{k+1})=A_{\mathrm{J},\Delta_{2}}\Xi_{(\mathrm{I},\mathrm{J} )}A_{\mathrm{I},\Delta_{1}}\delta x(t_{k})} \tag{33}\] which holds to first order. This result comes from the fundamental matrix solution [10, Eq. 7.22]. Extending this idea, consider a periodic orbit of period \(T\), such that \(x(t)=x(t+T)\). In this case, the fundamental matrix solution is called the monodromy matrix. If the orbit passes through \(j\) modes labeled \(i=1,2,\ldots,j\), with mode periods \(T_{i}\), then we define the **monodromy matrix**\(\Phi\), [10, Eq. 7.28], [118, Eq. 1], and [55, Eq. 12]: \[\Phi=\Xi_{(j,1)}A_{j,T_{j}}\Xi_{(j-1,j)}A_{j-1,T_{j-1}}\cdots\Xi_ {(1,2)}A_{1,T_{1}} \tag{34}\] \[\delta x(t+T)=\Phi\delta x(t) \tag{35}\] which holds to first order. 
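The discrete "sandwich" update (33) and the monodromy composition (34) amount to matrix products applied in a fixed order. The following sketch assumes, purely for illustration, linear time-invariant dynamics in each mode discretized with a matrix exponential; the mode Jacobians, durations, and saltation matrix values are placeholders rather than a specific system.

```python
import numpy as np
from scipy.linalg import expm

def discretize(A, dt):
    """Discrete-time state-transition matrix of one smooth mode over a step dt."""
    return expm(A * dt)

def sandwich_step(A_I, A_J, Xi, dt, dt1):
    """Variational update over a timestep containing one hybrid event, as in (33)."""
    return discretize(A_J, dt - dt1) @ Xi @ discretize(A_I, dt1)

def monodromy(mode_jacobians, mode_durations, saltation_matrices):
    """Compose the monodromy matrix of a periodic hybrid orbit, as in (34)."""
    Phi = np.eye(mode_jacobians[0].shape[0])
    for A, T, Xi in zip(mode_jacobians, mode_durations, saltation_matrices):
        Phi = Xi @ discretize(A, T) @ Phi
    return Phi

# Two-mode example; the eigenvalues of Phi are the Floquet multipliers.
A1, A2 = np.diag([-1.0, -0.5]), np.diag([-0.2, -2.0])
Xi12, Xi21 = np.array([[1.0, 0.0], [0.3, 0.5]]), np.eye(2)
Phi = monodromy([A1, A2], [0.4, 0.6], [Xi12, Xi21])
print(np.abs(np.linalg.eigvals(Phi)))   # all magnitudes < 1 here, i.e. an asymptotically stable orbit
```

The same composition pattern yields the fundamental matrix solution over any finite horizon by inserting one saltation matrix per hybrid event.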
This monodromy matrix captures the change in perturbations from one cycle through the orbit to the next and the eigenvalues (called **Floquet multipliers**[10]) determine the stability of the trajectory. Namely, if the eigenvalues all have magnitude less than one then the system is asymptotically stable [10]. Related to the monodromy matrix, a common technique to analyze stability of periodic systems is to analyze the return/Poincare map [10]. A **Poincare map**\(\mathcal{P}(x)\) converts the continuous-time system to a discrete map. For an autonomous system with \(n\) states and a limit cycle \(L\), the Poincare map is defined about a fixed point \(x^{*}\) on \(L\) and an \(n-1\) dimensional hyper-plane transverse to the flow \(F\) called the Poincare section \(S\), with \(x^{*}\in S\). The Poincare map captures how points move along the Poincare section after one cycle (\(\mathcal{P}:S\mapsto S\)). Stability of the fixed point is often computed by taking the Jacobian of the Poincare map and analyzing its eigenvalues. If the eigenvalues are less than one (the requirements for stability for a discrete system), the fixed point \(x^{*}\) is stable. Note that for the autonomous case, the dimensionality of the system is reduced by one due to the embedding. For the non-autonomous case, a Poincare section in state space cannot be defined because it does not regard the dependency on time. Instead, the trajectory is augmented with a periodic time coordinate on \(S^{1}\), and the Poincare section is now defined to be at the end of each period \(T\). In this case, the Poincare map and its Jacobian are in the full \(n\) space, as the Poincare section is defined on the added time coordinate. Consider a monodromy matrix for a cycle that starts and ends at the fixed point \(x^{*}\) for one cycle. In the autonomous case, the monodromy matrix has the same eigenvalues as the Jacobian of the Poincare map with an additional eigenvalue equal to one. This is because the monodromy matrix is still in the full \(n\) space, and perturbations along the direction of the flow are invariant. In the non-autonomous case, the monodromy matrix and the Jacobian of the Poincare map are Fig. 5: Constant flow hybrid system with identity reset map. The Jacobian of the reset map \(\mathrm{D}_{x}R\) predicts no variational changes whereas using the saltation matrix \(\Xi\) predicts the correct variational changes. equivalent, so sometimes the monodromy matrix is defined simply to be the Jacobian of the Poincare map [59]. If the system is autonomous and periodic, using the Poincare map might be more practical because the analysis is simplified by the reduction of a state variable, e.g. as shown for passive dynamic walkers [119]. However, the monodromy matrix can be generalized to the fundamental matrix solution for analysis of non-cyclical behaviors, which the Poincare map can not. This is especially important when designing dynamic behaviors that are drastically different like for parkour or dynamic grasps. Also closely related to Floquet multipliers are Lyapunov exponents [92, 93, 94]. For a given Floquet multiplier \(\sigma\), it can be written in the form \(\sigma=e^{\mu T}\) where \(\mu\) is the Floquet exponent and the real part of \(\mu\) is the Lyapunov exponent [120]. If all Lyapunov exponents are negative, \(\sigma<1\) and the trajectory is asymptotically stable. ### _Quadratic forms for the saltation matrix_ Similar to linear forms, quadratic forms are often used in algorithms which rely on linearizations. 
Examples of such algorithms include the well-known Kalman filter and LQR controller. There are 2 main updates this section covers: the quadratic form of the vector (covariance) and the co-vector (value approximation). For covariances, recall that the update law for covariance \(\Sigma\) through a discretized smooth system, with timesteps \(\Delta\), is: \[\Sigma(t_{k+1})=A_{\Delta}\Sigma(t_{k})A_{\Delta}^{T} \tag{36}\] e.g. as in [121, Eqn. 1.10] or [122, Eqn. 6]. Similarly, at hybrid transitions, the saltation matrix applies in an analogous way (see derivation in Appendix C): \[\boxed{\Sigma(t^{+})=\Xi_{(\text{I},\text{J})}\Sigma(t^{-})\Xi_{(\text{I}, \text{J})}^{T}} \tag{37}\] [110, Eqn. 17], [12, Eqn. 7], which holds to first order. As with linear forms, the sandwich method (33) can be applied to retrieve the covariance propagation for an entire discrete timestep: \[\boxed{\Sigma(t_{k+1})=A_{\text{J},\Delta_{2}}\Xi_{(\text{I},\text{J})}A_{ \text{I},\Delta_{1}}\Sigma(t_{k})A_{\text{I},\Delta_{1}}^{T}\Xi_{(\text{I}, \text{J})}^{T}A_{\text{J},\Delta_{2}}^{T}} \tag{38}\] [12, Eqn. 19]. An example is shown in Fig. 6, where the covariance is once again updated only by the saltation matrix because the flows are constant in both modes (\(A\) is identity). If the Jacobian of the reset is used instead, the incorrect covariance is predicted. Algorithms, such as a Kalman filter [12], that propagate covariances with the dynamics can utilize this update law. In the case of propagating a quadratic form of a co-vector, the matrix transpose terms flip sides similar to how a co-vector quadratic form propagates in the smooth domain: \[P(t_{k})=A_{\Delta}^{T}P(t_{k+1})A_{\Delta} \tag{39}\] as in [123, Eqn. 3.40]. The co-vector propagation law for the hybrid transition uses the saltation matrix in an analogous way (see derivation in Appendix D): \[\boxed{P(t^{-})=\Xi_{(\text{I},\text{J})}^{T}P(t^{+})\Xi_{(\text{I},\text{J} )}} \tag{40}\] [17, Eqn. 23], [13, Eqn. 31]. The main application of the co-vector case is in the update to the Riccati equation or Bellman update, e.g. in LQR [13, 17]. ## IV Example: Calculating the saltation matrix for a ball dropping on a slanted surface One of the simplest examples of a hybrid system is a 2D point mass (ball) falling and hitting a flat surface, as shown in Fig. 1. Intuitively, the impact should eliminate variations normal to the constraint in both position and velocity. This section presents the computation of the saltation matrix for this example and how it confirms the collapse of variations normal to the constraint. ### _Dynamics definition_ Here, the system's dynamics are summarized for the 2D point mass, with an in-depth derivation for a general rigid body system given in Sec. V. The horizontal and vertical positions as well as their velocities are defined to be the states of the system, \(x=[q^{T},\dot{q}^{T}]^{T}=[q_{1},q_{2},\dot{q}_{1},\dot{q}_{2}]^{T}\). The ball has mass \(m\) and acceleration due to gravity \(a_{g}\). For the sake of demonstrating how inputs are handled, the ball is fully actuated with control inputs along the configuration coordinates \([u_{1},u_{2}]^{T}\). Two cases of friction are considered, one that assume frictionless sliding when in contact with the surface (i.e. the kinetic friction coefficient is zero, \(\mu_{k}=0\)) and one where the friction is sufficient to prevent sliding, i.e. the ball sticks to a spot. 
The ball impacts a sloped surface parameterized by an angle \(\theta\), where the position constraint is defined by the guard function: \[g_{(\text{U},\text{S})}(t,x)=\sin{(\theta)}q_{1}+\cos{(\theta)}q_{2}=0 \tag{41}\] Fig. 6: Constant flow hybrid system with identity reset map. The Jacobian of the reset map \(\text{D}_{x}R\) predicts no covariance change whereas using the saltation matrix \(\Xi\) predicts the correct covariance. where U is the unconstrained mode and S is the constrained sliding mode (the ball can slide tangentially along the constraint surface). The resulting velocity constraint Jacobian \(J_{\mathrm{S}}\) in the sliding mode is: \[J_{\mathrm{S}}(q)=\mathrm{D}_{q}g_{(\mathrm{U,S})}(t,x)=\left[\sin \left(\theta\right)\quad\cos\left(\theta\right)\right],\quad s.t.\ J_{\mathrm{S}}\dot{q}=0 \tag{42}\] The unconstrained mode dynamics are defined by ballistic motion: \[F_{\mathrm{U}}(t,x)=\begin{bmatrix}\dot{q}_{1}&\dot{q}_{2}&\frac{u_{1}}{m}& \frac{u_{2}-a_{g}m}{m}\end{bmatrix}^{T} \tag{43}\] The hybrid guard for impact is defined by the constraint \(g_{(\mathrm{U,S})}(q)\leq 0\), i.e when the constraint is met the impact occurs. The reset map is defined by plastic impact, which enforces the velocity constraint: \[R_{(\mathrm{U,S})}(t,x)=\begin{bmatrix}q_{1}\\ q_{2}\\ \dot{q}_{1}\cos^{2}\left(\theta\right)-\dot{q}_{2}\,\cos\left(\theta\right) \,\sin\left(\theta\right)\\ \dot{q}_{2}\sin^{2}\left(\theta\right)-\dot{q}_{1}\,\sin\left(\theta\right) \,\cos\left(\theta\right)\end{bmatrix} \tag{44}\] The constrained mode dynamics are found by solving the ballistic dynamics while maintaining the velocity constraint: \[F_{\mathrm{S}}(t,x)=\begin{bmatrix}\dot{q}_{1}\\ \dot{q}_{2}\\ \frac{u_{1}\cos^{2}\left(\theta\right)}{m}-\frac{u_{2}\,\cos\left(\theta \right)}{m}\sin\left(\theta\right)\\ -\frac{u_{1}\,\cos\left(\theta\right)}{m}\sin\left(\theta\right)+\frac{u_{2}\, \sin^{2}\left(\theta\right)}{m}-\frac{g\,m\,\sin^{2}\left(\theta\right)}{m} \end{bmatrix} \tag{45}\] In the case of sticking friction in a third mode \(\mathrm{C}\), there is a no slip condition added to (42): \[J_{\mathrm{C}}=\begin{bmatrix}-\cos(\theta)&\sin(\theta)\\ \sin(\theta)&\cos(\theta)\end{bmatrix},\quad s.t.\ J_{\mathrm{C}}\dot{q}=0 \tag{46}\] such that the constrained dynamics become: \[\dot{x}=F_{\mathrm{C}}(t,x)=\begin{bmatrix}\dot{q}_{1}&\dot{q}_{2}&0&0\end{bmatrix} ^{T} \tag{47}\] The reset map eliminates all velocities: \[R(\mathrm{U,C})(t,x)=\begin{bmatrix}q_{1}&q_{2}&0&0\end{bmatrix} ^{T} \tag{48}\] Note that this mode is fully constrained and the ball will just stick to the surface (as \(\dot{q}=0\) after impact). ### _Saltation matrix calculation_ To compute the saltation matrix, the Jacobians of the guard and reset map with respect to state must be computed. 
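Before writing these Jacobians out analytically, the saltation matrix can also be assembled numerically from the general expression (2) using the pieces listed above; the closed form obtained below can then be checked against it. In the sketch, the slope angle, mass, and pre-impact state are illustrative values, the input is set to zero, and the sliding flow is formed by projecting the applied and gravitational accelerations onto the slope (the construction underlying (45)).

```python
import numpy as np

th, m, a_g = 0.3, 1.0, 9.81                      # illustrative slope angle, mass, gravity
s, c = np.sin(th), np.cos(th)
J_S = np.array([s, c])                           # velocity constraint Jacobian, (42)
P = np.eye(2) - np.outer(J_S, J_S)               # tangential projector (frictionless)

def F_U(x, u=np.zeros(2)):                       # ballistic flow, (43)
    return np.array([x[2], x[3], u[0] / m, u[1] / m - a_g])

def F_S(x, u=np.zeros(2)):                       # sliding flow: accelerations projected onto the slope
    qdd = P @ np.array([u[0] / m, u[1] / m - a_g])
    return np.array([x[2], x[3], qdd[0], qdd[1]])

def R_US(x):                                     # plastic impact reset, (44): remove normal velocity
    return np.concatenate([x[:2], P @ x[2:]])

Dxg = np.array([s, c, 0.0, 0.0])                 # gradient of the guard (41); D_t g = 0, D_t R = 0
DxR = np.block([[np.eye(2), np.zeros((2, 2))], [np.zeros((2, 2)), P]])

x_minus = np.array([1.0, -np.tan(th), 0.7, -2.0])  # a state on the guard, approaching the surface
x_plus = R_US(x_minus)
Xi_US = DxR + np.outer(F_S(x_plus) - DxR @ F_U(x_minus), Dxg) / (Dxg @ F_U(x_minus))

Omega = np.array([[c**2, -c * s], [-c * s, s**2]])   # block of the closed form derived next
assert np.allclose(Xi_US, np.block([[Omega, np.zeros((2, 2))], [np.zeros((2, 2)), Omega]]))
```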
The Jacobian of the guard is simply the velocity constraint Jacobian padded with zeros for each velocity coordinate: \[\mathrm{D}_{x}g_{(\mathrm{U,S})}(t,x)=\begin{bmatrix}J_{\mathrm{S}}&0_{1\times 2 }\end{bmatrix}=\begin{bmatrix}\sin(\theta)&\cos(\theta)&0&0\end{bmatrix} \tag{49}\] The Jacobian of the reset map is: \[\mathrm{D}_{x}R_{(\mathrm{U,S})}(t,x)= \tag{50}\] \[\begin{bmatrix}1&0&0&0\\ 0&1&0&0\\ 0&0&\cos^{2}\left(\theta\right)&-\cos\left(\theta\right)\,\sin\left(\theta \right)\\ 0&0&-\cos\left(\theta\right)\,\sin\left(\theta\right)&\sin^{2}\left(\theta \right)\end{bmatrix}\] The saltation matrix is then computed by substituting in each component, (43)-(50), into the definition, (2), to get: \[\boxed{\Xi_{(\mathrm{U,S})}=\begin{bmatrix}\Omega_{(\mathrm{U,S})}&0_{2\times 2 }\\ 0_{2\times 2}&\Omega_{(\mathrm{U,S})}\end{bmatrix}} \tag{51}\] where \(\Omega_{(\mathrm{U,S})}\) is a block element consisting of: \[\Omega_{(\mathrm{U,S})}=\begin{bmatrix}\cos^{2}\left(\theta\right)&-\cos \left(\theta\right)\,\sin\left(\theta\right)\\ -\cos\left(\theta\right)\,\sin\left(\theta\right)&\sin^{2}\left(\theta\right) \end{bmatrix} \tag{52}\] For the sticking saltation matrix, similar calculations are made as in the sliding case: \[\mathrm{D}_{x}g(\mathrm{U,C})(t,x)=\begin{bmatrix}\sin(\theta)&\cos(\theta)& 0&0\end{bmatrix} \tag{53}\] Note that the guard condition is the same, which results in having the same Jacobian of the guard as the sliding case. The Jacobian of the reset map is: \[\mathrm{D}_{x}R_{(\mathrm{U,C})}(t,x)=\begin{bmatrix}1&0&0&0\\ 0&1&0&0\\ 0&0&0&0\\ 0&0&0&0\end{bmatrix} \tag{54}\] The resulting saltation matrix becomes: \[\boxed{\Xi_{(\mathrm{U,C})}=\begin{bmatrix}\Omega_{(\mathrm{U,C})}&0_{2\times 2 }\\ 0_{2\times 2}&0_{2\times 2}\end{bmatrix}} \tag{55}\] where \(\Omega_{(\mathrm{U,C})}\) is a block element consisting of: \[\Omega_{(\mathrm{U,C})}=\frac{1}{\dot{q}_{2}\,\cos\left(\theta \right)+\dot{q}_{1}\,\sin\left(\theta\right)}\begin{bmatrix}\dot{q}_{2}\,\cos \left(\theta\right)&-\dot{q}_{1}\,\cos\left(\theta\right)\\ -\dot{q}_{2}\,\sin\left(\theta\right)&\dot{q}_{1}\,\sin\left(\theta\right) \end{bmatrix} \tag{56}\] ### _Saltation matrix analysis_ Interestingly, the saltation matrix for the sliding case \(\Xi_{(\mathrm{U,S})}\) is a block diagonal matrix with a repeating block element, shown in (51)-(52). This implies that the variations in position are mapped equivalently to variations in velocity. The eigenvalues and corresponding eigenvectors of this block are: \[\lambda_{0} =0, \lambda_{1} =1\] \[v_{0} =\begin{bmatrix}\sin(\theta)\\ \cos(\theta)\end{bmatrix}, v_{1} =\begin{bmatrix}-\cos\theta\\ \sin\theta\end{bmatrix} \tag{57}\] The first eigenvalue is zero, so any variation in the direction of its eigenvector is eliminated. Note that this eigenvector is exactly the velocity constraint Jacobian, \(J_{\mathrm{S}}=[\sin(\theta),\cos(\theta)]\). Thus, variations off the constraint for both position and velocity are zeroed out, i.e. there are no variations normal to the surface once impact is made, as shown in Fig. 7. Note that while the reset map zeros out velocity in this direction (and so this effect arises from the \(\mathrm{D}_{x}R\) term), the reset map has no effect on positions. For the position block, the effect in the constraint direction arises from the \(\mathrm{D}_{x}g\) term in the numerator of the second term in (2), as in (49). The second eigenvalue is identity, so variations in the direction of its eigenvector do not change. 
This eigenvector is tangent to the constraint direction, \([-\cos(\theta),\sin(\theta)]\). In fact, the saltation matrix is always just a rank one update to \(\mathrm{D}_{x}R\) in the direction of \(\mathrm{D}_{x}g\) and all other directions are unaffected. Although this is a simple example, this block matrix structure exists for all rigid body systems with unilateral constraints, as explored in the next section. For plastic impact into sticking, \(\mathrm{U,C})\), variations in configuration map differently than velocity variations. This is because the tangential constraint is only applied to the velocity and not the position (i.e. it is non-holonomic), whereas in the normal direction, both position and velocity are constrained. The sticking saltation matrix \(\Xi(\mathrm{U,C})\) reflects this change, where there is no longer a repeated element in the block diagonal. Instead, the only nonzero component is how variations in position map onto the constraint surface (55)-(56). The velocity components are all zero because velocity is fully constrained to zero. Again, we analyze the non-zero block by computing the eigenvalues and corresponding eigenvectors: \[\lambda_{0} =0, \lambda_{1} =1\] \[v_{0} =\begin{bmatrix}\dot{q}_{1}\\ \dot{q}_{2}\end{bmatrix}, v_{1} =\begin{bmatrix}-\cos\left(\theta\right)\\ \sin\left(\theta\right)\end{bmatrix} \tag{58}\] Similar to the sliding case, variations tangential to the constraint are preserved. However, the zero eigenvector is different. Configuration variations that are in the same direction as the impact velocity disappear. Fig. 7 illustrates this idea, where position variations in the direction of the pre-impact velocity are eliminated. This is intuitive because the ball impacting earlier or later has no effect if the variation is in line with the impact velocity, it will hit the same contact point and stick. ## V Saltation matrices for generalized rigid body systems with unilateral constraints For rigid body systems with contacts, the hybrid modes are the enumeration of different contact conditions. This section defines the dynamics of these systems and calculates the saltation matrix of all the common mode transitions for a single constraint. This section generalizes much of the intuition developed in Sec. IV. ### _Dynamics derivation_ The following examples consider four modes, illustrated in Fig. 8: the unconstrained mode approaching the constraint surface to be \(\mathrm{U}\), the unconstrained mode leaving the surface \(\mathrm{V}\), a constrained mode \(\mathrm{C}\), and a sliding with friction mode \(\mathrm{S}\). The reason both \(\mathrm{U}\) and \(\mathrm{V}\) are included is to ensure that elastic impact is not defined with a self-reset and to avoid degenerate impacts just after liftoff, when the velocity is not approaching the constraint but the guard condition is satisfied \(g_{\mathrm{n}}\leq 0\), especially when using numerical integration. The states of the system are the configuration coordinates \(q\) and their velocities \(\dot{q}\), such that \(x:=[q,\dot{q}]^{T}\). The dimension of the configuration \(q\) is defined to be \(m\), while the dimension of the state space \(x\) is \(n=2m\). Contacts between rigid bodies are regulated through a unilateral constraint in the normal (n) direction, \(g_{\mathrm{n}}(t,x)\geq 0\). Note that \(g_{\mathrm{n}}(t,x)\) only depends on the configuration \(q\) and not the velocity. When rigid bodies are in contact they must satisfy \(g_{\mathrm{n}}(t,x)=0\). 
The Jacobian of \(g_{\mathrm{n}}\) with respect to the configuration coordinates is defined to be \(J_{\mathrm{n}}:=\mathrm{D}_{\mathrm{q}}g_{\mathrm{n}}(t,x)\). In the sliding mode, the constraint Jacobian consists of just this normal direction constraint, \(J_{\mathrm{S}}=J_{\mathrm{n}}\). However, if the no slip condition is added, the constrained mode \(\mathrm{C}\) has a constraint Jacobian of: \[J_{\mathrm{C}}=\begin{bmatrix}J_{\mathrm{n}}\\ J_{\mathrm{t}}\end{bmatrix} \tag{59}\] where \(J_{\mathrm{t}}\) is the tangential velocity constraint Jacobian. For unconstrained modes, \(J\) is empty. In any mode, the following acceleration constraint is applied based on \(J\) for that mode to maintain the active constraints until the next guard: \[J(t,x)\ddot{q}+\dot{J}(t,x)\dot{q}=0 \tag{60}\] The equations of motion for each mode are defined by the constrained manipulator dynamics, e.g. [124], where this constraint is combined with Lagrangian dynamics: \[\begin{bmatrix}M&J^{T}\\ J&0\end{bmatrix}\begin{bmatrix}\ddot{q}\\ \dot{f}\end{bmatrix}=\begin{bmatrix}\Upsilon-N\\ 0\end{bmatrix}-\begin{bmatrix}C\\ \dot{J}\end{bmatrix}\dot{q} \tag{61}\] [125, Eqn. 33] where \(f\) is the constraint force vector (Lagrange multiplier), \(M(q)\) is the mass matrix, \(C(q,\dot{q})\) is the Coriolis matrix, \(\Upsilon(u)\) the input vector, and \(N(q,\dot{q})\) are the other nonlinear forces such as gravity and sliding friction. To help with the following equations, the \(\dagger\) notation from [4, Eqn. 8] is adopted here, where in each mode: \[\begin{bmatrix}M^{\dagger}&J^{\dagger T}\\ J^{\dagger}&\Lambda^{\dagger}\end{bmatrix}:=\begin{bmatrix}M&J^{T}\\ J&0\end{bmatrix}^{-1} \tag{62}\] This definition produces a number of identities, in particular: \[M^{\dagger}M=I_{m\times m}-J^{\dagger T}J \tag{63}\] [4, Eqn. 11], which will be helpful in simplifying the saltation matrix expressions. With this notation the state space dynamics can be expressed as: \[\dot{x}=\frac{d}{dt}\begin{bmatrix}q\\ \dot{q}\end{bmatrix}=\begin{bmatrix}\dot{q}\\ M^{\dagger}\left(\Upsilon-N-C\ddot{q}\right)-J^{\dagger T}\dot{J}\dot{q}\end{bmatrix} \tag{64}\] Fig. 7: Ball drop example with sliding friction (left) and sticking friction (right). In sliding, position variations in the direction of the constraint are eliminated. \(v_{0}\) is the eigenvector associated with the zero eigenvalue and \(\theta\) is the angle of the surface. In sticking, position variations in the direction of pre-impact velocity are eliminated. \(q_{1}\) is the horizontal configuration, and \(q_{2}\) is the vertical configuration. [4, Eqn. 75] where each \(\dagger\) component is different depending on the hybrid mode based on \(J\). For the unconstrained case, \(M^{\dagger}=M^{-1}\). Similarly, the constraint forces \(f(t,x)\) are calculated from the bottom row of (61): \[f(t,x)=J^{\dagger}\left(\Upsilon-N-C\hat{q}\right)-\Lambda^{\dagger}\dot{J}\hat {q} \tag{65}\] Coulomb friction is used in the sliding mode - frictional forces in the tangential direction \(f_{\rm t}\) (included in \(N\)) are applied to resist sliding motion proportional to the normal constraint force, \(f_{\rm n}\), and in the direction resisting the sliding velocity, \(v_{\rm t}=J_{\rm t}\hat{q}\): \[f_{\rm t}=\mu_{k}\|f_{\rm n}\|\frac{J_{\rm t}\dot{q}}{\|J_{\rm t}\dot{q}\|}=\mu _{k}\|f_{\rm n}\|\frac{v_{\rm t}}{\|v_{\rm t}\|} \tag{66}\] where \(\mu_{k}\) is the kinetic coefficient of friction. 
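The constrained dynamics (61) and the dagger blocks of (62) are straightforward to evaluate numerically for a given configuration. The sketch below uses a placeholder mass matrix, constraint Jacobian, and forces rather than a specific robot; the assertions check the identity (63) and the acceleration constraint (60).

```python
import numpy as np

def dagger_blocks(M, J):
    """Invert the KKT matrix of (61) and return the blocks named in (62)."""
    m, k = M.shape[0], J.shape[0]
    Kinv = np.linalg.inv(np.block([[M, J.T], [J, np.zeros((k, k))]]))
    return Kinv[:m, :m], Kinv[:m, m:], Kinv[m:, m:]   # M^dag, J^dagT, Lambda^dag

def constrained_accel(M, J, Jdot, C, N, Ups, qdot):
    """Solve (61) for the accelerations and constraint forces."""
    m, k = M.shape[0], J.shape[0]
    K = np.block([[M, J.T], [J, np.zeros((k, k))]])
    rhs = np.concatenate([Ups - N - C @ qdot, -Jdot @ qdot])
    sol = np.linalg.solve(K, rhs)
    return sol[:m], sol[m:]                           # (qddot, f)

# Placeholder two-DOF model with a single constraint.
M = np.array([[2.0, 0.3], [0.3, 1.5]])
J = np.array([[0.0, 1.0]])
M_dag, J_dagT, _ = dagger_blocks(M, J)
assert np.allclose(M_dag @ M, np.eye(2) - J_dagT @ J)          # identity (63)

qdd, f = constrained_accel(M, J, Jdot=np.zeros((1, 2)), C=np.zeros((2, 2)),
                           N=np.array([0.0, 9.81]), Ups=np.zeros(2), qdot=np.array([1.0, 0.0]))
assert np.allclose(J @ qdd, 0.0)                               # accelerations respect the constraint, (60)
```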
When a contact constraint is added, for example the normal surface constraint \(g_{\rm n}\), an impact law \(J_{\rm n}\hat{q}^{+}=-eV_{\rm n}\hat{q}^{-}\) is applied (where the coefficient of restitution \(e=1\) is perfectly elastic and \(e=0\) is perfectly plastic) along with the impulse momentum equation to get: \[\begin{bmatrix}\hat{q}^{+}\\ \hat{p}\end{bmatrix}=\begin{bmatrix}M&J_{\rm n}^{T}\\ J_{\rm n}&0\end{bmatrix}^{-1}\!\!\begin{bmatrix}M\\ -eJ_{\rm n}\end{bmatrix}\hat{q}^{-}=\begin{bmatrix}M_{\rm n}^{\dagger}&J_{\rm n }^{\dagger T}\\ J_{\rm n}^{\dagger}&\Lambda_{\rm n}^{\dagger}\end{bmatrix}\!\!\begin{bmatrix}M \\ -eJ_{\rm n}\end{bmatrix}\hat{q}^{-} \tag{67}\] [4, Eqn. 23], [126], where \(\hat{p}\) is the impulse magnitude vector. Since the positions do not change instantaneously, the state space reset map for elastic, frictionless impact from mode U to mode V is: \[x^{+}=\begin{bmatrix}q^{+}\\ \dot{q}^{+}\end{bmatrix}=R_{\rm(U,V)}(t,x^{-})=\begin{bmatrix}q^{-}\\ M_{\rm n}^{\dagger}M\dot{q}^{-}-eJ_{\rm n}^{\dagger T}J_{\rm n}\dot{q}^{-}\end{bmatrix} \tag{68}\] The plastic, frictionless impact reset map into mode S follows (68) but with \(e=0\) (and written with \(M_{\rm S}^{\dagger}\) for mode S, though \(M_{\rm S}^{\dagger}=M_{\rm n}^{\dagger}\) since \(J_{\rm S}=J_{\rm n}\)): \[x^{+}=\begin{bmatrix}q^{+}\\ \dot{q}^{+}\end{bmatrix}=R_{\rm(U,S)}(t,x^{-})=\begin{bmatrix}q^{-}\\ M_{\rm S}^{\dagger}M\dot{q}^{-}\end{bmatrix} \tag{69}\] The frictional, plastic impact reset map, \(R_{\rm(U,C)}\), follows (69) but with \(J_{\rm C}\) and \(M_{\rm C}^{\dagger}\) instead of \(J_{\rm S}\) and \(M_{\rm S}^{\dagger}\). Similarly, the liftoff reset maps into modes U or V are the same except that there is no constraint \(J\), and so the reset simplifies to an identity map. Note that the reset map does not depend on the prior mode, so for example \(R_{\rm(S,C)}=R_{\rm(U,C)}\). ### _Apex_ Apex is a "virtual" hybrid event - one that does not have a physical reset map or change in the dynamics - and is triggered when the velocity switches from going away from the constraint to towards the constraint \(\rm(V,U)\). As the reset map is identity, and the dynamics match before and after (since there is not a difference in control at this event) the saltation matrix is identity following (12): \[\boxed{\Xi_{\rm(V,U)}=I_{n\times n}} \tag{70}\] ### _Liftoff_ Litoff is a hybrid transition into mode \(\rm V\) from \(\rm S\) or \(\rm C\) that depends on the constraint force \(f(t,x)\), (65), which is a function of both time and state (and implicitly a function of control input). The guard for liftoff is determined by \(f_{\rm n}\), the constraint force in the \(J_{\rm n}\) direction - if the force becomes non-repulsive, then the contact is released: \[g_{\rm(C,V)}(t,x) =f_{\rm n}(t,x) \tag{71}\] \[g_{\rm(S,V)}(t,x) =f_{\rm n}(t,x) \tag{72}\] Because the hybrid event occurs when the constraint force goes to zero, the dynamics at the boundary are equal. This is true even in the case of sticking friction in mode \(\rm C\), as the friction cone ensures that either the system transitions to sliding mode \(\rm S\) (as discussed in Sec. V-F) or the frictional force goes to zero at the same time. The state does not jump during liftoff, which meaning the reset map for liftoff is an identity transformation. 
Since both conditions of (12) are met for liftoff, the saltation matrices are identity: \[\boxed{\Xi_{\rm(C,V)}=I_{n\times n}} \tag{73}\] \[\boxed{\Xi_{\rm(S,V)}=I_{n\times n}} \tag{74}\] Due to the smooth nature of liftoff, these events can be safely ignored when considering variations from liftoff. ### _Plastic impact_ Plastic impact occurs when the unconstrained mode \(\rm U\) makes contact and transitions to either the sliding mode \(\rm S\) or the constrained mode \(\rm C\). First, consider plastic impact into sliding \(\rm(U,S)\). For simplicity, frictionless sliding \(\mu_{k}=0\) is assumed to expose the structure in the saltation matrix, but the same calculations can be made with non-zero sliding friction \(\mu_{k}>0\). The dynamics for each mode is from (64): \[F_{\rm U}(t,x^{-})=\begin{bmatrix}\dot{q}^{-}\\ M^{-1}(\Upsilon-C^{-}\dot{q}^{-}-N)\end{bmatrix} \tag{75}\] Fig. 8: Depicting the different rigid body hybrid modes considered where blue arrows depict velocities and red arrows depict forces. \(\rm U\) is the unconstrained mode with approaching velocity to the constraint, \(\rm V\) is the unconstrained mode with separating velocity, \(\rm C\) is the constrained mode, and \(\rm S\) is the sliding mode on the constraint. A single planar point is shown here, but the system may have additional degrees of freedom. \[F_{\rm S}(t,x^{+})=\begin{bmatrix}\dot{q}^{+}\\ M_{\rm S}^{\dagger}(\Upsilon-C^{+}\dot{q}^{+}-N)-J_{\rm S}^{\dagger T}\dot{J}_{ \rm S}^{+}\dot{q}^{+}\end{bmatrix} \tag{76}\] Note that \(-\) or \(+\) on C and \(\dot{J}\) indicates that these functions use the pre- or post-impact velocity, \(\dot{q}^{-}\) or \(\dot{q}^{+}\), respectively. The Jacobian of the reset map for plastic impact, (69), is: \[\mathrm{D}_{x}R_{\rm(U,S)}(t,x^{-})=\begin{bmatrix}I_{m\times m}&0_{m\times m }\\ \mathrm{D}_{q}(M_{\rm S}^{\dagger}M\dot{q}^{-})&M_{\rm S}^{\dagger}M\end{bmatrix} \tag{77}\] The Jacobian of the guard \(\mathrm{D}_{x}g_{\rm(U,S)}(t,x^{-})\) is: \[\mathrm{D}_{x}g_{\rm(U,S)}(t,x^{-})=\begin{bmatrix}J_{\rm S}&0_{1\times m} \end{bmatrix} \tag{78}\] while the denominator of \(\Xi_{\rm(U,S)}\) is the impact velocity: \[\mathrm{D}_{x}g_{\rm(U,S)}(t,x^{-})F_{\rm U}(t,x^{-})=\begin{bmatrix}J_{\rm S} &0_{1\times m}\end{bmatrix}F_{\rm U}(t,x^{-})=J_{\rm S}\dot{q}^{-} \tag{79}\] In this example, the guard and reset map are independent of time, \(\mathrm{D}_{t}R=0_{n\times 1},\mathrm{D}_{t}g=0\). However, in other cases such as a paddle juggler [127], the impact surface can move as a function determined by time, in which case the guard and reset would depend on the prescribed motion. To further simplify the component of the saltation matrix (2) that contains the difference between dynamics, \(F_{\rm S}-\mathrm{D}_{x}RF_{\rm U}\), the following steps are applied. First, substitute in \(\dot{q}^{+}=M_{\rm S}^{\dagger}M\dot{q}^{-}=\dot{q}^{-}-J_{\rm S}^{\dagger T}J _{\rm S}\dot{q}^{-}\) using the reset map (69) and the identity (63). 
Then, plugging into the difference between dynamics: \[F_{\rm S}(t,x^{+})-\mathrm{D}_{x}R_{\rm(U,S)}(t,x^{-})F_{\rm U}(t,x^{-})= \tag{80}\] \[\begin{bmatrix}-J_{\rm S}^{\dagger T}J_{\rm S}\dot{q}^{-}\\ M_{\rm S}^{\dagger}(C^{-}\dot{q}^{-}-C^{+}\dot{q}^{+})-J_{\rm S}^{\dagger T}\dot{ J}_{\rm S}^{+}\dot{q}^{+}-\mathrm{D}_{q}(M_{\rm S}^{\dagger}M\dot{q}^{-}) \dot{q}^{-}\end{bmatrix} \tag{81}\] The saltation matrix for plastic impact is obtained by inserting all terms into (2) and simplifying (using (63) again): \[\boxed{\Xi_{\rm(U,S)}=\begin{bmatrix}M_{\rm S}^{\dagger}M&0_{m\times m}\\ Z_{\rm S}+\mathrm{D}_{q}(M_{\rm S}^{\dagger}M\dot{q}^{-})&M_{\rm S}^{\dagger}M \end{bmatrix}} \tag{82}\] where: \[Z_{\rm S}=\begin{pmatrix}M_{\rm S}^{\dagger}(C^{-}\dot{q}^{-}-C^{+} \dot{q}^{+})-J_{\rm S}^{\dagger T}\dot{J}_{\rm S}^{+}\dot{q}^{+}\\ -\mathrm{D}_{q}(M_{\rm S}^{\dagger}M\dot{q}^{-})\dot{q}^{-})\ J_{\rm S}/(J_{ \rm S}\dot{q}^{-})\end{pmatrix} \tag{83}\] Note that the difference between the Jacobian of the reset map \(\mathrm{D}_{x}R\), (77), is in the first column of the matrix where the identity matrix is now \(M_{\rm S}^{\dagger}M\) and the element on the lower left differs by the term in (83). When impacting into the frictional constrained mode \(\mathrm{C}\), all steps remain the same except with \(J_{\rm C}\) instead of \(J_{\rm S}\) (and similarly \(M_{\rm C}^{\dagger}\) and \(J_{\rm C}^{\dagger T}\)). However, the upper left block of the saltation matrix no longer simplifies as nicely with the Jacobian of the guard \(\mathrm{D}_{x}g\) terms. This is because \(J_{\rm S}=\mathrm{D}_{x}g=J_{\rm n}\) but \(J_{\rm C}\neq\mathrm{D}_{x}g\). Rather, \(\mathrm{D}_{x}g=J_{\rm n}\) is a row of \(J_{\rm C}\), i.e. the non-penetrating constraint. The resulting saltation matrix is: \[\boxed{\Xi_{\rm(U,C)}=\begin{bmatrix}I_{m\times m}-\frac{J_{\rm C}^{\dagger T}J_{ \rm C}\dot{q}^{-}J_{\rm n}}{J_{\rm n}q^{-}}&0_{m\times m}\\ Z_{\rm C}+\mathrm{D}_{q}(M_{\rm C}^{\dagger}M\dot{q}^{-})&M_{\rm C}^{\dagger}M \end{bmatrix}} \tag{84}\] where: \[Z_{\rm C}=\begin{pmatrix}M_{\rm C}^{\dagger}(C^{-}\dot{q}^{-}-C^{+} \dot{q}^{+})-J_{\rm C}^{\dagger T}\dot{J}_{\rm C}^{+}\dot{q}^{+}\\ -\mathrm{D}_{q}(M_{\rm C}^{\dagger}M\dot{q}^{-})\dot{q}^{-})\ J_{\rm C} /(J_{\rm C}\dot{q}^{-})\end{pmatrix} \tag{85}\] Again, the difference between the saltation matrix and the Jacobian of the reset is in the left column associated with the configuration variations. However, the upper left block no longer maps configuration variations exactly the same as velocity variations in the lower right, because the tangential constraint is only a velocity constraint - the contact point can be anywhere on the contact surface, whereas the velocity of the contact point must be the same everywhere on the surface. Other than the upper left block, the structure of \(\rm(U,S)\) and \(\rm(U,C)\) saltation matrices look remarkably similar, with the interchange of \(J_{\rm S}\) and \(J_{\rm C}\) being the only other difference. In the example in Sec. IV, the lower left block of these saltation matrices was zero. This block is comprised of Coriolis-like terms, so for simple systems like the ball drop, Coriolis terms do not exist in the dynamics and the lower left block of the saltation matrix collapses to zero. However, for systems of appreciable complexity, this does not hold. 
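The velocity-to-velocity block \(M_{\rm S}^{\dagger}M\) appearing on the diagonal of (82) can be inspected numerically. The sketch below uses a placeholder mass matrix and normal-constraint Jacobian; it checks that this block is an (oblique) projection whose zero direction is exactly the constrained component, which is what collapses variations normal to the contact.

```python
import numpy as np

M = np.array([[3.0, 0.4, 0.0],
              [0.4, 2.0, 0.1],
              [0.0, 0.1, 1.0]])                   # placeholder mass matrix
J_n = np.array([[0.0, 0.6, 0.8]])                 # placeholder normal constraint Jacobian

Minv = np.linalg.inv(M)
J_dagT = Minv @ J_n.T @ np.linalg.inv(J_n @ Minv @ J_n.T)   # J^{dagger T} from (62)
B = np.eye(3) - J_dagT @ J_n                      # equals M_S^dagger M by identity (63)

assert np.allclose(B @ B, B)                      # a projection: eigenvalues are only 0 and 1
assert np.allclose(J_n @ B, 0.0)                  # post-impact velocities satisfy J_n qdot^+ = 0
print(np.sort(np.linalg.eigvals(B).real))         # one zero (constrained direction), the rest ones
```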
### _Elastic impact_ When the coefficient of restitution is non-zero, states in the approaching unconstrained mode \(\mathrm{U}\) transition directly to the separating unconstrained mode \(\mathrm{V}\) through elastic impact. The dynamics for each mode, (64), are: \[F_{\rm U}(t,x^{-})=\begin{bmatrix}\dot{q}\\ M^{-1}(\Upsilon-C^{-}\dot{q}^{-}-N)\end{bmatrix} \tag{86}\] \[F_{\rm V}(t,x^{+})=\begin{bmatrix}\dot{q}^{+}\\ M^{-1}(\Upsilon-C^{+}\dot{q}^{+}-N)\end{bmatrix} \tag{87}\] Again, note that \(-\) or \(+\) on \(\mathrm{C}\) indicates that these functions use the pre- or post-impact velocity, \(\dot{q}^{-}\) or \(\dot{q}^{+}\), respectively. The Jacobian of the reset map for elastic impact, (68), is: \[\mathrm{D}_{x}R_{\rm(U,V)}^{-}=\begin{bmatrix}I_{m\times m}&0_{m\times m}\\ \mathrm{D}_{q}(\!(M_{\rm n}^{\dagger}M-eJ_{\rm n}^{\dagger T}J_{\rm n})\dot{q}^ {-})&M_{\rm n}^{\dagger}M-eJ_{\rm n}^{\dagger T}J_{\rm n}\end{bmatrix} \tag{88}\] The Jacobian of the guard is again \(\mathrm{D}_{x}g=[J_{\rm n},0_{1\times n}]\). Plugging each component back into the full saltation matrix equation results in: \[\boxed{\Xi_{\rm(U,V)}\!=\!\begin{bmatrix}M^{\dagger}M-eJ^{\dagger T}J&0_{m \times m}\\ \!Z_{\rm V}\!+\!\mathrm{D}_{q}(\!(M^{\dagger}M-eJ^{\dagger T}J)\dot{q}^{-})&M^{ \dagger}M\!-eJ^{\dagger T}J\end{bmatrix}} \tag{89}\] where \(J\) and \(M^{\dagger}\) use the normal constraint, \(J_{\rm n}\) and \(M_{\rm n}^{\dagger}\), and: \[Z_{\rm V}=\begin{pmatrix}\left[M^{-1}(C^{-}-C^{+}(M_{\rm n}^{ \dagger T}M-eJ_{\rm n}^{\dagger T}J_{\rm n}))\right.\\ -\mathrm{D}_{q}((M_{\rm n}^{\dagger}M-eJ_{\rm n}^{\dagger T}J_{\rm n})\dot{q}^ {-})\right]\dot{q}^{-}\\ +(1+e)J_{\rm n}^{\dagger T}J_{\rm n}M^{-1}(\Upsilon-C^{-}\dot{q}^{-}-N) \right)J_{\rm n}/(J_{\rm n}\dot{q}^{-})\end{pmatrix} \tag{90}\] Note that the following substitution can be made \(M^{\dagger}M-eJ^{\dagger T}J=I_{m\times m}-(1+e)J^{\dagger T}J\) by (63). ### _Stick-slip friction_ The saltation matrix for stick-slip friction has been calculated in [10, Sec. 7.3]. This section computes this saltation matrix for a generalized system and analyzes its components. When the friction cone is broken, the mode is switched from the constrained mode \(\mathrm{C}\) to the sliding mode \(\mathrm{S}\). The guard to check for slipping is the friction cone: \[g_{\mathrm{(C,S)}}(t,x)=\mu_{s}\|f_{\mathrm{n}}(t,x)\|-\|f_{\mathrm{t}}(t,x)\|=0 \tag{91}\] where \(\mu_{s}\) is the coefficient of static friction. The reset map for these hybrid transitions is an identity transformation \(x^{+}=R_{\mathrm{(C,S)}}(x^{-})=x^{-}\), and therefore \(\mathrm{D}_{x}R_{\mathrm{(C,S)}}=I_{n\times n}\). If the guard \(g_{\mathrm{(C,S)}}\) is met, it can be assumed that slipping will also occur in the direction of the maximum tangential force. Therefore, at the slipping boundary, if both the coefficient of static friction and kinetic friction match, \(\mu_{s}=\mu_{k}\), then \(\Delta F=0\) (as the frictional force reaches and then maintains the value in (66)) and the saltation matrix is identity by (12). Indeed, any friction model (not just Coulomb) where the frictional force matches at the boundary results in an identity saltation matrix. 
This includes models where \(\mu_{k}\) is a function of velocity, such as Stribeck friction, so long as at \(\|v_{\mathrm{t}}\|=\|J_{\mathrm{t}}\dot{q}\|=0\), \(\mu_{k}(0)=\mu_{s}\), to get: \[\boxed{\begin{array}{l}\mu_{s}=\mu_{k}\implies F_{\mathrm{S}}=F_{\mathrm{C} }\\ \implies\Xi_{\mathrm{(C,S)}}=I_{n\times n}\end{array}} \tag{92}\] If \(\mu_{s}\neq\mu_{k}\), the saltation matrix is not necessarily identity, and the general computations of the saltation matrix can be made to obtain this form: \[\boxed{\begin{array}{l}\mu_{s}\neq\mu_{k}\implies F_{\mathrm{S}}^{+}\neq F _{\mathrm{C}}^{-}\\ \implies\Xi_{\mathrm{(C,S)}}=I_{n\times n}+\frac{(F_{\mathrm{S}}^{+}-F_{ \mathrm{C}}^{-})\mathrm{D}_{x}g_{\mathrm{(C,S)}}}{\mathrm{D}_{t}g_{\mathrm{(C,S)}}+\mathrm{D}_{x}g_{\mathrm{(C,S)}}F_{\mathrm{C}}^{-}}\end{array}}\end{array}} \tag{93}\] For this saltation matrix, position variations do not change because the reset map is identity and the top row of \(F_{\mathrm{S}}\) and \(F_{\mathrm{C}}\) are equal (i.e. the velocity \(\dot{q}\) does not change between modes). However, this saltation matrix will be very prone to modeling errors as it depends on knowing exactly how the sliding and sticking coefficients differ. From a modeling perspective, it may be advantageous to assume that at the boundaries the sliding and sticking coefficients match. ### _Slip-stick friction_ When the tangential velocity in mode \(\mathrm{S}\) goes to zero, the sliding stops and "sticks" into the constrained mode \(\mathrm{C}\). Therefore, the guard at slip-stick friction is just the magnitude of the tangential velocity: \[g_{\mathrm{(S,C)}}(t,x) =\|J_{\mathrm{t}}\dot{q}\|=\|v_{\mathrm{t}}\| \tag{94}\] \[\mathrm{D}_{x}g_{\mathrm{(S,C)}}(t,x) =\left[\dot{J}_{\mathrm{t}}^{-}\quad J_{\mathrm{t}}\right] \tag{95}\] The guard also has the condition \(\|f_{\mathrm{t}}\|<\mu_{s}\|f_{\mathrm{n}}\|\). However, note that the way tangential friction forces are calculated is different in the sliding mode \(\mathrm{S}\) than in the sticking mode \(\mathrm{C}\). In sliding, the tangential force is proportional to the normal force, \(\|f_{\mathrm{t}}\|=\mu_{k}\|f_{\mathrm{n}}\|\), (66). In the constrained sticking mode, the force vector is calculated from Lagrange multipliers as in (65). These generally are not equal and so there is a difference in the tangential force at the transition, and thus a difference in dynamics. The reset is an identity transformation, \(x^{+}=R_{\mathrm{(S,C)}}(x^{-})=x^{-}\), and therefore \(\mathrm{D}_{x}R_{\mathrm{(S,C)}}=I_{n\times n}\), so the saltation matrix is primarily composed of the difference between the dynamics of both modes and the tangential velocity term from the guard. Since the guard is not directly a function of time or control input in this case, \(\mathrm{D}_{t}g_{\mathrm{(S,C)}}=0\) and can be ignored, and the saltation matrix is: \[\boxed{\Xi_{\mathrm{(S,C)}}=I_{n\times n}+\frac{(F_{\mathrm{C}}^{+}-F_{ \mathrm{S}}^{-})\left[\dot{J}_{\mathrm{t}}^{-}\quad J_{\mathrm{t}}\right]}{ \dot{J}_{\mathrm{t}}^{-}\dot{q}^{-}+J_{\mathrm{t}}\dot{q}^{-}}} \tag{96}\] Note that the denominator is the tangential acceleration constraint (60) in mode \(\mathrm{C}\). If this condition is met at the exact moment that the velocity guard is satisfied while in the sliding mode \(\mathrm{S}\), the saltation matrix is not well defined; however, this would violate the transversality assumption (11). 
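Because the reset map is identity for both stick-slip and slip-stick transitions, these saltation matrices are rank-one updates of the identity, \(\Xi=I+ab^{T}\), so their spectra can be checked without forming anything model specific; this structure is revisited in the analysis below. The vectors in the sketch are arbitrary placeholders standing in for the flow difference and the guard gradient.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6
F_minus = rng.standard_normal(n)                 # pre-transition flow (placeholder)
F_plus = F_minus + rng.standard_normal(n)        # post-transition flow differs, e.g. a friction-force jump
Dg = rng.standard_normal(n)                      # guard gradient row (placeholder)

a = (F_plus - F_minus) / (Dg @ F_minus)          # time-invariant guard assumed, so D_t g = 0
Xi = np.eye(n) + np.outer(a, Dg)                 # identity reset: a rank-one update of I

assert np.allclose(Xi @ a, (1.0 + Dg @ a) * a)   # a is an eigenvector with eigenvalue 1 + Dg.a
assert np.isclose(np.linalg.det(Xi), 1.0 + Dg @ a)  # every other eigenvalue equals 1
```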
For this saltation matrix, as with stick-slip, position variations do not change because the reset map is identity and the top row of \(F_{\mathrm{C}}\) and \(F_{\mathrm{S}}\) are equal (i.e. the velocity \(\dot{q}\) does not change between modes). ### _Analysis of Saltation Matrices for Rigid Bodies_ This section presents saltation matrix derivations for a number of hybrid transitions that occur in rigid body systems with contact, as summarized in Table II. These derivations reveal patterns among many of these saltation matrices. For instance, the upper right block of the saltation matrix is zero for every case presented here. This is due to the second order nature of mechanical systems as a whole (i.e. acceleration is the derivative of velocity, which is the derivative of position). This makes it convenient to perform the eigen-analysis as in Sec. IV. The eigenvalues and eigenvectors of a block triangular matrix are the eigenvalues and eigenvectors of its diagonal block components, and the lower left block does not affect them. In applications where only the eigenvalues of the saltation matrix of interest, knowing the structure of the saltation matrix means the full saltation matrix need not be computed. Four of the saltation matrices analyzed are identity: apex (70), the two liftoff cases (73,74), and stick-slip under constant friction coefficient (93). This occurs when the reset map is identity and the dynamics in each mode are equivalent, as in (12). Outside of these identity cases, the stick-slip with unequal friction coefficients (95) and slip-stick (96) transitions also have an identity reset map because there is no instantaneous change in positions or velocities. An identity reset map allows for further insight into the eigen-properties of these matrices. Both of these saltation matrices can be written as \(\Xi=I_{n\times n}+ab^{T}\) where \(a\) and \(b\) are \(n\times 1\) vectors and \(ab^{T}\) is their outer product. The eigenvalues of a matrix with this structure are all \(1\) except for one eigenvalue of \(1+a^{T}b\) with corresponding eigenvector \(a\). This can be easily shown from the equality: \[(I+ab^{T})a=a+a(b^{T}a)=(1+a^{T}b)a \tag{99}\] This makes it possible to compute the eignvalues of these saltation matrices without performing the full matrix computation. Two non-identity saltation matrices had equivalent diagonal blocks, sliding plastic impact (82) and elastic impact (89). This occurs because the guard surface enforces an equivalent constraint on both position and velocities to be along the guard. When a non-holonomic constraint is added in mode C, this equivalency breaks. Equal diagonal blocks means that the eigenvalues of these saltation matrices are the eigenvalues of a diagonal block repeated twice. Table II summarizes the properties of identity reset map, matching hybrid dynamics, equal diagonal blocks, as well as equation number for each saltation matrix. ## VI Conclusion The saltation matrix is an essential tool when dealing with hybrid systems with state dependent switches. This paper presents a derivation of the saltation matrix with two different methods and demonstrates how the saltation matrix can be used in linear and quadratic forms for hybrid systems. A survey of where saltation matrices are used in other fields is also presented. In the past, it has been heavily utilized for analyzing the stability of periodic systems, but more recently it has been critical for analyzing and designing non-periodic behaviors. 
This analysis is especially useful for robotics where many important robotic motions are not periodic, but are hybrid due to the discontinuous nature of impact in rigid body systems with unilateral constraints. To further explore the nature of contact and how variations are mapped through them, a simple contact system is considered to compute the saltation matrix for plastic impact and analyze the different components of the resulting saltation matrices. These saltation matrices capture how position variations are mapped through contact, whereas the Jacobian of the reset map does not provide any information on position. In addition to this simple example, saltation matrices are computed for each of the hybrid transitions for a generalized rigid body model and we give insights on their structure. These computations are especially useful because the rigid body model covers a wide variety of systems and will help when getting started using saltation matrices for these systems. Saltation matrices exhibit common structures that can be exploited. In particular, by only using the Jacobian of the reset map instead of the saltation matrix, the entirety of the position variational information is lost. For other hybrid transitions such as stick-slip friction, the Jacobian of the reset map provides no additional information because it is an identity transformation and all the information is contained in the saltation matrix. By using saltation matrices for hybrid systems, efficient analysis, planning, control, and state estimation algorithms can be produced. This is especially important as hybrid systems naturally have combinatoric time complexities and through the use of these tools we can simplify these problems. The hope of this paper is to introduce the topic of saltation matrices to a broader community so that we can, as a whole, develop better methods for dealing with the complexities of hybrid systems and their applications. ## Acknowledgement We would like to thank Professor Sam Burden for his contributions in the conceptualization of this work and for his feedback on early drafts. We would also like to thank Dr. George Council for his comments and suggestions for additional material to cover. ## Appendix A Appendices A and B present the chain rule derivation of the saltation matrix and the early impact case for the geometric derivation. Appendices C and D prove the update laws through hybrid events for both covariance propagation and the Riccati equations. ### _Saltation matrix chain rule derivation_ Define the solutions of the flow in hybrid domains I and J, which integrate the continuous dynamics from an initial state \(x\) at time \(t_{0}\) to a state \(x_{f}\) at time \(t_{f}\), as: \[\phi_{\text{I}}:(t_{0}\in\mathbb{R},t_{f}\in\mathbb{R},x\in D_{ \text{I}})\mapsto x_{f}\in D_{\text{I}} \tag{100}\] \[\phi_{\text{J}}:(t_{0}\in\mathbb{R},t_{f}\in\mathbb{R},x\in D_{ \text{J}})\mapsto x_{f}\in D_{\text{J}} \tag{101}\] such that the vector fields, \[F_{\text{I}}(t_{0},x) =-\mathrm{D}_{t_{0}}\phi_{\text{I}}(t_{0},t_{f},x) \tag{102}\] \[F_{\text{I}}(t_{f},x) =\mathrm{D}_{t_{f}}\phi_{\text{I}}(t_{0},t_{f},x) \tag{103}\] for each mode. 
Define the solution across a hybrid transition from mode I to J to be: \[\phi(t_{0},t_{f},x):= \phi_{\text{J}}(\tau(x),t_{f},R_{\text{(I,J)}}(\tau(x),\phi_{ \text{I}}(t_{0},\tau(x),x))) \tag{104}\] where \(\tau(x)\) is the time to impact map, such that: \[g_{\text{(I,J)}}(\tau(x),\phi_{\text{I}}(t_{0},\tau(x),x))=0 \tag{105}\] It helps to look at the in between steps of the function composition in (104). Define: \[x^{-}(x) :=\phi_{\text{I}}(t_{0},\tau(x),x) \tag{106}\] \[x^{+}(x) :=R_{\text{(I,J)}}(\tau(x),x^{-}(x))\] (107) \[x_{f}(x) :=\phi_{\text{J}}(\tau(x),t_{f},x^{+}(x)) \tag{108}\] \begin{table} \begin{tabular}{c c c c c} Transition & \(R=I\) & \(F^{+}=F^{-}\) & Equal Diag. Blocks & Eq. \# \\ \hline \(\mathrm{(V,U)}\) & ✓ & ✓ & ✓ & (70) \\ \(\mathrm{(C,V)}\) & ✓ & ✓ & ✓ & (73) \\ \(\mathrm{(S,V)}\) & ✓ & ✓ & ✓ & (74) \\ \(\mathrm{(U,S)}\) & ✗ & ✗ & ✓ & (82) \\ \(\mathrm{(U,C)}\) & ✗ & ✗ & ✗ & (84) \\ \(\mathrm{(U,V)}\) & ✗ & ✗ & ✓ & (89) \\ \(\mathrm{(C,S)}\) & ✓ & if \(\mu_{s}=\mu_{k}\) & if \(\mu_{s}=\mu_{k}\) & (93,95) \\ \(\mathrm{(S,C)}\) & ✓ & ✗ & ✗ & (98) \\ \end{tabular} \end{table} TABLE II: Properties of Saltation Matrix for Different Rigid Body Mode Transitions where \(x_{f}=\phi(t_{0},t_{f},x)\) is the final state in the new mode. To find the derivative of \(\phi\) with respect to \(x\) in (104), the chain rule is used on each of these steps: \[\mathrm{D}_{x}x^{-}(x) =\mathrm{D}_{\tau(x)}\phi_{\mathrm{I}}\mathrm{D}_{x}\tau+\mathrm{D }_{x}\phi_{\mathrm{I}} \tag{109}\] \[\mathrm{D}_{x}x^{+}(x) =\mathrm{D}_{\tau(x)}R_{(\mathrm{I,J})}\mathrm{D}_{x}\tau+ \mathrm{D}_{x^{-}(x)}R_{(\mathrm{I,J})}\mathrm{D}_{x}x^{-}\] (110) \[\mathrm{D}_{x}x_{f}(x) =\mathrm{D}_{\tau(x)}\phi_{\mathrm{J}}\mathrm{D}_{x}\tau+\mathrm{ D}_{x^{+}(x)}\phi_{\mathrm{J}}\mathrm{D}_{x}x^{+} \tag{111}\] where the arguments to each function are suppressed but equal to their corresponding value in (106)-(108). 
Combining these: \[\mathrm{D}_{x}\phi =\mathrm{D}_{\tau(x)}\phi_{\mathrm{J}}\mathrm{D}_{x}\tau+ \mathrm{D}_{x^{+}(x)}\phi_{\mathrm{J}}[\mathrm{D}_{\tau(x)}R_{(\mathrm{I,J})} \mathrm{D}_{x}\tau \tag{112}\] \[\qquad+\mathrm{D}_{x^{-}(x)}R_{(\mathrm{I,J})}(\mathrm{D}_{\tau( x)}\phi_{\mathrm{I}}\mathrm{D}_{x}\tau+\mathrm{D}_{x}\phi_{\mathrm{I}})]\] As this is a first order approximation, the terms \(\mathrm{D}_{x}\phi_{\mathrm{I}}\) and \(\mathrm{D}_{x^{+}(x)}\phi_{\mathrm{J}}\) can be taken as identity matrices (as they would in a linear system), and so this simplifies to (with additional substitutions for \(F_{\mathrm{I}}\) and \(F_{\mathrm{J}}\) using (102)-(103)): \[\mathrm{D}_{x}\phi =(-F_{\mathrm{J}}+\mathrm{D}_{\tau(x)}R_{(\mathrm{I,J})}+\mathrm{ D}_{x^{-}(x)}R_{(\mathrm{I,J})}F_{\mathrm{I}})\mathrm{D}_{x}\tau \tag{113}\] \[\quad+\mathrm{D}_{x^{-}(x)}R_{(\mathrm{I,J})}\] To obtain \(\mathrm{D}_{x}\tau\), use the implicit function theorem and take the chain rule on the guard condition (105), and using (103) and (109) results in the following relation: \[0 =\mathrm{D}_{\tau(x)}g_{(\mathrm{I,J})}\mathrm{D}_{x}\tau(x)+ \mathrm{D}_{x^{-}(x)}g_{(\mathrm{I,J})}\mathrm{D}_{x}x^{-} \tag{114}\] \[0 =\left(\mathrm{D}_{\tau(x)}g_{(\mathrm{I,J})}+\mathrm{D}_{x^{-} (x)}g_{(\mathrm{I,J})}F_{\mathrm{I}}\right)\mathrm{D}_{x}\tau+\mathrm{D}_{x^{ -}(x)}g_{(\mathrm{I,J})}\] (115) \[\mathrm{D}_{x}\tau(x) =\frac{-\mathrm{D}_{x^{-}(x)}g_{(\mathrm{I,J})}}{\mathrm{D}_{\tau (x)}g_{(\mathrm{I,J})}+\mathrm{D}_{x^{-}(x)}g_{(\mathrm{I,J})}F_{\mathrm{I}}} \tag{116}\] Plugging back into (113), evaluating at the instant of impact, \(t=\tau(x)=0\), substituting the notation from (3)-(9), and simplifying: \[\mathrm{D}_{x}\phi =\mathrm{D}_{x}R^{-}+\left(-F_{\mathrm{J}}^{+}+\mathrm{D}_{t}R^{ -}+\mathrm{D}_{x}R^{-}F_{\mathrm{I}}^{-}\right)\mathrm{D}_{x}\tau \tag{117}\] \[\mathrm{D}_{x}\phi =\mathrm{D}_{x}R^{-}+\frac{\left(F_{\mathrm{J}}^{+}-\mathrm{D}_{ x}R^{-}F_{\mathrm{I}}^{-}-\mathrm{D}_{t}R^{-}\right)\mathrm{D}_{x}g^{-}}{ \mathrm{D}_{t}g^{-}+\mathrm{D}_{x}g^{-}F_{\mathrm{I}}^{-}}\] (118) \[\mathrm{D}_{x}\phi :=\Xi_{(\mathrm{I,J})} \tag{119}\] as in (2), where all terms are evaluated at the time of impact and the state just before impact, except for \(F_{\mathrm{J}}^{+}\) which is evaluated at the state just after impact, as in (3)-(9). ### _Early impact saltation derivation_ In the geometric derivation of the saltation matrix, it was assumed the perturbed trajectory impacted late. This appendix shows that the saltation matrix expression is the same if derived following the same logic as Sec. III-B but with early impact. It may help to visualize Fig. 4 with the roles of the nominal \(x(t)\) and perturbed \(\widetilde{x}(t)\), and the corresponding linearization arrows, flipped. Again, start by assuming the same flow, reset, and guard linearizations as in (14)-(17). The perturbed impact occurs first at time \(\widetilde{t}^{-}\) i.e. \(\widetilde{t}^{-}<t^{-}\) and \(\delta t=\widetilde{t}^{-}-t^{-}<0\). Because the perturbed trajectory impacts first, the aim is to find the mapping from \(\delta x(\widetilde{t}^{-})\) to \(\delta x(t^{+})\) (instead of \(\delta x(t^{-})\) to \(\delta x(\widetilde{t}^{+})\) as in the case of late impact). This allows for comparisons between states (nominal and perturbed) that are in the same hybrid domain. 
Define \(\delta x(\widetilde{t}^{-})\) and \(\delta x(t^{+})\) to be: \[\delta x(\widetilde{t}^{-}):=\widetilde{x}(\widetilde{t}^{-})-x(\widetilde{t}^{-}) \tag{120}\] \[\delta x(t^{+}):=\widetilde{x}(t^{+})-x(t^{+}) \tag{121}\] We would like to write these in terms of the nominal trajectory at that time. Using the linearization of the flow before impact (14) and rearranging (120) we get: \[\widetilde{x}(\widetilde{t}^{-})=x(t^{-})+\delta x(\widetilde{t}^{-})+F_{\mathrm{I}}^{-}\delta t \tag{122}\] Since the perturbed trajectory impacts earlier, next is to compute where it ends up after the reset map is applied and it flows for \(|\delta t|\) time on the new dynamics. Again, using the linearization of the flow (15): \[\widetilde{x}(t^{+})=R(\widetilde{t}^{-},\widetilde{x}(\widetilde{t}^{-}))-F_{\mathrm{J}}^{+}\delta t \tag{123}\] Next, \(R(\widetilde{t}^{-},\widetilde{x}(\widetilde{t}^{-}))\) can be solved for as a function of \(x(t^{-})\) by substituting in \(\widetilde{x}(\widetilde{t}^{-})\) from (122) and using the linearization of the reset map from (16): \[\bar{R}(\widetilde{t}^{-},\widetilde{x}(\widetilde{t}^{-}))=\bar{R}(t^{-}+\delta t,x(t^{-})+\delta x(\widetilde{t}^{-})+F_{\mathrm{I}}^{-}\delta t) \tag{124}\] \[=R(t^{-},x(t^{-}))+\mathrm{D}_{x}R^{-}\left(\delta x(\widetilde{t}^{-})+F_{\mathrm{I}}^{-}\delta t\right)+\mathrm{D}_{t}R^{-}\delta t \tag{125}\] Now plugging back in to (123): \[\widetilde{x}(t^{+})=R(t^{-},x(t^{-}))+\mathrm{D}_{x}R^{-}\delta x(\widetilde{t}^{-})+\left(\mathrm{D}_{x}R^{-}F_{\mathrm{I}}^{-}+\mathrm{D}_{t}R^{-}-F_{\mathrm{J}}^{+}\right)\delta t \tag{126}\] \(\delta x(t^{+})\) can be written as a function of \(\delta x(\widetilde{t}^{-})\) and \(\delta t\) by substituting \(\widetilde{x}(t^{+})\) into (121): \[\delta x(t^{+})=\mathrm{D}_{x}R^{-}\delta x(\widetilde{t}^{-})+\left(\mathrm{D}_{x}R^{-}F_{\mathrm{I}}^{-}+\mathrm{D}_{t}R^{-}-F_{\mathrm{J}}^{+}\right)\delta t \tag{127}\] Next, \(\delta t\) can be found as a function of \(\delta x(\widetilde{t}^{-})\) using: \[0=g(\widetilde{t}^{-},\widetilde{x}(\widetilde{t}^{-})) \tag{128}\] Substituting in (122) and expanding using the linearization of the guard (17) (and noting that \(g(t^{-},x(t^{-}))=0\)): \[0=g\left(t^{-}+\delta t,x(t^{-})+\delta x(\widetilde{t}^{-})+F_{\mathrm{I}}^{-}\delta t\right) \tag{129}\] \[0=g(t^{-},x(t^{-}))+\mathrm{D}_{x}g^{-}\left(\delta x(\widetilde{t}^{-})+F_{\mathrm{I}}^{-}\delta t\right)+\mathrm{D}_{t}g^{-}\delta t \tag{130}\] \[0=\mathrm{D}_{x}g^{-}\left(\delta x(\widetilde{t}^{-})+F_{\mathrm{I}}^{-}\delta t\right)+\mathrm{D}_{t}g^{-}\delta t \tag{131}\] Now, solving for \(\delta t\) as a function of \(\delta x(\widetilde{t}^{-})\) gives: \[\delta t=-\frac{\mathrm{D}_{x}g^{-}}{\mathrm{D}_{x}g^{-}F_{\mathrm{I}}^{-}+\mathrm{D}_{t}g^{-}}\delta x(\widetilde{t}^{-}) \tag{132}\] Substituting this \(\delta t\) into (127) and collecting terms: \[\delta x(t^{+})=\mathrm{D}_{x}R^{-}\delta x(\widetilde{t}^{-})+\frac{\left(F_{\mathrm{J}}^{+}-\mathrm{D}_{x}R^{-}F_{\mathrm{I}}^{-}-\mathrm{D}_{t}R^{-}\right)\mathrm{D}_{x}g^{-}}{\mathrm{D}_{t}g^{-}+\mathrm{D}_{x}g^{-}F_{\mathrm{I}}^{-}}\delta x(\widetilde{t}^{-}) \tag{133}\] \[=\Xi_{\mathrm{(I,J)}}\delta x(\widetilde{t}^{-}) \tag{134}\] which is the same saltation matrix expression as in (2), confirming that early and late transitions yield the same first-order variational update.
To find the mean, take the expectation of \(X(t^{+})\): \[\rho(t^{+})= \mathbb{E}[X(t^{+})]=\mathbb{E}[x(t^{+})+\delta x(t^{+})] \tag{135}\] \[= x(t^{+})+\mathbb{E}[\delta x(t^{+})] \tag{136}\] where the two terms are separable because expectation is a linear operator, and the expectation of the nominal post-impact state is just its value, \(\mathbb{E}[x(t^{+})]=x(t^{+})=R(x(t^{-}))\). Substituting in \(\delta x(t^{+})=\Xi_{\text{(I,J)}}(t^{-},x(t^{-}))\delta x(t^{-})+\text{h.o.t.}\) from (10): \[\rho(t^{+})=x(t^{+})+\mathbb{E}[\Xi_{\text{(I,J)}}(t^{-},x(t^{-}))\delta x(t^ {-})+\text{h.o.t.}] \tag{137}\] \[\rho(t^{+})=x(t^{+})+\Xi_{\text{(I,J)}}(t^{-},x(t^{-}))\mathbb{E}[\delta x(t ^{-})]+\mathbb{E}[\text{h.o.t.}] \tag{138}\] Because expectation is a linear operator, \(\Xi(t^{-},x(t^{-}))\) can be moved out of the expectation. Then, because \(\delta x(t^{-})\) is centered about zero, \(\mathbb{E}[\delta x(t^{-})]=0\), and for small displacements the higher order terms are negligible, \(\mathbb{E}[\text{h.o.t.}]\approx 0\), which simplifies to: \[\rho(t^{+})\approx x(t^{+})=R(x(t^{-})) \tag{139}\] Covariance is defined as: \[\mathbb{COV}[X]:=\mathbb{E}[(X-\mathbb{E}[X])(X-\mathbb{E}[X])^{T}] \tag{140}\] the post-impact covariance \(\Sigma(t^{+})\) is: \[\Sigma(t^{+})= \mathbb{COV}[X(t^{+})]=\mathbb{COV}[x(t^{+})+\delta x(t^{+})] \tag{141}\] \[= \mathbb{E}\Big{[}\Big{(}(x(t^{+})+\delta x(t^{+})-\rho(t^{+}))\] \[\qquad(x(t^{+})+\delta x(t^{+})-\rho(t^{+})\Big{)}^{T}\Big{]} \tag{142}\] Since \(\rho(t^{+})=x(t^{+})\), this simplifies to: \[\Sigma(t^{+})=\mathbb{E}[\delta x(t^{+})\delta x(t^{+})^{T}] \tag{143}\] Using (10), \(\delta x(t^{+})\) can be expanded as: \[\Sigma(t^{+}) =\mathbb{E}[(\Xi\delta x(t^{-})+\text{h.o.t.})(\Xi\delta x(t^{-} )+\text{h.o.t.})^{T}] \tag{144}\] \[=\Xi\mathbb{E}[\delta x(t^{-})\delta x(t^{-})^{T}]\Xi^{T}\] (145) \[+2\Xi\mathbb{E}[\delta x(t^{-})(\text{h.o.t.})^{T}]+\mathbb{E}[ (\text{h.o.t.})(\text{h.o.t.})^{T}] \tag{146}\] and for small displacements, \(\text{h.o.t.}\approx 0\), which simplifies to: \[\Sigma(t^{+})\approx\Xi\Sigma(t^{-})\Xi^{T} \tag{147}\] as in (37), which holds to first order and is exact for linear hybrid systems. ### _Riccati update through hybrid events_ This appendix derives the update for the Riccati equation through a hybrid event, (40). See [128, Ch. 6.1] for a background on the continuous Riccati update and [129, Ch. 8.3] for an overview of the discrete formulation. Solving the Riccati update along a trajectory yields a locally optimal feedback controller, called the linear quadratic regulator (LQR). The optimality of LQR is conditioned on the balance between penalties on deviations in state \(Q\) and control input \(V\) at each timestep, called the stage cost, and at the final state, called the terminal cost, where \(Q\) is a positive semi-definite matrix and \(V\) is positive-definite. Define the optimal stage cost \(\ell_{t^{-}}^{*}\) for the reference trajectory \((x(t),u(t))\) and the optimal solution \((x^{*},u^{*})\) applied at a hybrid transition at time \(t^{-}\) as: \[\ell_{t^{-}}^{*}=\ell_{t^{-}}(x^{*}(t^{-}),u^{*}(t^{-}))=\\ \frac{1}{2}(x^{*}(t^{-})-x(t^{-}))^{T}Q_{t^{-}}(x^{*}(t^{-})-x(t ^{-}))\\ +\frac{1}{2}(u^{*}(t^{-})-u(t^{-}))^{T}V_{t^{-}}(u^{*}(t^{-})-u( t^{-})) \tag{148}\] where \(Q_{t^{-}}\) and \(V_{t^{-}}\) are the quadratic penalty on state and input respectively at time \(t^{-}\). 
Define the current state to be \(\widetilde{x}\) and the difference with the optimal solution to be: \[\delta x^{*}(t^{-}):=x^{*}(t^{-})-\widetilde{x}(t^{-}) \tag{149}\] such that (148) becomes: \[\ell_{t^{-}}^{*}=\frac{1}{2}(\delta x^{*})^{T}Q_{t^{-}}(\delta x^{*})+\frac{1}{2}(u^{*}(t^{-})-u(t^{-}))^{T}V_{t^{-}}(u^{*}(t^{-})-u(t^{-})) \tag{150}\] Because the transition is instantaneous, assume that the input has no effect, \(u(t^{-})=u^{*}(t^{-})\), and simplify the optimal stage cost to: \[\ell_{t^{-}}^{*}=\frac{1}{2}(\delta x^{*})^{T}Q_{t^{-}}(\delta x^{*}) \tag{151}\] The Hamiltonian [128, Ch. 2.4] for the hybrid transition is: \[H_{t^{-}}:=H(x^{*}(t^{-}),u^{*}(t^{-}),p^{*}(t^{+})):=\ell_{t^{-}}^{*}+R_{\text{(I,J)}}^{T}(t^{-},x^{*}(t^{-}))p^{*}(t^{+}) \tag{152}\] where \(p^{*}(t^{+})\) is the optimal costate [128, Ch. 3.4]. Using the expansion (10) of \(R_{\text{(I,J)}}(t^{-},\widetilde{x}(t^{-})+\delta x^{*}(t^{-}))\) about \(\widetilde{x}(t^{-})\): \[R_{\text{(I,J)}}(t^{-},x^{*}(t^{-}))=R_{\text{(I,J)}}(t^{-},\widetilde{x}(t^{-}))+\Xi\delta x^{*}(t^{-})+\text{h.o.t.} \tag{153}\] where \(\Xi=\Xi_{\text{(I,J)}}(t^{-},\widetilde{x}(t^{-}))\). The Hamiltonian for the hybrid transition is then: \[H_{t^{-}}=\frac{1}{2}(\delta x^{*}(t^{-}))^{T}Q_{t^{-}}\delta x^{*}(t^{-})+\left(R_{\text{(I,J)}}(t^{-},\widetilde{x}(t^{-}))+\Xi\delta x^{*}(t^{-})+\text{h.o.t.}\right)^{T}p^{*}(t^{+}) \tag{154}\] Using Pontryagin's maximum principle [128, Ch. 4.1], the optimal state and costate updates are: \[x^{*}(t^{+})=\mathrm{D}_{p^{*}}H_{t^{-}}=R_{\text{(I,J)}}(t^{-},\widetilde{x}(t^{-}))+\Xi\delta x^{*}(t^{-}) \tag{155}\] \[p^{*}(t^{-})=\mathrm{D}_{x^{*}}H_{t^{-}}=Q_{t^{-}}\delta x^{*}+\Xi^{T}p^{*}(t^{+})+\text{h.o.t.} \tag{156}\] Given the standard costate guess \(p^{*}(t^{+})=P(t^{+})\delta x^{*}(t^{+})\) [129], we can derive the hybrid update for the matrix \(P\), which defines the boundary conditions for the optimal control problem: \[P(t^{-})\delta x^{*}(t^{-})=Q_{t^{-}}\delta x^{*}(t^{-})+\Xi^{T}P(t^{+})\delta x^{*}(t^{+})+\text{h.o.t.} \tag{157}\] Substitute \(\delta x^{*}(t^{+})=\Xi\delta x^{*}(t^{-})+\text{h.o.t.}\): \[P(t^{-})\delta x^{*}(t^{-})=Q_{t^{-}}\delta x^{*}(t^{-})+\Xi^{T}P(t^{+})(\Xi\delta x^{*}(t^{-})+\text{h.o.t.}) +\text{h.o.t.} \tag{158}\] The update for \(P(t^{-})\) still involves \(\delta x^{*}(t^{-})\) and cannot be computed as is. However, when the higher order terms are small, \(\delta x^{*}(t^{-})\) can be cancelled from both sides to give the Bellman update for \(P(t^{-})\): \[P(t^{-})\delta x^{*}(t^{-})\approx Q_{t^{-}}\delta x^{*}(t^{-})+\Xi^{T}P(t^{+})\Xi\delta x^{*}(t^{-}) \tag{159}\] \[P(t^{-})\approx Q_{t^{-}}+\Xi^{T}P(t^{+})\Xi \tag{160}\]
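To show where the event update (160) sits inside a full backward pass, the fragment below runs a discrete-time LQR recursion over a short horizon and applies \(P(t^{-})=Q_{t^{-}}+\Xi^{T}P(t^{+})\Xi\) at one step in the middle. The discretized dynamics, cost weights, and saltation matrix are again toy bouncing-ball values assumed for illustration; they are not taken from this paper.

```python
import numpy as np

# Illustrative backward LQR pass with one hybrid event (toy values, not from the paper).
dt, grav, e, v_minus = 0.01, 9.81, 0.8, -4.43
A = np.eye(2) + dt * np.array([[0.0, 1.0], [0.0, 0.0]])   # discretized ballistic flow
B = dt * np.array([[0.0], [1.0]])                          # assumed control on acceleration
Q = np.diag([10.0, 1.0])                                   # stage penalty on state deviations
V = np.array([[0.1]])                                      # stage penalty on control effort
Xi = np.array([[-e, 0.0], [-(1.0 + e) * grav / v_minus, -e]])   # saltation matrix

def lqr_step(P_next):
    """One discrete Riccati (Bellman) backward step for a smooth segment."""
    K = np.linalg.solve(V + B.T @ P_next @ B, B.T @ P_next @ A)
    return Q + A.T @ P_next @ (A - B @ K), K

P = np.diag([100.0, 10.0])        # terminal cost weight
gains = []
for k in reversed(range(40)):     # 40 timesteps, integrated backward in time
    if k == 20:                   # hybrid event at step 20: apply the update (160)
        P = Q + Xi.T @ P @ Xi     # no control authority during the instantaneous reset
    P, K = lqr_step(P)
    gains.append(K)

print("P at the start of the horizon:\n", P)
print("feedback gain at the first timestep:", gains[-1].ravel())
```

The jump in \(P\) at the event propagates the post-impact cost-to-go back through the reset, so the gains on the pre-impact side already account for how perturbations are reshaped by the impact.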
2304.10879
Effects of clustered nuclear geometry on the anisotropic flow in O-O collisions at the LHC within a multiphase transport model framework
To understand the true origin of flowlike signatures and applicability of hydrodynamics in small collision systems, effects of soft QCD dynamics, the sensitivity of jetlike correlations, and nonequilibrium effects, efforts are being made to perform \textit{p}-O and O-O collisions at the LHC and RHIC energies. It is equally interesting to look into the possible signatures of an $\alpha$-clustered nuclear geometry in $^{16}$O-$^{16}$O collisions by studying the initial-state effects on the final-state observables. In this work, within a multiphase transport model, we implement an $\alpha$-cluster tetrahedral density profile in the oxygen nucleus along with the default Woods-Saxon density profile. We study the eccentricity ($\epsilon_2$), triangularity ($\epsilon_3$), normalized symmetric cumulants [NSC(2,3)], elliptic flow ($v_2$), and triangular flow ($v_3$) in $^{16}$O-$^{16}$O collisions at $\sqrt{s_{\rm NN}} = 7~$TeV. The constituent quark number scaling of the elliptic flow is also reported. For the most central collisions, enhanced effects in $\langle \epsilon_3 \rangle/ \langle \epsilon_2 \rangle$ and $\langle v_3 \rangle/ \langle v_2 \rangle$ with a negative value of NSC(2,3), and an away-side broadening in the two-particle azimuthal correlation function [$C(\Delta \phi)$] of the identified particles are observed in the presence of an $\alpha$-clustered geometry.
Debadatta Behera, Suraj Prasad, Neelkamal Mallick, Raghunath Sahoo
2023-04-21T10:46:17Z
http://arxiv.org/abs/2304.10879v2
Effects of clustered nuclear geometry on the anisotropic flow in O-O collisions at the LHC within a multiphase transport model framework ###### Abstract To understand the true origin of flow-like signatures and applicability of hydrodynamics in small collision systems, effects of soft QCD dynamics, the sensitivity of jet-like correlations, and non-equilibrium effects, efforts are being made to perform \(p\)-O and O-O collisions at the LHC and RHIC energies. It is equally interesting to look into the possible signatures of an \(\alpha\)-clustered nuclear geometry in \({}^{16}\)O-\({}^{16}\)O collisions by studying the initial-state effects on the final-state observables. In this work, within a multiphase transport model, we implement an \(\alpha\)-cluster tetrahedral density profile in the Oxygen nucleus along with the default Woods-Saxon density profile. We study the eccentricity (\(\epsilon_{2}\)), triangularity (\(\epsilon_{3}\)), normalized symmetric cumulants (NCS(2,3)), elliptic flow (\(v_{2}\)), and triangular flow (\(v_{3}\)) in \({}^{16}\)O-\({}^{16}\)O collisions at \(\sqrt{s_{\rm NN}}=7\) TeV. The constituent quark number scaling of the elliptic flow is also reported. For the most central collisions, enhanced effects in \(\langle\epsilon_{3}\rangle/\langle\epsilon_{2}\rangle\) and \(\langle v_{3}\rangle/\langle v_{2}\rangle\) with a negative value of NSC(2,3), and an away-side broadening in the two-particle azimuthal correlation function (\(C(\Delta\phi)\)) of the identified particles are observed in the presence of an \(\alpha\)-clustered geometry. ## I Introduction Ultrarelativistic heavy-ion collisions at the Large Hadron Collider (LHC) and the Relativistic Heavy Ion Collider (RHIC) create high temperature and density, which provide suitable conditions for producing a locally thermalized and deconfined partonic medium. This hot and dense fireball is made up of QCD matter, i.e., quarks and gluons, and thus called quark-gluon plasma (QGP). Studies related to QGP investigate all the indirect signatures as QGP is a highly short-lived state due to the behavior of strongly interacting matter. QGP expands rapidly, and its evolution is well understood through relativistic viscous hydrodynamics with dissipative effects. Thus, the initial-state collision geometry and the fluctuations in energy and entropy density are embedded in the final-state multi-particle correlations through this collective expansion of the QGP [1; 2; 3]. Usually, this is studied as the medium response to the initial eccentricity (\(\epsilon_{2}\)) and triangularity (\(\epsilon_{3}\)) by quantifying the Fourier coefficients (\(v_{2}\) and \(v_{3}\)) of the azimuthal momentum distribution of the final-state hadrons [4]. Experimental measurements of these flow coefficients agree with the predictions from hydrodynamic calculations suggesting that QGP behaves like a perfect fluid [5]. Thus, the presence of finite flow coefficients is considered a signature of the hydrodynamic behavior of the QGP and hence, the thermalization in the early stages of the collision. Recently, similar signatures have been observed in small collision systems such as the high-multiplicity \(pp\) collisions, where hydrodynamic expansion or collectivity is usually not expected [6]. These observations also raise questions on the applicability of hydrodynamics in small collision systems formed in ultrarelativistic nuclear collisions. 
As the system size of \({}^{16}\)O-\({}^{16}\)O overlaps high-multiplicity \(pp\) and peripheral Pb-Pb collisions, it provides an opportunity to explore the origin of flow-like signatures in small collision systems. Another interesting direction is to explore how the final-state observables are affected by the initial-state nuclear structure, nuclear shape deformation, or even the presence of \({}^{4}\)He-nuclei (known as \(\alpha\)-clusters) inside the nucleus of elements having \(4n\)-number of nucleons, such as \({}^{8}\)Be, \({}^{12}\)C, and \({}^{16}\)O, to name a few. Studies related to nuclear shape deformation have been carried out at the RHIC [7; 8; 9] and at the LHC with Xe-Xe collisions at \(\sqrt{s_{\rm NN}}=5.44\) TeV [10; 11; 12]. Results show quadruple deformation in \({}^{129}\)Xe nucleus. The presence of four \({}^{4}\)He-clusters inside the \({}^{16}\)O nucleus was first proposed by Gamow back in 1930s [13] and then by Wheeler [14]. Although there is evidence for the existence of such a clustered structure [15; 16; 17; 18], the contribution of the clustered states in the ground state of \({}^{16}\)O was found to be less than 30% [19]. Recently, there are proposals for dedicated runs for \({}^{16}\)O-\({}^{16}\)O collisions at both RHIC and LHC [20; 21]. This could clarify the origin of collectivity on small systems and the effects of clustered nuclear geometry on the final-state observables. In recent years, there have been several theoretical studies reported on Oxygen collisions based on Glauber Monte Carlo [22; 23; 24], different hydrodynamic models [25; 26; 27], global observables [28], parton energy loss [29], and jet quenching effects across small to large collision systems [30]. Some observables showing evidence of the signatures of \(\alpha\)-clusters are also reported in \({}^{16}\)O-\({}^{16}\)O collisions [31; 32; 33; 34; 35]. In this work, within a multiphase transport model, we implement an \(\alpha\)-cluster tetrahedral density profile in the Oxygen nucleus along with the default Woods-Saxon density profile. We study the eccentricity (\(\epsilon_{2}\)), triangularity (\(\epsilon_{3}\)), normalized symmetric cumulants (NCS(2,3)), elliptic flow (\(v_{2}\)), and triangular flow (\(v_{3}\)) in \({}^{16}\)O-\({}^{16}\)O collisions at \(\sqrt{s_{\rm NN}}=7\) TeV. In addition, the elliptic flow coefficients as a function of transverse momentum (\(v_{2}(p_{\mathrm{T}})\)) for the light-flavor hadrons such as \(\pi^{\pm}\), \(K^{\pm}\), and \(p+\bar{p}\) are studied for nuclear collisions with default Woods-Saxon and \(\alpha\)-cluster tetrahedral density profiles. The appearance of hadronic collectivity is believed to have originated from the early deconfined partonic phase and is subsequently transferred to the hadrons via the quark recombination mechanism of hadronization. This is also known as the quark coalescence model [36]. This behavior leads to the observation of a higher flow of baryons than mesons in the intermediate \(p_{\mathrm{T}}\), and ideally to the number-of-constituent-quark (NCQ) scaling [37; 38; 39]. Experimentally, at RHIC, the NCQ scaling seems to be valid as seen in Au-Au collisions at \(\sqrt{s_{\mathrm{NN}}}=200\) GeV [40; 41]. However, at the LHC energies, the scaling is only approximate [42; 43; 44]. Using AMPT in string melting mode, the NCQ scaling is observed at the top RHIC energies in Au-Au collisions [45]. However, NCQ scaling seems to be violated using the same model at the LHC energies in Pb-Pb. 
The breaking of NCQ scaling in AMPT string melting mode is found to be independent of the magnitude of parton-parton cross sections and hadron cascade time [45]. However, the breaking of scaling is understood as the increase in the partonic density at the LHC energy in Pb-Pb collisions. Further, Si-Si collisions at this energy show NCQ scaling, which adds to this understanding [45]. This makes the case appealing to look for the validation of NCQ scaling in \({}^{16}\)O-\({}^{16}\)O collisions at \(\sqrt{s_{\mathrm{NN}}}=7\) TeV. In a recent event-shape dependent study of NCQ scaling using transverse spherocity (\(S_{0}\)) in heavy-ion collisions in AMPT, it is reported that low-\(S_{0}\) (jetty-like) events show more deviation from the NCQ scaling than the \(S_{0}\)-integrated (unbiased) events [46]. In Pb-Pb collisions, the deviation appears in the \(S_{0}\)-integrated events and gets enhanced in low-\(S_{0}\) events, whereas in Au-Au collisions, the scaling violation appears only in the low-\(S_{0}\) events and not in the \(S_{0}\)-integrated events. These results show the dependence of NCQ scaling on the event shapes, and it awaits experimental confirmation. For the time being, we proceed to study the NCQ scaling behavior in \({}^{16}\)O-\({}^{16}\)O collisions at \(\sqrt{s_{\mathrm{NN}}}=7\) TeV using the AMPT string melting model, and explore the possible role of density profiles on the NCQ scaling in small collision systems. Further, for the most central collisions, we observe enhanced effects in \(\langle\epsilon_{3}\rangle/\langle\epsilon_{2}\rangle\) and \(\langle v_{3}\rangle/\langle v_{2}\rangle\) with a negative value of NSC(2,3), and an away-side broadening in the two-particle azimuthal correlation function (\(C(\Delta\phi)\)) of the identified particles. Here onwards, for the sake of simplicity, we write O-O instead of \({}^{16}\)O-\({}^{16}\)O throughout the text. The paper is organized as follows. It begins with a brief introduction to the event generator, a multiphase transport model, the \(\alpha\)-cluster geometry implementation, and estimation of anisotropic flow coefficients via the two-particle correlation method in Sec. II. The paper then shows and describes the results for eccentricity, triangularity, elliptic flow, triangular flow, and the number-of-constituent-quarks scaling of the elliptic flow in Sec. III. Finally, the paper concludes with the important findings summarized in Sec. IV. ## II Event generation and analysis methodology In this section, we briefly introduce the components of the AMPT model, the tuning used to generate the collisions, and the implementation of the \(\alpha\)-cluster geometry in the Oxygen nucleus. The two-particle correlation method used to estimate the flow coefficients is also discussed. ### A multiphase transport model A multiphase transport model (AMPT) is a Monte Carlo-based transport model for heavy-ion collision. It consists of four main stages: initialization of the collisions, parton cascade, hadronization, and hadron transport [47; 48; 49; 50; 51; 52; 53; 54; 55; 56; 57; 58; 59; 60; 61; 62; 63]. The initialisation of the collisions is performed by HIJING, where the differential cross-section of produced miniplets in pp collisions is converted into AA and p-A collisions [59]. The parton cascade or the parton transport is performed by Zhang's Parton Cascade (ZPC) model [60]. In the string-melting version of AMPT, the coloured strings are transformed into the low momentum partons. 
The transported partons are hadronized using a spatial coalescence mechanism in the string-melting version of AMPT [61; 53]; however, in the default version of AMPT, the Lund string fragmentation mechanism is used to perform the hadronisation. A relativistic transport model is used for the evolution of the produced hadrons [62; 63]. In the current work, we have used the string melting mode of AMPT (version 2.26t9b) since the quark coalescence mechanism well describes the particle flow and spectra at the mid-transverse momentum region [64; 65]. The AMPT settings for the O-O system in the current work are the same as reported in Ref. [28]. In heavy-ion collisions, the typical density profile for a nucleus is considered to be the Woods-Saxon distribution. The Wood-Saxon charge density is given in terms of a 3pF distribution as, \[\rho(r)=\frac{\rho_{0}(1+w(\frac{r}{r_{0}})^{2})}{1+\exp(\frac{r-r_{0}}{a})}. \tag{1}\] Here, \(r\) is the radial distance from the center of the nucleus, \(a\) is the skin depth of the nucleus, \(r_{0}\) is the mean radius of the nucleus, and \(w\) is the deformation parameter. In the oxygen nucleus, \(r_{0}\) = 2.608 fm, \(a\) is 0.513 fm and \(w\) is -0.051 [66]. \(\rho_{0}\) is the nuclear density constant at \(r=0\). We also implement the \(\alpha\)-cluster structure inside the Oxygen nucleus using the AMPT model. The implementation is done numerically by creating a geometric distribution of a regular tetrahedral structure having \({}^{4}\)He nuclei placed at the vertices. For the \({}^{4}\)He nucleus, the distribution of nucleons follows the Wood-Saxon density profile described in Eq. 1 with the parameters \(r_{0}=0.964\) fm, \(a=0.322\) fm, and \(w=0.517\). This leads to the rms radius for \({}^{4}\)He nucleus to be 1.676 fm. These \(\alpha\)-clustered nuclei are positioned on the vertices of a standard tetrahedron with a side length of 3.42 fm. In this configuration, the rms radius for \({}^{16}\)O is calculated to be 2.699 fm [28; 31]. The orientation of the tetrahedron is randomized for each projectile and target on an event-by-event basis. ### Two-particle correlation method In noncentral heavy-ion collisions, the collision overlap region is anisotropic in space. The pressure gradient of the thermalized partonic medium formed in such collisions can transform the initial spatial anisotropies into the momentum space azimuthal anisotropies. These azimuthal anisotropies of different orders can be quantified by the coefficients of Fourier series decomposition of the momentum distribution of final-state particles, given as: \[E\frac{d^{3}N}{dp^{3}}=\frac{d^{2}N}{2\pi p_{\rm T}dp_{\rm T}dy }\bigg{(}1+2\sum_{n=1}^{\infty}v_{n}\cos[n(\phi-\psi_{n})]\bigg{)}\,. \tag{2}\] Here, \(\phi\) represents the azimuthal angle of the final-state particles in the transverse plane, and \(\psi_{n}\) represents the \(n\)th harmonic event plane angle [67]. \(v_{n}\) is the \(n\)th-order anisotropic flow coefficient where \(n=1\) stands for directed flow, \(n=2\) is the elliptic flow and \(n=3\) quantifies the triangular flow. Anisotropic flow coefficients of different orders can be estimated as follows: \[v_{n}=\langle\cos[n(\phi-\psi_{n})]\rangle \tag{3}\] In experiments, obtaining the event plane angle is not trivial, and Eq. 3 includes the non-flow effects, such as contributions from resonance decays and jets. 
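As a minimal numerical illustration of Eq. (3), one can draw azimuthal angles from a distribution of the form of Eq. (2) with known \(v_2\) and \(v_3\) and recover them with the estimator \(v_n=\langle\cos[n(\phi-\psi_n)]\rangle\). The input coefficients, symmetry-plane angles, and multiplicity below are arbitrary demonstration values, not AMPT output, and the true \(\psi_n\) are assumed to be known.

```python
import numpy as np

rng = np.random.default_rng(42)
v2_in, v3_in, psi2, psi3 = 0.08, 0.03, 0.4, 1.1   # assumed inputs for the demonstration
n_particles = 500_000

def sample_phi(n):
    """Accept-reject sampling of phi from the dN/dphi of Eq. (2) with v2_in, v3_in."""
    out = np.empty(0)
    fmax = 1.0 + 2.0 * (v2_in + v3_in)
    while out.size < n:
        cand = rng.uniform(0.0, 2.0 * np.pi, size=n)
        f = (1.0 + 2.0 * v2_in * np.cos(2 * (cand - psi2))
                 + 2.0 * v3_in * np.cos(3 * (cand - psi3)))
        out = np.concatenate([out, cand[rng.uniform(0.0, fmax, size=n) < f]])
    return out[:n]

phi = sample_phi(n_particles)
v2_est = np.mean(np.cos(2 * (phi - psi2)))        # event-plane estimator, Eq. (3)
v3_est = np.mean(np.cos(3 * (phi - psi3)))
print(f"v2: input {v2_in:.3f}, recovered {v2_est:.3f}")
print(f"v3: input {v3_in:.3f}, recovered {v3_est:.3f}")
```

In data, neither the \(\psi_n\) nor such a non-flow-free sample is available, which is what motivates the pair-based approach described next.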
On the other hand, a two-particle correlation method to estimate the flow coefficients can, not only efficiently reduce the non-flow contribution by implementing a proper pseudo-rapidity cut but also does not require the event-plane angle. In this study, we have ignored the pseudorapidity dependence of \(\psi_{n}\), which is observed in the experiments. To estimate the anisotropic flow coefficients using the two-particle correlation function, one requires the two-particle correlation function, which can be determined using the following steps [68]: 1. Two sets of particles are formed based on their transverse momenta, namely, 'a' and 'b'. 'a' denotes the trigger particles, whereas 'b' represents the associated particle set. 2. Each particle from trigger group ('a') pairs with each particle from associate group ('b') and the relative pseudorapidities (\(\Delta\eta=\eta_{a}-\eta_{b}\)) and relative azimuthal angles (\(\Delta\phi=\phi_{a}-\phi_{b}\)) are determined. 3. Same event pairs (\(S(\Delta\eta,\Delta\phi)\)) and mixed event pairs are (\(B(\Delta\eta,\Delta\phi)\)) are determined. In the same event pair, both 'a' and 'b' belong to the same event; however, in the mixed event pair, 'a' and 'b' are from different events where 'a' pairs with 'b' from five randomly selected events to remove physical correlations. 4. Two particle correlation function (\(C(\Delta\eta,\Delta\phi)\)) is determined by taking the ratio of \(S(\Delta\eta,\Delta\phi)\) to \(B(\Delta\eta,\Delta\phi)\). In this analysis, we use final-state charged hadrons with kinematic cuts as \(|\eta|<2.5\) and \(p_{\rm T}>0.4\) GeV/\(c\) to encompass a broader spectrum of particles. By omitting the jet peak region seen in the \(C(\Delta\eta,\Delta\phi)\) distribution, the \(\Delta\eta\) interval is carefully chosen. The interval, in our case, is implemented to be \(1.0<|\Delta\eta|<4.8\) to obtain 1D correlation \(C(\Delta\phi)\), given as: \[C(\Delta\phi)=\frac{dN_{\rm pairs}}{d\Delta\phi}=A\times \frac{\int S(\Delta\eta,\Delta\phi)d\Delta\eta}{\int B(\Delta\eta,\Delta\phi)d \Delta\eta}. \tag{4}\] Here, the normalization constant \(A\) ensures that at a given \(\Delta\eta\) interval, there is the same number of pairs in the same events and mixed events. The pair distribution (\(N_{\rm pairs}\)) or 1D correlation function can be expanded into a Fourier transform in \(\Delta\phi\) as follows: \[C(\Delta\phi)=\frac{dN_{\rm pairs}}{d\Delta\phi}\propto\bigg{[}1 +2\sum_{n=1}^{\infty}v_{n,n}(p_{\rm T}^{a},p_{\rm T}^{b})\cos n\Delta\phi \bigg{]}. \tag{5}\] Here, \(v_{n,n}\) is the two-particle flow coefficient. In this definition, the convolution of particle pairs removes the event plane angle. Now, \(v_{n,n}\) can be obtained as: \[v_{n,n}(p_{\rm T}^{a},p_{\rm T}^{b})=\langle cos(n\Delta\phi)\rangle \tag{6}\] In terms of \(p_{\rm T}^{a}\) and \(p_{\rm T}^{b}\), \(v_{n,n}\) are symmetric functions. The definition of harmonics in Eq. 2 enters to Eq. 5, which can be written as: \[\frac{dN_{\rm pairs}}{d\Delta\phi}\propto\bigg{[}1+2\sum_{n=1}^{ \infty}v_{n}(p_{\rm T}^{a})v_{n}(p_{\rm T}^{b})\cos n\Delta\phi\bigg{]}. \tag{7}\] If collective expansion is what causes azimuthal anisotropy, then \(v_{n,n}\) can be factorized into the product of two single-particle harmonic coefficients. \[v_{n,n}(p_{\rm T}^{a},p_{\rm T}^{b})=v_{n}(p_{\rm T}^{a})v_{n}(p_{\rm T}^{b}). \tag{8}\] From Eq. 
8, \(v_{n}\) can be estimated as: \[v_{n}(p_{\rm T}^{a})=v_{n,n}(p_{\rm T}^{a},p_{\rm T}^{b})/\sqrt{v_{n,n}(p_{\rm T }^{b},p_{\rm T}^{b})} \tag{9}\] Following the above steps, one can obtain the azimuthal anisotropy of all-charged particles along with identified particles such as \(\pi^{\pm}\), \(K^{\pm}\), and \(p+\bar{p}\) for the O-O collision system at the LHC energies using the AMPT model. ## III Results and Discussions In this section, we start by discussing the results of the participant eccentricity, triangularity, and the correlations among them using normalized symmetric cumulants for both the Woods-Saxon density profile and the \(\alpha\)-clustered structure. Then we discuss the evolution of elliptic and triangular flow with centrality, their ratios, and their scalings with the initial eccentricities of the same order. We discuss the two-particle azimuthal correlation function for the identified hadrons. Finally, the elliptic flow as a function of transverse momentum and their NCQ scaling with transverse kinetic energy is discussed for different centralities and nuclear profiles. ### Eccentricity and triangularity In a collision of two nuclei, the overlap region of the colliding nucleons is not spherical and isotropic. It majorly depends upon the colliding nuclei species, the centrality of the collision, and the distribution of the nucleons inside the nucleus. Eccentricity represents the elliptic shape of the overlap region of the colliding nucleons and is purely geometric; however, triangularity represents the triangular shape of the region, and it arises due to event-by-event density fluctuations in the collision overlap region [69]. The anisotropic flow coefficients of the final-state hadrons have a significant contribution from the initial geometrical anisotropies. Eccentricity greatly influences the elliptic flow; however, the influence of triangularity on triangular flow is limited only to (65-70)% for a minimally viscous fluid [70]. The study of eccentricity, triangularity, elliptic flow, and triangular flow may unveil information about the medium response to different harmonic flow coefficients. It is not trivial to determine the eccentricity and triangularity in experiments; however, it can be estimated in the AMPT model using the following Figure 2: (Color online) Eccentricity (\(\langle\epsilon_{2}\rangle\)) and triangularity (\(\langle\epsilon_{3}\rangle\)) distribution for the most central case in O–O collisions at \(\sqrt{s_{\rm NN}}=7\) TeV in Woods-Saxon (top) and \(\alpha\)-cluster (bottom) type nuclear density profiles. Figure 1: (Color online) Centrality dependence of average eccentricity (\(\langle\epsilon_{2}\rangle\)), triangularity (\(\langle\epsilon_{3}\rangle\)), and \(\langle\epsilon_{3}\rangle/\langle\epsilon_{2}\rangle\) for O-O collisions at \(\sqrt{s_{\rm NN}}=7\) TeV in AMPT string melting model using both Woods-Saxon and \(\alpha\)-cluster type nuclear density profiles. expression [69; 71]: \[\epsilon_{\rm n}=\frac{\sqrt{\langle r^{\rm n}\cos(n\phi_{\rm part})\rangle^{2}+ \langle r^{\rm n}\sin(n\phi_{\rm part})\rangle^{2}}}{\langle r^{\rm n}\rangle} \tag{10}\] where \(r\) and \(\phi_{\rm part}\) are the polar co-ordinates of the participants. In \(\epsilon_{\rm n}\), n = 2 corresponds to eccentricity (\(\epsilon_{2}\)) and n = 3 corresponds to triangularity (\(\epsilon_{3}\)). In Fig. 
1, the event averaged eccentricity (\(\langle\epsilon_{2}\rangle\)) (left), triangularity (\(\langle\epsilon_{3}\rangle\)) (middle), their ratios (right) determined from AMPT for the Woods-Saxon density profile and \(\alpha\)-clustered structure in O-O collisions at \(\sqrt{s_{\rm NN}}\) = 7 TeV are shown. As traditionally observed in heavy-ion collisions, both initial nucleon distributions have similar behavior of \(\langle\epsilon_{2}\rangle\) with centrality. The value of \(\langle\epsilon_{2}\rangle\) is observed to be increasing towards the peripheral collisions as the overlap region gets largely elliptic with increasing the impact parameter of the collisions. However, for a given centrality class, \(\langle\epsilon_{2}\rangle\) is lower for \(\alpha\)-cluster case compared to Woods-Saxon nuclear density profile except for the mid-central cases where both of the profiles have similar values of \(\langle\epsilon_{2}\rangle\). This indicates that even if the number of participants in a collision is similar, the distribution of the nucleons inside the nucleus significantly contributes to the eccentricity, which is expected to finally be reflected in the anisotropic flow coefficients given the hydrodynamical behavior of the medium formed. A similar trend of \(\langle\epsilon_{3}\rangle\) is observed as a function of centrality where the mean triangularity for both Woods-Saxon density profiles is increasing towards the peripheral collisions owing to the appearance of a more triangular shape. This trend of \(\langle\epsilon_{3}\rangle\) as a function of centrality has a peculiar behavior for the \(\alpha\)-cluster case where the value decreases from central to mid-central collisions, attains a minimum and then starts to rise again towards the peripheral collisions. Nevertheless, the value for the Woods-Saxon nuclear density profile dominates over the \(\alpha\)-cluster structure throughout the centrality selection except for the most central cases, i.e. (0-5)% and (5-10)%. The \(\alpha\)-cluster structure thus can have more significant event-by-event fluctuations in the participant distribution due to its larger triangularity in the most central collisions. It is to be noted that, due to a smaller collision system, the number of sources that contribute to \(\epsilon_{n}\) decreases, which can make \(\epsilon_{n}\) more significant in O-O collisions compared to Pb-Pb or Au-Au collision systems [72]. The right-most panel of Fig. 1 shows the ratio of mean triangularity to the eccentricity, i.e., \(\langle\epsilon_{3}\rangle/\langle\epsilon_{2}\rangle\) as a function of centrality. Here the ratio is plotted for both Woods-Saxon nuclear density profile and \(\alpha\)-clustered structure in O-O collisions at \(\sqrt{s_{\rm NN}}\) = 7 TeV using AMPT. The value of this ratio is explicitly higher for the \(\alpha\)-clustered structure in the most central case and fluctuates around the Woods-Saxon profile, consistent around unity in the mid-central to peripheral collisions. This demonstrates a balance between the geometry of the collisions and the fluctuations in the corresponding nucleon distributions. The exceptionally high value of \(\langle\epsilon_{3}\rangle/\langle\epsilon_{2}\rangle\) in the most central collisions is limited to the \(\alpha\)-clustered structure. 
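The qualitative difference between the two initial states can be explored with a lightweight Glauber-style toy: sample sixteen nucleon positions from either the 3pF profile of Eq. (1) or the tetrahedral \(\alpha\)-cluster construction of Sec. II.A, overlap two such nuclei at a small impact parameter, tag participants with a black-disk nucleon-nucleon cross section, and evaluate Eq. (10) for the participants. The sketch below is not the AMPT initial-state code; the cross section, the impact-parameter window used as a stand-in for central events, the recentring convention, and the simplified random orientation of the tetrahedron are all assumptions of the illustration.

```python
import numpy as np

rng = np.random.default_rng(7)
SIG_NN = 7.2                      # assumed inelastic NN cross section in fm^2 (~72 mb)
D2_MAX = SIG_NN / np.pi           # black-disk criterion on the squared transverse distance

def sample_3pF(n, r0, a, w):
    """Sample radial positions from the 3pF density of Eq. (1), then isotropic angles."""
    rmax = r0 + 8.0 * a
    dens = lambda r: r**2 * (1.0 + w * (r / r0) ** 2) / (1.0 + np.exp((r - r0) / a))
    bound = 1.05 * dens(np.linspace(0.0, rmax, 400)).max()
    rs = np.empty(0)
    while rs.size < n:
        r = rng.uniform(0.0, rmax, size=4 * n)
        rs = np.concatenate([rs, r[rng.uniform(0.0, bound, size=r.size) < dens(r)]])
    cth, ph = rng.uniform(-1.0, 1.0, size=n), rng.uniform(0.0, 2.0 * np.pi, size=n)
    sth = np.sqrt(1.0 - cth**2)
    return rs[:n, None] * np.column_stack([sth * np.cos(ph), sth * np.sin(ph), cth])

def oxygen(alpha_cluster):
    """One 16O configuration: default Woods-Saxon or four alpha clusters on a tetrahedron."""
    if not alpha_cluster:
        pos = sample_3pF(16, 2.608, 0.513, -0.051)
    else:
        side = 3.42                                   # tetrahedron side length (fm)
        verts = side * np.array([[1, 1, 1], [1, -1, -1], [-1, 1, -1], [-1, -1, 1]]) / np.sqrt(8.0)
        th, ph = np.arccos(rng.uniform(-1.0, 1.0)), rng.uniform(0.0, 2.0 * np.pi)
        Rz = np.array([[np.cos(ph), -np.sin(ph), 0], [np.sin(ph), np.cos(ph), 0], [0, 0, 1]])
        Ry = np.array([[np.cos(th), 0, np.sin(th)], [0, 1, 0], [-np.sin(th), 0, np.cos(th)]])
        verts = verts @ (Rz @ Ry).T                   # simplified random orientation
        pos = np.vstack([v + sample_3pF(4, 0.964, 0.322, 0.517) for v in verts])
    return pos - pos.mean(axis=0)

def eps_n(x, y, n):
    """Participant eccentricity/triangularity, Eq. (10), after recentring."""
    x, y = x - x.mean(), y - y.mean()
    r, ph = np.hypot(x, y), np.arctan2(y, x)
    return np.hypot((r**n * np.cos(n * ph)).mean(), (r**n * np.sin(n * ph)).mean()) / (r**n).mean()

def one_event(alpha_cluster, b):
    A, B = oxygen(alpha_cluster), oxygen(alpha_cluster)
    A[:, 0] += 0.5 * b
    B[:, 0] -= 0.5 * b
    d2 = ((A[:, None, :2] - B[None, :, :2]) ** 2).sum(axis=2)
    pts = np.vstack([A[(d2 < D2_MAX).any(axis=1)], B[(d2 < D2_MAX).any(axis=0)]])
    if len(pts) < 3:
        return None
    return eps_n(pts[:, 0], pts[:, 1], 2), eps_n(pts[:, 0], pts[:, 1], 3)

for label, ac in [("Woods-Saxon ", False), ("alpha-cluster", True)]:
    ev = [one_event(ac, b=rng.uniform(0.0, 2.0)) for _ in range(1000)]   # small b ~ central
    ev = np.array([e for e in ev if e is not None])
    print(f"{label}: <eps2> = {ev[:, 0].mean():.3f}, <eps3> = {ev[:, 1].mean():.3f}")
```

Within such a toy, the behaviour of \(\langle\epsilon_3\rangle\) relative to \(\langle\epsilon_2\rangle\) in the most central configurations can be traced directly to the four-center geometry of the clustered nucleus.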
Therefore, for a hydrodynamical evolution, observation of unprecedented high value of \(\langle v_{3}\rangle/\langle v_{2}\rangle\) in the most central O-O collisions could be a possible signature of such \(\alpha\)-clustered structure in the Oxygen nuclei. The observed high value of \(\langle\epsilon_{3}\rangle/\langle\epsilon_{2}\rangle\) in the most central O-O collisions for \(\alpha\)-clustered structure compared to Woods-Saxon nuclear density profile can be understood by studying the distribution of \(\epsilon_{2}\) and \(\epsilon_{3}\) for both the nuclear profiles separately. Figure 2 shows the eccentricity and triangularity distribution of most central (0-5)% cases in O-O collisions at \(\sqrt{s_{\rm NN}}\) = 7 TeV for both Woods-Saxon and \(\alpha\)-cluster density profiles estimated in AMPT. In Fig. 2, the eccentricity distribution is represented as the solid markers, and the triangularity distribution is represented as open markers. At the same time, the top and bottom panels of Fig. 2 show the cases with the Woods-Saxon and \(\alpha\)-cluster density profiles, respectively. In the Woods-Saxon case, one observes that both the eccentricity and triangularity have their peaks shifted towards the lower values, indicating relatively isotropic distributions of the participants in the transverse plane. The eccentricity distribution for the \(\alpha\)-clustered structure has comparatively less mean and standard deviation compared to the Woods-Saxon case, showing a relatively more isotropic distribution of participants in the \(\alpha\)-clustered structure than the Woods-Saxon case. On the other hand, the distribution of triangularity in the case of the \(\alpha\)-clustered structure is broader compared to the distribution of triangularity in the Woods-Saxon profile. It has a more considerable mean value and standard deviation. This implies that even if the participant distribution in the \(\alpha\)-clustered structure is more isotropic in shape, it has more in-built fluctuations inside. These features of the interplay between eccentricity and triangularity with respect to different nucleon distribution profiles could be studied using different correlation functions, such as the normalized symmetric cumulants discussed in the following subsection. Figure 3: (Color online) The normalized symmetric cumulants coefficient NSC(2,3) as a function of centrality for both Woods-Saxon and \(\alpha\)-cluster type nuclear density profiles in O–O collisions at \(\sqrt{s_{\rm NN}}\) = 7 TeV in AMPT string melting model. ### Normalized symmetric cumulants NSC(n,m) To quantify the positive or negative correlations between eccentricity and triangularity for different nuclear density profiles with respect to collision centrality, we define normalized symmetric cumulants coefficient NSC(n,m), which is given by [73]: \[\text{NSC(n,m)}=\frac{\langle{\varepsilon_{\text{n}}}^{2}{\varepsilon_{\text{m} }}^{2}\rangle-\langle{\varepsilon_{\text{n}}}^{2}\rangle\langle{\varepsilon_{ \text{m}}}^{2}\rangle}{\langle{\varepsilon_{\text{n}}^{2}}\rangle\langle{ \varepsilon_{\text{m}}^{2}}\rangle}. \tag{11}\] Figure 3 shows the normalized symmetric cumulants coefficient as a function of centrality for both Woods-Saxon and \(\alpha-\)clustered cases in O-O collisions at \(\sqrt{s_{\text{NN}}}=7\) TeV in AMPT string melting model. The negative coefficient values represent the anti-correlation between the two variables. We see a negative correlation for the \(\alpha\)-clustered case up to mid-central (20-30%). 
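Written out, Eq. (11) is a one-line estimator once per-event eccentricities are available, for instance from a toy sampler like the one sketched above; a negative output signals the anti-correlation discussed here. The array names in the usage comment are placeholders.

```python
import numpy as np

def nsc(eps_n, eps_m):
    """Normalized symmetric cumulant NSC(n,m) of Eq. (11) from per-event eccentricities."""
    a, b = np.asarray(eps_n) ** 2, np.asarray(eps_m) ** 2
    return (np.mean(a * b) - np.mean(a) * np.mean(b)) / (np.mean(a) * np.mean(b))

# usage (placeholder arrays): nsc(eps2_per_event, eps3_per_event) < 0
# indicates anti-correlated eps2 and eps3 across the event sample
```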
This suggests that there is an anti-correlation between \(\langle\epsilon_{2}\rangle\) and \(\langle\epsilon_{3}\rangle\) only in the case of \(\alpha\)-clustering. However, we got positive correlations for the Woods-Saxon density profile for all centralities. It should be noted that several recent studies investigate the properties of the initial-state eccentricities and final hadron flow observables from the collisions of clustered carbon and heavy ions at various beam energies in the event-by-event framework [73, 74]. ### Elliptic flow and triangular flow Figure 4 shows the \(p_{\text{T}}\)-integrated elliptic (left) and triangular flow (middle) and the ratio of triangular to the elliptic flow coefficient (right) as a function of collision centrality in O-O collisions at \(\sqrt{s_{\text{NN}}}=7\) TeV for both Woods-Saxon density profile and \(\alpha\)-clustered structure using AMPT. The elliptic flow for the Woods-Saxon density profile does not have a strong centrality dependence, whereas the \(\alpha\)-clustered structure is observed to have significant centrality dependence. Unlike the Woods-Saxon profile, where the elliptic flow is finite yet almost flat with centrality, in \(\alpha\)-clustered structure, the elliptic flow value increases as one moves initially from central to mid-central collisions, and then it remains flat upto (30-40)% centrality class. Thereafter, the value decreases towards the peripheral collisions. On the other hand, triangular flow for both Woods-Saxon density profile and \(\alpha\)-clustered structure have similar trends, i.e., maximum at the central collisions and decreases towards the peripheral collisions. This structure of triangular flow is peculiar to observe, considering the heavy-ion-like behavior where the value for the triangular flow peaks at the mid-central collisions. The cause of the peculiar behavior of elliptic and triangular flow may be due to the fact that the smaller system size and shorter lifetime of the fireball do not fully help transform the initial eccentricities to the final-state anisotropic flow coefficients. It is to be noted that the \(\alpha\)-clustered structure has a more significant triangular flow compared to the Woods-Saxon density profile throughout the centrality classes. In addition, in the right plot of Fig. 4, where \(\langle v_{3}\rangle/\langle v_{2}\rangle\) as a function of centrality is shown, one observes the ratio to be below one throughout the centrality classes and for both the density profiles and as one goes towards the peripheral collisions, the ratio seems to decrease for both the nuclear profiles. The value of \(\langle v_{3}\rangle/\langle v_{2}\rangle\) for \(\alpha\)-clustered structure is larger than the Woods-Saxon density profile towards the most central and peripheral cases. Interestingly, one observes a sharp like in the \(\langle v_{3}\rangle/\langle v_{2}\rangle\) value for the most central case, which is inferred from the right panel of Fig. 1, i.e., \(\langle\epsilon_{3}\rangle/\langle\epsilon_{2}\rangle\) vs. centrality. This might be a possible signature of \(\alpha\)-clustered structure of Oxygen nuclei in O-O collisions which can be verified in future experimental studies. In Fig. 5, \(\langle v_{2}\rangle/\langle\epsilon_{2}\rangle\) (top) and \(\langle v_{3}\rangle/\langle\epsilon_{3}\rangle\) (bottom) as a function of centrality for \(\alpha\)-clustered structure and Woods-Saxon density profile for O-O collisions at \(\sqrt{s_{\text{NN}}}\) = 7 TeV are shown. In Fig. 
5, \(\langle v_{2}\rangle/\langle\epsilon_{2}\rangle\) and \(\langle v_{3}\rangle/\langle\epsilon_{3}\rangle\) is observed to be decreasing towards the peripheral collisions for both the nuclear profiles; however, both these ratios for the \(\alpha\)-clustered structure are larger as compared to the Woods-Saxon density profile. Both \(\langle v_{2}\rangle/\langle\epsilon_{2}\rangle\) and \(\langle v_{3}\rangle/\langle\epsilon_{3}\rangle\) tell about the effect of the medium on the evolution of the flow coefficients, i.e., \(\langle v_{2}\rangle\) and \(\langle v_{3}\rangle\) from initial eccentricities, i.e., \(\langle\epsilon_{2}\rangle\) and \(\langle\epsilon_{3}\rangle\), respectively. As discussed by the authors in Ref. [75], it is known that anisotropic flow coefficients of different order are affected differently by the medium formed, and as the order of the flow coefficients increase, their sensitivity to the viscosity of the medium increase. Thus the observed enhanced values of \(\langle v_{2}\rangle/\langle\epsilon_{2}\rangle\) and \(\langle v_{3}\rangle/\langle\epsilon_{3}\rangle\) for \(\alpha\)-clustered structure compared to Woods-Saxon density profile may signify that the choice of initial density profile can have an effect on the viscosity of the medium formed. ### Elliptic flow of light-flavor hadrons and NCQ scaling Figure 6 shows the two-particle azimuthal correlation function (\(C(\Delta\phi)\)) for \(\pi^{\pm}\), \(K^{\pm}\), and \(p+\bar{p}\) in the most central O-O collisions at \(\sqrt{s_{\text{NN}}}=7\) TeV in the relative azimuthal angle \(\Delta\phi\in[-\pi/2,3\pi/2]\). The blue dots and the red triangles represent the cases for the nucleus having the Woods-Saxon and tetrahedral \(\alpha\)-cluster density profiles respectively. The correlation function is constructed in the transverse momentum range, \(0.5<p_{\text{T}}^{\text{n}},p_{\text{T}}^{\text{b}}<5.0\) GeV/\(c\), in the relative pseudorapidity cut \(1.0<|\Delta\eta|<4.8\). This pseudorapidity cut ensures the removal of short-range resonance decays and mini-jets contributing to the non-flow effects. The magnitude of the peak of the correlation function is related to the magnitude of the anisotropic flow coefficients. Both the density profiles show similar magnitudes of peaks at the near side (\(\Delta\phi\simeq 0\)). However, for the case with \(\alpha\)-cluster, there is an away-side (\(\Delta\phi\simeq\pi\)) broadening and suppression in the two-particle azimuthal correlation function. This effect gets more pronounced as one moves from pion to kaon and then to proton. This away-side valley may arise due to the more violent interactions among the hadrons caused due to the more compact and denser fireball created in nuclear collisions having \(\alpha\)-clusters, which also results in higher multiplicity than the Woods-Saxon case in similar centrality bins [28]. The presence of two peaks on the away side adds to this understanding as it leads to an enhanced contribution to the triangular flow [69; 46]. In short, by comparing the \(C(\Delta\phi)\) distributions of an ordinary Woods-Saxon nucleus with the \(\alpha\)-clustered nucleus, one can observe a dependence of the azimuthal correlation function on the initial density profile of the nucleus. These results are in line with the observations reported in Ref. [33]. It can be noted that there might be residual jet-like correlations leading to the away-side signal suppression. 
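For completeness, the per-harmonic bookkeeping behind these correlation functions (Eqs. (5)-(9)) can be sketched with Q-vectors on a synthetic particle sample: a "trigger" and an "associated" class are generated with assumed single-particle harmonics and a common symmetry plane, the pair coefficients \(v_{n,n}\) are formed, and Eq. (9) returns the single-particle \(v_n\). The input values and the single fixed event plane are assumptions of the demonstration, not AMPT output.

```python
import numpy as np

rng = np.random.default_rng(3)

def sample_phi(n, vn):
    """Accept-reject phi sample with input harmonics vn = {order: value}, plane at 0."""
    out, fmax = np.empty(0), 1.0 + 2.0 * sum(abs(v) for v in vn.values())
    while out.size < n:
        c = rng.uniform(0.0, 2.0 * np.pi, size=n)
        f = 1.0 + 2.0 * sum(v * np.cos(k * c) for k, v in vn.items())
        out = np.concatenate([out, c[rng.uniform(0.0, fmax, size=n) < f]])
    return out[:n]

def vnn(phi1, phi2, n):
    """All-pairs <cos(n*dphi)> between two particle classes (Eq. (6)), via Q-vectors."""
    q1, q2 = np.exp(1j * n * phi1).sum(), np.exp(1j * n * phi2).sum()
    if phi1 is phi2:   # same class: remove self-pairs
        return (abs(q1) ** 2 - phi1.size) / (phi1.size * (phi1.size - 1))
    return np.real(q1 * np.conj(q2)) / (phi1.size * phi2.size)

va, vb = {2: 0.10, 3: 0.04}, {2: 0.06, 3: 0.02}       # assumed harmonics of classes a and b
phi_a, phi_b = sample_phi(500_000, va), sample_phi(500_000, vb)
for n in (2, 3):
    v_n_a = vnn(phi_a, phi_b, n) / np.sqrt(vnn(phi_b, phi_b, n))   # Eq. (9)
    print(f"v{n}(a): input {va[n]:.3f}, recovered {v_n_a:.3f}")
```

With finite samples the \(v_3\) extraction carries a few-percent statistical uncertainty, since the factorized pair coefficient in the denominator of Eq. (9) is itself small.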
Proton being the massive one, shows a relatively higher suppression in the medium than kaon and pion. Figure 7 shows centrality dependence of single particle elliptic flow coefficients for \(\pi^{\pm}\), \(K^{\pm}\), and \(p+\bar{p}\) in O-O collisions at \(\sqrt{s_{\rm NN}}=7\) TeV, for Woods-Saxon and \(\alpha\)-clustered density profiles. Three centrality bins are chosen for this study, the most central (0-5)%, intermediate (20-30)%, and noncentral (40-50)%. For the Woods-Saxon case, there is a very weak dependency of \(v_{2}(p_{\rm T})\) on centrality for the three particle types. But for the \(\alpha\)-clustered case, in (20-30)% centrality, there is a higher \(v_{2}(p_{\rm T})\) as compared to the other centrality bins. In the Woods-Saxon case, we argue that the smaller system size does not allow much variation in \(v_{2}\) as a function of centrality, irrespective of an increasing \(\epsilon_{2}\); however, for the \(\alpha\)-cluster case, the more compact geometry tends to produce comparatively a denser medium, and thus the variation of \(v_{2}\) with respect to centrality comes into picture. Now moving onto the particle types, at low \(p_{\rm T}\), there is a distinct mass ordering in the elliptic flow of \(\pi^{\pm}\), \(K^{\pm}\), and \(p+\bar{p}\). This is understood to have originated from the competing effects of radial (symmetric) flow and anisotropic flow. In the intermediate \(p_{\rm T}\), the baryon-meson flow separation occurs, with baryon \(v_{2}\) being greater than that of the meson. This comes into existence due to the quark coalescence mechanism of hadronization embedded in the AMPT string melting model. Figure 8 shows the centrality dependence of \(v_{2}(p_{\rm T}^{a})/n_{q}\) scaling as a function of \((m_{\rm T}-m_{0})/n_{q}\) for \(\pi^{\pm}\), \(K^{\pm}\) and \(p+\bar{p}\) in O-O collisions at \(\sqrt{s_{\rm NN}}=7\) TeV for both Woods-Saxon and \(\alpha\)-cluster type nuclear density profiles. Here, \(n_{q}=2\) for mesons, \(n_{q}=3\) for baryons, and the transverse kinetic energy, \(KE_{\rm T}=(m_{\rm T}-m_{0})\). These plots quantitatively show the elliptic flow of the constituent quarks as a function of their transverse kinetic energy. As discussed earlier, within the AMPT framework, in Pb-Pb collisions at the LHC energies, the NCQ scaling is violated. However, at the same energy in Si-Si collision Figure 4: (Color online) Integrated elliptic flow (\(\langle v_{2}\rangle\)) (left), triangular flow (\(\langle v_{3}\rangle\)) (middle), and the ratio (\(\langle v_{3}\rangle/\langle v_{2}\rangle\)) (right) as a function of centrality for both Woods-Saxon and \(\alpha\)-cluster type nuclear density profiles in O–O collisions at \(\sqrt{s_{\rm NN}}=7\) TeV from AMPT model. Figure 5: (Color online) The ratio \(\langle v_{2}\rangle/\langle\epsilon_{2}\rangle\) (top) and \(\langle v_{3}\rangle/\langle\epsilon_{3}\rangle\) (bottom) for O–O collisions at \(\sqrt{s_{\rm NN}}=7\) TeV for both Woods-Saxon and \(\alpha\)-cluster type nuclear density profiles in AMPT. system, the NCQ scaling is found to be valid. In O-O collisions, which is an even smaller system, the scaling is valid for all centrality classes irrespective of the Woods-Saxon or \(\alpha\)-clustered type nucleus. Although the presence of different nuclear geometry affected the azimuthal correlation function, as seen in Fig. 6, it does not seem to play any role in the partonic level collectivity. Hence, the away-side broadening seen in Fig. 
6, may be developed later in the hadronic phase due to the hadron-hadron interactions, leading to signal suppression. If this is the case, in a denser hadronic medium, the suppression will be higher, and the same is observed in the \(\alpha\)-clustered nuclear collisions. ## IV Summary In this paper, we have investigated the effect of Woods-Saxon and \(\alpha\)-clustered nuclear geometry on the eccentricity and triangularity along with their correlations, elliptic flow, triangularity flow, and NCQ scaling in O-O collisions at \(\sqrt{s_{\rm NN}}=7\) TeV in the framework of a multi-phase transport model. The key findings are summarized below: Figure 6: (Color online) Two-particle azimuthal correlation function for \(\pi^{\pm}\), \(K^{\pm}\), and \(p+\bar{p}\) in the most central O–O collisions at \(\sqrt{s_{\rm NN}}=7\) TeV using Woods-Saxon and \(\alpha\)-cluster type nuclear density profiles. Figure 7: (Color online) Transverse momentum (\(p_{\rm T}\)) dependence of \(v_{2}(p_{\rm T})\) for \(\pi^{\pm}\), K\({}^{\pm}\), and p + \(\bar{\rm p}\) in O–O collisions at \(\sqrt{s_{\rm NN}}=7\) TeV. Results include Woods-Saxon and \(\alpha\)-cluster type nuclear charge density profiles for the oxygen nucleus. * Eccentricity and triangularity are found to vary with a change in the density profiles. However, the effects are more pronounced in the most central case, where the initial state has more triangularity than eccentricity for an \(\alpha\)-clustered Oxygen nucleus as compared to the normal Woods-Saxon type distribution. * Employing the normalized symmetric cumulants, we observe that the strength of the correlation between eccentricity and triangularity for the Woods-Saxon density profile is more than for the \(\alpha\)-clustered structure. Also, the appearance of negative NSC (2,3) value for the \(\alpha-\)clustered nucleus in the most central cases is observed. * In the Woods-Saxon type nucleus, the elliptic flow is found to depend weakly on the centrality of the collision. However, in the \(\alpha-\)clustered nucleus, the elliptic flow increases from central to mid-central collisions and then decreases while moving from mid to peripheral collisions. * We report an enhancement in the \(\langle v_{3}\rangle/\langle v_{2}\rangle\) towards the most central collisions for the \(\alpha-\)clustered nucleus than the Woods-Saxon case. * The two-particle azimuthal correlation function (\(C(\Delta\phi)\)) of the identified particles shows an away-side broadening for the \(\alpha\)-clustered type nucleus. This hints towards a denser and more compact system formation in the \(\alpha\)-clustered nucleus. * The NCQ scaling is valid for all centrality classes for both Woods-Saxon and \(\alpha-\)clustered type of nucleus. This observation is crucial as it hints towards the existence of a deconfined partonic medium in O-O collisions at \(\sqrt{s_{\rm NN}}=7\) TeV and the appearance of partonic collectivity. * The observation of similar partonic collectivity with the presence of an away-side broadening hint towards the fact that the nuclear density profile has a greater influence in the hadronic phase collectivity than the initial partonic level due to the formation of a more compact and denser system in an \(\alpha\)-clustered nucleus. It would be interesting to compare these findings to experimental observations when experimental data are available in order to determine the density profile of the oxygen nucleus that is best suited to describe ultrarelativistic nuclear collisions. 
Although probing the nuclear density profile is a matter of low-energy nuclear scattering experiments, some of the observables in TeV nuclear collisions may be sensitive to the nuclear density profiles. In this study, we report a few such observables in heavy-ion collisions which could be sensitive to the nuclear density profiles and should be studied in experimental data. ## Acknowledgements D.B. acknowledges the financial support from CSIR, the Government of India. S.P. acknowledges the financial support from UGC, the Government of India. R. S. sincerely acknowledges the DAE-DST, Government of India funding under the Mega-Science Project - "Indian participation in the ALICE experiment at CERN" bearing Project No. SR/MF/PS-02/2021-IITI (E-37123). The authors gratefully acknowledge the usage of resources of the LHC grid Tier-3 computing facility at IIT Indore.
2301.05314
Light-controlled multi-phase structuring of perovskite crystal enabled by thermoplasmonic metasurface
Halide perovskites belong to an important family of semiconducting materials with unique electronic properties that enable a myriad of applications, especially in photovoltaics and optoelectronics. Their optical properties, including photoluminescence quantum yield, are affected and notably enhanced at crystal imperfections where the symmetry is broken and the density of states increases. These lattice distortions can be introduced through structural phase transitions, allowing charge gradients to appear near the interfaces between phase structures. In this work, we demonstrate controlled multi-phase structuring in a single perovskite crystal. The concept uses cesium lead bromine (CsPbBr3) placed on a thermoplasmonic TiN/Si metasurface and enables single, double and triple phase structures to form on demand above the room temperature. This approach opens up application horizons of dynamically controlled heterostructures with distinctive electronic and enhanced optical properties.
Sergey S. Kharintsev, Elina I. Battalova, Timur A. Mukhametzyanov, Anatoly P. Pushkarev, Ivan G. Scheblykin, Sergey V. Makarov, Eric O. Potma, Dmitry A. Fishman
2023-01-12T21:53:28Z
http://arxiv.org/abs/2301.05314v1
Light-controlled multi-phase structuring of perovskite crystal enabled by thermoplasmonic metasurface ###### Abstract Halide perovskites belong to an important family of semiconducting materials with unique electronic properties that enable a myriad of applications, especially in photovoltaics and optoelectronics. Their optical properties, including photoluminescence quantum yield, are affected and notably enhanced at crystal imperfections where the symmetry is broken and the density of states increases. These lattice distortions can be introduced through structural phase transitions, allowing charge gradients to appear near the interfaces between phase structures. In this work, we demonstrate controlled multi-phase structuring in a single perovskite crystal. The concept uses cesium lead bromine (CsPbBr\({}_{3}\)) placed on a thermoplasmonic TiN/Si metasurface and enables single, double and triple phase structures to form on demand above the room temperature. This approach opens up application horizons of dynamically controlled heterostructures with distinctive electronic and enhanced optical properties. halide perovskite, thermoplasmonics, metasurface, optical heating, phase transition, twin domains, Raman scattering, photoluminescence, fast differential scanning calorimetry, piezoresponse reflection microscopy. ## I Introduction Perovskite-structured direct bandgap semiconductors form an important class of materials with equally important applications. The presence of antibonding states near the maximum of the valence band gives rise to a defect-tolerant semiconductor material with unique electronic and optical properties, including fast charge transport, an extended free carrier diffusion length, a high exciton binding energy, and bandgap tunability.[1, 2, 3, 4, 5, 6] These properties have already enabled promising applications in photovoltaics and solar energy conversion with \(>\)20% efficiency[7], as well as lasing and light-emitting devices.[3, 4, 6, 8] The properties of perovskite derive from its specific ABX\({}_{3}\) architecture, where A and B are cations, and X are anions arranged into chemically stable corner-sharing octahedral BX\({}_{6}\) frameworks. There are three main crystallographic phases in which perovskites exist, namely: orthorhombic (\(\gamma\)), tetragonal (\(\beta\)), and cubic (\(\alpha\)). Transitions between aforementioned phases contribute to the formation of multiple structural domains and lattice imperfections, specifically through crystal twinning. This latter process can be understood as an immunity response to the loss of symmetry, when, upon minimizing the Gibbs free energy, the system relaxes into a thermodynamically stable state by forming ferroelastic, near-orthogonal domains. These formations have been widely studied at the nano-, micro- and mesoscales [9, 10, 11, 12, 13]. In perovskites, crystal twinning is associated with lowering of the lattice symmetry by tilting and contortion of octahedrons, which causes spontaneous intrinsic stress. The introduction of such structural distortions will significantly affect local electronic structure, hence transition probabilities. Moreover, the density of states is considerably larger at these sites, where the latter can be viewed as an optical nanoantenna [14]. Temperature [15, 16] and pressure [17] perturbations have been utilized to alter the system's properties through manipulation of the crystal structure and defect density. 
For many perovskite systems, the bandgap for the tetragonal phase is slightly lower compared to that of the orthorhombic and cubic phases [18, 19]. This causes free carriers to migrate from high to low bandgap areas, an effect that is most pronounced near phase transition sites. It has been hypothesized that such spatially non-uniform transport leads to a local build-up of free carriers in tetragonal domains [20, 21]. Such an accumulation of free carriers increases the probability for electrons and holes to radiatively recombine, affecting and, ultimately, enhancing the Raman and photoluminescence efficiencies near the lattice distortions sites. _If temporally and spatially controlled, this effect could be used to actively tune the optoelectronic properties of the material, such as boosting the brightness of light-emitting devices [1, 3] or modulating the lasing efficiency_[22, 23]. Control of such enhanced emission requires precise manipulation of phase structuring within the single crystal. This manipulation, in turn, necessitates control of the phase transitions and thus dynamic management of the local temperature within the material. There are two major prerequisites for producing multi-phase structures in a dynamically controlled manner - (1) a mechanism for rapid and efficient heating at the nano- to micro-scales and (2) a mechanism for heat release from the heated location. In this context, the heating mechanism at small spatial scales can benefit from the thermoplasmonic effect, through which heat can be locally generated via absorption of incident light by a plasmonically resonant structure [24, 25, 26]. This approach has been shown useful for efficient heat generation (up to a few thousands of K), followed by the rapid directional heat transfer to the material of interest [26, 27]. Note that all-inorganic halide perovskites have remarkably small thermal conductivity (0.42 W m\({}^{-1}\)K-1) [28], yet possess high photo- and thermal stability. The latter underlines the possibility of maintaining spatial and temporal stability of the heat pattern and gradients across a single crystal, resulting in a stable multi-phase semiconducting system. In this work, we demonstrate controlled multi-phase structuring of a single crystal of cesium lead bromine (CsPbBr\({}_{3}\)). We achieve this level of control by placing the perovskite crystal on a thermoplasmonic metasurface that consists of a 2D array of stacked titanium nitride (TiN) plasmonic nano-pads on top of silicon (Si) nano-pillars (Figure 1a) [29]. When irradiated with visible light at a wavelength resonant with the TiN structure, the plasmonic nano-pad serves as an _optically switchable heater_, while the Si pillar provides a channel for heat dissipation. This geometry produces sub-wavelength thermal gradients across the perovskite microplate, triggering on demand formation of stable phase domains within the original single crystal. ## 2 Results and discussion ### Device concept CsPbBr\({}_{3}\) perovskite undergoes two reversible phase transitions above the room temperature. These are the orthorhombic-to-tetragonal (361 K) and the tetragonal-to-cubic (403 K) phase transitions as determined with fast scanning calorimetry (FSC, Supplementary Figure S1 and Supplementary Section 1). As the rate of the temperature sweep increases, the data clearly shows the lack of mirror symmetry between heating and cooling experiments. 
This observation points to defect-induced spatial heterogeneity within the crystal and reflects an imbalance in the potential energy barriers associated with the conversion of structure from lower to higher symmetries and _vice versa_. This makes the FSC method highly sensitive to the crystalline imperfection content and density. The presence of these phase transitions at high temperatures underlines the possibility to form a combination of different crystal phases in the material if steep and steady temperature gradients are introduced. Figure 1a schematically illustrates such a device concept that operates at ambient laboratory conditions. The CsPbBr\({}_{3}\) perovskite platelet (10 \(\upmu\)m x 14 \(\upmu\)m x 1 \(\upmu\)m) is placed on a metasurface that is comprised of a hexagonal 2D array of Si pillars with a subwavelength base (\(L\)\(<\)\(\lambda\)). Each nanopillar is capped by a TiN plasmonic pad on top (Figures 1a and 1b). Upon illumination (continuous wave, 633 nm, 16 mW, 0.6 \(\upmu\)m spot size, NA=0.7), the TiN pad functions as a photothermal heater, while the Si pillar transfers heat down to the bulk substrate. Silicon was chosen as the thermostat material because of its large thermal conductivity (148 Figure 1: (a) Schematic representation of a CsPbBr\({}_{3}\) platelet mounted on a metasurface array. (b) A 72\({}^{\circ}\) tilted SEM image of the edge facet of the CsPbBr\({}_{3}\) platelet on the TiN metasurface. (c) Optical heating of halide perovskite crystal by the TiN/Si nanostructure. Color areas within the temperature gradient represent the \(\gamma\) phase of the Pnma space group (blue), the \(\beta\) phase of the P4/mbm space group (green) and the \(\alpha\) phase of the Pm3m space group (red). (d) Finite-difference time-domain (FDTD) and finite element method (FEM) simulations of the axial temperature distribution across the TiN/Si and CsPbBr\({}_{3}\) crystal. W m-1K-1) and strong Raman response (Si-Si 521 cm-1). Moreover, its Raman activity is temperature sensitive, permitting its use as a probe for Raman-based thermometry. The light-to-heat conversion is expected to be maximum at the plasmonic absorption resonance of the TiN structure, as characterized by the absorption power \(P=\sigma_{abs}I_{0}\), where \(\sigma_{abs}\) is the absorption cross section and \(I_{0}\) is the incident intensity [24, 25]. The accessible temperature range at a thermal stationary state of the system will depend on several factors, namely the effective thermal conductivity of Si, the pillar's lateral and axial dimensions, the permittivity \(\varepsilon\) of the TiN pad and the incident flux \(I_{0}\). The pillar geometry, defined by base lateral size \(L\) and height \(h\), governs the heat dissipation efficiency and its effect has been discussed previously for composite TiN/Si rods [29, 30, 31], tubes and trenches [32]. 
Taking into account the Fröhlich resonance condition, we can derive the temperature change at the top of the Si pillar as a function of the structure height \(h\) and the incident light intensity \(I_{0}\) as follows [29, 31]:

\[\varepsilon_{\text{TiN}}^{\prime}(\lambda_{0})=-2\varepsilon_{\text{Si}},\]

\[\Delta T_{L}(h,I_{0})\approx\frac{3}{\varepsilon_{\text{TiN}}^{\prime}}\,\frac{L^{2}Q^{2}}{\beta A_{0}}\left[\frac{1}{\kappa_{\text{Si}}}I_{0}-\frac{\sigma_{\text{abs}}}{\kappa_{\text{Si}}^{2}}\frac{\partial\kappa_{\text{Si}}}{\partial T}\,h\,I_{0}^{2}\right], \tag{1}\]

where \(\lambda_{0}\) is the wavelength at the plasmonic resonance, \(\varepsilon_{\text{TiN}}=\varepsilon_{\text{TiN}}^{\prime}+i\varepsilon_{\text{TiN}}^{\prime\prime}\) is the complex permittivity of the TiN heater, \(Q=-\varepsilon_{\text{TiN}}^{\prime}/\varepsilon_{\text{TiN}}^{\prime\prime}\) is the Q-factor of the plasmon resonance, \(\kappa_{\text{Si}}\) is the temperature-dependent thermal conductivity of bulk Si and \(\beta\) is the geometry-dependent dimensionless thermal capacity of TiN [25]. For smaller pillar heights (\(<\)200 nm), the first term dominates and \(\Delta T_{L}\) is expected to show a linear dependence on \(I_{0}\) (Figure S2, see Supplementary Information) [29]. For taller pillars, the contribution of the second term increases accordingly, resulting in a quadratic dependence of the temperature on \(I_{0}\). Moreover, \(\Delta T_{L}\) then also depends on the first temperature derivative of \(\kappa_{\text{Si}}\). For bulk Si, this derivative has a negative sign above room temperature and, thus, \(\Delta T_{L}\) should monotonically increase with the incident intensity. For structures with heights exceeding \(h>\)500 nm, significant deviations from experimental observations have been reported and explained in terms of thermal anisotropy [32]. Because the thermal conductivity of Si (\(\kappa_{\text{Si}}=\) 148 W m\({}^{-1}\) K\({}^{-1}\)) significantly exceeds that of both air (\(\kappa_{\text{air}}=\) 0.0263 W m\({}^{-1}\) K\({}^{-1}\)) and CsPbBr\({}_{3}\) perovskite (\(\kappa_{\text{CsPbBr}_{3}}=\) 0.42 W m\({}^{-1}\) K\({}^{-1}\)), the pillar structure becomes the dominant channel for heat dissipation, with its geometry being the key factor in determining the steady-state temperature profile. _Hence, pillars of a specific height provide access to specific temperature ranges, while fine control within this range can be realized by varying the incident light intensity \(I_{0}\). When irradiated, an array of such nano-heaters can generate a two-dimensional temperature pattern formed by sub-wavelength hot spots (\(L<\lambda\))._ Induced thermal gradients along the axial direction in the perovskite allow particular phase domains to be formed, depending on the distance from the heating TiN pad (Figure 1c). Figure 1d shows a combined finite-difference time-domain (FDTD) and finite element method (FEM) (ANSYS/Lumerical) simulation of the axial temperature distribution. The simulation reveals the axial heat distribution within the layered system, comprised of a 1 \(\upmu\)m Si pillar, a 50 nm TiN pad and a 1 \(\upmu\)m CsPbBr\({}_{3}\) crystal. A pillar of this height is associated with a steady-state temperature range of 320–520 K, or a 0.23 K/nm thermal gradient within the Si material. The maximum temperature at the plasmonic structure is chosen to be 630 K, a critical temperature point beyond which the CsPbBr\({}_{3}\) optoelectronic properties drastically change [33].
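To make the scaling behaviour encoded in Eq. (1) concrete, the short Python sketch below evaluates the linear and quadratic contributions to \(\Delta T_{L}\) for a few pillar heights and incident intensities. All numerical values (\(L\), \(Q\), \(\beta\), \(A_{0}\), \(\sigma_{\rm abs}\), \(\partial\kappa_{\rm Si}/\partial T\), and the use of \(|\varepsilon^{\prime}_{\rm TiN}|\) in the prefactor) are illustrative assumptions rather than the fitted parameters of Refs. [29, 31]; the only point is the crossover from a linear to a quadratic intensity dependence as the pillar height grows.

```python
import numpy as np

# Illustrative placeholder parameters -- not the fitted values of Refs. [29, 31].
L       = 500e-9                     # pillar base size (m), assumed
Q       = 3.0                        # plasmon-resonance Q-factor, assumed
beta    = 5.0                        # dimensionless thermal capacity of the TiN pad, assumed
A0      = np.pi * (300e-9) ** 2      # illuminated area (m^2), assumed
sigma   = 1e-14                      # absorption cross section (m^2), assumed
k_si    = 148.0                      # thermal conductivity of bulk Si (W m^-1 K^-1)
dk_dT   = -0.3                       # d(kappa_Si)/dT (W m^-1 K^-2), negative above 300 K, assumed
eps_mod = 2.0 * 11.7                 # |eps'_TiN| at the Froehlich condition, eps_Si ~ 11.7

def delta_T(h, I0):
    """Two-term estimate of the pad temperature rise, following the structure of Eq. (1)."""
    prefactor = 3.0 * L**2 * Q**2 / (beta * A0 * eps_mod)
    linear    = I0 / k_si                               # dominates for short pillars
    quadratic = -(sigma / k_si**2) * dk_dT * h * I0**2  # grows with pillar height h
    return prefactor * (linear + quadratic)

for h in (100e-9, 500e-9, 900e-9):              # pillar heights
    for I0 in (1e10, 3.5e10, 5e10):             # incident intensity (W m^-2), i.e. 1-5 MW cm^-2
        print(f"h = {h*1e9:4.0f} nm, I0 = {I0/1e10:.1f} MW/cm^2 -> dT ~ {delta_T(h, I0):.3g} K")
```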
The temperature gradient in the perovskite interior follows a \(|z|^{-0.54}\) dependence (\(R^{2}\)=0.997), which is mainly determined by the crystal thermal conductivity. In these simulations, the surrounding medium is assumed to be air. Depending on the initial nano-pad temperature (i.e. input light flux), the crystal interior can be comprised of a single \(\gamma\) phase or a structure of two or all three phases, as shown for \(T_{0}\)=630 K in Figure 1c.

### Optical visualization of phase transitions and crystal twinning

The real-time dynamics of domain formation and twinning in the perovskite crystal on the microscale is shown in Supplementary Movie SM1. In this experiment, the crystal was placed on a hot plate to perform temperature sweeps from 340 to 410 K and back, spanning the orthorhombic-to-tetragonal-to-cubic phase transitions. The heating and cooling rates were sufficiently slow to allow a uniform temperature to establish itself throughout the crystal. Optical imaging and other spectroscopic experiments were performed with the aid of a sample piezo positioning feedback system, as described in Section 3 of the Supplementary Information. This solution overcomes experimental obstacles such as the thermal expansion of the sample and setup elements. It also corrects for beam defocusing by the Bragg-like grating formed through crystal twinning within the sample volume (see _Methods_ and _Supplementary Information_, Figure S3 and Section 3). Figure 2 depicts confocal reflection images of the crystal surface at selected steady-state temperatures.

Figure 2: Confocal reflection images of a CsPbBr\({}_{3}\) crystal using polarized light at (a) 303 K (orthorhombic phase), (b) 393 K (tetragonal phase), (c) 408 K (cubic phase) and back at (d) 393 K (tetragonal phase) and (e) 303 K (orthorhombic phase).

At 303 K (Figure 2a), the crystal consists of parallel domains of the \(\gamma\) phase that are oriented at a 45\({}^{\circ}\) angle (\(<\)110\(>\)) relative to the lab frame. The transition to the tetragonal phase occurs around 393 K (Figure 2b). As the temperature is increased to 408 K, the stripes disappear completely, indicating the formation of a homogeneous cubic crystal (\(\alpha\) phase, Figure 2c). As the system cools down and crosses the cubic-to-tetragonal transition, crystal twinning triggers the formation of multiple tetragonal domains (Figure 2d), and a further temperature decrease brings the crystal back into the orthorhombic phase (Figure 2e). The data shows clear differences between the images of the crystal at the same temperature points, but at opposite ends of the temperature cycle (Figures 2a and 2e). This difference in the patterns further confirms the results of the FSC experiments (Figure S2), indicating that the potential energy barriers are different when the phase transition proceeds along different directions of the temperature sweep. _The symmetries of the original and final crystallographic phases associated with a transition are the key factors in the evolution of the crystal twinning._ Upon careful examination, it is clear that the resultant orthorhombic phase reveals a 7\({}^{\circ}\) deviation angle relative to the previously orthogonal orientation of the domains in the tetragonal phase (Figures 2d and 2e). This is in excellent agreement with previous calculations by density functional theory (DFT) that yielded a \(\phi\approx\)13\({}^{\circ}\) octahedral tilt for orthorhombic CsPbBr\({}_{3}\) [18].
The rotation of the corner-sharing Br atoms of the [PbBr\({}_{6}\)]\({}^{4}\)- octahedron in the equatorial plane by \(\phi\)/2\(\sim\) 6.5\({}^{0}\) should result in a relative reorientation of the domains to 90\({}^{0}\)-\(\phi\)/2\(\sim\) 83.5\({}^{0}\), as demonstrated in Figure 3f. ### Temperature dependence of Raman and photoluminescence signatures Both the FSC and the optical imaging experiments reveal information about the phase transitions in the perovskite crystal, and both measurements point to the importance of lattice imperfections and distortions. To examine their role at the microscopic level, we performed Raman and photoluminescence experiments, which are particularly sensitive to the electronic structure near defects and phase interfaces. The Raman spectrum of CsPbBr\({}_{3}\) perovskite features two main low energy vibrational modes, namely the 127 cm-1 TO (first-order transverse optical) and the 312 cm-1 2LO (second-order longitudinal optical) Pb-Br stretching phonon modes (Figure 3a-d)[34]. It is important to note that the presence of the 312 cm-1 peak in the Raman spectrum is evidence for the more pristine CsPbBr\({}_{3}\) structure relative to the presence of CsPb\({}_{2}\)Br\({}_{5}\), with the latter being the result of exposure to water[35]. The temperature dependence of the Raman spectra for both the TO and 2LO modes are different for various directions of the temperature sweep (Figures 3e and 3f) of a uniformly heated crystal. As the temperature is increased, the intensity of the TO phonon line (127 cm-1) undergoes two extrema corresponding to the orthorhombic-to-tetragonal (361 K) and tetragonal-to-cubic (403 K) phase transitions (Figure 3a and 3e). The \(\beta\)-CsPbBr\({}_{3}\) phase reveals an expected trend, namely the decrease of the Stokes intensity with temperature, caused by the bandgap widening and the depletion of carriers in the valence band[18, 19]. Meanwhile, the temperature dependence of the Stokes bands ascribed to the \(\alpha\) and \(\gamma\) phases shows the opposite trend (Figure 3e). This observation can be explained by the interplay between the thermal volumetric expansion and the tilt of the [PbBr\({}_{6}\)]\({}^{4}\)-octahedra[18]. It has been predicted that both mechanisms are capable of significant widening of the bandgap[19], estimated to be \(<\)2.0 eV for \(\beta\)-CsPbBr\({}_{3}\)_versus_\(\sim\)2.36 eV for \(\gamma\)-CsPbBr\({}_{3}\) and \(\sim\)2.4 eV for \(\alpha\)-CsPbBr\({}_{3}\). These bandgap variations offer possible explanations for the observed positive temperature trends. For example, for the given experiments the Raman process in \(\beta\)-phase is closer to the resonance for the used excitation photon energy (633 nm, 1.96 eV). This may lead to a signal increase when more \(\beta\)-phase sites are introduced. Another potential mechanism derives from the contribution of the shallow and deep states to the free carriers population at the conduction band is expected to increase with temperature[36], enabling to change the Raman polarizability[37]. When the temperature is lowered, the Raman intensity of the TO mode decreases continuously and does not exhibit any extrema in this temperature range. We speculate that such behavior can be understood from the dominant role of the crystallographic deformation of the [PbBr\({}_{6}\)]\({}^{4}\)-backbones while cooling down. Spatially resolved Raman intensity maps for each phonon mode at different temperatures are presented in Figure S4. 
The images agree well with the confocal reflection images (Figure 2). They demonstrate a clear variation in the domain pattern as a function of the directionality of the temperature sweep across particular phase transitions - a result of the unequal potential energy barriers of the high-to-low and low-to-high symmetry conversions. The difference in crystal twinning also impacts the temperature trend of the TO and 2LO phonon lines, resulting in their characteristically different behavior (Figure 3e and 3f). For the TO phonon mode, electron-phonon scattering at the \(\Gamma\) point is more sensitive to twin domain formation due to the overall momentum restrictions for the one-phonon process. This is opposite for the 2LO mode, for which there is a simpler path to fulfill momentum conservation due to the involvement of two phonons to scatter light inelastically. While the 2LO line clearly shows the \(\gamma\)\(\rightarrow\)\(\beta\) transition (red curve, Figure 3f), at the same time it appears insensitive to the \(\beta\)\(\rightarrow\)\(\alpha\) transition. The cooling curve exhibits similar behavior for both transitions. Whereas the multi-phonon mode can be utilized as a temperature probe for a defect-free crystal, the single-phonon TO mode is more sensitive to the orthorhombic-to-tetragonal and tetragonal-to-cubic phase transitions. Photoluminescence (PL) microspectroscopy provides additional information on the carrier dynamics and the origin of the emission mechanism. The latter has been investigated through power dependence and fluorescence lifetime studies and is discussed in detail in Supplementary Information, Section 5. Here, we focus primarily on the temperature trends of perovskite photoemission. When the temperature is raised, the PL intensity drops dramatically (Figure 4a), reaching minimum at \(T_{\rightarrow\beta}\) at 361 K. Further increase of the lattice temperature gives rise to a higher PL intensity for the tetragonal (\(\beta\)) and cubic (\(\alpha\)) phases (red curve, Figure 4c). The overall PL spectral shape reveals complex behavior through the sweep, showing splitting-like behavior at high temperatures (Figure 4b). First, a blueshift of the mean of the spectral distribution (\(\sim\)16 meV) is observed (Figure 4d), indicating the bandgap expansion of the cubic phase at 423 K [19, 37]. Second, a red-shifted signature (\(\sim\)18 meV) is observed, which is suggested to originate from the competition between surface and interior contributions of the crystals (Figure 4d) [37, 38]. A radically different trend is observed when cooling is performed, with fluorescence showing a strong local maximum at the \(\gamma\)\(\rightarrow\)\(\beta\) transition (Figure 4b and Figure 4c). A similar observation has been reported for methylammonium lead triiodide (CH\({}_{3}\)NH\({}_{3}\)PbI\({}_{3}\) or MAPbI\({}_{3}\)), upon cooling from 160 K to 140 K [20]. The nature of the PL enhancement across this phase transition can be understood as resulting from the funneling effect [20], when mobile carriers migrate to the "defect-free" low-bandgap tetragonal phase. The observed hysteresis agrees well with the confocal reflection and Raman studies, and can be understood in a similar manner - lattice reconstruction and the dependence of crystal twinning on the sign of the temperature change \(\Delta T\). 
Since twinning requires the base of one domain to be matched to and shared with the side of another, its probability will strongly depend on the presence of inherent crystal imperfections and the geometry of the original and resultant phases. This leads to significant differences in the overall pattern of the multi-domain assembly as a function of the sign of the temperature trend and, in turn, in the number of structural defects and phase interfaces being formed. This phenomenon also explains the striking contrast in the PL quantum efficiency. It suggests that the presence of point defects and crystal twinning favors the \(\beta\)\(\rightarrow\)\(\gamma\) transition and hinders PL for the reverse direction of the transition.

Figure 3: Temperature-dependent Raman spectroscopy of a CsPbBr\({}_{3}\) crystal at thermal equilibrium for the TO phonon mode at 127 cm\({}^{-1}\) (a, c) and the two-phonon 2LO mode at 312 cm\({}^{-1}\) (b, d) upon heating and cooling at a rate of 0.4 K/s. (e, f) Cross-sections at the peak center for the TO and 2LO modes (dashed lines in (a–d)).

Figure 4: (a, b) False-color PL maps for the two signs of the temperature sweep. (c) Cross-sections along the vertical dashed lines at the center of the PL spectrum. (d) PL spectra for different temperatures.

### Optical properties of a multi-phase single crystal

After the optical characterization of perovskite platelets held at a uniform temperature, using reflection, Raman and photoluminescence microspectroscopy, we next used these optical tools to study crystals subjected to a temperature gradient. For this purpose, we employed the metasurface heating device discussed in Section 2.1 to maintain stable temperature gradients in the crystal and control the distribution of phase domains in the axial (z) direction. Figure 5a shows a scanning electron microscopy (SEM) image of a CsPbBr\({}_{3}\) microplate placed on the metasurface. Figures 5b1 to 5b3 (cyan, yellow and magenta) depict 72\({}^{\circ}\) tilted images of the corners marked with an arrow of the corresponding color (Figure 5a). As is clear from the images, the structure is formed by two stacked crystal plates, most clearly observed through their exfoliation at one of the corners (Figure 5b1 and Supplementary Figure S6). The metasurface is comprised of a 2D hexagonal array of TiN/Si voxels with a pillar height estimated to be approximately 900 nm, as shown in Supplementary Figure S7. The top of the voxel is visualized in the inset of Figure 5a. It is clear that, upon illumination, the TiN pads become damaged for intensities exceeding 3 MW/cm\({}^{2}\) (green and blue contoured images in the inset of Figure 5a). The confocal reflection image at 633 nm of the perovskite platelet is shown in Figure 5c. In this image, the uncovered TiN/Si voxels have been placed at the focal plane of the objective. For such an arrangement, the voxels that are covered by the perovskite appear out of focus, as light has to penetrate through the 1 \(\upmu\)m-thick material of refractive index n=2.5 [37]. This effect not only prevents efficient heating of the TiN pads, but also limits the efficient collection of the Raman signal from the voxel. The collection efficiency is instrumental, as the Raman response was utilized as a remote temperature probe. In all further experiments, the light was focused on the top of the TiN/Si voxels that are under the CsPbBr\({}_{3}\) microplate. Figure 5d displays the results of confocal PL imaging.
Figure 5: CsPbBr\({}_{3}\) platelet on the thermoplasmonic TiN/Si metasurface. (a) SEM image of the CsPbBr\({}_{3}\) plate over the metasurface. The insets show TiN/Si voxels, marked with the red, green and blue squares, exposed to 633 nm cw illumination with intensities of 0, 3.5 and 5.0 MW/cm\({}^{2}\). (b1)–(b3) SEM images (side views at tilt angles of 72\({}^{\circ}\) (b1) and 48\({}^{\circ}\) (b2), (b3)) from the sides marked with cyan, yellow and magenta arrows in Figure 5a. (c) A confocal reflection image at 633 nm. (d) False-color map of the PL spectral central frequency. (e) PL spectra taken at the spots marked in Figure 5d with red, blue and green filled circles, respectively. (f, g) Raman maps at 521 cm\({}^{-1}\) (c-Si) and 127 cm\({}^{-1}\) (Pb–Br mode). (h) Raman spectra of the Si pillar as a function of input light intensity. The inset shows a cross section along the dashed white line and the numerical deconvolution of the composite band into Lorentzian and Gaussian components. (i) Temperature map measured based on Raman thermometry. (j) Simulated cumulative Raman signal from phase-structured crystals of different thicknesses. (k) Raman intensity _vs_ the pumping intensity, or initial gradient temperature \(T_{0}\), for the 127 cm\({}^{-1}\) (green) and 312 cm\({}^{-1}\) (red) perovskite phonon modes and the 521 cm\({}^{-1}\) Si line.

PL spectra as a function of position on the sample were collected using 1.7 W cm\({}^{-2}\) of 473 nm excitation. It is important to note that such low fluxes did not introduce any meaningful temperature gradients. In addition, the excitation wavelength used is far away from the absorption resonance of the plasmonic structures. The false-color PL map can be divided into three characteristic regions according to the PL spectral shape and central frequency position (Figure 5e): blue (522 nm, 2.375 eV), red (531 nm, 2.335 eV) and an intermediate green region. We observe a clear correlation between the PL spectrum and the sample thickness and/or stacking. Higher-energy PL, centered around 522 nm (2.375 eV), is observed in areas where two thin \(\sim\)400 nm plates are stacked (blue spot in Figure 5d and Figure 5b2). However, the spectrum is red-shifted by 40 meV at the position where the sample appears to consist of a 1 \(\upmu\)m-thick single plate (red spot in Figure 5d and Figure 5b3). It has been suggested that the observed phenomenon is caused by the excitation of waveguide modes within the Fabry–Perot resonator through the absorption-emission-absorption mechanism [34]. If true, monitoring of the PL spectral position offers a means to probe the distribution of the perovskite thickness. Figures 5f and 5g show confocal Raman maps for the 521 cm\({}^{-1}\) (c-Si peak of the pillar) and 127 cm\({}^{-1}\) (TO Pb–Br phonon mode of CsPbBr\({}_{3}\)) lines. It is evident that not all voxels under the platelet can be clearly differentiated in the image. This is caused by damage incurred while positioning the perovskite on the metasurface and/or by poor contact at certain positions. The enhanced Raman scattering of the TO mode at the crystal edges originates from structural inhomogeneities, where the density of surface states is higher (Figure 5g). For quantitative monitoring and visualization of the temperature at the voxel, we used Raman thermometry. This method, thoroughly described elsewhere [29, 30, 31] (see _Supplementary Information, Section 8_), utilizes the temperature-dependent behavior of the c-Si Raman signal (521 cm\({}^{-1}\)) as a remote probe.
Through the use of an Echelle grating, the spectral resolution of the imaging system reaches 0.1 cm-1 and enables temperature measurements with 5 K accuracy. A detailed analysis of the open voxel temperature (blue voxel, Figure 5a) versus input flux is shown in Figure 5h. Note that the c-Si mode is asymmetrically broadened (inset, Figure 5h). This effect originates from the non-uniform heat distribution in the structure, resulting in the presence of contributions from both hot and cold portions of the material [31]. To further simplify the analysis, the spectrum was fitted with Lorentzian (hot medium contribution) and Gaussian (cold medium contribution) spectral line shapes, using a regularized least squares method (\(R^{2}\)=0.998). The intensity map in Figure 5i clearly indicates that the contribution from hot domains deviates significantly from a linear incident intensity dependence for intensities exceeding 4 MW cm-2 (550 K). We attribute this effect to temperature dependent changes in the TiN permittivity, which affects the plasmon resonance frequency. In addition, the thermal conductivity of Si decreases when the temperature is raised [39]. The signs of degradation of TiN pad appear at 750 K, where the Raman intensity peaks at about 5 MW cm-2 and then shifts back to the higher energy side. This is also confirmed by previous experiments using ellipsometry on TiN films [29]. Thus, in our experiments, the incident light flux enabled access to the 293 K to 473 K temperature range - sufficient to activate all necessary structural transitions in CsPbBr\({}_{3}\) while preventing photo-damage of the plasmonic structures. Figure 5h shows the resulting temperature map derived from the Raman shift using Equation S1 (see Figure S9) and measured at 3.5 MW cm-2. The local generation of hot spots produced thermal gradients throughout the perovskite crystal. This effect resulted in the simultaneous formation of multiple phase domains, as illustrated in Figure 1. However, there are significant and fundamental differences between the temperature trends of the Raman signals when (1) heating of the whole crystal by a hot plate to achieve a uniform temperature profile, as opposed to (2) establishing a temperature gradient in the crystal with the metasurface. For the first case, the upward temperature trend discussed in _Section 2.3_ is shown by the red curve in Figures 3e. This profile shows clear extrema at phase transitions with an overall signal intensity decrease across the tetragonal phase. In the second case, when the crystal is locally heated by the plasmonic structures, the temperature gradient induces multiple phases in the axial direction. For this case, _the Raman response is the cumulative signal from all the phases in the collection volume_. The trend of the cumulative Raman signal \(I_{R}\)_versus_ the plasmonic pad temperature \(T_{m}(0)\) should directly reflect the process of multi-phase structuring of the perovskite. The trend can be modeled as discussed in Supplementary Information, Section 10. For simplicity, one can assume one-dimensional heat dissipation in a homogeneous perovskite crystal, in which the temperature profile obeys a \(T_{m}(z)=T_{m}(0)\mid z\mid^{-0.54}\) power law in the axial direction (Figure S10). 
The resulting Raman response can then be written as

\[I_{R}=\int\limits_{0}^{\Delta z}\left\langle I(z)\right\rangle dz, \tag{2}\]

where \(\left\langle I(z)\right\rangle\) is the average Raman signal of the homogeneous medium at a given z-plane (Figure S10) and \(\Delta z\) is the total crystal thickness. Figure 5j shows plots of \(I_{R}\) _vs_ \(I_{0}\) for different perovskite thicknesses \(\Delta z\). As expected, for very thin crystals (\(<\)200 nm), the temperature trend of the Raman signal closely follows the one previously observed for a thermally equilibrated crystal on a hot plate (green curve in Figure 5j, red curve in Figure 3e). For thicker crystals, multiple phases can contribute to the observed Raman signal. The local maximum remains highly pronounced over the monotonically increasing Raman response, indicating the formation and growth of a two-phase structure (\(\gamma\) and \(\beta\)) in the axial direction. This model agrees well with the experimental observations. Figure 5k shows the intensity evolution of the phonon modes (TO 127 cm\({}^{-1}\), 2LO 312 cm\({}^{-1}\)) along with the c-Si line (521 cm\({}^{-1}\)) as a function of the incident light flux/pad temperature. As expected, the trends for the 127 cm\({}^{-1}\) and 312 cm\({}^{-1}\) Pb–Br modes are inherently different from the case of the thermally equilibrated system (red and green curves, Figure 5k). For the TO mode, a clear local maximum around the temperature of the \(\gamma\)-to-\(\beta\) transition is observed, indicating the formation of a two-phase structure. Upon further increase of the pad temperature \(T_{m}(0)\), another shallow bump at \(\Delta T\sim\) 140 K indicates triple-phase formation (\(\alpha\), \(\beta\) and \(\gamma\)). To visually emphasize these signatures, the linear contribution to the Raman-temperature trend has been subtracted in Figure 6a. The linear contribution has been determined from a simple linear fit over the 0–2.5 MW cm\({}^{-2}\) range (Figure 6a point b, Figure 6b), where the whole perovskite crystal remains in the single orthorhombic \(\gamma\) phase and the intensity of the TO Raman peak should linearly increase with the incident excitation flux and temperature (Figure 3e).

Figure 6: (a) Linearly corrected Raman response _versus_ the incident light flux. (b, c, d) Simulated temperature points where single-, double- and triple-phase structures occur, as seen in the Raman signatures in (a). The blue dashed line represents the subtracted linear contribution.

Upon increasing the incident intensity, the formation of the tetragonal phase at the interface of the perovskite and TiN is expected to occur at point c (Figure 6a). At this point of the trend, the steep temperature gradient creates a spatially sharp defect area where the crystal is in a transitional form between the orthorhombic and tetragonal phases (Figure 6c). This scenario manifests itself as a shallow Raman intensity maximum. Further increase of the plasmonic pad temperature drives the \(\beta\)-phase deeper into the crystal bulk. At \(\Delta T\sim\)130 K, another shallow maximum of the TO Raman peak indicates the formation of the \(\alpha\)-phase in close proximity to the TiN structure. Upon subsequent increase of the incident light flux, the \(\alpha\) and \(\beta\) phases extend further into the crystal and significantly broaden (Figure 6d). This is expected to result in a smearing of the boundaries between the different phases. This effect is spatially asymmetric, i.e. different for the left (\(\alpha\)–\(\beta\)) and the right (\(\beta\)–\(\gamma\)) sides of the \(\beta\) phase, following the highly nonlinear temperature gradient.
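As a rough cross-check of this picture, the following sketch evaluates the cumulative-signal model of Eq. (2) numerically. The transition temperatures (361 K and 403 K) are taken from the FSC data and the \(|z|^{-0.54}\) exponent from the simulation discussed above, but the ambient-temperature offset, the profile normalization length and the per-phase response functions are illustrative assumptions chosen only to mimic the qualitative trends of Figure 3e; this is not the model used to produce Figure 5j.

```python
import numpy as np

T_GB, T_BA = 361.0, 403.0   # gamma->beta and beta->alpha transition temperatures (K, from FSC)
T_AMB, Z0  = 300.0, 50.0    # ambient temperature (K) and profile normalization length (nm) -- assumed

def T_profile(z_nm, T0):
    """Axial temperature: |z|^-0.54 decay of the excess temperature above ambient.
    The offset and the normalization length Z0 are assumptions added to keep the
    profile physical; the exponent is the fitted value quoted in the text."""
    return T_AMB + (T0 - T_AMB) * (np.maximum(z_nm, Z0) / Z0) ** -0.54

def to_response(T):
    """Illustrative per-phase TO-mode response (rising in gamma/alpha, falling in beta),
    chosen only to mimic the trends of Figure 3e -- not a measured calibration."""
    return np.where(T < T_GB, 1.00 + 0.004 * (T - T_AMB),
           np.where(T < T_BA, 1.25 - 0.006 * (T - T_GB),
                              1.00 + 0.003 * (T - T_BA)))

def cumulative_signal(T0, thickness_nm, n=4000):
    """Eq. (2): integrate the local response over the crystal thickness (arbitrary units)."""
    z = np.linspace(0.0, thickness_nm, n)
    return np.trapz(to_response(T_profile(z, T0)), z)

for dz in (200, 500, 1000):                                  # crystal thickness in nm
    trend = [cumulative_signal(T0, dz) for T0 in np.linspace(300, 630, 12)]
    print(f"thickness {dz:4d} nm:", np.round(trend, 1))
```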
At higher temperatures, the signal is an interplay of several contributions, in particular the phase layer thickness, the temperature, the position along the gradient, and the sharpness of the phase boundary. We hypothesize that the third maximum at \(\Delta T\) = 170 K is the result of such a cumulative effect and may be associated with a delocalized (disordered or randomly located) phase boundary. Among the other instrumental contributors to the spatial phase formation are the intrinsic defects of the crystal. Their presence can trigger the spontaneous formation of different phases, resulting in a highly irregular structural front. At higher TiN temperatures, the \(\alpha\)–\(\beta\) and \(\beta\)–\(\gamma\) boundaries broaden significantly and may capture more defects into these areas where the phases are highly mixed. These experiments demonstrate that the Raman signal from a perovskite subjected to a stable temperature gradient, as shown in Figure 5k and emphasized in Figure 6, reveals behavior distinct from that of a bulk crystal held at a uniform temperature. The dependence of the Raman signal on the incident intensity shows clear signatures of the formation of particular phases, of their extension into the bulk of the material, and, overall, of the multi-phase structuring process. Moreover, it shows that the detected Raman signal exhibits a notable gain across \(\Delta T\)=150 K. Using Raman microspectroscopy as a probe, these results indicate that it is possible to generate, on demand, single-, double- and triple-phase structures in the perovskite by simply controlling the incident light intensity.

**3. Conclusions**

In this work, we have demonstrated proof-of-principle multi-phase structuring of a single-crystal CsPbBr\({}_{3}\) halide perovskite. We have shown that single-, double- and triple-phase systems can be created in an optically controlled fashion on a thermoplasmonic metasurface using continuous-wave illumination of modest intensity. Light-induced heat from the plasmonic TiN nano-pads forms strong temperature gradients within the crystal bulk, which are followed by a sequence of corresponding phase transitions. Lattice distortions, defects and impurities operate like an optical nanoantenna, increasing the density of states. Thus, multi-phase perovskite structures hold many interesting properties and open exciting possibilities. In such a system, charge carriers migrate from the larger-bandgap orthorhombic and cubic regions to the lower-bandgap tetragonal parts of the crystal. There, highly concentrated and in close proximity to the phase boundaries, the carriers efficiently recombine, leading to areas with a significant enhancement of the optical emission. This multi-structured system promises to be highly beneficial to the development of next-generation ultracompact broadband light-emitting diodes with high PL quantum yields at and above room temperature.

**Methods**

_Synthesis of CsPbBr\({}_{3}\) structures_

Perovskite microcrystals on glass substrates were synthesized using a protocol similar to one previously reported [40]. PbBr\({}_{2}\) (110 mg) and CsBr (62 mg) were mixed and dissolved in 3 ml of anhydrous dimethyl sulfoxide (DMSO) inside a nitrogen-filled glovebox. A droplet of the prepared solution (volume 2 \(\upmu\)l) was drop-cast onto the substrate under ambient conditions.
After that, the substrate was sealed in a preheated up to 60 \({}^{\circ}\)C Petri dish containing 200 ul of liquid mixture. The solution was dried in the presence of azeotropic vapor for 5 min. As a result, the randomly oriented separate CsPbBr3 microcrystals were formed on the substrate. _Synthesis, nanofabrication and characterization of a TiN/Si metasurface_ TiN thin films on c-Si (100) substrates were DC magnetron sputtered from a Ti target in the Ar/N\({}_{2}\) environment with a volume proportion of 30:70 at elevated temperature of 350 \({}^{\circ}\)C and base pressure of 3\(\cdot\)10\({}^{-9}\) mbar and power of 200 W. Prior to the film growth, the c-Si substrate was sonicated in acetone for 15 min. The thickness of the TiN films, equal to \(50\pm 5\) nm, was measured with a contact profilometer Alpha Step 200. A 2D array of TiN/Si voxels were engraved with the help of focused ion beam (FIB) milling at a lower current of 1 pA by using Quanta 3D FEG (FEI). Since the higher TiN/Si voxels are exposed to FIB for a longer time, their lateral size is reduced due to edge melting. To avoid this detrimental effect, we used different mask templates for short and long voxels so that their lateral size is the same regardless of height. The temperature-dependent permittivity of TiN thin films were measured with a spectroscopic ellipsometer (VASE, J. A. Woollam) within the spectral range of 250-2500 nm. The incident angle was 70\({}^{\circ}\). The TiN sample was exposed to thermal annealing at the fixed temperature, whereas its permittivity was probed at room temperature. The temperature increment for each subsequent cycle was 100\({}^{\circ}\)C. The temperature ranged from 25 \({}^{\circ}\)C to 600 \({}^{\circ}\)C. The samples were annealed at ambient air for 30 min using a heating stage (Linkam Scientific Model THMS600). The heating and cooling rates were 150 \({}^{\circ}\)C/min and 100 \({}^{\circ}\)C/min, respectively. _Fast Scanning Calorimetry_ The Fast Scanning Calorimetry (FSC) curves were registered on FlashDSC2+ (Metller-Toledo, Greifensee, Switzerland) equipped with TC100MT intracooler with UFH1 sensor. The temperature calibration was performed using biphenyl ( \(T_{m}=69.2\)\({}^{\circ}\)C ) and benzoic acid ( \(T_{m}=122.3\)\({}^{\circ}\)C ) as standards to \(\pm 1\)\({}^{\circ}\)C. A single perovskite crystal was placed in the center of the calorimetric sensor. To improve signal-to-noise ratio the crystal size was chosen such as to almost match the active area of the sensor. Within the temperature range from 20 \({}^{\circ}\)C to 180 \({}^{\circ}\)C the sample was chemically stable, and the curves were repeatable, which allowed for averaging multiple scans to further improve signal-to-noise ratio. _Atomic force microscopy_ The multimode scanning probe microscope NTEGRA PRIMA (NT-MDT) was utilized for visualizing a topography of the CsPbBr\({}_{3}\) microplate surface and the thermoplasmonic metasurface. The AFM probes of the "VIT_P" series with resonant frequencies around 350 kHz were used in AFM measurements. The CsPbBr\({}_{3}\) microplate mounted on the metasurface fabricated by focused ion beam milling was measured in tapping mode with a free amplitude \(\mathcal{A}_{0}\) of 10-20 nm and a set-point value of \(\mathcal{A}_{0}/2\). _Far- and near-field Raman spectroscopy and microscopy_ Raman spectra and maps were captured with a multi-purpose analytical instrument NTEGRA SPECTRA(NT-MDT) in inverted configuration. 
The confocal spectrometer was wavelength calibrated with a crystalline silicon (100) wafer by registering the first-order Raman band at 521 cm-1. A sensitivity of the spectrometer was as high as ca. 3000 photon counts per 0.1 s provided that we used a 100\(\times\) objective (N.A.=0.7), an exit slit (pinhole) of 100 \(\upmu\)m and a linearly polarized light with the wavelength of 632.8 nm and the power at the sample of 16 mW. No signal amplification regimes of a Newton EMCCD camera (ANDOR) was used. 128x128 pixel Raman maps were raster scanned with an exposure time per pixel of 0.1 s and were finally collected with the EMCCD camera cooled down to -95\({}^{\circ}\)C. Raman spectra within the range of from -2000 to 2000 cm-1 were registered with a spectral resolution of 0.1 cm-1 using the Echelle grating. _Fluorescence Lifetime Imaging Microscopy_ To measure the PL decay time we used a system built-in the confocal optical spectrometer (NTEGRA SPECTRA) that includes a picosecond diode laser (BDL-SMN) generatig pulses of 473 nm wavelength, 30 ps pulse duration, and 80, 50, or 20 MHz repetition rate, a Simple-Tau 150 TCSPC FLIM module (Becker&Hickl), and a HPM-100-40 GaAsP hybrid detector (Becker&Hickl). The detector has a detection efficiency of about 50% and is free of afterpulsing. #### FDTD/FEM calculation 3D simulation of optical absorption of a TiN/Si voxels consisting of stacked TiN and Si cylinders under cw illumination was performed by using an Ansys/Lumerical FDTD solver. The height of the TiN pad was 50 nm, whereas the height of the Si pillar 900 nm. To avoid anomalous electric fields near the TiN pad edge we used disks with rounded edges (10 nm rounding). A mesh overlayer of 1 nm was utilized around the TiN pad and a rougher 10 nm mesh for the rest of the structure. Perfectly matching layers were used as boundary conditions for three directions. The optical and thermal properties of Si and air were imported from the Ansys/Lumerical material database. The TiN pad was exposed to a 632.8 nm focused laser light (NA\(=0.7\)) with the intensity of 5 MW/cm\({}^{2}\). The temperature profile was calculated through an Ansys/Lumerical FEM solver in the steady state regime. The thermal conductivity of all constituents is assumed to be temperature-independent. The boundary condition of \(T=300\) K was set at the \(z_{\text{min}}=-20000\) nm of the 20\(\times\)20\(\times\)5 \(\upmu\)m\({}^{3}\) simulation region. #### Conflicts of interests There are no conflicts to declare. #### Acknowledgement This work was supported by grant No. 19-12-00066-P of the RSF. The PL decay time measurements were granted by the Kazan Federal University Strategic Academic Leadership Program (PRIORITY-2030). The authors acknowledge a technical support from our industrial partners: SCANSENS (GmbH, Germany) and NT-MDT BV (The Netherlands).
2308.09025
SR-GAN for SR-gamma: super resolution of photon calorimeter images at collider experiments
We study single-image super-resolution algorithms for photons at collider experiments based on generative adversarial networks. We treat the energy depositions of simulated electromagnetic showers of photons and neutral-pion decays in a toy electromagnetic calorimeter as 2D images and we train super-resolution networks to generate images with an artificially increased resolution by a factor of four in each dimension. The generated images are able to reproduce features of the electromagnetic showers that are not obvious from the images at nominal resolution. Using the artificially-enhanced images for the reconstruction of shower-shape variables and of the position of the shower center results in significant improvements. We additionally investigate the utilization of the generated images as a pre-processing step for deep-learning photon-identification algorithms and observe improvements in the case of training samples of small size.
Johannes Erdmann, Aaron van der Graaf, Florian Mausolf, Olaf Nackenhorst
2023-08-17T14:55:23Z
http://arxiv.org/abs/2308.09025v2
# SR-GAN for SR-gamma: ###### Abstract We study single-image super-resolution algorithms for photons at collider experiments based on generative adversarial networks. We treat the energy depositions of simulated electromagnetic showers of photons and neutral-pion decays in a toy electromagnetic calorimeter as 2D images and we train super-resolution networks to generate images with an artificially increased resolution by a factor of four in each dimension. The generated images are able to reproduce features of the electromagnetic showers that are not obvious from the images at nominal resolution. Using the artificially-enhanced images for the reconstruction of shower-shape variables and of the position of the shower center results in significant improvements. We additionally investigate the utilization of the generated images as a pre-processing step for deep-learning photon-identification algorithms and observe improvements in the case of low training statistics. ## 1 Introduction The interaction of high-energy particles with matter results in complex signatures in the detectors at particle colliders, such as the LHC [1]. The reconstruction and identification of the original particle type from the detector signatures are crucial for analyzing the data. An important particle is the photon, which appears for example in the diphoton decay of the Higgs boson at the ATLAS and CMS experiments [2, 3], as a probe of heavy-ion collisions at the ALICE experiment [4] or as a decay product of rare \(B\)-meson decays at the LHCb experiment [5]. The main signature of a high-energy photon is an electromagnetic shower in the calorimeters. At hadron colliders, a main background source for photons are electromagnetic decays of high-energy mesons. Most prominently, this is the decay \(\pi^{0}\to\gamma\gamma\), as neutral pions are copiously produced in the fragmentation of quarks and gluons. The signature of such a high-energy meson decay often produces a "fake photon", because the large Lorentz boost leads to a small average distance between the photons from its decay. This results in a signature that is very similar to the signature of a real photon. Distinguishing real from fake photons is hence challenging and an important design consideration for electromagnetic calorimeters. Key to distinguishing these two signatures is a high spatial resolution that is achieved by segmenting the calorimeter along pseudorapidity1\(\eta\) and azimuth angle \(\phi\). Footnote 1: The pseudorapidity is defined as \(\eta=-\ln\left(\tan\left(\theta/2\right)\right)\), where \(\theta\) is the polar angle. In this work, we study how single-image super resolution (SR) [6] based on deep neural networks [7] can help in the reconstruction of photon and \(\pi^{0}\) signatures. Such deep-learning algorithms were pioneered [8] in the field of image processing and further developed [9] using the concept of generative adversarial networks (GAN) [10]. They aim at learning an SR version of a low-resolution (LR) image based on its high-resolution (HR) counterpart. We use a neural network inspired by the Enhanced Super-Resolution Generative Adversarial Networks (ESRGAN) [11]. While the generator of the GAN produces artificial SR images from input LR images, the discriminator of the GAN aims to distinguish SR and HR images. By combining the generator and discriminator loss into a common loss term, the generated SR images are expected to become more and more realistic during the GAN training. 
We treat the calorimeter signatures of photons and neutral pions as the LR images, i.e. the LR images correspond to the granularity of an actual calorimeter. We use simulations of LR images and their corresponding HR counterparts, which have a finer calorimeter segmentation, to train the ESRGAN. Previous applications of super resolution in the field of particle physics focussed on energy and directional reconstruction of charged and neutral pions [12] and on the reconstruction of jet substructure [13]. We focus on the particularly relevant use case of photon identification and reconstruction. We use a toy calorimeter inspired by the electromagnetic calorimeter of the CMS detector [14] with a realistic simulation of the particle interaction with matter using Geant4[15]. We study whether the generated SR images provide advantages compared to only using their LR counterparts for benchmark applications in photon-pion separation and in the directional reconstruction of the photons. The latter application is especially important for the reconstruction of invariant masses from photon signatures, such as in \(H\to\gamma\gamma\). We comment on useful strategies for a stable GAN training and on how the additional physics information from the HR images may help in stabilizing photon classifier training in cases of limited training statistics. ## 2 Simulated samples We simulate a toy calorimeter that is inspired by the electromagnetic barrel calorimeter of the CMS detector. We use the Geant4-based framework of the CaloGAN paper [16] to simulate PbWO\({}_{4}\) scintillating crystals with a length of 230 mm and a front face of \(22\times 22\) mm\({}^{2}\). The front of the calorimeter is placed at a distance of 1.29 m from a Geant4 particle gun. The particle gun produces mono-energetic photons and neutral pions with their direction perpendicular to the calorimeter front face. In order to avoid that all particles are directed at the exact center of the calorimeter, the position of the source is smeared in the plane parallel to the calorimeter front using a Gaussian distribution of width 44 mm, which corresponds to the size of two crystals. Two different energies of 20 GeV and 50 GeV are simulated, which are chosen to be at the lower end of reconstructable photon energies at the LHC and of the order of typical photon energies from Higgs-boson decays. The \(\pi^{0}\) mesons decay into a pair of photons with an angular separation between them as shown in Fig. 1. In both setups, the majority of pion decays produces photons closer to each other than 1 deg, which results in a separation at the calorimeter front of less than one crystal width. Due to the larger Lorentz boost, the decays at an energy of 50 GeV are more collimated on average than in the 20 GeV case. We remove simulated pions where the angle between the photons exceeds 2 deg, because their decays often lead to two well-separated photons even in the LR case. This angular selection retains around 94 % and 99 % of the simulated pions with an energy of 20 GeV and 50 GeV, respectively. We did not simulate the calorimeter noise, a magnetic field or material upstream of the calorimeter. Effects from the limited lateral size of the toy calorimeter are included in the simulation. The LR images consist of a grid of \(24\times 24\) crystals. The HR images have a segmentation that is \(4\times 4\) finer, i.e. \(96\times 96\) smaller crystals. 
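To illustrate the kinematics behind the distributions in Fig. 1, the short sketch below samples isotropic \(\pi^{0}\to\gamma\gamma\) decays, boosts them to the lab frame and computes the opening angle between the two photons. The pion mass is the PDG value; the sampling code itself is a generic two-body-decay illustration written for this note, not part of the simulation framework used in the paper.

```python
import numpy as np

M_PI0 = 0.13498  # GeV, neutral-pion mass (PDG)

def opening_angles(e_pion, n=100_000, rng=np.random.default_rng(1)):
    """Sample isotropic pi0 -> gamma gamma decays and return the lab-frame opening angle (deg)."""
    beta = np.sqrt(1.0 - (M_PI0 / e_pion) ** 2)
    gamma = e_pion / M_PI0
    cos_star = rng.uniform(-1.0, 1.0, n)          # photon direction in the pion rest frame
    sin_star = np.sqrt(1.0 - cos_star ** 2)
    e_star = M_PI0 / 2.0                          # photon energy in the rest frame
    # boost both photons along the pion flight direction (z-axis)
    pz1 = gamma * (e_star * cos_star + beta * e_star)
    pz2 = gamma * (-e_star * cos_star + beta * e_star)
    pt = e_star * sin_star                        # transverse momentum is boost-invariant
    return np.degrees(np.arctan2(pt, pz1) + np.arctan2(pt, pz2))

for e in (20.0, 50.0):
    ang = opening_angles(e)
    print(f"E = {e:.0f} GeV: min angle {ang.min():.2f} deg, "
          f"fraction below 1 deg = {np.mean(ang < 1.0):.2f}")
```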
In order to maintain a one-to-one correspondence of LR and HR images, only HR images are simulated and the LR images are obtained by down-sampling the HR images.

Figure 1: Normalized distributions of the angle between the photons of the \(\pi^{0}\to\gamma\gamma\) decays at \(20\,\mathrm{GeV}\) and \(50\,\mathrm{GeV}\) in the lab frame. The overflow is not included. For both energies, the majority of pions decay into photons that are closer to each other than \(1\) deg, which corresponds to a separation at the calorimeter front of less than one crystal width in the LR case.

Before being passed to the networks, the calorimeter images are pre-processed. The two pre-processing steps are visualized in Fig. 2 for an HR pion image and its corresponding LR counterpart. In a first step (going from the first to the second row in the figure), the size of the images is reduced in order to decrease the computational complexity of the super-resolution networks. The width of \(2.2\,\mathrm{cm}\) of the LR calorimeter crystals corresponds to approximately one Moliere radius in PbWO\({}_{4}\), causing photons to deposit most of their energy within a small number of crystals. Therefore, we select the \(6\times 6\) sub-image that contains the largest sum of energy within our LR simulation of \(24\times 24\) crystals. For the HR images, the corresponding sub-image is selected. This procedure keeps on average approximately 99 % of the total simulated energy. In a second step (going from the second to the third row in the figure), each energy deposition is crystal-wise divided by the sum of the energy falling into the selected part of the image, and a power-scaling of \(E^{0.3}\) is applied to the normalized crystal energies to reduce the sparsity of the images [13].

Figure 2: Visualization of the pre-processing of the calorimeter images, shown for a 20 GeV pion example. In the first row, the simulated image is shown in HR (left) and in LR (right) with a logarithmic colorbar. The selected sub-images are marked in red and displayed in the second row with a linear colorbar. The third row shows the normalized and power-scaled images with a linear colorbar.
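A minimal sketch of this pre-processing chain is given below: the HR image is down-sampled by summing \(4\times 4\) blocks, the \(6\times 6\) LR window with the largest energy sum is selected (together with the corresponding HR window), and both images are normalized and power-scaled. The function and variable names are our own illustrative choices and do not correspond to the code used for the paper.

```python
import numpy as np

def downsample(hr, factor=4):
    """Sum 4x4 blocks of HR crystals to obtain the LR image (96x96 -> 24x24)."""
    h, w = hr.shape
    return hr.reshape(h // factor, factor, w // factor, factor).sum(axis=(1, 3))

def max_energy_window(lr, size=6):
    """Return the corner (i, j) of the size x size LR window containing the most energy."""
    best, best_ij = -1.0, (0, 0)
    for i in range(lr.shape[0] - size + 1):
        for j in range(lr.shape[1] - size + 1):
            s = lr[i:i + size, j:j + size].sum()
            if s > best:
                best, best_ij = s, (i, j)
    return best_ij

def preprocess(hr, factor=4, size=6, power=0.3):
    """Crop the LR/HR pair to the 6x6 (24x24 in HR) maximum-energy window,
    normalize each image to unit total energy and apply E^0.3 power scaling."""
    lr = downsample(hr, factor)
    i, j = max_energy_window(lr, size)
    lr_c = lr[i:i + size, j:j + size]
    hr_c = hr[i * factor:(i + size) * factor, j * factor:(j + size) * factor]
    return (lr_c / lr_c.sum()) ** power, (hr_c / hr_c.sum()) ** power

# toy example: a single HR "shower" with two nearby peaks
hr = np.zeros((96, 96))
hr[40, 40], hr[42, 43] = 30.0, 20.0
lr_img, hr_img = preprocess(hr)
print(lr_img.shape, hr_img.shape)   # (6, 6) (24, 24)
```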
## 3 Super resolution network

A successful application of GANs to the SR task was achieved by the SRGAN [9]. It uses a deep convolutional neural network based on residual learning [17] as generator and showed the capability of restoring realistic textures with an upsampling factor of four from downsampled LR images with the help of a new perceptual loss term [18]. Our network architecture builds upon the architecture of the ESRGAN [11]. The ESRGAN is an enhanced version of the SRGAN, which uses a relativistic loss in the discriminator, a more effective perceptual loss and a deeper generator network constructed with residual-in-residual dense blocks (RRDBs) as its fundamental component. The RRDBs, shown in Fig. 3, consist of three dense blocks [19] connected by residual connections. Additionally, a residual connection is used to link the input of the RRDB to its output. The dense blocks comprise five convolutional layers, where each layer incorporates the outputs of all preceding layers within the block as its inputs.

Figure 3: Structure of the RRDB blocks used in this study, consisting of three dense blocks, which each contain five convolutional layers with Swish activation. The residual connections are scaled by a free parameter \(\beta\).

The architecture of our generator network is illustrated in Fig. 4. The LR input images are first processed by a convolutional layer, after which they are passed through five RRDBs and another convolutional layer to extract high-level features. The output of this layer is then combined with the output of the first layer via a skip connection [20]. In contrast to the original design, we use Swish [21] instead of Leaky ReLU as activation functions inside the RRDBs, as this improved the training stability. The upsampling of the LR images is done with two upsampling blocks, each containing an upsampling layer that doubles the number of pixels along the \(x\)- and \(y\)-axes using nearest-neighbor interpolation, followed by a convolutional layer with Swish activation. As in the original ESRGAN architecture, two additional convolutional layers are employed after the upsampling blocks; the first is activated using Swish and the latter using ReLU, which avoids the generation of negative energies. Each convolutional layer in the generator consists of 32 filters with \(3\times 3\) kernels. The striding is set to one and zero-padding is used to preserve the resolution of the images when applying convolutions. In total, the generator network has around 2.1 million trainable parameters.

Figure 4: Schematic representation of the generator architecture. The low-resolution input images are fed into a convolutional layer. The extracted features are passed into a block of five RRDBs followed by a convolutional layer. A residual connection adds the output of the first convolutional layer. The upsampling takes place in the two upsampling layers, each of which doubles the number of pixels along the \(x\)- and \(y\)-axis of the images, which is followed by two additional convolutional layers.

We train the generator to perform realistic upsampling using the Wasserstein-GAN (WGAN) approach [22], which aims to minimize the Wasserstein-1 distance between the probability distributions \(\mathcal{P}\) of the real HR images and the generated SR images. We can write the Wasserstein distance between these distributions as

\[W(\mathcal{P}_{\mathrm{HR}},\mathcal{P}_{\mathrm{SR}})=\sup_{||f||_{L}\leq 1}\left(\mathbb{E}_{x\,\in\,\mathcal{P}_{\mathrm{HR}}}\left[f(x)\right]-\mathbb{E}_{\tilde{x}\,\in\,\mathcal{P}_{\mathrm{SR}}}\left[f(\tilde{x})\right]\right)\,, \tag{1}\]

with \(||f||_{L}\leq 1\) denoting the set of Lipschitz continuous functions applied to our calorimeter images and \(\mathbb{E}\) denoting the expectation value. The function \(f\) that maximizes the expression in Eq. (1) is approximated by training the critic network while at the same time forcing it to fulfill the Lipschitz condition. Several techniques exist to constrain the critic to be Lipschitz continuous, and we use the gradient penalty (GP) proposed in Ref. [23]. The GP introduces an additional term in the critic loss that penalizes the network for gradient norms, with respect to its inputs, that deviate from one. In this setup, the loss function for a critic network \(C\) can be written as

\[\mathcal{L}_{\mathrm{C}}=\mathbb{E}_{\tilde{x}\,\in\,\mathcal{P}_{\mathrm{SR}}}\left[C(\tilde{x})\right]-\mathbb{E}_{x\,\in\,\mathcal{P}_{\mathrm{HR}}}\left[C(x)\right]+\lambda_{\mathrm{GP}}\,\mathbb{E}_{\hat{x}\,\in\,\mathcal{P}_{\hat{x}}}\left[\left(||\nabla_{\hat{x}}C(\hat{x})||_{2}-1\right)^{2}\right]. \tag{2}\]

The last term describes the gradient penalty with strength parameter \(\lambda_{\mathrm{GP}}\) and is calculated along straight lines \(\hat{x}\) that are randomly sampled between given pairs of HR images \(x\) and SR images \(\tilde{x}\) as \(\hat{x}=x+\alpha(\tilde{x}-x)\), where \(\alpha\) is randomly sampled from a uniform distribution between 0 and 1.
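A minimal PyTorch-style sketch of the critic update implied by Eq. (2) is shown below; the function name, batch shapes and the way the critic is called are illustrative assumptions rather than the authors' implementation.

```python
import torch

def critic_loss_with_gp(critic, hr, sr, lambda_gp=1.0):
    """WGAN critic loss with gradient penalty, following Eq. (2).
    `critic` maps an image batch to one scalar per image; `hr`, `sr` are (N, 1, 24, 24) tensors."""
    sr = sr.detach()                                         # critic update only
    loss_w = critic(sr).mean() - critic(hr).mean()           # Wasserstein part

    # interpolate along straight lines between HR and SR images
    alpha = torch.rand(hr.size(0), 1, 1, 1, device=hr.device)
    x_hat = (hr + alpha * (sr - hr)).requires_grad_(True)

    # gradient of the critic output with respect to the interpolated images
    grad = torch.autograd.grad(outputs=critic(x_hat).sum(), inputs=x_hat,
                               create_graph=True)[0]
    gp = ((grad.flatten(1).norm(2, dim=1) - 1.0) ** 2).mean()

    return loss_w + lambda_gp * gp
```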
The structure of our critic network is shown in Fig. 5 and is similar to the discriminators used in the original SRGAN and ESRGAN. The network receives either HR or SR images as input and outputs a single number discriminating between these image classes. It consists of six convolutional layers and two dense layers. The convolutional layers are placed in an alternating structure with strides of \(s=1\) and \(s=2\). Each layer with stride convolutions (\(s=2\)) halves the dimension in the \(x\)- and \(y\)-direction of its input. The number of filters is doubled in the third and fourth convolutional layer (64 filters) and again doubled in the fifth and sixth layer (128 filters). All convolutional layers use \(3\times 3\) kernels and zero-padding. After each convolutional layer, we use Layer Normalization [24], as recommended in Ref. [23], instead of the originally proposed Batch Normalization [25], and we use the Swish activation function. The output of the last convolutional layer is flattened and passed to a dense layer with 64 nodes and Swish activation function, followed by the last layer with a single node.

Figure 5: Illustration of the critic architecture, consisting of six convolutional layers, each followed by Layer Normalization and Swish activation function, and two dense layers. The number of filters _nf_ and the striding parameters \(s\) of the convolutional layers are given, as well as the number of nodes of the dense layers.

In addition to the adversarial loss, which uses the critic's output to improve the generated images, we use the concept of perceptual loss [18] to train the generator. In contrast to a crystal-wise comparison of energy depositions between a SR calorimeter image and the reference HR image, the feature representations extracted from a hidden layer of a pre-trained convolutional neural network (CNN) are compared between image pairs. The ESRGAN uses the VGG19 network [26] trained on the ImageNet [27] dataset and calculates the Euclidean distance between the features extracted from the last convolutional layer. Since our calorimeter images strongly differ from the ImageNet examples, we use a CNN trained to separate single-photon from neutral-pion-decay calorimeter images for the perceptual loss. This network is discussed in more detail in Sec. 5. Similar to the ESRGAN, we use the features extracted from the last (third) convolutional layer, corresponding to a high-level representation of the input images. The generator is hence trained to retain features of the images that are important for the classification as photon or pion. The full generator loss is the sum of the adversarial loss and the perceptual loss, weighted by the parameters \(\lambda_{\rm adv.}\) and \(\lambda_{\rm per.}\),

\[{\cal L}_{G}=\lambda_{\rm adv.}\,\big{(}\mathbb{E}_{\tilde{x}\,\in\,{\cal P}_{\rm SR}}\,[C(\tilde{x})]\big{)}+\lambda_{\rm per.}\Bigg{(}\sum_{(x,\,\tilde{x})}(\Phi(x)-\Phi(\tilde{x}))^{2}\Bigg{)}, \tag{3}\]

where \(\Phi\) denotes the feature representations of SR images \(\tilde{x}\) and HR images \(x\).

## 4 Network training

The super-resolving GANs are trained using 100,000 photon and 100,000 neutral pion images. We adapt several recommendations from Ref.
[23] for the training of the WGAN: We use the Adam optimizer [28] with learning rate \(10^{-4}\) and decay parameters \(\beta_{1}=0\) and \(\beta_{2}=0.9\) and train the critic for five mini-batches before training the generator for one mini-batch. We use a batch-size of 32. In the 20 GeV setup, the perceptual loss is scaled by \(\lambda_{\rm per.}=3\cdot 10^{-2}\), while \(\lambda_{\rm per.}=3\cdot 10^{-1}\) is used for the 50 GeV network. The adversarial term of the generator loss is scaled by \(\lambda_{\rm adv.}=10^{-5}\). The critic networks are trained with a gradient-penalty strength of \(\lambda_{\rm GP}=1\). The hyperparameters are optimized as follows: In a first step, the capacities of the networks are varied, in particular the number of RRDBs in the generator. At the same time, different values for the scaling parameters of the generator and critic loss terms, \(\lambda_{\rm adv.}\) and \(\lambda_{\rm GP}\) are studied. These parameters are fixed to the above mentioned values taking in particular the training stability and convergence together with the visual quality of the SR images into account. In order to decrease the complexity of the hyperparameter optimization, the perceptual loss is not included in these first studies, i.e. \(\lambda_{\rm per.}=0\) is used. The performance depends only marginally on the generator capacity in the tested range of 1-10 RRDBs, hence an intermediate value of 5 is chosen. The smaller dimension of our HR and SR images requires a reduction of the number of convolutional layers in the critic compared to the architecture used in the original ESRGAN from eight to six, since the layers with strided convolutions (\(s=2\)) each halve the number of pixels along both image axes. In addition, the number of nodes in the first dense layer in the critic is reduced from 1024 to 64, which significantly reduces the training time while no differences in the performance are found. With this setup, the GAN trainings run stably for both particle energies and produce realistic SR images where no obvious artefacts are observed. In a second step, the perceptual loss is included in the training with the particular goal to penalize the generator for confusing the two particle types. To evaluate and optimize its impact, we monitor the capability of the CNNs pre-trained on the HR images to distinguish between the SR photon and pion examples and analyze the impact on shapes of the electromagnetic shower and the differences between photons and pions. We determine the distribution of the shower width in the SR images and compare it to the distribution obtained from the HR images. In LHC experiments, similar variables describing the shower shape are used to discriminate between photons and other signatures from hadronic activity [29, 30]. We define the width of a shower image with crystal indices \(i\) as \[W=\frac{\sum_{i}\Delta R_{i}E_{i}}{\sum_{i}E_{i}}, \tag{4}\] where \(E_{i}\) denotes the energy measured in a crystal and \(\Delta R_{i}\) is its angular distance to the barycenter of the shower. We obtain the distributions separately for photons and pions and monitor the Kolmogorov-Smirnov (KS) statistic between each HR and SR width distribution during the training. The values obtained for the KS statistics are shown in Fig. 6. The epoch with the lowest mean of the statistics for pions and photons is finally selected. 
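The monitoring step can be sketched as follows: the shower width of Eq. (4) is computed per image, and the two-sample KS statistic between the HR and SR width distributions is averaged over photons and pions to score an epoch. The crystal-to-angle conversion factor and the helper names are illustrative assumptions, not taken from the paper.

```python
import numpy as np
from scipy.stats import ks_2samp

CRYSTAL_SIZE = 0.0174  # assumed angular size of one LR crystal in (eta, phi)

def shower_width(image, crystal_size=CRYSTAL_SIZE):
    """Eq. (4): energy-weighted mean angular distance to the shower barycenter."""
    e = image.astype(float)
    idx = np.indices(e.shape) * crystal_size            # crystal centers in (eta, phi)
    bary = (idx * e).sum(axis=(1, 2)) / e.sum()         # energy-weighted barycenter
    dr = np.sqrt(((idx - bary[:, None, None]) ** 2).sum(axis=0))
    return (dr * e).sum() / e.sum()

def epoch_score(hr_photons, sr_photons, hr_pions, sr_pions):
    """Mean of the photon and pion KS statistics between HR and SR width distributions;
    the epoch with the smallest score is kept."""
    widths = lambda imgs: np.array([shower_width(im) for im in imgs])
    ks_gamma = ks_2samp(widths(hr_photons), widths(sr_photons)).statistic
    ks_pion  = ks_2samp(widths(hr_pions), widths(sr_pions)).statistic
    return 0.5 * (ks_gamma + ks_pion)
```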
Since the perceptual loss uses individual CNNs in the \(20\,\mathrm{GeV}\) and \(50\,\mathrm{GeV}\) setups, different values of the corresponding relative weight are found to yield the best performance. We observe that including this additional loss term with the optimized weight improves the pion rejection2 obtained from the pre-trained CNNs applied to the SR images compared to trainings without perceptual loss by up to a factor of five, depending on the photon identification efficiency. Footnote 2: The rejection is defined as the inverse of the efficiency.

In Fig. 7, the evolution of the different parts of the loss functions during training as well as several metrics are shown for the example of the \(50\,\mathrm{GeV}\) network. At the start of the training, the critic network is able to discriminate between the original HR and the generated SR images with an accuracy of \(100\,\%\). It can be seen that during the training, the critic accuracy approaches a value slightly above \(50\,\%\), while the critic loss--which approximates the Wasserstein distance--tends towards zero. In addition, the evolution of pion rejections for fixed values of the photon efficiency is shown, which is evaluated on SR images with the CNN that was pre-trained on HR images. The pion rejections increase as the perceptual loss decreases.

The training progress is also visualized in Fig. 8. In the initial stages of the training, distinct artefacts are evident in the SR images. By averaging over all images, biases in the spatial distribution of the predicted energy depositions become visible, which largely disappear after around 100 training epochs. Similarly, the network learns to generate photons and pions with shower widths almost matching the HR distributions within these initial 100 epochs. However, we still observe improvements in the generated widths and in other metrics like the critic accuracy or pion rejections up to around 5,000 training epochs.

Figure 6: Kolmogorov-Smirnov statistic calculated between the HR and SR shower width distributions as a function of the training epoch, separately for photons and pions. The 20 GeV setup is shown on the left, the 50 GeV setup is shown on the right. The stronger lines are obtained by smoothing the original values shown with the lighter colors. We select the epoch where the mean of the photon and pion statistics is at its minimum, indicated by the black dashed line.

Figure 7: Different parts of the loss functions and metrics during the training of the 50 GeV network, where “train.” (“val.”) refers to loss/metrics evaluated on the training (validation) sample. Left: losses of the critic network and its accuracy in discriminating between HR and SR images. Right: perceptual loss used for the generator training and the pion rejection at several fixed photon efficiencies obtained with the pre-trained CNN.

## 5 Results

After training the SR networks, we study the properties of the upsampled images and discuss possible use cases at hadron-collider experiments. Example predictions of the generator network are shown in Fig. 9 for the 20 GeV network and in Fig. 10 for the 50 GeV network, respectively. For each energy, two randomly picked examples for each particle type are included, comparing the LR image, which was passed to the SR network, to the corresponding HR image and the generated SR version. In general, we observe that the obtained SR images have a high perceptual similarity with the HR simulation.
Figure 8: Evolution of the image quality during the training of the 50 GeV network. The top row shows the average across all photon and pion images in the validation sample. From left to right, the average SR image after one epoch, after 100 epochs, at the selected best epoch, and the simulated HR average are displayed. In the middle and bottom row, the SR shower widths obtained after the same epochs of training are compared to the HR shower widths for photons and pions, respectively. Figure 9: Example SR images (right column) with their corresponding LR (left) and HR (middle) versions for the \(20\,\mathrm{Ge\kern-1.0ptV}\) network. The first two rows show photon examples, the bottom two rows show pion examples. Figure 10: Example SR images (right column) with their corresponding LR (left) and HR (middle) versions for the \(50\,\mathrm{GeV}\) network. The first two rows show photon examples, the bottom two rows show pion examples. Typically, the main visual properties of the HR images are also found in the generated SR versions. In particular, we find clear single peaks in the SR photon images and typically two distinct peaks in the pion SR images. Furthermore, the position and orientation of these peaks often matches the one of the simulated HR images well, although this information is often difficult to extract from the LR images by eye. The main difference between the 20 GeV and 50 GeV examples is the angle between the photons from the pion decays. Comparing the pion examples in Fig. 9 and Fig. 10, the 20 GeV pions appear as a single merged shower in the LR calorimeter, while they are well resolved as two photons in the HR calorimeter. However, asymmetries in the LR calorimeter pion images allow the SR network to generate separate peaks in SR images that often coincide with the peaks in their HR counterparts. The decay products of the 50 GeV pions typically appear as two overlapping showers even in the HR calorimeter. Also in the case of these merged showers, the SR network often reproduces the main features of the HR images. As an example of a "shower-shape variable", which are often used as features in photon identification algorithms at LHC experiments, we show the shower width in Fig. 11, as defined in Eq. (4). For the 20 GeV particles, the LR calorimeter can resolve significant differences between photon and pion shower widths, however, with a binning as in Fig. 11, the fraction of overlapping area between the photon and pion width histograms is around 52 %. Comparing to the corresponding HR distributions, it is clearly visible that the higher spatial resolution allows for a better measurement of this quantity. Hence, shower-shape variables have a much better power to discriminate between photons and pions with the HR calorimeter. The fraction of overlapping area reduces in the HR histograms to approximately 0.53 %. Although we train our SR networks on mixed datasets containing photon and pion examples, the shower width distribution obtained from the SR photons and pions closely follow the HR distributions. Here, the overlapping area is around 0.90 % and thus heavily reduced compared to the LR case. At 50 GeV, the LR width distributions for photons and pions become more similar and the overlapping area increases to 85 %. Here, the typical distance between the two photons from the pion decays is much smaller than one crystal width. 
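The "fraction of overlapping area" quoted in this comparison can be obtained from the two normalized width histograms; the following small sketch shows one way to do this, where the binning is an arbitrary assumption on our part.

```python
# Sketch of the overlapping-area fraction between the normalized photon and pion
# shower-width histograms quoted in the text. The binning is an assumption.
import numpy as np

def overlap_fraction(widths_photon, widths_pion, bins=50, value_range=None):
    h_gamma, edges = np.histogram(widths_photon, bins=bins, range=value_range, density=True)
    h_pion, _ = np.histogram(widths_pion, bins=edges, density=True)
    bin_widths = np.diff(edges)
    # Overlap = integral of the bin-wise minimum of the two normalized densities.
    return float(np.sum(np.minimum(h_gamma, h_pion) * bin_widths))
```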
Also in the HR calorimeter, the width distributions appear closer together, but this variable still provides a good separation with an overlap of around 19 %. The SR distributions match the HR widths less precisely than in the 20 GeV case, because the discrimination of the classes is more difficult. However, the overlapping area of around 29 % is still much lower than in the LR case. Thus for both energies, the separation between photons and pions that can be achieved by such a shower shape variable is significantly improved by using the SR image.

In addition to the identification of photon candidates, the measurement of the photon position is a crucial step in the reconstruction chain. Often, the barycenter position of the cluster of energy depositions is determined and taken as the estimate of the photon position. The precision in the localization of the barycenter is limited by the granularity of the calorimeter and is important, for example, for the resolution of invariant masses of diphoton decays, such as \(H\rightarrow\gamma\gamma\). To study the effect of the SR technique on the localization of showers, we compare the distances of the barycenter positions of either the SR or LR images and the barycenters of the HR images in Fig. 12. We observe that the localization of the photons and pions is significantly improved in SR compared to LR. From the HR simulation, the generator learns realistic interpolations between the crystals and this leads to an improved determination of the position. The actual impact of an improved localization of the photons on the invariant mass resolution of diphoton decays in an experiment depends on further quantities, which we cannot evaluate in our simplified setup, such as the energy resolution of the individual photons and the resolution in the determination of the position of the primary vertex [31, 32].

Figure 11: Normalized distribution of the shower widths for the 20 GeV particles (left) and for the 50 GeV particles (right). Pion shower widths are shown with the dashed lines, while the solid lines show the photon distributions. In addition, the ratio of the SR and the HR distribution is shown. Arrows indicate that the value is out of the chosen \(y\)-axis range of the ratio plot. The error bars indicate the statistical uncertainties.

Since we observe that differences between the photon and pion images are more prominent in SR than in LR, we study the impact of using SR as a pre-processing step before training classifiers to separate real photons from fakes induced by neutral-pion decays. We train CNNs on a dataset of 100,000 examples, half photons and half pions, which are independent from the samples used for the GAN training. The CNNs have a comparably simple architecture, beginning with three convolutional layers consisting of 32 filters with a kernel size of \(3\times 3\). In these layers, a stride of one and zero-padding are used to conserve the lateral dimensions of the image. For the HR and SR case, we place a max-pooling layer after each of these layers, which halves the number of pixels in the \(x\)- and \(y\)-direction. In the LR case, we use only one max-pooling layer after the last convolutional layer and leave out the ones after the first and the second convolutional layer, while the remaining structure is the same as in the HR and SR CNNs. The output of the last layer is flattened and fed to a dense layer with 10 nodes and ReLU activation, followed by a dense layer with a single node activated by the sigmoid function.
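A minimal Keras sketch of the HR/SR photon-versus-pion classifier described above is given below. The input image size and the activations of the convolutional layers are our assumptions; the optimizer and loss follow the training description in the next paragraph.

```python
# Minimal Keras sketch of the HR/SR photon/pion classifier described above:
# three 3x3 convolutional layers with 32 filters (stride 1, zero-padding), a
# max-pooling layer after each, a 10-node ReLU layer and a single sigmoid output.
# Input shape and conv-layer activations are assumptions.
import tensorflow as tf
from tensorflow.keras import layers, models

def build_photon_pion_cnn(input_shape=(32, 32, 1)):
    model = models.Sequential([
        layers.Input(shape=input_shape),
        layers.Conv2D(32, 3, strides=1, padding="same", activation="relu"),
        layers.MaxPooling2D(2),
        layers.Conv2D(32, 3, strides=1, padding="same", activation="relu"),
        layers.MaxPooling2D(2),
        layers.Conv2D(32, 3, strides=1, padding="same", activation="relu"),
        layers.MaxPooling2D(2),
        layers.Flatten(),
        layers.Dense(10, activation="relu"),
        layers.Dense(1, activation="sigmoid"),
    ])
    model.compile(optimizer=tf.keras.optimizers.Adam(1e-3),
                  loss="binary_crossentropy",
                  metrics=["accuracy"])
    return model
```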
The number of trainable parameters is identical for the CNNs used for the HR or SR images and the LR images. We train the CNNs using the Adam optimizer with an initial learning rate of \(10^{-3}\) and with the binary cross-entropy as loss function. The trainings are stabilized using L2 regularization with strength of \(\mathcal{O}(10^{-4})\), where the exact values are chosen in each training to achieve the best network performance. The CNNs trained on the HR images are those that are also used as "pre-trained CNNs" for the perceptual loss term in the GAN training.

Figure 12: Normalized distribution of the distance of the barycenter positions of the SR and LR showers from the HR barycenter in units of crystal widths, for the 20 GeV (left) and 50 GeV (right) cases.

As expected from the opening angle distributions of the photons from the pion decays (Fig. 1), large differences are found between the 20 GeV and the 50 GeV setups for the separation of photons from pions. CNNs trained on 20 GeV images have tiny failure rates in the classification task. For a given photon efficiency, the pion rejection factors achieved by the 20 GeV CNNs are two orders of magnitude higher than in the 50 GeV case.

Comparing the CNNs trained on SR images with the ones trained on LR images, we observe that differences arise depending on the number of samples available for the CNN training. This is illustrated in Fig. 13, which shows the discrimination achieved by CNNs trained on either the full set of 100,000 samples or reduced sets of 10,000 and 1,000 samples. The evaluation is done on independent test datasets, which were not used for the GAN or CNN trainings.3 When training the CNN on small datasets, we observe notable improvements when SR is used to enhance the training data. For both energies, an improvement by a factor of two or more is found in the achieved pion rejections for the case of 1,000 training samples, over a wide range of photon efficiencies. In the setup with 10,000 training samples, an improvement of around 40 % remains in the 50 GeV case, while for the 20 GeV images, the SR CNNs only outperform the LR ones for high photon efficiencies (\(>95\,\%\)). When training on 100,000 samples, the performance of the SR and LR CNNs is similar for both energies. Footnote 3: We deploy 50,000 samples in the 50 GeV setup, equally photons and pions, but increase the dataset to 1,000,000 pions and 100,000 photons in the 20 GeV setup, because otherwise the statistical uncertainty in the pion rejections is large due to the high rejection values.

In an actual experiment, using SR as a pre-processing step for training a photon-identification classifier can indeed be useful. While real-photon signatures can easily be simulated with high statistics (for example from \(H\to\gamma\gamma\) decays), this is typically not the case for fake-photon candidates. Only a tiny fraction of simulated jets leads to signatures which are photon-like, characterized by sharp energy depositions in the ECAL, low hadronic activity close-by and no matched tracks (or a tracker signature compatible with a photon conversion). Hence, the fraction of simulated jets passing typical photon pre-selection criteria based on shower-shape variables as well as requirements on the photon isolation, i.e., the activity around the photon candidate, is typically very small. Therefore, the fake-photon datasets that are available for the classifier trainings are often small.
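The pion rejection at a fixed photon efficiency used in the comparison of Fig. 13 can be computed from the classifier scores as in the following sketch; it assumes that higher scores are more photon-like, which is our convention here rather than a statement from the paper.

```python
# Sketch of the pion rejection at a fixed photon efficiency (rejection = inverse
# of the pion efficiency, see footnote 2). Assumes higher scores are more photon-like.
import numpy as np

def pion_rejection(photon_scores, pion_scores, photon_efficiency=0.9):
    # Threshold chosen such that the requested fraction of photons is accepted.
    threshold = np.quantile(photon_scores, 1.0 - photon_efficiency)
    pion_efficiency = np.mean(pion_scores > threshold)
    return np.inf if pion_efficiency == 0 else 1.0 / pion_efficiency

# Example: rejection at 95 % photon efficiency
# r95 = pion_rejection(cnn.predict(photon_images), cnn.predict(pion_images), 0.95)
```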
However, particle-gun simulations of photons and neutral pions, such as those that we used for these studies, can be easily produced in large amounts also with a realistic detector simulation. If SR networks that are trained on such particle-gun simulations are found to be universal in the sense that they capture the main properties of the electromagnetic showers, they could be used as a pre-processing step for the classifier trainings based on real and fake photons in the experiment. We hence propose further studies in this direction.

Figure 13: Classification performance for CNNs trained on either LR or SR images for trainings using different numbers of samples (1k, 10k, 100k). The pion rejection is shown as a function of the photon efficiency, for the 20 GeV (left) and the 50 GeV simulation (right). In addition, the ratio of the SR and the LR pion rejections is shown. The error bands represent the statistical uncertainty in the pion rejections.

## 6 Conclusions

We used simulated showers of 20 and 50 GeV single photons and neutral-pion decays to two photons in a toy PbWO\({}_{4}\) calorimeter to train super-resolution networks based on the ESRGAN architecture. We treated the energy depositions in the calorimeter crystals as two-dimensional images and created low-resolution images, corresponding to the nominal resolution, and high-resolution counterparts, which correspond to an artificially increased resolution by a factor of four in both dimensions. We made modifications to the original ESRGAN proposal based on training properties of Wasserstein Generative Adversarial Networks and based on the physics properties of the images. In particular, we found that a physics-inspired perceptual-loss term improves the training, which we based on the features that convolutional neural networks extracted from the high-resolution images.

We found that the super-resolution networks are able to reproduce distinct features of the high-resolution images, which were not apparent in the low-resolution images by eye, such as the presence of a second energy maximum for the pion decays. We also found that the networks are able to upsample low-resolution images of photons and pions generally in a convincing way, although the networks are trained on photons and pions together and the label of each image is not explicitly passed to the networks.

We then studied possible applications of the super-resolution images at collider experiments and we found that the reconstruction of the shower width (as an example of a shower-shape variable) and of the position of the shower center are much improved compared to the reconstruction from the low-resolution images. We also studied whether the super-resolution images could be used as a pre-processing step for training photon-identification classifiers at collider experiments. When only a low number of samples was available for the classifier training, the training on the super-resolution images outperformed the training on the low-resolution counterparts. We conclude that the additional physics information that is included in the high-resolution images, and hence also in the generated super-resolution images, helps to extract discriminatory features for the classification. In general, we conclude that the application of super resolution based on the proposed modified ESRGAN architecture is promising for the analysis of photon signatures at collider experiments.
While the photons' calorimeter signatures are used for several different reconstruction and identification goals, for which typically separate algorithms are trained, the super-resolution is intrinsically multi-purpose and promises to improve several tasks at once. As one example, we stress the challenge in simulating a sufficient number of fake-photon candidates from jets at hadron-collider experiments, and the benefits that a pre-processing with a particle-gun-based super-resolution network could bring. We propose further studies in this direction, in particular on the performance of particle-gun-based super resolution on full collider events. Future studies on super-resolution networks for collider experiments should expand the energy range and use the realistic simulations that are available at the LHC experiments.

## Acknowledgements

This research was supported by the Deutsche Forschungsgemeinschaft (DFG) under grants 400140256 - GRK 2497 (The physics of the heaviest particles at the LHC, JE and FM) and 686709 - ER 866/1-1 (Heisenberg Programme, JE), by the Studienstiftung des deutschen Volkes (FM), and by the Bundesministerium für Bildung und Forschung (BMBF) under grant 05H21PECA1 (AvdG and ON).
2310.14558
AlpaCare:Instruction-tuned Large Language Models for Medical Application
Instruction-finetuning (IFT) has become crucial in aligning Large Language Models (LLMs) with diverse human needs and has shown great potential in medical applications. However, previous studies mainly fine-tune LLMs on biomedical datasets with limited diversity, which often rely on benchmarks or narrow task scopes, and hence significantly limit the effectiveness on their medical instruction-following ability and generalizability. To bridge this gap, we propose creating a diverse, machine-generated medical IFT dataset, MedInstruct-52k, using GPT-4 and ChatGPT with a high-quality expert-curated seed set. We then fine-tune LLaMA-series models on the dataset to develop AlpaCare. Despite using a smaller domain-specific dataset than previous medical LLMs, AlpaCare not only demonstrates superior performance on medical applications, with up to 38.1% absolute gain over best baselines in medical free-form instruction evaluations, but also achieves 6.7% absolute gains averaged over multiple general domain benchmarks. Human evaluation further shows that AlpaCare consistently outperforms best baselines in terms of both correctness and helpfulness. We offer public access to our data, model, and codebase in https://github.com/XZhang97666/AlpaCare.
Xinlu Zhang, Chenxin Tian, Xianjun Yang, Lichang Chen, Zekun Li, Linda Ruth Petzold
2023-10-23T04:22:50Z
http://arxiv.org/abs/2310.14558v5
# AlpaCare:Instruction-tuned Large Language Models for Medical Application ###### Abstract Large Language Models (LLMs) have demonstrated significant enhancements in instruction-following abilities through instruction tuning, achieving notable performances across various tasks. Previous research has focused on fine-tuning medical domain-specific LLMs using an extensive array of medical-specific data, incorporating millions of pieces of biomedical literature to augment their medical capabilities. However, existing medical instruction-tuned LLMs have been constrained by the limited scope of tasks and instructions available, restricting the efficacy of instruction tuning and adversely affecting performance in the general domain. In this paper, we fine-tune LLaMA-series models using 52k diverse, machine-generated, medical instruction-following data, _MedInstruct-52k_, resulting in the model _AlpaCare_. Comprehensive experimental results on both general and medical-specific domain free-form instruction evaluations showcase _AlpaCare_'s strong medical proficiency and generalizability compared to previous instruction-tuned models in both medical and general domains. We provide public access to our _MedInstruct-52k_ dataset and a clinician-crafted free-form instruction test set, _MedInstruct-test_, along with our codebase, to foster further research and development. Our project page is available at [https://github.com/XZhang97666/AlpaCare](https://github.com/XZhang97666/AlpaCare). ## 1 Introduction Recent advancements in Large Language Models (LLMs) have demonstrated remarkable generalization capabilities across a spectrum of applications (OpenAI, 2022; 2023; Touvron et al., 2023a; b). To enable LLMs to follow instructions and be helpful in real-world tasks, instruction tuning has been widely applied to align the models' behaviors (Ouyang et al., 2022; Longpre et al., 2023). Wang et al. (2023b) propose utilizing automatically machine-generated instruction-response pairs for fine-tuning LLMs to align with human intent. Taori et al. (2023); Xu et al. (2023a); Chen et al. (2023) follow this paradigm and further emphasize that increasing the quality or diversity in training data can consistently enhance generalization performance. LLMs have demonstrated significant potential in the medical domain, offering valuable insights and capabilities across various applications (Singhal et al., 2022; Lievin et al., 2023; Zhang et al., 2023). To better align with human intent in the medical domain, models from the LLaMA series have been tuned on medical instructions (Han et al., 2023; Li et al., 2023; Wu et al., 2023; Touvron et al., 2023a). Despite tuning with substantial volumes of medical data, including datasets comprising millions of entries from biomedical literature, the limited diversity of instructions in the data has constrained these models' ability to follow instructions effectively. This limitation not only hampers performance in specialized medical domains but also inadvertently affects efficacy across more generalized domains. Inspired by Wang et al. (2023b); Taori et al. (2023), we propose a semi-automated process for instruction-tuning a medical LM, utilizing diverse instructional signals from teacher LLMs (e.g., GPT-4 and ChatGPT). Initially, we begin with a limited seed set of clinician-crafted tasks that span various medical topics, task types, and difficulty levels, examples of which are shown in 1. 
To automatically generate a broader array of tasks for training, we prompt GPT-4 to create instructions for new tasks by leveraging the existing clinician-crafted tasks. After the generation of tasks and removal of duplicates, we employ ChatGPT to provide responses to the valid tasks. Consequently, we compile a 52k medical self-instruct dataset, _MedInstruct-52k_, which supervises the instruction tuning, resulting in the model _AlpaCare_. Our experiments on medical domain-specific and general domain free-form instruction evaluations reveal that _AlpaCare_, solely tuned using the 52k diverse medical instruction-response pairs, exhibits enhanced medical capacity and instruction-following ability compared to existing models in both the medical and general domains, across various backbone models and scale sizes.

Our paper makes the following contributions:

* We demonstrate the importance of task diversity in instruction tuning for the medical domain.
* We conduct comprehensive experiments on free-form instruction evaluation in medical and general domains and show that _AlpaCare_, tuned with a diverse medical self-instruct dataset, can enhance both its medical capacity and generalization ability simultaneously.
* We release _MedInstruct-52K_, a diverse medical task dataset comprising 52K instruction-response pairs, and _MedInstruct-test_, a set of clinician-crafted novel medical tasks, to facilitate the building and evaluation of future domain-specific instruction-following models.

## 2 Method

Collecting a large-scale medical instruction dataset is challenging because it necessitates 1) a deep understanding of the specific domain knowledge, and 2) creativity to devise domain-specific novel tasks. To mitigate human effort while maintaining high quality and diversity in the dataset, we follow Wang et al. (2023b), utilizing a small set of clinician-crafted seed tasks with 167 instances to prompt GPT-4 in generating medical task data. Duplicates are removed from the generated medical tasks, preserving 52k instances which are subsequently inputted into ChatGPT for response generation. The curated data is then employed for instruction tuning on LLaMA, resulting in _AlpaCare_ with strong medical capacity and robust generalization ability.

### Clinician-crafted seed dataset

A diverse and high-quality seed task set is essential for prompting LLMs in automatic task generation (Wang et al., 2023b). We focused on four key areas to improve the diversity of seed instructions: _topic_, _view_, _task type_, and _difficulty level_. Specifically, the topics encompass various medical fields, such as radiology, genetics, and psychophysiology. Views were sourced from different medical personnel, including nurses and x-ray technicians, to ensure a broad range of perspectives. For task types, we included various formats, such as summarization, rewriting, single-hop, and multi-hop reasoning. Lastly, tasks were categorized by different difficulty levels, ranging from 1 to 5, to ensure that the seed tasks covered a wide range of expertise levels1.

Figure 1: **Selected tasks from the clinician-crafted seed set. We focus on four perspectives: _topic_, _viewpoint_, _task type_, and _difficulty level_, to improve the seed set diversity. The set is further used to query GPT-4 to generate medical tasks.**
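To make the semi-automated generation-and-deduplication step concrete, the sketch below shows how a handful of clinician-crafted seed tasks could be placed in a prompt for GPT-4 to propose new medical tasks, and how near-duplicate instructions could then be removed with a Rouge-L filter. The prompt wording, the seed-task field names, and the use of the OpenAI Python client and of the rouge-score package are our assumptions for illustration; this is not the authors' released code.

```python
# Hypothetical sketch of the generate-then-deduplicate pipeline described above.
# Prompt text, field names and model choice are illustrative assumptions.
import random
from openai import OpenAI
from rouge_score import rouge_scorer

client = OpenAI()
_scorer = rouge_scorer.RougeScorer(["rougeL"], use_stemmer=False)

def generate_task_batch(seed_tasks, n_demos=3):
    """Put a few seed tasks in the prompt and ask GPT-4 for new medical tasks."""
    demos = random.sample(seed_tasks, n_demos)
    demo_block = "\n\n".join(
        f"Instruction: {t['instruction']}\nInput: {t.get('input', '')}" for t in demos
    )
    prompt = (
        "Create new medical tasks in the same format as the examples. Vary the "
        "topic, the point of view (e.g. nurse, radiologist), the task type and "
        "the difficulty level (1-5).\n\nExamples:\n" + demo_block
    )
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

def deduplicate(instructions, threshold=0.7):
    """Keep an instruction only if its Rouge-L similarity to all kept ones is <= threshold."""
    kept = []
    for cand in instructions:
        if all(_scorer.score(prev, cand)["rougeL"].fmeasure <= threshold for prev in kept):
            kept.append(cand)
    return kept
```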
Clinicians crafted each task based on these four perspectives, and each task contains an instruction and may have a corresponding input, which could be a detailed medical example to further elucidate the instruction and enhance task diversity. Examples are shown in Figure 1. Footnote 1: We defer the difficulty level description to the appendix.

### Medical self-instruct generation and LLM instruction tuning

We utilize GPT-4 for in-context learning by selecting 3 tasks from the seed set and generating 12 tasks for each run. To ensure task diversity, we instruct GPT-4 to consider the four aspects outlined in 2.1. Instructions with a Rouge-L similarity above 0.7 to any other generated task are discarded to further amplify textual diversity. Due to the lengthy nature of medical text, we separately generate responses for each task using ChatGPT (gpt-3.5-turbo), which has demonstrated efficacy in the medical domain (Zhang et al., 2023). For LM tuning, we align our data scale with Wang et al. (2023); Taori et al. (2023), incorporating 52K machine-generated instruction-response pairs to fine-tune LLaMA and obtain a medical instruction-tuned model, _AlpaCare_.

## 3 Experimental setup

### Free-form instruction evaluation

Models trained with medical instructions are exclusively tested in the medical domain, while those for general instructions are tested in the general domain. These non-comprehensive evaluations could introduce bias. To assess medical capacity, we conduct evaluation on iCliniq (Li et al., 2023), a patient-doctor conversation set, and a medical instruction test set crafted by our clinicians (_MedInstruct-test_). To further evaluate generalization ability, we use a general domain test set, AlpacaFarm (Dubois et al., 2023). More information on the test sets is shown in Table 1.

### Baseline

We compare _AlpaCare_ with several instruction-tuned LLMs based on the LLaMA models, across different scales and with various tuning datasets. The following models were considered:

* **Alpaca** (Taori et al., 2023) models are tuned using 52,002 machine-generated samples from the general domain. The responses for these samples were produced by text-davinci-003.
* **ChatDoctor** (Li et al., 2023) is fine-tuned based on 100k real conversations between patients and doctors.
* **MedAlpaca** (Han et al., 2023) models are tuned using approximately 230,000 medical instances, including question-answer pairs and doctor-patient conversations.
* **PMC-LLaMA** (Wu et al., 2023) models are tuned in two steps. First, the model is fine-tuned using 4.8 million biomedical academic papers and 30,000 medical textbooks for knowledge enrichment. Subsequently, it was further instruction-tuned on a corpus of 202 million tokens.
* **Baize-Healthcare** (Xu et al., 2023) is an open-source chat model trained with LoRA in the healthcare domain. It was trained on around 100k multi-turn dialogues from Quora and MedQuAD. We assume the first turn of the conversation from the human as the instruction.

\begin{table} \begin{tabular}{c c c} \hline \hline  & \# Sample & Domain \\ \hline iCliniq & 1,000 & Medical \\ _MedInstruct-test_ & 217 & Medical \\ AlpacaFarm & 805 & General \\ \hline \hline \end{tabular} \end{table} Table 1: Free-form instruction evaluation test set information.

### Evaluation metric

We conduct auto-evaluation (Dubois et al., 2023; Chiang et al., 2023; Zheng et al., 2023) for free-form instruction evaluation by employing an LLM API (e.g. gpt-3.5-turbo) to serve as a judge.
The judge compares responses from an instruction-tuned LLM with reference responses produced by another LLM API (e.g. text-davinci-003), for each corresponding instruction in the test sets. LLM judges can exhibit positional bias, showing a preference for specific positions in their judgments (Wang et al., 2023). To ensure a fair assessment, we implement a dual-sided scoring system. Each output comparison is evaluated twice, with the order of the two responses swapped between the two evaluations, and we report the winning rate by calculating \(\frac{\sum_{i=1}^{N}\text{score}_{i}}{N}\), where \(N\) is the number of instances in the test set. To ensure fair comparisons, we set the max token length to 2048 for all instruction-tuned models to avoid cut-off responses, and utilize greedy decoding for the generation of all reference responses by calling APIs.

Figure 2: **Comparison of medical capacity on (a) iCliniq and (b) MedInstruct-test:** A performance comparison between _AlpaCare_ and instruction-tuned 7B baseline models. _AlpaCare_ consistently outperforms the baselines across four distinct reference models, showcasing its robust medical capability.

Figure 3: **Evaluation of Generalization Ability on AlpacaFarm:** A performance comparison between _AlpaCare_ and the instruction-tuned 7B baseline models reveals that _AlpaCare_ demonstrates superior generalization capability over baseline models.

## 4 Experiment Results

### Main result

**Tuning with medical self-instruct data boosts models' medical capacity.** To conduct a holistic medical capacity evaluation, we compare baseline models and _AlpaCare_ to reference outputs generated by four different APIs: text-davinci-003, gpt-3.5-turbo, gpt-4 and Claude-2, respectively. We utilize gpt-3.5-turbo as the judge to conduct the dual-sided scoring evaluation. The results are shown in Figure 2. The _AlpaCare_ model yields better performance than its general domain counterpart, Alpaca, demonstrating that training on domain-specific instruction data enhances domain knowledge. Despite being trained on only 52k medical instruction-response pairs, _AlpaCare_ consistently and significantly surpasses other medical instruction-tuned models -- which are trained on considerably larger datasets -- across various reference models. These results highlight the advantages of improving medical proficiency by training with a diverse, domain-specific instruction dataset. Interestingly, medical instruction-tuned models don't always outperform general domain instruction-tuned models on medical tasks. This discrepancy might arise because these models were trained with limited instructions and tasks, potentially compromising their conversational capabilities.

**Domain-specific instructions can enhance generalization ability.** We further leverage gpt-3.5-turbo as the judge to evaluate the generalization ability of instruction-tuned models on AlpacaFarm across four reference models. The results are in Figure 3. The general domain model does not always outperform domain-specific models on AlpacaFarm, and vice versa. However, _AlpaCare_ consistently and significantly outperforms all baseline models in both the medical and general domains across four different reference models, underscoring its robust generalizability. These observations accentuate the pivotal role of data diversity. By tuning models with a diverse self-instruct dataset, even one that is specialized within a particular domain, there is no detriment to the model's generalizability.

### Ablation Study
_AlpaCare_ **achieves superior performance across various LLM backbones.** We further fine-tuned Alpaca-LLaMA2 and _AlpaCare_-LLaMA2 using the Alpaca training data and _MedInstruct-52k_ on LLaMA2-7B, respectively. These models were then evaluated on three test sets, with gpt-3.5-turbo serving as the judge across four reference models. Figure 4 illustrates the performance comparison when using gpt-4 as the reference model. Results from other reference models can be found in the Appendix. Consistent with the results obtained using LLaMA as the backbone, _AlpaCare_-LLaMA2 consistently outperforms Alpaca-LLaMA2 in both the medical and general domains. This underscores the benefit of tuning with a diverse medical-specific instruction dataset, as it not only enhances the model's medical capacity but also bolsters its generalization ability.

Figure 4: **Results on different LLM backbones.** Comparing the performance of AlpaCare and Alpaca using different LLM backbones, with gpt-4 as the reference model.

_AlpaCare_ **demonstrates robust performance across different judges.** Recent studies have highlighted potential biases in the LLM evaluator (Wang et al., 2023a). In our efforts to robustly assess the efficacy of our method, we introduced an alternative judge, Claude-2, to mitigate the potential biases of relying on a single judge. We adopted the dual-score system described in the evaluation metric section, alternating between gpt-3.5-turbo and Claude-2 for evaluations. Figure 5 displays the results when gpt-4 is used as the reference model, and results from other reference models are provided in the Appendix. Upon evaluation by Claude-2, it is observed that _AlpaCare_ consistently outperforms its instruction-tuned counterparts. This aligns with findings from assessments using gpt-3.5-turbo as the judge. Such consistency underscores the superior medical proficiency and generalizability of our approach.

_AlpaCare_ **consistently delivers superior performance in 13B model comparisons.** To explore the impact of scaling up the LLM backbone, we fine-tuned _AlpaCare-13B_ on LLaMA-13B and compared its performance on three test sets against other 13B instruction-tuned baselines. The results judged by gpt-3.5-turbo are presented in Figure 6. _AlpaCare-13B_ consistently outperforms other 13B instruction-tuned models in both the medical and general domain evaluations. This reaffirms the conclusion drawn from the 7B model comparison: tuning models with diverse medical instruction tasks can simultaneously enhance the model's medical capability and its generalization ability.

Figure 5: **Results evaluated by different judges.** Comparing the performance of AlpaCare and baselines using different judges for evaluation, with gpt-4 as the reference model.

Figure 6: **Result comparison on 13B instruction-tuned models.** Comparing the performance of _AlpaCare-13B_ and its 13B baselines evaluated by gpt-3.5-turbo across 4 distinct reference models.

## 5 Analysis & Case Study

### Instruction Data Statistics

Training a model with diverse instructions enhances its ability to follow instructions (Wang et al., 2023b; Chen et al., 2023). Table 2 presents statistics of instructions in the training data for various medical instruction-tuned LLMs. Figure 7 depicts the linguistic diversity in the training data of Baize and _MedInstruct-52k_, both of which contain over 10k unique instructions.
This comparison is made by extracting the root verb and the direct-object noun from each instruction, showcasing the 20 most frequent root verbs and their top 4 associated direct noun objects. Our proposed model, _AlpaCare_, is trained on _MedInstruct-52K_, which contains 52,002 distinct instructions, far exceeding the instruction counts of ChatDoctor, MedAlpaca, and PMC-LLaMA, leading to enhanced performance in following instructions. Although Baize encompasses more unique instructions across diverse chat topics, its training data still lacks linguistic diversity compared to _MedInstruct-52k_.

\begin{table} \begin{tabular}{c c} \hline \hline  & \# Unique instruction \\ \hline  & 1 \\ \hline  & 1 \\ \hline  & 70 \\ \hline  & 99190 \\ \hline  & 52002 \\ \hline \hline \end{tabular} \end{table} Table 2: Instruction statistics comparison between training data used in medical instruction-tuned LLMs.

Figure 7: **Comparison of Language Diversity in Training Data**: (a) _AlpaCare_, trained on _MedInstruct-52k_, and (b) Baize-HealthCare, trained on Quora and MedQuAD. While Baize uses more unique instructions, our _MedInstruct-52k_ dataset offers superior textual diversity.

### Generation case study

We randomly selected one case from each test set and present the outputs of the 13B instruction-tuned models across the medical and general domains in Figure 8 and Figure 9. In Figure 10, _AlpaCare_ provides customized, empathetic medical advice grounded in the patient's symptoms, unlike the other models' incomplete, generic responses. _AlpaCare_'s nuanced guidance validating concerns, prescribing rest, and monitoring for worsening symptoms demonstrates superior medical capacity. In Figure 10, AlpaCare smoothly followed prompts to thoroughly outline GERD's characteristics, diagnostic criteria, and treatment approaches in an organized, numbered list. Meanwhile, other models overlooked key details or went off on tangents. AlpaCare's comprehensive and structured response highlighted its adeptness at understanding and addressing medical instructions. For the general domain, _AlpaCare_ and its general domain counterpart, Alpaca, successfully followed the instruction to generate skill assessment questions for R (programming language), demonstrating strong instruction-following ability. In addition, _AlpaCare_ generated an initial sentence summarizing the instruction and a conclusion statement, while Alpaca lacked this structure. This difference could be attributed to _AlpaCare_ being trained on data generated by gpt-3.5-turbo, which incorporates more human preferences.

## 6 Conclusion

This paper demonstrates the effectiveness of tuning medical instruction with machine-generated instruction-response pairs. We release 52K medical instruction-following instances, a medical free-form instruction evaluation test set, as well as model checkpoints fine-tuned from LLaMA models.
We hope our empirical observations and resources will benefit the development of open-source and general-purpose, as well as domain-specific, LLMs. This represents work in progress, and several directions can be explored:

1. Multi-turn conversation: This work only utilizes the instruction-response pairs as one-turn conversations. However, multi-turn conversations are more realistic in real-world settings.
2. Multimodal training: In a preliminary study on our training data, we noticed that some instructions would be clearer and more informative with picture information. We plan to extend our dataset into a multimodal setting to enrich vision-large-language models.
3. Data filtering: Machine-generated medical responses could be hallucinated (Zhang et al., 2023). Filtering out lower-quality data could improve model performance and save training time (Chen et al., 2023).

## Acknowledgments

We gratefully acknowledge the generous financial support provided by the National Institutes of Health (NIH) grant NIH 7R01HL149670.

Figure 9: **Case study on 13B instruction-tuned models (not cherry-picked) in the general domain. Output comparison of _AlpaCare-13B_ and its 13B baselines on AlpacaFarm.**
2305.10588
A Better Way to Do Masked Language Model Scoring
Estimating the log-likelihood of a given sentence under an autoregressive language model is straightforward: one can simply apply the chain rule and sum the log-likelihood values for each successive token. However, for masked language models (MLMs), there is no direct way to estimate the log-likelihood of a sentence. To address this issue, Salazar et al. (2020) propose to estimate sentence pseudo-log-likelihood (PLL) scores, computed by successively masking each sentence token, retrieving its score using the rest of the sentence as context, and summing the resulting values. Here, we demonstrate that the original PLL method yields inflated scores for out-of-vocabulary words and propose an adapted metric, in which we mask not only the target token, but also all within-word tokens to the right of the target. We show that our adapted metric (PLL-word-l2r) outperforms both the original PLL metric and a PLL metric in which all within-word tokens are masked. In particular, it better satisfies theoretical desiderata and better correlates with scores from autoregressive models. Finally, we show that the choice of metric affects even tightly controlled, minimal pair evaluation benchmarks (such as BLiMP), underscoring the importance of selecting an appropriate scoring metric for evaluating MLM properties.
Carina Kauf, Anna Ivanova
2023-05-17T21:51:58Z
http://arxiv.org/abs/2305.10588v2
# A Better Way to Do Masked Language Model Scoring ###### Abstract Estimating the log-likelihood of a given sentence under an autoregressive language model is straightforward: one can simply apply the chain rule and sum the log-likelihood values for each successive token. However, for masked language models (MLMs), there is no direct way to estimate the log-likelihood of a sentence. To address this issue, Salazar et al. (2020) propose to estimate sentence pseudo-log-likelihood (PLL) scores, computed by successively masking each sentence token, retrieving its score using the rest of the sentence as context, and summing the resulting values. Here, we demonstrate that the original PLL method yields inflated scores for out-of-vocabulary words and propose an adapted metric, in which we mask not only the target token, but also all within-word tokens to the right of the target. We show that our adapted metric (PLL-word-l2r) outperforms both the original PLL metric and a PLL metric in which all within-word tokens are masked. In particular, it better satisfies theoretical desiderata and better correlates with scores from autoregressive models. Finally, we show that the choice of metric affects even tightly controlled, minimal pair evaluation benchmarks (such as BLiMP), underscoring the importance of selecting an appropriate scoring metric for evaluating MLM properties.1 Footnote 1: Our results and code are available at [https://github.com/carina-kauf/better-mlm-scoring](https://github.com/carina-kauf/better-mlm-scoring). ## 1 Introduction Most state-of-the-art transformer-based large language models (LLMs) fall into two classes: unidirectional (or autoregressive) models, where each token is generated based on its left context (e.g., GPT models; Radford et al., 2019), and bidirectional models, where a token is predicted from both left and right context tokens, some of which may be masked (e.g., BERT; Devlin et al., 2018). Often, it is beneficial to compare these models' performance on controlled sentence generation benchmarks. Whereas unidirectional architectures offer a natural way of calculating sentence log-likelihood (summing the log-likelihood scores of each sentence token given its left context), there is no direct way of estimating sentence log-likelihood for a bidirectional model. So far, the best available method to score a sentence under a bidirectional LLM has been the pseudo-log-likelihood (PLL) scoring approach described by Salazar et al. (2020) (and initially used by Shin et al., 2019; Wang and Cho, 2019). The PLL of a sentence is calculated as the sum of PLL scores for each token given all other sentence tokens, thus providing a comparable metric to unidirectional models' log-likelihood (LL) sentence scoring. The PLL metric is extremely popular; it is used extensively in LLM studies tackling topics as diverse as effects of training data Sinha et al. (2021); Zhang et al. (2021), model fluency Laban et al. (2021), syntactic and conceptual knowledge Sinclair et al. (2022); Bhatia and Richie (2022), social biases Nangia et al. (2020), and others. Some of these studies have already accrued dozens of citations. Here, we show that the metric proposed by Salazar et al. (PLL-original) has important shortcomings that limit its utility. Specifically, PLL-original overestimates the PLL of out-of-vocabulary (OOV) words, which LLM tokenizers split into multiple tokens. 
As a result, PLL-original scores fail on several theoretically desired property tests: a robust inverse relationship between sentence length and sentence PLL (Section 4.1), a robust positive correlation between a word's frequency and its PLL score (Section 4.2), and a positive correlation between unidirectional and bidirectional model scores for the same sentences (Section 5). To remedy these issues, we propose an adjusted PLL metric, \(\mathtt{PLL\text{-}word\text{-}l2r}\) (l2r: left-to-right), which estimates token PLL when future within-word tokens are also masked (Figure 1). We show that the \(\mathtt{PLL\text{-}word\text{-}l2r}\) metric outperforms both \(\mathtt{PLL\text{-}original}\) and alternative PLL-based metrics. We therefore recommend using the \(\mathtt{PLL\text{-}word\text{-}l2r}\) metric when estimating sentence PLL under a bidirectional LLM.

Figure 1: Three different ways to compute the PLL score of a multi-token word (e.g., souvenir) during masked language modeling. _Purple_: target token, _pink_: within-word tokens that are available during inference, _turquoise_: within-word tokens that are masked during inference. Sentence tokens that do not belong to the current word are always available during inference.

## 2 Motivation: score inflation for multi-token words

The \(\mathtt{PLL\text{-}original}\) metric grossly overestimates the probability of OOV lexical items, such as _souvenir_ (Figure 2). This is because OOV words are tokenized into subword tokens (e.g., _so ##uven ##ir_), and each subword token is predicted using the token's bidirectional context, which crucially includes the remaining tokens that make up the OOV word. Thus, even though the OOV word itself may be surprising given the sentence context, the individual parts of the OOV word are not surprising to a bidirectional model given a sentence context that includes all other subtokens of that word (e.g., it is easy to predict _so_ given _##uven ##ir_; see Appendix A for additional examples). To mitigate this bias, we adjust the PLL sentence scoring algorithm such that the model cannot access future within-word tokens (\(\mathtt{PLL\text{-}word\text{-}l2r}\)) or any within-word tokens (\(\mathtt{PLL\text{-}whole\text{-}word}\)) when predicting the target. Below, we conduct a rigorous investigation of our modified metrics to determine whether this intuitive benefit holds quantitatively.

Figure 2: The \(\mathtt{PLL\text{-}original}\) metric inflates scores of multi-token words, such as _souvenir_; the adjusted metrics, \(\mathtt{PLL\text{-}word\text{-}l2r}\) and \(\mathtt{PLL\text{-}whole\text{-}word}\), mitigate this issue. Example generated using the \(\mathtt{bert\text{-}base\text{-}cased}\) model.

## 3 Methods

For our analysis, we adapt the scorer module of the minicons library (Misra, 2022), an open-source wrapper library around HuggingFace transformers (Wolf et al., 2020) that enables efficient extraction of word- and sentence-level probabilities from LLMs. The MLM scoring procedure of the minicons library follows the procedure originally proposed by Salazar et al. (2020). For details on sentence preprocessing, see Appendix B.

### PLL metrics

\(\mathtt{PLL\text{-}original}\). In this metric, each sentence token \(s_{t}\) of a sentence \(S\) with \(n\) tokens is consecutively replaced with a \(\mathtt{[MASK]}\) and is predicted using all past and future tokens, irrespective of whether the context tokens belong to the same or a different word than the target token. Thus, inference is conditioned on the context \(S_{\backslash t}:=(s_{1},\dots,s_{t-1},s_{t+1},\dots,s_{n})\). The final sentence score is obtained as the sum of the log probabilities of each sentence token given its context:

\[\mathrm{PLL}_{\mathrm{orig}}(S):=\sum_{t=1}^{n}\log P_{\mathrm{MLM}}(s_{t}\mid S_{\backslash t}) \tag{1}\]

\(\mathtt{PLL\text{-}word\text{-}l2r}\). In this metric, a \(\mathtt{[MASK]}\) is placed not only over the current target token (now: \(s_{w_{t}}\)), but also over all future sentence tokens that belong to the same word \(s_{w}\) as the target. Inference is then conditioned on a context that includes all preceding sentence tokens (including those belonging to the current word) and all sentence tokens from future words. The final score of a sentence \(S\) is obtained as the sum of the log probabilities of each of the \(\left|w\right|\) tokens in each of the \(\left|S\right|\) words:
The final sentence score is obtained as the sum of the log probabilities of each sentence token given its context: \[\mathtt{PLL\text{orig}}(S):=\sum_{t=1}^{n}\log P_{\mathrm{MLM}}(s_{t}\mid S_ {\backslash t}) \tag{1}\] \(\mathtt{PLL\text{-}word\text{-}12r}\). In this metric, a \(\mathtt{[MASK]}\) is placed not only over the current target token (now: \(s_{w_{t}}\)), but also over all future sentence tokens that belong to the same word \(s_{w}\) as the target. Inference is then conditioned on a context that includes all preceding sentence tokens (including those belonging to the current word) and all sentence tokens from future words. The final score of a sentence \(S\) is obtained as the sum of the log probabilities of each of the \(\left|w\right|\) tokens in each of the \(\left|S\right|\) words: Figure 2: The \(\mathtt{PLL\text{-}original}\) metric inflates scores of multi-token words, such as _souvenir_; the adjusted metrics, \(\mathtt{PLL\text{-}word\text{-}12r}\) and \(\mathtt{PLL\text{-}whole\text{-}word}\), mitigate this issue. Example generated using the \(\mathtt{bert\text{-}base\text{-}cased}\) model. \[\mathrm{PLL_{l2r}}(S):=\sum_{w=1}^{|S|}\sum_{t=1}^{|w|}\log P_{\mathrm{MLM}}(s_{w_ {t}}\mid S_{\setminus s_{w_{t^{\prime}\geq t}}}) \tag{2}\] PLL-whole-word. This metric is similar to PLL-word-l2r and differs from it only in that a [MASK] is placed over _all_ sentence tokens that belong to the same word \(s_{w}\) as the target (both preceding and future). Inference is then conditioned on a context that includes all sentence tokens except those belonging to the current word. The final score of a sentence \(S\) is obtained as the sum of the log probabilities of each of the \(|w|\) tokens in each of the \(|S|\) words in \(S\) given the token's context: \[\mathrm{PLL_{ww}}(S):=\sum_{w=1}^{|S|}\sum_{t=1}^{|w|}\log P_{\mathrm{MLM}}(s_ {w_{t}}\mid S_{\setminus s_{w}}) \tag{3}\] In Appendix G, we also report results for a PLL metric where not only future within-word tokens, but _all_ sentence tokens to the right of the target context are masked (PLL-sentence-l2r). Although this method is most similar to autoregressive LL scoring, sentence-l2r masking for BERT is known to produce poor quality generations (Wang and Cho, 2019); we therefore refrain from including this metric in the main text. ### Models We report results for bert-base-cased (and gpt2-medium for comparison) unless stated otherwise. Results for larger models are provided in Appendices D-F. ### Datasets For our main analyses, we use the EventsAdapt dataset (Kauf et al., 2022, based on Fedorenko et al., 2020). It contains a curated set of 782 syntactically simple sentence pairs that describe plausible or implausible agent-patient interactions in active or passive voice (e.g., _The traveler lost the souvenir_). Sentences in this dataset are 5-7 words long (mean: \(6.1\), std: \(1.05\)), with an average word log frequency of 10.95. We use this dataset because it Figure 3: Out of all PLL metrics, PLL-word-l2r best satisfies theoretical desiderata: **(A)** an inverse relationship between negative sentence PLL (a measure of model surprisal) and sentence length and **(B)** a positive correlation between word PLL and word log frequency. In (A), each dot is a sentence; in (B), each dot is a unique word from the dataset. Here and elsewhere, reported correlations are Pearson correlations. contains a high number of OOV words (19.6% for BERT and 40.3% for GPT-2; see also Appendix C). 
In Appendices D-F, we show that our results generalize to two larger and more diverse corpora: the Brown corpus (Francis and Kucera, 1979) and the reference sentence set from the LibriSpeech corpus (Panayotov et al., 2015). We also apply our PLL metrics to score the sentences in the Benchmark of Linguistic Minimal Pairs (BLiMP; Warstadt et al., 2020), a challenge set of 67k sentence pairs which target specific aspects of linguistic knowledge.

## 4 Evaluating PLL metric properties

### Effects of sentence length

Like Salazar et al. (2020), we expect that models should, on average, assign lower probability to longer sentences. Thus, negative PLL (which reflects model surprisal) should be positively correlated with sentence length. However, the PLL-original metric violates this expectation in our test sentence set, which shows a negative correlation between the number of tokens and negative PLL. In contrast, the PLL-word-l2r and PLL-whole-word metrics exhibit a positive correlation between the number of sentence tokens and negative PLL, just like the negative LL scores for a unidirectional model, GPT2-medium (Figure 3A).

### Effects of word frequency

An appropriate (P)LL metric should reflect the fact that LLMs are sensitive to distributional patterns in training text corpora. In particular, we expect more frequent words to have higher (P)LL scores in the absence of contextual effects. This is indeed the case for GPT2-medium; however, the score inflation for multi-token words means that the PLL-original metric grossly overestimates the scores for low-frequency words (Figure 3B). PLL-word-l2r scores restore this relationship: their correlation with word frequency is much higher than for PLL-original. PLL-whole-word also performs well, although its correlation with word frequency is lower than for PLL-word-l2r, suggesting that it excessively penalizes OOV words.

## 5 Correlation with GPT-2 scores

We expect that PLL scores for bidirectional models should be at least somewhat consistent with LL scores for unidirectional models: both metrics are designed to serve as a proxy for sentence probability. Here, we show that the GPT-2/BERT score correlation for the PLL-original metric is very low, whereas correlation scores for PLL-word-l2r and PLL-whole-word are much higher (Figure 4), indicating the validity of this metric for cross-model comparison. As in Section 4.2, PLL-word-l2r slightly outperforms PLL-whole-word, likely because it does not penalize OOV words as severely. See Appendices D-F for evidence that all three trends hold for larger models and for other datasets (although the effects in other datasets are attenuated due to a lower OOV ratio).

## 6 Effects on benchmarking

Here, we show that the choice of PLL metric affects benchmarking results for a popular, highly controlled, minimal pair linguistic benchmark: BLiMP. Despite the fact that the comparisons are highly controlled, different metrics yield different BLiMP scores. For all four tested models, PLL-word-l2r achieves the best overall BLiMP score (Table 1).

Figure 4: Correlation between bidirectional model PLL scores and unidirectional model LL scores. Each dot is a sentence. See Appendix H for detailed scores.
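For benchmarking of this kind, a minimal pair is commonly counted as correct when the metric assigns the acceptable sentence a higher score than its unacceptable counterpart; a small sketch of that accuracy computation follows, reusing any sentence-scoring function such as the PLL variants above.

```python
# Sketch of minimal-pair scoring (as commonly done for BLiMP-style benchmarks):
# a pair counts as correct when the scoring function prefers the acceptable sentence.
def minimal_pair_accuracy(pairs, score_fn):
    """pairs: list of (acceptable_sentence, unacceptable_sentence) tuples."""
    correct = sum(score_fn(good) > score_fn(bad) for good, bad in pairs)
    return correct / len(pairs)

# e.g. minimal_pair_accuracy(blimp_pairs, pll_word_l2r)
```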
Therefore, we recommend using PLL-word-l2r in future works. ### Limitations The proposed PLL-word-l2r metric has the same practical limitations as previous LL/PLL approaches. Most importantly, these scores can be influenced by many superfluous factors, such as the number of available synonyms (_computer_ vs. _laptop_; Holtzman et al., 2021). We therefore expect our method to be most useful in highly controlled minimal pair or multiple choice setups. Even more accurate metrics may emerge in the future. For instance, our approach pre-specifies the number of tokens in a word, thus limiting the space of possible alternatives. Future approaches might investigate a way to normalize the PLL score distribution over words with a varying number of tokens. Further, it would be interesting to attempt to estimate the joint probability of all tokens in a word instead of predicting them left-to-right (as in PLL-word-l2r) or without any other within-word contextual information (as in PLL-whole-word). Finally, we test our approach on English text corpora; our results might not generalize to agglutinative languages (due to a high number of tokens per word and, therefore, increased uncertainty) and are of less relevance to isolating languages (where, if enough training data are available, most word-level items can be represented as single tokens). ### Ethics Statement In our proposed metric, word tokens are masked from left to right following the writing tradition in English; however, for speakers of languages such as Arabic, a "right to left" notation would be more intuitive. Note, however, that this is primarily a denotational difference that does not affect the score itself (LLMs do not discriminate left and right, only beginning and end). We do not anticipate any specific harms that would be intrinsically associated with the techniques described in this paper. ## Acknowledgements We thank Jacob Andreas, Evan Hernandez, and the anonymous ACL reviewers for their insightful feedback. CK was supported by the K. Lisa Yang Integrative Computational Neuroscience (ICoN) Center at MIT. AI was supported by MIT Quest for Intelligence.
2302.03235
Hebbian and Gradient-based Plasticity Enables Robust Memory and Rapid Learning in RNNs
Rapidly learning from ongoing experiences and remembering past events with a flexible memory system are two core capacities of biological intelligence. While the underlying neural mechanisms are not fully understood, various evidence supports that synaptic plasticity plays a critical role in memory formation and fast learning. Inspired by these results, we equip Recurrent Neural Networks (RNNs) with plasticity rules to enable them to adapt their parameters according to ongoing experiences. In addition to the traditional local Hebbian plasticity, we propose a global, gradient-based plasticity rule, which allows the model to evolve towards its self-determined target. Our models show promising results on sequential and associative memory tasks, illustrating their ability to robustly form and retain memories. In the meantime, these models can cope with many challenging few-shot learning problems. Comparing different plasticity rules under the same framework shows that Hebbian plasticity is well-suited for several memory and associative learning tasks; however, it is outperformed by gradient-based plasticity on few-shot regression tasks which require the model to infer the underlying mapping. Code is available at https://github.com/yuvenduan/PlasticRNNs.
Yu Duan, Zhongfan Jia, Qian Li, Yi Zhong, Kaisheng Ma
2023-02-07T03:42:42Z
http://arxiv.org/abs/2302.03235v1
# Hebbian and Gradient-based Plasticity Ensables Robust Memory and Rapid Learning in RNNs ###### Abstract Rapidly learning from ongoing experiences and remembering past events with a flexible memory system are two core capacities of biological intelligence. While the underlying neural mechanisms are not fully understood, various evidence supports that synaptic plasticity plays a critical role in memory formation and fast learning. Inspired by these results, we equip Recurrent Neural Networks (RNNs) with plasticity rules to enable them to adapt their parameters according to ongoing experiences. In addition to the traditional local Hebbian plasticity, we propose a global, gradient-based plasticity rule, which allows the model to evolve towards its self-determined target. Our models show promising results on sequential and associative memory tasks, illustrating their ability to robustly form and retain memories. In the meantime, these models can cope with many challenging few-shot learning problems. Comparing different plasticity rules under the same framework shows that Hebbian plasticity is well-suited for several memory and associative learning tasks; however, it is outperformed by gradient-based plasticity on few-shot regression tasks which require the model to infer the underlying mapping. Code is available at [https://github.com/yuwenduan/PlasticRNNs](https://github.com/yuwenduan/PlasticRNNs). ## 1 Introduction Biological neural networks can dynamically adjust their synaptic weights when faced with various real-world tasks. The ability of synapses to change their strength over time is called synaptic plasticity, a critical mechanism that underlies animals' memory and learning (Abbott & Regehr, 2004; Stuchlik, 2014; Abraham et al., 2019; Magee & Grienberger, 2020). For example, synaptic plasticity is essential for memory formation and retrieval in the hippocampus (Martin et al., 2000; Neves et al., 2008; Rioult-Pedotti et al., 2000; Kim & Cho, 2017; Nabavi et al., 2014; Nakazawa et al., 2004). Furthermore, recent results show that some forms of synaptic plasticity could be induced within seconds, enabling animals to form memory quickly and do one-shot learning (Bittner et al., 2017; Magee & Grienberger, 2020; Milstein et al., 2021). To test whether plasticity rules could also aid the memory performance and few-shot learning ability in artificial models, we incorporate plasticity rules into Recurrent Neural Networks (RNNs). These plastic RNNs work like the vanilla ones, except that a learned plasticity rule would update network weights according to ongoing experiences at each time step. Historically, Hebb's rule is a classic model for long-term synaptic plasticity; it states that a synapse is strengthened when there is a positive correlation between the pre- and post-synaptic activity (Hebb, 1949). Several recent papers utilize generalized versions of Hebb's rule and apply it to Artificial Neural Networks (ANNs) in different settings (Miconi et al., 2018; Najarro & Risi, 2020; Limbacher & Legenstein, 2020; Tyulmankov et al., 2022; Rodriguez et al., 2022). With a redesigned framework, we apply RNNs with neuromodulated Hebbian plasticity to a range of memory and few-shot learning tasks. Consistent with the understanding in neuroscience (Magee & Grienberger, 2020; Martin et al., 2000; Neves et al., 2008), we find these plastic RNNs excel in memory and few-shot learning tasks. Despite being simple and elegant, classical Hebbian plasticity comes with limitations. 
In multi-layer networks, the lack of feedback signals to previous layers could impede networks' ability to configure their weights in a fine-grained manner and evolve to the desired target (Magee & Grienberger, 2020; Marblestone et al., 2016). In recent years, some authors argue that other forms of plasticity rules in the brain could produce similar effects as the back-propagation algorithm, although the underlying mechanisms are probably different (Sacramento et al., 2018; Whittington & Bogacz, 2019; Roelfsema & Holtmaat, 2018). Inspired by these results, we attempt to model the synaptic plasticity in RNNs as _self-generated_ gradient updates: at each time step, the RNN updates its parameters with a self-determined target. Allowing the RNN to generate and evolve to a customized target enables the RNN to configure its weights in a flexible and coordinated fashion. Like Hebb's rule, the proposed gradient-based plasticity rule is task-agnostic. It operates in an _unsupervised_ fashion, allowing us to compare these two plasticity rules under the same framework. In machine learning, learning a plasticity rule is one of the many _meta-learning_ approaches (Schmidhuber et al., 1997; Bengio et al., 2013). Although a diverse collection of meta-learning methods have been proposed over the years (Huisman et al., 2021), these meta-learning methods are typically built upon specific assumptions on the task structure (e.g., assume the supervising signals are explicitly given; see Sec. 2 for more detailed discussion). They thus could not be applied to arbitrary learning problems. In contrast, in our networks, the evolving direction of network parameters \(d\mathbf{W}/dt\) solely depends on the current network state, i.e., current network parameters and the activity of neurons. Since the designed plasticity rules do not rely on task-specific information (e.g., designated loss function and labels), they could be naturally applied to any learning problems as long as the input is formulated as time series. Therefore, modeling biological plasticity rules also allows us to build more general meta-learners. Our contribution can be summarized as follows. Based on previous work (Miconi et al., 2019), we formulate a framework that allows us to incorporate different plasticity rules into RNNs. In addition to the local Hebbian plasticity, we propose a novel gradient-based plasticity rule that allows the model to evolve towards self-determined targets. We show that both plasticity rules improve memory performance and enable rapid learning, suggesting that ANNs could benefit from synaptic plasticity similarly to animals. On the other hand, as computational models simulating biological plasticity, our models give insights into the roles of different forms of plasticity in animals' intelligent behaviors. We find that Hebbian plasticity is well-suited for many memory and associative learning tasks. However, the gradient-based plasticity works better in the few-shot regression task, which requires the model to infer the underlying mapping instead of learning direct associations. ## 2 Related Work **Meta-Learning.** Meta-learning, or "learning to learn", is an evolving field in ML that aims to build models that can learn from their ongoing experiences (Schmidhuber et al., 1997; Bengio et al., 2013). A surprisingly diverse set of meta-learning approaches have been proposed in recent years (Hospedales et al., 2021; Finn et al., 2017; Santoro et al., 2016; Mishra et al., 2018; Lee et al., 2019). 
In particular, one line of work proposes to meta-learn a learning rule capable of configuring network weights to adapt to different learning problems. This idea could be implemented by training an optimizer for gradient descent (Andrychowicz et al., 2016; Ravi & Larochelle, 2017), training a Hypernetwork that generates the weights of another network (Ha et al., 2017), or meta-learning a plasticity rule which allows RNNs to modify its parameters at each time step (Miconi et al., 2019; Ba et al., 2016; Miconi et al., 2018). Our method belongs to the last category. Compared to other meta-learning approaches, training plastic RNNs has some unique advantages. Plastic RNNs are general meta-learners that could learn from any sequential input. In contrast, most meta-learning methods cannot deal with arbitrary learning problems due to their assumptions about task formulation. For example, methods that utilize gradient descent in the inner loop (e.g., MAML (Finn et al., 2017), LSTM meta-learner (Ravi & Larochelle, 2017) and GD\({}^{2}\)(Andrychowicz et al., 2016)) typically assume that there exist explicit supervising signals (e.g., ground truth) and a loss function that is used to update the base learner. However, such information is often implicit in the real world (e.g., when humans do few-shot learning from natural languages (Brown et al., 2020)). In contrast, plastic RNNs are task-agnostic: they can adapt their weights in an unsupervised manner, and only a meta-objective is required for meta-training. Besides, the idea of evolving plasticity rules derives from animals, who are still the best meta-learners we have known so far. Another line of work that is closely related to our gradient-based plasticity rule is meta-learning a loss function. This idea has been applied in reinforcement learning (Houthooft et al., 2018; Oh et al., 2020; Kirsch et al., 2020) and supervised learning (Baik et al., 2021; Bechtle et al., 2021). However, as discussed above, these methods still depend on supervising signals and are thus less general compared to our methods. Moreover, our internal loss generation process is more flexibly determined by ongoing experience. The learning rule in the inner loop is also much more flexible than the usual gradient descent, as each connection has its own learning rate. **Synaptic Plasticity in ANNs.** Previous work has incorporated synaptic plasticity into ANNs in different settings. For example, Hebbian networks can be explicitly used as storage of associative memories for ANNs (Limbacher and Legenstein, 2020; Schlag et al., 2021). In addition, Hebb's rule alone can evolve random networks to do simple reinforcement learning tasks (Najarro and Risi, 2020). Miconi et al. (2019, 2018) and Ba et al. (2016) apply generalized Hebbian plasticity to RNNs and find plasticity helpful on tasks including associative learning, pattern memorization, and some simple reinforcement learning tasks. In Differentiable Plasticity (Miconi et al., 2018), the temporal moving average of outer products of pre- and post-synaptic activities is used as the plastic component of network weights. Miconi et al. (2019) extend this method by adding global neuromodulation. A recent paper further extends Hebbian plasticity with short-term dynamics (Rodriguez et al., 2022). Beyond Hebbian plasticity, some recent works explore other ways to capture plasticity in RNNs. 
For example, some authors use Fast Weight Programmers to update part of the network with a key-value mechanism (Schlag et al., 2021; Irie et al., 2021). Another line of work on key-value memory networks is also related to synaptic plasticity, where methods including gradient descent (Bartunov et al., 2020; Munkhdalai et al., 2019) and three-factor plasticity rules (Tyulmankov et al., 2021) are proposed to update the memory network. ## 3 Method ### Model Framework Following previous work on Hebbian plasticity (Miconi et al., 2018, 2019; Tyulmankov et al., 2022; Rodriguez et al., 2022), we assume the weights for plastic layers to be the sum of a static part \(\mathbf{\tilde{w}}\) and a plastic part \(\mathbf{w}\). We initialize the plastic part as \(0\) at the beginning of a trial and update the plastic part throughout the trial. We adopt a general architecture of RNNs as shown in Figure 1 (left). Both the RNN and the last linear layer are plastic. The encoder is a plastic linear layer in most tasks; the only exception is the one-shot image classification task, in which case the encoder is a non-plastic Convolutional Neural Network (CNN). The model output \(\mathbf{o}_{t}\) is the concatenation of three parts: a scalar \(\tilde{\eta}_{t}\), which modulates global plasticity by controlling how fast parameters change; a vector \(\mathbf{y}_{t}\) representing the model prediction; and an additional vector \(\mathbf{\tilde{y}}_{t}\), which allows more flexible control of weights for networks with gradient-based plasticity, as described in more detail in Sec. 3.3. We summarize the general framework for training plastic RNNs in Algorithm 1. In the inner loop, i.e., each time step of RNN, the network learns from ongoing experiences and adjusts its weights accordingly. In the outer loop, network parameters, including those that define the learning rules in the inner loop, are meta-trained with gradient descent. Conceptually, the outer loop corresponds to the natural evolution process where the biological synaptic plasticity is evolved. In our framework, the network updates its parameters in an unsupervised fashion, i.e., the computation of \(\Delta\mathbf{w}\) in Algorithm 1 does not depend on the ground truth \(\mathbf{\tilde{y}}_{t}\). The network must thus learn to adapt its parameters given only the input \(\mathbf{x}_{t}\). In some of our tasks, ground truth is given as part of the input for the model to learn the association between observations and targets (see Sec. 4). We choose not to use any external supervising signals to follow the tradition of Hebbian plasticity, which does not depend on explicit supervising signals. Our formulation is thus a more realistic setting where the model must learn to identify the supervising signals from the input. ### Hebbian Plasticity We first discuss Hebbian plasticity with global neuromodulation. Recall that we assume the weight in a plastic layer \(l\) to be the sum of a static part \(\mathbf{\tilde{w}}_{l}\) and a plastic part \(\mathbf{w}_{l}\). 
The plastic component \(\mathbf{w}_{l}(t)\) is updated at each time step according to the outer product of pre-synaptic activity \(\mathbf{p}_{l}(t)\) and post-synaptic activity \(\mathbf{q}_{l}(t)\): \[\begin{split}\mathbf{q}_{l}(t)&=\sigma_{l}\left( \mathbf{b}_{l}+(\mathbf{w}_{l}(t)+\tilde{\mathbf{w}}_{l})^{T}\mathbf{p}_{l}(t) \right),\\ \mathbf{w}_{l}(t+1)&=(1-\eta(t))\,\mathbf{w}_{l}(t) +\eta(t)\boldsymbol{\alpha}_{l}\circ(\mathbf{p}_{l}(t)\mathbf{q}_{l}^{T}(t)), \mathbf{w}_{l}(0)=\mathbf{0},\end{split} \tag{1}\] where \(\sigma_{l}\) is the activation function, \(\circ\) denotes element-wise product, and \(\boldsymbol{\alpha}_{l}\) are learnable parameters that are initialized from \(\mathcal{U}[-1,1]\). \(\boldsymbol{\alpha}_{l}\) acts as connection-specific learning rates that allow each synapse to have different learning rules (e.g., Hebbian or anti-Hebbian). Previous work on Hebbian plasticity has shown the benefit of having connection-specific plasticity over homogeneous plasticity (Miconi et al., 2018), and we found the same results in our experiments. The decay term, which is similar to the weight decay used in gradient descent algorithms, has been introduced in some recent Hebbian models (Miconi et al., 2018; Tyulmankov et al., 2022) to prevent the weight from exploding. \(\eta(t)\) is the _internal learning rate_ that controls the global plasticity, calculated as follows: \[\begin{split}\eta(t)&=\eta_{0}\times\text{Sigmoid} (\tilde{\eta}_{t})\times\min\left\{1,\frac{\text{max\_norm}}{\|\boldsymbol{ \delta}_{t}\|_{2}}\right\},\text{ where}\\ \boldsymbol{\delta}_{t}&=\text{Concat}\left(\text{Vec} (\mathbf{p}_{l}(t)\mathbf{q}_{l}^{T}(t))|l\in S\right),\end{split} \tag{2}\] \(\eta_{0}\) is a hyperparameter that controls the maximal learning rate, \(\text{Vec}(\cdot)\) denotes the vectorization of a matrix, \(\text{Concat}(\cdot)\) denotes the concatenation of a collection of vectors, and \(S\) is the set of plastic layers in the network. We scale the internal learning rate according to the norm of \(\boldsymbol{\delta}_{t}\) to prevent weights from changing too quickly. We use \(\text{max\_norm}=1\) and \(\eta_{0}=0.2\) in all experiments. Controlling the global plasticity with a self-generated signal \(\eta(t)\) is well-motivated from the biological perspective. In animals, neurotransmitters, especially dopamine, play an essential role in Figure 1: **Left:** Model Architecture (see Sec. 3.1). **Right:** Comparison of different learning rules in a linear layer without nonlinearity. The difference between the two learning rules is highlighted. See Sec. 3.2 and 3.3 for more details. modulating plasticity and consequently influence animals' memory and learning (Cohn et al., 2015; Katiuska et al., 2009; Kreitzer and Malenka, 2008; Nadim and Bucher, 2014). Theoretical models usually model neuromodulation as a global factor due to the volume transmission of neurotransmitters (Magee and Grienberger, 2020). Allowing the brain to adaptively modulate synaptic plasticity enables reward-based learning (Gu, 2002; Pignatelli and Bonci, 2015) and active control of forgetting (Berry et al., 2012; Berry, 2015). Previous work has shown the benefit of such adaptive learning rates on Hebbian plasticity (Miconi et al., 2019). In our experiments, we empirically demonstrate that neuromodulation is helpful for both Hebbian and gradient-based plasticity, validating the biological understandings from a computational perspective. 
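To make the update rule concrete, below is a minimal PyTorch sketch of Eqs. (1)-(2) for a single plastic linear layer. It is an illustration under simplifying assumptions (one layer, unbatched input, and the modulation signal \(\tilde{\eta}\) held at a constant value), not the released implementation; in the full model, \(\eta(t)\) is computed from the proposed updates of all plastic layers together with the network's own \(\tilde{\eta}_{t}\) output, and the plastic weights must stay inside the computational graph so that the static weights, the per-connection rates \(\boldsymbol{\alpha}\), and the modulation pathway can be meta-trained in the outer loop.

```python
import torch

class PlasticLinear(torch.nn.Module):
    """One plastic layer: effective weight = static part + plastic part (Eq. 1)."""
    def __init__(self, d_in, d_out, sigma=torch.tanh):
        super().__init__()
        self.w_static = torch.nn.Parameter(0.01 * torch.randn(d_in, d_out))
        self.bias = torch.nn.Parameter(torch.zeros(d_out))
        # connection-specific learning rates, initialized from U[-1, 1]
        self.alpha = torch.nn.Parameter(torch.empty(d_in, d_out).uniform_(-1, 1))
        self.sigma = sigma
        self.w_plastic = torch.zeros(d_in, d_out)  # reset to 0 at the start of each trial

    def forward(self, p):                           # p: (d_in,) pre-synaptic activity
        return self.sigma(self.bias + (self.w_static + self.w_plastic).t() @ p)

def neuromodulated_rate(eta_tilde, proposed_updates, eta0=0.2, max_norm=1.0):
    """Eq. (2): global learning rate, scaled down when the proposed update is large."""
    delta = torch.cat([u.reshape(-1) for u in proposed_updates])
    return eta0 * torch.sigmoid(eta_tilde) * min(1.0, max_norm / (delta.norm().item() + 1e-12))

# One inner-loop step for a single layer (the full model loops over all plastic layers):
layer = PlasticLinear(8, 4)
p = torch.randn(8)
q = layer(p)
eta_tilde = torch.tensor(0.0)            # in the full model, part of the output o_t
hebb = torch.outer(p, q)                 # p q^T
eta = neuromodulated_rate(eta_tilde, [hebb])
layer.w_plastic = (1 - eta) * layer.w_plastic + eta * layer.alpha * hebb   # Eq. (1)
```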
### Gradient-based Plasticity For RNNs with gradient-based plasticity, at each time step \(t\), we first calculate the _internal loss_ on the model output \(\mathbf{o}_{t}\): \[L(t)=\frac{1}{\text{dim}(\mathbf{o}_{t})}\|\mathbf{w}_{\text{out}}^{T} \mathbf{o}_{t}\|_{2}^{2}=\frac{1}{\text{dim}(\mathbf{o}_{t})}\|\mathbf{w}_{ \text{out}}^{T}\text{Concat}(\mathbf{y}_{t},\mathbf{\tilde{y}}_{t},\eta_{t}) \|_{2}^{2}, \tag{3}\] where \(\mathbf{w}_{\text{out}}\) are parameters initialized as \(1\) that are trained in the outer loop. All three components of the model output are used to calculate the internal loss. \(\mathbf{\tilde{y}_{t}}\) enables the model to meta-learn a customized internal loss function that does not only depend on model prediction. In practice, we find a four-dimensional \(\tilde{y}_{t}\) works well enough. Note that \(\mathbf{\tilde{y}_{t}}\) does not affect network dynamics in Hebbian plasticity. The internal loss term does not involve the ground truth; it can thus be viewed as a self-generated target that the model wants to optimize. We then update the plastic parameters as follows: \[\mathbf{w}_{l}(t+1)=(1-\eta(t))\mathbf{w}_{l}(t)+\eta(t)\mathbf{\alpha}_{l}\circ \frac{\partial L(t)}{\partial\mathbf{w}_{l}(t)},\mathbf{w}_{l}(0)=\mathbf{0};\] \[\mathbf{b}_{l}(t+1)=(1-\eta(t))\mathbf{b}_{l}(t)+\eta(t)\mathbf{\beta}_{l}\circ \frac{\partial L(t)}{\partial\mathbf{b}_{l}(0)},\mathbf{b}_{l}(0)=\mathbf{0}; \tag{4}\] \[\mathbf{\delta}_{t}=\text{Concat}\left(\left.\text{Vec}\left(\left.\frac{\partial L (t)}{\partial\mathbf{w}_{l}(t)}\right),\frac{\partial L(t)}{\partial\mathbf{b }_{l}(t)}\right|l\in S\right),\right.\] where \(\mathbf{\beta}\) are learnable element-wise learning rates just like \(\mathbf{\alpha}\); \(\eta(t)\) is defined the same way as in equation 2, except that \(\mathbf{\delta}_{t}\) now denotes the concatenation of _gradients_ of all plastic parameters. One difference with Hebbian plasticity is that the bias terms \(\mathbf{b}_{l}\) are also plastic and updated similarly to weights. The gradient-based plasticity rule resembles the usual gradient descent, but the connection-specific learning rates \(\mathbf{\alpha}\) and \(\mathbf{\beta}\) allow additional flexibility. Figure 1 (right) shows a conceptual comparison between Hebbian and gradient-based plasticity. The main difference is that, for gradient-based plasticity, the gradient of the post-synaptic activity replaces the activity itself, thus allowing update signals to propagate to previous layers. Two plasticity rules are particularly similar in the last linear layer, where the gradient-based plasticity rule takes the same form as Hebb's rule up to a constant scaling vector (Sec. A.1). In other words, the last linear layer will still follow Hebb's rule in a network with the proposed gradient-based plasticity. ## 4 Experiments Inspired by the hypotheses on the role of synaptic plasticity in animals (Magee and Grienberger, 2020; Martin et al., 2000; Neves et al., 2008), we conduct experiments to test the following hypotheses: 1. Plasticity helps to form and retain memory and enlarges the memory capacity. In our network, we expect the plastic weights to act as extra memory storage in addition to the hidden states of RNN. We use the copying task and the cue-reward association task to test memory capability. 2. Plasticity helps the model to learn rapidly from their experiences and observations. We test this hypothesis with one-shot image classification and few-shot regression tasks. 
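Before turning to the individual tasks, the gradient-based rule of Sec. 3.3 can likewise be made concrete with a small self-contained sketch of one inner-loop step (Eqs. 3-4). The vector reading of \(\mathbf{w}_{\text{out}}\), the use of a single plastic layer in place of the full recurrent model, and fixing \(\tilde{\eta}\) to 0 are simplifying assumptions for illustration only, not the authors' implementation.

```python
import torch

d_in, d_out, eta0, max_norm = 8, 6, 0.2, 1.0

w_static  = 0.01 * torch.randn(d_in, d_out)
alpha     = torch.empty(d_in, d_out).uniform_(-1, 1)   # per-connection learning rates
w_plastic = torch.zeros(d_in, d_out, requires_grad=True)
w_out     = torch.ones(d_out)                          # trained in the outer loop

x   = torch.randn(d_in)
o_t = torch.tanh(x @ (w_static + w_plastic))           # stand-in for the model output at step t

# Eq. (3): self-generated internal loss (no ground truth involved)
L = (w_out @ o_t).pow(2) / o_t.numel()

# Eq. (4): gradient of the internal loss w.r.t. the plastic weights
(grad_w,) = torch.autograd.grad(L, w_plastic)

# Eq. (2)-style neuromodulated learning rate, here with eta_tilde fixed to 0
eta_tilde = torch.tensor(0.0)
eta = eta0 * torch.sigmoid(eta_tilde) * min(1.0, max_norm / (grad_w.norm().item() + 1e-12))

with torch.no_grad():
    w_plastic_next = (1 - eta) * w_plastic + eta * alpha * grad_w
print(w_plastic_next.shape)
```

Compared with the Hebbian sketch above, the only structural change is that the gradient of the internal loss replaces the outer product of pre- and post-synaptic activity, which is what lets update signals reach earlier layers in a multi-layer network.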
To make fair comparisons between models, we scale the size of hidden layers of different models so that all models have approximately the same number of parameters. We use four different random seeds and average the results in all experiments. See Sec. A.2 for more implementation details. ### Copying Task We first test our models on a sequential copying task. In each trial, we generate a random sequence of length \(n\). After a delay of \(m\) steps, the model must reproduce the sequence in its original order. The total length of one trial is thus \(2n+m\). We calculate the MSE loss as the criterion for model performance. For more details, see Sec. A.3.1. We conduct two sets of experiments to compare our models with non-plastic baseline models. First, to test the ability of the models to retain memory during a delay, we set \(n=5\) and vary the number of delay steps \(m\). Second, to measure the models' sequential memory capacity, we set \(m=0\) and vary the length of the sequence to be remembered. The results of LSTM models are shown in Figure 2 (see Figure 6 for results of RNNs). Indeed, we find plastic models exhibit larger memory capacity and are able to remember the sequence after a long delay. In contrast, baseline models are typically stuck on chance performance when the delay is large (see Figure 7 for learning curves). The qualitative difference between plastic and non-plastic models shows the plastic RNNs' ability to capture long-term dependency when meta-trained with gradient descent. ### Cue-Reward Association Associative memory refers to the ability to remember the relationship between unrelated items. For animals, the dependency of associative memory on synaptic plasticity is well-documented in neuroscience literature (Morris et al., 1986; Kim & Cho, 2017; Nakazawa et al., 2004). To evaluate if plasticity also helps the formation of associative memory in artificial RNNs, we train our models to quickly associate cues with corresponding rewards. In each trial, we first sample \(n\) random cues and their corresponding rewards. We randomly choose a cue at each time step and present it to the model. The model is expected to answer the corresponding reward. During the first half of the trial, the ground truth is also given in input for the model to learn the association. Please refer to Sec. A.3.2 for more details. The result is shown in Figure 3. Plastic models quickly converge to reasonable solutions with both the RNN and LSTM backbone. The two plasticity rules perform similarly, but the gradient-based plasticity appears more compatible with the LSTM backbone. ### One-Shot Image Classification To test models' ability of rapid learning, we train our models on the one-shot image classification task, a classic benchmark used in meta-learning. Here we consider the sequential version of 5-way one-shot image classification on MiniImageNet (Vinyals et al., 2016) and CIFAR-FS (Bertinetto et al., 2019). Similar to the previous task, in each trial, the model needs to learn the association between image embedding and the corresponding class in the training stage, then infer the class of novel images in the testing stage. Following previous work, we choose a regular 4-layer CNN or ResNet-12 architecture as the image encoder (Lee et al., 2019). We describe more task details in Sec. A.3.3. Figure 2: Performance of LSTM models with different plasticity rules on the copying task. Error bars represent the SEM of four random runs. **Left:** Performance with different \(m\) when \(n=5\). 
**Right:** Performance with different \(n\) when \(m=0\). The test performance of our models is reported in Table 1. Both plasticity rules improve the performance of RNN models by a large margin, with the Hebbian plasticity performing slightly better. The learning curves (Figure 12) show that non-plastic RNNs can also overfit the training set, but the generalization gap is much larger than the plastic RNNs. These observations suggest that plasticity not only increases representation power but also provides a powerful inductive bias that inherently strengthens models' ability to learn from their environments quickly. Interestingly, even though the non-plastic LSTM consistently outperforms the non-plastic RNN, this is no longer the case if plasticity is introduced. We infer that plastic weights provide stable memory storage like the cell states in LSTMs. As a result, the original advantages of LSTM models might no longer exist. Plastic networks have comparable performance to other meta-learning methods when we limit the visual encoder to be a 4-layer CNN. However, we did not find plastic networks significantly benefit from a deep vision encoder like the recent work on few-shot image classification (Lee et al., 2019; Huisman et al., 2021). In recent years, methods that get the best results on few-shot learning benchmarks (e.g., MetaOptNet (Lee et al., 2019), COSOC (Luo et al., 2021)) are also exclusively designed for few-shot image classification. Unlike our plastic RNNs, these methods are difficult to apply to other learning problems or memory tasks. Instead of striving to get the best performance on a specific task, our goal is to build a general architecture that tackles a wide range of memory and learning problems. ### Few-Shot Regression We test our models on a regression task to further evaluate the performance of few-shot learning. In each trial, we randomly generate a mapping \(f:[-1,1]^{d}\rightarrow\mathbb{R}\), which is either a linear function or a small MLP. The model needs to learn the underlying mapping from \(K\) observations, i.e, \(K\) pairs of \((\mathbf{x}_{t},f(\mathbf{x}_{t}))\), then make predictions on \(f(\mathbf{x}_{t})\) given \(\mathbf{x}_{t}\). In our experiments, we use \(K=10\) or \(20\). In the case of \(K=10\), \(K\) is even smaller than the free parameters in \(f\), making the task more challenging. More details of the task are described in Sec. A.3.4. 
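A single trial of this task can be sketched as follows. The exact sampling distributions, MLP width, and activation are specified in the paper's Appendix A.3.4 and are not reproduced here, so the concrete choices below are placeholders.

```python
import torch

def make_regression_trial(d=4, K=10, n_query=10, use_mlp=False, hidden=8):
    """Generate one K-shot regression trial: a random mapping f on [-1, 1]^d,
    K support observations (x, f(x)), and a set of query points."""
    if use_mlp:
        w1, b1 = torch.randn(d, hidden), torch.randn(hidden)
        w2, b2 = torch.randn(hidden, 1), torch.randn(1)
        f = lambda x: torch.tanh(x @ w1 + b1) @ w2 + b2
    else:
        w, b = torch.randn(d, 1), torch.randn(1)
        f = lambda x: x @ w + b
    support_x = torch.rand(K, d) * 2 - 1        # uniform on [-1, 1]^d
    query_x   = torch.rand(n_query, d) * 2 - 1
    return support_x, f(support_x), query_x, f(query_x)

sx, sy, qx, qy = make_regression_trial(K=10, use_mlp=True)
print(sx.shape, sy.shape, qx.shape, qy.shape)
```

In the sequential formulation used here, the support pairs would be presented to the RNN one per time step, followed by the query inputs without their targets.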
\begin{table} \begin{tabular}{l c c c c} \hline \hline & \multicolumn{2}{c}{**Conv-4**} & \multicolumn{2}{c}{**ResNet-12**} \\ \cline{2-5} **Models** & **CIFAR-FS** & **miniImageNet** & **CIFAR-FS** & **miniImageNet** \\ \hline LSTM, Non-Plastic & 49.9 \(\pm\) 0.5 & 46.8 \(\pm\) 0.7 & 52.0 \(\pm\) 0.9 & 20.3 \(\pm\) 0.3 \\ LSTM, Hebbian & 50.0 \(\pm\) 0.7 & 46.6 \(\pm\) 0.8 & 49.3 \(\pm\) 1.3 & 33.2 \(\pm\) 7.5 \\ LSTM, Gradient & 50.5 \(\pm\) 0.6 & 47.0 \(\pm\) 0.3 & 50.6 \(\pm\) 1.1 & 28.1 \(\pm\) 10.4 \\ RNN, Non-Plastic & 39.9 \(\pm\) 0.8 & 44.1 \(\pm\) 0.7 & 41.5 \(\pm\) 2.4 & 42.3 \(\pm\) 0.6 \\ RNN, Hebbian & **55.5 \(\pm\) 1.0** & **49.8 \(\pm\) 0.5** & **59.6 \(\pm\) 1.5** & **50.4 \(\pm\) 1.1** \\ RNN, Gradient & 51.2 \(\pm\) 2.6 & 47.9 \(\pm\) 1.2 & 52.8 \(\pm\) 4.4 & **50.5 \(\pm\) 0.4** \\ \hline MAML (Finn et al., 2017) & 58.9 \(\pm\) 1.9 & 48.7 \(\pm\) 1.8 & - - & - \\ ProtoNet (Snell et al., 2017) & 55.5 \(\pm\) 0.7 & 53.5 \(\pm\) 0.6 & 72.2 \(\pm\) 0.7 & 59.3 \(\pm\) 0.6 \\ COSOC (Luo et al., 2021) & - & - & - & 69.3 \(\pm\) 0.5 \\ \hline \hline \end{tabular} \end{table} Table 1: Model performance (test accuracy) on the one-shot image classification task compared to other methods. 95% confidence interval is shown. Data for ProtoNet is from (Lee et al., 2019). Figure 3: Validation loss curves in the cue-reward association task. The shaded area represents SEM on four random runs. Here \(n=5\) and trial length = 20. **Left:** RNN models. **Right:** LSTM models. Model performance is shown in Table 2. Unlike previous tasks, the proposed gradient-based plasticity consistently produces the best result, although Hebbian plasticity also improves the performance in most cases. We infer that such a difference is caused by the need for inference in the few-shot regression task. In this task, the model not only needs to remember the observations but also infer the underlying mapping \(f\) from these observations, which necessitates complex calculations not required in tasks such as associative learning. Models with gradient-based plasticity are more capable of learning the underlying rules, probably because they can leverage back-propagation to optimize their circuits over the desired target. In contrast, models with Hebbian plasticity are relatively weak at evolving their network weights to reach any given target. A local learning rule like Hebb's rule might be good enough for learning direct associations. However, the lack of feedback signals to prior layers makes it hard for the whole network to evolve in a coordinated fashion. ### Analysis and Ablation Study Recall that we use the internal learning rate \(\eta(t)\) in our plastic networks to model biological neuromodulation. \(\eta(t)\) is a key quantity that controls how much information is stored and discarded in plastic weights at each time step \(t\). By tuning \(\eta(t)\) in an adaptive way, the network can effectively choose to learn from experience quickly or retain the previously learned knowledge. Indeed, as shown in Figure 4, such adaptive behavior of \(\eta(t)\) is observed in our experiments. In the copying task, the network learns to set large \(\eta(t)\) when the sequence is presented to the model. \(\eta(t)\) quickly decays to 0 during the delay, probably because the model learns to preserve the memory stored in plastic weights. In terms of task performance, we find an adaptive \(\eta(t)\), i.e., instead of setting \(\eta(t)\) to a fixed value, consistently leads to improvements for both plasticity rules. 
Similar results are also observed in other tasks (Figure 10). These results suggest that neuromodulation is the key to a flexible and robust plasticity-based memory system. We conduct additional ablation studies on the cue-reward association task; the detailed figures are shown in Sec. A.4. We find plastic RNNs consistently outperform non-plastic ones when larger or smaller learning rates are used (Figure 8), and our default learning rate is appropriate for plastic and non-plastic models. We find the maximal learning rate \(\eta_{0}\) to be a key hyperparameter; we \begin{table} \begin{tabular}{l c c c c} \hline \hline & \multicolumn{2}{c}{**Linear**} & \multicolumn{2}{c}{**MLP**} \\ \cline{2-5} **Models** & \(K=10\) & \(K=20\) & \(K=10\) & \(K=20\) \\ \hline LSTM, Non-Plastic &.605 \(\pm\).002 &.402 \(\pm\).010 &.241 \(\pm\).043 &.087 \(\pm\).006 \\ LSTM, Hebbian &.446 \(\pm\).003 &.229 \(\pm\).001 &.282 \(\pm\).013 &.112 \(\pm\).002 \\ LSTM, Gradient & **.300 \(\pm\).002** & **.107 \(\pm\).001** & **.142 \(\pm\).001** & **.036 \(\pm\).001** \\ RNN, Non-Plastic &.605 \(\pm\).002 &.469 \(\pm\).001 &.465 \(\pm\).015 &.383 \(\pm\).007 \\ RNN, Hebbian &.378 \(\pm\).059 &.200 \(\pm\).042 &.165 \(\pm\).001 &.077 \(\pm\).028 \\ RNN, Gradient & **.301 \(\pm\).001** & **.108 \(\pm\).001** & **.142 \(\pm\).001** & **.036 \(\pm\).001** \\ \hline \hline \end{tabular} \end{table} Table 2: Test error on the \(K\)-shot regression task. 95% confidence interval is shown. Figure 4: **Left:** Dynamics of \(\eta(t)\) in the copying task, we average the result across the 6400 trials in the test set. The shaded area reflects the SEM of models with different random seeds. LSTM models are shown. The sequence length \(n=20\) and delay \(m=40\). **Middle and Right:** Learning curves of different models in the copying task. Here we compare using an adaptive internal learning rate (“modulated”) and using a fixed one (”non-modulated”). **Middle:** models with Hebbian plasticity. **Right:** models with gradient-based plasticity. set \(\eta_{0}=0.2\) for our main experiments to balance model capability and training stability (Figure 5 left, also see Figure 9). We show that randomly-initialized connection-specific learning rates \(\mathbf{\alpha}\) consistently work better than a fixed global learning rate (Figure 5 middle, also see Figure 11). For the gradient-based plasticity, we find that using \(\tilde{\mathbf{y}}_{t}\) improves performance, but \(\dim(\tilde{\mathbf{y}}_{t})=4\) works good enough (Figure 5 right). In addition, on the copying task, when \(n=5\) and \(m=40\), we find that setting \(\text{max\_norm}=100\) instead of \(1\) causes gradient explosion (so that the training loss becomes nan) in 3 out of 4 random runs, illustrating the necessity of tuning down \(\eta(t)\) when the norm of change \(\|\delta_{t}\|\) exceeds an appropriate threshold (equation 2). ## 5 Discussion In this work, we draw inspiration from biological synaptic plasticity and propose to incorporate different forms of plasticity into RNNs. We highlight the advantage of neuromodulated plasticity by comparing plastic RNNs against non-plastic ones on a range of challenging memory and few-shot learning tasks. In resonance with hypotheses from neuroscience (Magee & Griendberger, 2020; Martin et al., 2000; Neves et al., 2008), we show that adopting plasticity in RNNs improves their memory performance and helps them to learn from observations quickly. 
Moreover, we go beyond the traditional Hebbian plasticity and design a novel gradient-based plasticity where the model can flexibly adapt its weights with a self-generated target. Our experiments illustrate the feasibility of training RNNs capable of doing gradient updates where both the learning rule and the internal loss function are meta-trained. By comparing two plasticity rules under the same framework, we find both of them have pros and cons. The classical Hebbian plasticity, which is computationally more efficient, proved sufficient for robust memory storage and enabled the network to do simple forms of learning. However, results on the few-shot regression task show an example of how networks could benefit from non-local learning rules where error signals are propagated to prior layers. Just like different forms of plasticity rules have been found in different brain regions accountable for different cognitive functions (Magee & Griendberger, 2020), we believe that different plasticity rules are suitable for different tasks in ANNs. For neuroscientists, our work also provides a computational framework that can potentially offer insights into the functions of different plasticity rules in the brain, which is still challenging to directly test in animal experiments (Neves et al., 2008; Magee & Griendberger, 2020). Despite the promising results, training plastic RNNs with the current deep learning paradigm comes with challenges. Plastic weights are intrinsically unstable, and methods that stabilize the model (e.g., clip weights, normalization) could also limit the model's capability. In addition, training plastic RNNs with back-propagation requires the plastic weight at each step to be stored in the computational graph, causing extensive memory usage on GPUs. As a result, applying plastic RNNs in more challenging settings (e.g., large-scale language modeling) takes more engineering effort. We hope our work can encourage future researchers to further improve the engineering framework, explore other designs of plasticity rules, and investigate the benefit of synaptic plasticity in an even more comprehensive range of tasks. Figure 5: Ablation study on the cue-reward association task, gradient-based plasticity is used for all three panels. **Left:** Effect of \(\eta_{0}\) on final validation error, note that \(\eta_{0}=0\) means no plasticity. **Middle:** Effect of connection-specific learning rates \(\mathbf{\alpha}\) on validation loss curve. **Right:** Effect of \(\dim(\tilde{\mathbf{y}}_{t})\) on final validation error. ## Acknowledgements We are thankful to Liyuan Wang and Yudi Xie for their helpful feedback. This work was supported by a NSF of China Project (Research on the power supply of implanted neural dust clusters at the sub-neural cell scale, 041302027).
2308.16105
Advanced Deep Regression Models for Forecasting Time Series Oil Production
Global oil demand is rapidly increasing and is expected to reach 106.3 million barrels per day by 2040. Thus, it is vital for hydrocarbon extraction industries to forecast their production to optimize their operations and avoid losses. Big companies have realized that exploiting the power of deep learning (DL) and the massive amount of data from various oil wells for this purpose can save a lot of operational costs and reduce unwanted environmental impacts. In this direction, researchers have proposed models using conventional machine learning (ML) techniques for oil production forecasting. However, these techniques are inappropriate for this problem as they can not capture historical patterns found in time series data, resulting in inaccurate predictions. This research aims to overcome these issues by developing advanced data-driven regression models using sequential convolutions and long short-term memory (LSTM) units. Exhaustive analyses are conducted to select the optimal sequence length, model hyperparameters, and cross-well dataset formation to build highly generalized robust models. A comprehensive experimental study on Volve oilfield data validates the proposed models. It reveals that the LSTM-based sequence learning model can predict oil production better than the 1-D convolutional neural network (CNN) with mean absolute error (MAE) and R2 score of 111.16 and 0.98, respectively. It is also found that the LSTM-based model performs better than all the existing state-of-the-art solutions and achieves a 37% improvement compared to a standard linear regression, which is considered the baseline model in this work.
Siavash Hosseini, Thangarajah Akilan
2023-08-30T15:54:06Z
http://arxiv.org/abs/2308.16105v1
# Advanced Deep Regression Models for Forecasting Time Series Oil Production ###### Abstract Global oil demand is rapidly increasing and is expected to reach 106.3 million barrels per day by 2040. Thus, it is vital for hydrocarbon extraction industries to forecast their production to optimize their operations and avoid losses. Big companies have realized that exploiting the power of deep learning (DL) and the massive amount of data from various oil wells for this purpose can save a lot of operational costs and reduce unwanted environmental impacts. In this direction, researchers have proposed models using conventional machine learning (ML) techniques for oil production forecasting. However, these techniques are inappropriate for this problem as they can not capture historical patterns found in time series data, resulting in inaccurate predictions. This research aims to overcome these issues by developing advanced data-driven regression models using sequential convolutions and long short-term memory (LSTM) units. Exhaustive analyses are conducted to select the optimal sequence length, model hyperparameters, and cross-well dataset formation to build highly generalized robust models. A comprehensive experimental study on Volve oilfield data validates the proposed models. It reveals that the LSTM-based sequence learning model can predict oil production better than the 1-D convolutional neural network (CNN) with mean absolute error (MAE) and \(R^{2}\) score of 111.16 and 0.98, respectively. It is also found that the LSTM-based model performs better than all the existing state-of-the-art solutions and achieves a \(37\%\) improvement compared to a standard linear regression, which is considered the baseline model in this work. 1-D CNN, Volve oilfield, LSTM, Deep learning, time series forecasting. ## I Introduction The 18th century marked the first profound industrial revolution that predominantly exploited steam power replacing animal labor. Since then, there has been rapid development in industrial operations [1]. Now, the world has come to the brink of the fifth industrial revolution, a.k.a. industry 5.0, where smart systems are built to perform complex tasks more efficiently by leveraging advanced technologies, such as big data, high-performance computing (HPC) platforms, and data-driven analytics [2, 3]. Thus, industries are increasingly striving to create new and efficient methods of production by utilizing the capabilities of artificial intelligence (AI). These advanced technologies offer a wide range of potential benefits, including increased automation, improved decision-making, and enhanced ability to process and analyze large amounts of data. Hence, the DNNs have become a cornerstone of several industrial operations, including accurate prediction or concept classification of operational conditions, aiming at smart control, real-time fault detection, and maintenance. For instance, in the oil and gas industry, intelligent assistive tools (IATs) for production forecasting based on readily accessible parameters is crucial for economic assessment and gain. Nevertheless, it is a challenging task due to (i) the complexity of the environmental and geographical subsurface conditions, (ii) the non-linear relationship between production volume and petro-physical parameters, such as permeability and density, and (iii) the shortage of curated data availability. Therefore, despite technological advancement, hydrocarbon production analysis remains an active research field. 
It urges the research community to develop reliable and precise predictive models. Such models should provide more comprehension of the ongoing production, resulting in efficient operation, and informed decision-making and management. This work pragmatically develops two deep-learning models using 1-D CNN and LSTM to forecasting oil production. The main contributions of this work are summarized as follows. * Comprehensive study of data pre-processing, viz. handling missing values, data scaling, and feature selection based on petrochemical industrial expertise. * Systematic analysis of time series data for optimal sequence generation. * Hyper-parameter optimization by investigating the most effective model parameters. * Generalized model development. * Exhaustive ablation study and comparative analysis to validate the proposed models' performances. It is worth mentioning that these highlighted contributions are fully or partially missing in the existing works conducted by other researchers on the same data collected from Volve oil field. Thus, this study aims to bridge the main research gaps and propose new strategies to improve production forecasting in the hydrocarbon industries. The rest of this paper is organized as follows. Section II reviews important relevant works, Section III elaborates on the proposed models, and Section IV presents the methodology, Finally, Section V and Section VI provide an overall summary and conclusion of the paper with future directions, respectively. ## II Related Works This section overviews the existing related works under two categories: the general application of ML models in hydrocarbon industries for purposes other than production forecasting - Section II-A and the ML models developed exclusively for production forecasting - Section II-B. ### _Adaptation of ML In Hydrocarbon Industries_ In the modern era, industries want to explore more pathways for cost saving, increasing productivity, and enhancing safety in the operational environment. Thus, engineers and scientists develop various IATs to facilitate industries in achieving their desired goals. For example, in recent years, for solving several problems related to the oil and gas industries, researchers have explored a combination of the nature-inspired meta-heuristic algorithm (MA) and ML techniques. These algorithms are found to have robust performances and converge to the global optimum solution [4, 5, 6]. On the other hand, Alakeev _et al._ implemented a recurrent neural network (RNN)-based model along with convolutional neural networks (CNNs) to simulate reservoir behavior [7]. In addition to reservoir engineering, some research works focused on applying ML for drilling and construction engineering in the petrochemical industry. For instance, Syed _et al._[8] investigated ML models to predict lift selection, assess the wells' performance, and to classify them as "Good" or "Bad" wells based on their life-cycle cost (LCC). Similarly, Adedigba _et al._[9] conducted research touching upon risk assessment of drilling operations using a Bayesian tree augmented Naive Bayes (TAN). They developed this model to predict time-dependent blowout risk based on the current status of the key drilling parameters in real-time. Hence, it is intended for informed decision-making to avoid preventable workplace accidents and enhance the safety of drilling operations. 
Furthermore, Ozbayoglu _et al._ proposed an artificial neural network (ANN)-based model to estimate flow rate and velocity of pipe rotation for real-time drilling optimization and automation [10]. ### _ML for Forecasting Hydrocarbon Production_ ML-driven data analytics- a branch of science taking advantage of advanced statistical and neural network techniques- is used to realize and unearth insights and trends in large-scale datasets. It can be potentially exploited to drive meaningful information from hydrocarbon well's raw data aiming at increasing production efficiency and maximizing the profit of petrochemical industries. For instance, Bao _et al._[11] investigated the performance of RNN combined with an ensemble Kalman filter (EnKF) for predicting production to assist reservoir characterization and development. They verified their model on synthetic historical production data, rather than real data collected from an oil field. On the contrary, some researchers attempted in developing production forecasting models using actual data. For example, Zanjani _et al._[12] developed multiple algorithms based on ANNs, support vector machine (SVM), and linear regression (LR) for production forecasting using well-specific information. Their results on well NO 159 F-1 C in the Volve oil field show that the ANN-based model performs better than the other two algorithms. Since they focus on single well-specific model development, their approach is not scalable. Wang _et al._[13] conducted a study using machine learning to predict future production. In this research, a machine learning algorithm called the random forest ensemble was implemented to predict time-lapse oil saturation profiles. The algorithm was optimized using feature selection based on feature importance scores and Pearson correlation coefficients in combination with geophysical domain knowledge. The workflow was demonstrated using data from a structurally complex, heterogeneous, and heavily faulted offshore reservoir and was able to predict future time-lapse oil saturation profiles with high accuracy, as measured by over 90% R-square. This approach is notable because it does not require input parameters derived from cores, petrophysical logs, or seismic data and incorporates production data, which is an essential reflection of dynamic reservoir properties and is typically the most frequently and reliably measured quantity throughout the life of a field. Li _et al._ conducted a study for pressure prediction by combining the physics of well's behavior and deep learning (DL) models. Gated Recurrent Unit (GRU) and LSTM models were implemented to compare their results with the RNN model. Results showed that GRU and LSTM performed better compared to RNN. Well NO 15/9-F-1 C from April 2014 to April 2016 was used as a testing profile [14]. Masina _et al._ studied automated declined curve analysis (DCA) using AI to predict production rate. Their results showed that the DCA Fig. 1: An illustration of the phases involved in building the advanced deep learning-based oil production forecasting models. It subsumes several operations, including data gathering, data curation, model configuration, model training, and model evaluation. method is able to predict the desired output with a goodness of fit of 0.82 on the test set [15]. 
Zhang _et al._[16] proposed a method for detecting and locating leaks in liquid pipelines, which combines inverse hydraulic-thermodynamic transient analysis with an improved version of the particle swarm optimization (PSO) algorithm. The finite volume method is used to solve the continuity, momentum, and energy equations numerically. Four different algorithms were tested to determine the best-performing version of the improved PSO algorithm, and the results were evaluated based on accuracy, stability, robustness, and false alarm rate. The SIPSO algorithm was found to be the most effective. The proposed method was applied to two oil pipelines in real-world scenarios, one during a field opening experiment and the other during a leak incident. The method was able to accurately estimate the location, coefficient, and starting time of the leaks with low relative errors. Noshi _et al._ explored the potential application of Machine Learning algorithms in production prediction. They took advantage of the AdaBoost technique for production prediction. Mean absolute error (MAE) was used as an error metric to show the method's performance. Six features that affect production prediction, including: on stream hours, average choke size, bore oil volume, bore gas volume, bore water volume, and finally, average wellhead pressure were used as input parameters [17]. Panja _et al._ carried out a study to predict hydrocarbon production from hydraulically fractured wells. Two common types of ML models, namely the Least Square Support Vector Machine (LSSVM) and the Artificial Neural Networks (ANN) were analyzed and compared to the traditional curve fitting method known as Response Surface Model (RSM) using second-order polynomial equations to determine production rate [18]. Wui Ng _et al._ studied LSTM model for Volve oilfield production forecasting. In the mentioned work, only well NO 15/9-F-14 H were used for both the training and testing process [19]. This paper used an incorrect strategy in methodology. Specifically, the correlation between the production of oil and gas was found to be equal to 1, but the authors used gas as the input in their network and oil as the output. A more appropriate approach would have been to remove gas from the input variables in order to avoid this issue as it is highly correlated with the output and will result in a bias in the predictions made by the network. One of the classical methods which was used for hydrocarbon production forecasting in recent decades is decline curve analysis (DCA). This method was initiated by Arps _et al._ (1945), and then oil and gas companies adopted this method and its specific applications in related industries [20]. As a result of its simple development, it has been broadly used in various situations [21]. For forecasting hydrocarbon production, numerical reservoir simulation (NRS) can be used as an alternative to DCA. Yet, the performance of the NRS method depends on how historical matching (HM) has been done [22]. Also, NRS requires several features, comprising well locations, fluid properties, geological data, etc. To forecast production more accurately, the simulation model should be updated via HM when new real-time data emerges. As a result, this method has apparent limitations [19]. Data-driven modeling has become a viable option for hydrocarbon production forecasting with the advancement of data analytics and computing technology. 
Not only is this method easy to implement, but it also captures the intricate relationships between inputs and outputs. Utilizing machine learning (ML) has led to notable advancement in the oil and gas industry, especially in reservoir engineering [19]. However, the hydrocarbon industries face challenges in accomplishing this, as the existing conventional tools are not generalized and robust enough. In recent decades, AI and DL-based innovative solutions have emerged to improve the efficiency of operations in industries. This research has been carried out, which aims to propose two models based on convolutional neural network (CNN) and long short-term memory (LSTM) for hydrocarbon production. The following section (Methodology) elaborates on all of the stages that contribute to the production forecasting of the Volve oilfield data set. ## III Methodology In the last few decades, there has been a significant amount of focus on improving the architecture of DNNs. This attention is due to the fact that DNNs have been effective in solving a wide range of practical problems that arise in various industries. One of the most significant advantages of using these networks is their high capability to learn non-linear relationships regardless of the type of data [4, 23, 24, 25, 26, 27, 28, 29]. As a result, there is a growing interest among researchers to optimize the structure of DNNs. This has involved the exploitation of different topologies, such as skip connections, the application of various techniques to reduce the number of trainable parameters, and novel fast retraining to fine-tune pre-trained DNNs [30]. In this direction, this work advances the regression models for hydrocarbon production forecasting by exploiting the power of the learning capability of the DNNs, particularly in time-series data analysis. This work adopts a modeling technique where it progressively designs and develops predictive models from a baseline least squares-based linear regression to advanced deep learning-based regression models. Fig. 1 illustrates the phases involved in the building of the advanced deep learning-based solutions. The following subsections elaborate on the proposed solutions in a step-by-step manner. ### _Base Model: A Standard Linear Regressor_ Linear regression (LR) estimates the linear relationship between different explanatory attributes and a dependant variable of given data samples (cf. 1) by minimizing an objective function, say the sum of the squares, i.e., the distance between each predicted and actual value of the dependent variable is squared and then summed up for all training samples. Due to its well-established mathematical foundation and easy training procedure, LR is widely accepted as a baseline model for various regression problems in several fields, including engineering, biomedical, behavioral, and social sciences, and business. The standard linear regression model can be defined as in (1). \[y=\alpha_{1}x_{1}+\alpha_{2}x_{2}+...+\alpha_{n}x_{n}+\beta, \tag{1}\] where \(y\), \(\alpha_{i}\), \(x_{i}\), and \(\beta\) stand for the dependent variable (output), coefficient of the \(i\)th input attribute, \(i\)th input attribute, and bias, respectively. The coefficients are optimized by minimizing the total sum of squares (SST) defined in (2), which is the aggregation of the sum of squares (SSE), \(\sum_{i=1}^{n}(y_{i}-\widehat{y}_{i})^{2}\) and the sum of squares (SSR), \(\sum_{i=1}^{n}(\widehat{y}_{i}-\overline{y})^{2}\). 
The coefficients are optimized by minimizing the error sum of squares (SSE), \(\sum_{i=1}^{n}(y_{i}-\widehat{y_{i}})^{2}\); the total sum of squares (SST) defined in (2) decomposes into this SSE and the regression sum of squares (SSR), \(\sum_{i=1}^{n}(\widehat{y_{i}}-\overline{y})^{2}\). \[\sum_{i=1}^{n}(y_{i}-\overline{y})^{2}=\sum_{i=1}^{n}(y_{i}-\widehat{y_{i}})^{2}+\sum_{i=1}^{n}(\widehat{y_{i}}-\overline{y})^{2}, \tag{2}\] where \(y_{i}\) stands for the real value of observation \(i\), \(\overline{y}\) is the mean value of the dependent variable \(y\) over all \(n\) observations, and \(\widehat{y_{i}}\) denotes the predicted value of the dependent variable for the \(i\)th observation's input attributes. ### _1-D CNN-based Regressor_ CNNs have been known for their robustness and have become the de facto standard in a wide range of computer vision tasks [30, 31, 32]. One unique feature that makes CNNs efficient for supervised learning is the spatial-local connectivity that allows layers to share parameters [33]. Feature extraction in CNNs relies heavily on the convolution (Conv) layers, which perform convolution operations on the input data or input feature map(s) using pre-configured kernels, i.e., feature detectors, as defined in (3). This Conv operation generates a volume of learned feature maps. In this work, since the input is 1-D sequential data, the input Conv layer receives a 1-D input sequence, \(x(n)\in\mathbb{R}^{1\times 6}\). A convolution between the kernel \(w(n)\) and the input then generates a feature map \(z(n)\), as defined in (3) [34, 35]. \[z(n)=x(n)*w(n)=\sum_{m=-k}^{k}x(m)\cdot w(n-m), \tag{3}\] where \(k\) and '\(*\)' denote the kernel size and the Conv operation, respectively. The hyper-parameters, such as the number of hidden layers, kernel size (K), number of filters (F), sub-sampling factor, and the type of activation function used in each layer, determine the structure of the 1-D CNN model. In this work, the proposed 1-D CNN regressor's structure, its layer connectivity, and the hyperparameter setting are given in Fig. 2 and Table I. ### _LSTM-based Regressor_ LSTM networks are an improved version of recurrent neural networks (RNNs), introduced mainly to better handle long- and short-term dependencies in sequential data. LSTM-based models have proven to be the state of the art in several time series analysis tasks, viz. stock market prediction [36], moving object detection [32] and speech recognition [37]. The recurrent connections and memory mechanisms play a vital role in enabling LSTMs to retain significant past observations. Fig. 2: An illustration of the sequential regression model using a 1-D CNN. It subsumes input sequence learning using 1-D Conv, subsampling using max pooling, and regression output generation via a densely connected subnetwork, where K, F, P, and S stand for conv kernel size, the number of filters, padding, and stride rate, respectively. By taking advantage of the input gate, forget gate, and output gate operations defined in (4) - (8), the LSTM module shown in Fig. 3 can add new useful information to the memory and omit information that is no longer important to preserve. \[i_{t}=\sigma(W_{xi}*X_{t}+W_{hi}*H_{t-1}+b_{i}), \tag{4}\] \[f_{t}=\sigma(W_{xf}*X_{t}+W_{hf}*H_{t-1}+b_{f}), \tag{5}\] \[o_{t}=\sigma(W_{xo}*X_{t}+W_{ho}*H_{t-1}+b_{o}), \tag{6}\] \[C_{t}=f_{t}\circ C_{t-1}+i_{t}\circ\tanh(W_{xc}*X_{t}+W_{hc}*H_{t-1}+b_{c}), \tag{7}\] \[H_{t}=o_{t}\circ\tanh(C_{t}), \tag{8}\] where \(X_{t}\) is an input from the time-series data, \(C_{t}\) is the cell state, \(H_{t}\) is the hidden state, and \(i_{t}\), \(f_{t}\), and \(o_{t}\) are the gates of the LSTM module at timestamp \(t\). Here, \(W\), '\(*\)', '\(\circ\)', and \(\sigma\) denote the Conv kernels specific to the gates or the internal states, the Conv operator, the Hadamard product, i.e., element-wise matrix multiplication, and the hard sigmoid activation function, respectively.
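To make the two architectures concrete, the following is a minimal Keras sketch of both regressors. The exact layer widths, filter counts, and kernel sizes reported in Table I and Table II are not reproduced in this text, so the values below, as well as the sequence length of five and the six input attributes, are illustrative assumptions rather than the authors' exact configuration.

```python
# Minimal Keras sketches of the 1-D CNN and LSTM regressors; layer sizes are
# illustrative assumptions (sequence length 5, six input attributes, one output).
import tensorflow as tf
from tensorflow.keras import layers, models

SEQ_LEN, N_FEATURES = 5, 6

def build_cnn_regressor():
    # 1-D CNN: stacked Conv1D feature extraction, max pooling, dense head.
    return models.Sequential([
        layers.Input(shape=(SEQ_LEN, N_FEATURES)),
        layers.Conv1D(filters=64, kernel_size=2, padding="same", activation="relu"),
        layers.Conv1D(filters=32, kernel_size=2, padding="same", activation="relu"),
        layers.Conv1D(filters=16, kernel_size=2, padding="same", activation="relu"),
        layers.MaxPooling1D(pool_size=2),
        layers.Flatten(),
        layers.Dense(1),                      # single regression output (oil volume)
    ])

def build_lstm_regressor():
    # LSTM: recurrent sequence encoder followed by a dense regression head.
    return models.Sequential([
        layers.Input(shape=(SEQ_LEN, N_FEATURES)),
        layers.LSTM(64, return_sequences=False),
        layers.Dense(32, activation="relu"),
        layers.Dense(1),
    ])

model = build_lstm_regressor()
model.compile(optimizer="adam", loss="mae")
model.summary()
```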
While Fig. 4 illustrates the general idea of how LSTMs can be applied to a sequence-based regression problem, Table II provides the layer connectivity details of the proposed LSTM-based regressor. ## IV Experimental analysis ### _Environment_ This work exploits Google Colaboratory cloud resources with one Tesla T4 graphics processing unit, 12 Gigabytes of RAM, and 2 CPU cores for training and evaluation. The models are built using the Python programming language and pre-built libraries such as NumPy, Pandas, Matplotlib, Seaborn, and the open-source deep learning library TensorFlow. ### _Exploratory Data Analysis_ #### Data source This work uses the Volve oil field database for the experimental study. The Volve oil field is located in the central part of the North Sea at 2750 - 3120 m depth. The field was discovered in 1993, and the drilling process started in May 2007 [19]. After about 8.5 years of production, the Volve oil field was decommissioned in 2016 [17]. In May 2018, Equinor released the Volve database publicly for research and development purposes [38]. The database includes different categories of data collected from various operations, but this study focuses on real-field production data. The production data subsumes information gathered from seven wells (five producers and two injectors) as summarized in Table III, namely NO 15/9-F-1 C, NO 15/9-F-11 H, NO 15/9-F-12 H, NO 15/9-F-14 H, NO 15/9-F-15 D, NO 15/9-F-4 AH, and NO 15/9-F-5 AH, where 15/9-F-4 and 15/9-F-5 are the two injectors. #### Attributes Table IV lists the attributes of the data collected from the seven wells stated in Table III. To understand the nature of each attribute, the attributes are visualized using trend plots wrt duration. For example, Fig. 6 visualizes the attributes of well no. 14 (NO 15/9-F-14 H). Similarly, the attributes' basic statistical information is also analyzed for each well. For instance, Table V provides the statistical information: mean, standard deviation, minimum value, 1st quartile, median, 3rd quartile, and maximum value of each attribute of well no. 14. Fig. 4: An overview of applying LSTM for sequence learning. Here, a time series with a sequence length of \(t\) is input to the LSTM subnetwork. The learnt representation from the LSTM subnetwork is then forwarded to the densely connected regression sub-network that generates the final output. Fig. 3: An illustration of a standard LSTM cell with three gates that control information flow, where \(\mathbf{X}_{t}\), \(\mathbf{C}_{t}\), and \(\mathbf{H}_{t}\) are the input quantity from the time-series data, the cell state, and the hidden state, respectively, at timestamp \(t\). #### Handling missing values Handling missing values plays a crucial role in data-driven model building. The Volve oil field database contains missing values in certain attributes. For example, one can find from the statistical information of well no. 15/9-F-14 H summarized in Table V that there are data samples with missing values in the attributes ADP, ADT, ADPT, ACP, AAP, AWP, AWT, and DPC. To resolve this, it is important to understand the information distribution of the attributes across all the samples. In this case, the distribution is observed using boxplots as shown in Fig. 5.
From these plots, it is clear that the data is highly skewed, so the best approach for missing-value imputation is to replace each missing value with the median value of the respective attribute. #### Feature selection In order to gain a better comprehension of the data, correlations among all of the attributes are calculated and visualized using the heat map shown in Fig. 7. It is worth mentioning that the correlations of ADP with ADT and ADPT are equal to 0.97 and 0.95, respectively; therefore, ADP is removed from the input variable list. Moreover, the correlation between produced gas and oil is equal to 1, which reflects the fact that gas is produced along with the oil flow. ### _Data Preprocessing_ #### Data scaling Data scaling is a critical preprocessing step, which yields substantial performance gains in time-series analysis [25, 39, 40]. In this work, the standard scaler was applied as the normalization method. This method normalizes the data by subtracting the mean and dividing by the standard deviation, as given in (9). It should be noted that when analyzing the test set, predicted target values are re-scaled to the original range. \[x_{scaled}=\frac{x-\mu(x)}{\sigma(x)}, \tag{9}\] where \(x\), \(\mu(x)\), and \(\sigma(x)\) are the raw input, the sample mean, and the standard deviation, respectively. #### Dataset curation The existing works on the Volve oil field production database are oil well-specific models; thus, they are not generalized solutions for all the wells. To address this, we curate mutually exclusive training and test datasets that comprise data samples from the five production-related wells listed in Table III. In this regard, after generating sequential data samples with a sequence length of \(t\), the oil well-specific sequences are divided into \(70:30\) non-overlapping training and test datasets. To build generalized models, a global train set and a global test set are formed, respectively, by amalgamating all well-specific training sets and test sets. The resulting global sets contain 6286 and 2694 sequential data samples in the training and testing sets, respectively. This strategy makes the proposed models more accurate, robust, and generalized across all the oil wells compared to the existing solutions. ### _Evaluation Metrics_ This work uses MAE and R-squared (the \(R^{2}\) score), defined in (10) and (11), to measure the precision of the oil production forecasts by the proposed models. \[MAE=\frac{\sum_{i=1}^{n}|y_{i}-\hat{y_{i}}|}{n}, \tag{10}\] \[R^{2}=1-\frac{\sum_{i=1}^{n}(y_{i}-\hat{y_{i}})^{2}}{\sum_{i=1}^{n}(y_{i}- \overline{y})^{2}}, \tag{11}\] where \(y_{i}\), \(\hat{y_{i}}\) and \(\bar{y}\) stand for the actual, predicted, and average values of the target attribute, respectively, and \(n\) denotes the total number of data points. ### _Hyper-parameter Tuning_ Hyperparameter tuning is a meta-optimization task, which plays a vital role in the realm of DL. Effective hyperparameter adjustments have been shown to improve predictive models' performance [41]. In this case, two important aspects, namely the input sequence length and the layer configuration of the proposed model, are considered for hyper-parameter tuning. #### Optimizing sequence length In time-series analysis, selecting the optimal sequence length highly influences the prediction's precision [32, 42, 43]. It requires strong domain knowledge to comprehend the impact of previous data points in the data sequence.
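To make the preprocessing and dataset curation steps described above concrete, the sketch below applies median imputation, the standard scaling of (9), and sliding-window sequence generation with sequence length \(t\). The DataFrame and column names are placeholders (the real attribute set follows Table IV), and the exact windowing used by the authors may differ in detail.

```python
# Sketch of the curation pipeline: median imputation, standard scaling as in (9),
# and sliding-window sequence generation. `df`, `feature_cols`, and `target_col`
# are placeholders for the Volve production table and its attribute names.
import numpy as np
import pandas as pd
from sklearn.preprocessing import StandardScaler

def make_sequences(df: pd.DataFrame, feature_cols, target_col, t=5):
    df = df.fillna(df.median(numeric_only=True))               # median imputation
    scaler = StandardScaler()
    features = scaler.fit_transform(df[feature_cols].values)   # (x - mu) / sigma
    targets = df[target_col].values                            # targets can be scaled
                                                               # analogously and inverse-
                                                               # transformed after prediction
    X, y = [], []
    for i in range(len(df) - t):
        X.append(features[i:i + t])        # window of t consecutive records
        y.append(targets[i + t])           # next record's production as the label
    return np.asarray(X), np.asarray(y)

# Per-well sequences are then split 70:30 (non-overlapping) and concatenated
# across wells to form the global training and test sets.
```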
In this work, for the proposed 1-D CNN-based and LSTM-based oil production forecasting models, various sequence lengths (ranging from three to eight) are considered in selecting the optimal sequence length. Fig. 8 shows the influence of the sequence length on the proposed models' performance in terms of MAE. From these analyses, it is evident that when the sequence length is increased from three to five, the error in the forecast gradually decreases, but beyond that range, the error increases. As a result, a sequence length of five is chosen as the optimal value. #### Optimizing layer configuration To finalize an optimal model wrt the number of hidden layers and neurons, several sanity analyses are conducted. For example, Table VI summarizes the performances of ten different architectural configurations that led us to finalize the optimal model elaborated in Section III-C and Table II. It is clear from this table that the best-performing models (i.e., with the lowest MAE and the highest R-squared) are model 3 and model 4 for the LSTM-based and 1-D CNN-based regressors, respectively. For more information about the optimized LSTM network, Table II summarizes the model's connectivity pattern with the respective layer details. Furthermore, the best performance for the 1-D CNN network was achieved by model number 4, which includes three convolution layers, one max pooling layer, one flatten layer, and one dense layer; Table I summarizes the details of the proposed CNN-based model. ### _Model complexity analysis_ In deep learning, the process of passing a single input through a model to produce an output is called an inference. It is important to know the inference time of a model in advance because it allows researchers to design and optimize the model for better performance. Figure 5: Visualizing the data distribution of the key attributes with respect to each well listed in Table III. Fig. 6: Trend plot visualization of the attributes OSH, ADP, ADT, ADPT, AAP, ACP, AWP, AWT, DPC, O, G and W for well no. 15/9-F-14 H. To measure the inference time, the total number of computations performed by the model must be calculated. One way to do this is by using a measure called floating point operations (FLOPs), which counts the number of operations involving floating-point values in the model. The inference time can then be estimated by dividing the number of FLOPs by the number of FLOPs that the CPU can perform in a given amount of time. ## V Overall Analysis ### _Research Gap In the Existing Works_ In the existing works, there is a lack of experiments conducted on the Volve oil field production datasets; we believe this is because the dataset became public quite recently, in 2018. In addition, all the existing works focus on well-specific model development. Such models are not applicable to forecasting oil production in other wells. Moreover, the existing works do not report their performance on all relevant evaluation metrics (MAE and the R\({}^{2}\) score). In contrast, this work develops a generalized model, applicable to all the oil production wells in the Volve oil field, and the performance of the models is evaluated using both MAE and the R\({}^{2}\) score.
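For reference, the two evaluation metrics in (10) and (11) can be computed directly, as in the short sketch below.

```python
# Direct implementations of MAE (10) and the R-squared score (11).
import numpy as np

def mae(y_true, y_pred):
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    return np.mean(np.abs(y_true - y_pred))

def r_squared(y_true, y_pred):
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    return 1.0 - ss_res / ss_tot
```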
### _Quantitative Analysis_ A thorough comparative analysis against the existing works is presented in Table VII, where the standard linear regression described in Section III-A is considered as the baseline. When compared to this baseline, the best existing work, the conventional neural network-based solution proposed by Chahar _et al._ [44], achieves an \(8.5\%\) improvement, while the proposed LSTM-based and 1-D CNN-based models provide significant improvements of \(37\%\) and \(14\%\), respectively. In addition to its superior performance, the complexity of the proposed LSTM-based regressor is \(\simeq 45\%\) lower than that of the CNN-based counterpart in terms of the number of trainable parameters (cf. Table I, Table II and Table VI). Moreover, it requires only about \(2\%\) of the computations of the CNN-based counterpart in terms of FLOPs. Therefore, the proposed LSTM-based model is a more resource-friendly and efficient solution. The holistic analysis suggests that the LSTM-based oil production forecasting model is a robust, generalized, and reliable solution. As can be seen from Table VI, LSTM models 2 and 3 have similar MAE and FLOPs. Model 2 is preferable for resource-limited computational platforms, while model 3 is a better solution when prediction precision is paramount for the optimal operation of oil production forecasting. Fig. 8: Analysing the impact of the input sequence length on oil production forecasting with respect to the evaluation metric (MAE). Fig. 7: Heatmap of linear correlations between the different features. ### _Qualitative Analysis_ Figures 9 and 10 compare the proposed models' oil production forecasts with the ground truths of the respective test sequences wrt the five production wells in the Volve oil field. From the plots of the predicted and actual values, one can observe that the proposed models' forecasts are very close to the actual values. This is further verified by the quantitative comparisons given in Table VII. ## VI Conclusion In this study, an LSTM-based model and a 1-D CNN-based model were proposed for time series production forecasting of the Volve oilfield. To determine the appropriate sequence length in the time series data analysis, a comprehensive investigation was conducted at the outset. After the best model topology was chosen based on the hyper-parameter tuning procedure, it was found that the LSTM-based model performed better than the 1-D CNN model, as demonstrated by the MAE, R-squared, and complexity metrics. From an applied perspective, since data from all of the wells was used in the training and testing of the models, they can be generalized to the other existing wells. However, it is important to note that the generalizability of the models has not been investigated in this paper and is a topic for future research. In addition, another potential direction for future research in oil production forecasting using deep neural networks could be the integration of additional data sources. For example, incorporating data on well activity, drilling plans, and geological information could potentially improve the accuracy of forecasts.
2308.01479
Investigating Reinforcement Learning for Communication Strategies in a Task-Initiative Setting
Many conversational domains require the system to present nuanced information to users. Such systems must follow up what they say to address clarification questions and repair misunderstandings. In this work, we explore this interactive strategy in a referential communication task. Using simulation, we analyze the communication trade-offs between initial presentation and subsequent followup as a function of user clarification strategy, and compare the performance of several baseline strategies to policies derived by reinforcement learning. We find surprising advantages to coherence-based representations of dialogue strategy, which bring minimal data requirements, explainable choices, and strong audit capabilities, but incur little loss in predicted outcomes across a wide range of user models.
Baber Khalid, Matthew Stone
2023-08-03T00:10:23Z
http://arxiv.org/abs/2308.01479v1
# Investigating Reinforcement Learning for Communication Strategies in a Task-Initiative Setting ###### Abstract Many conversational domains require the system to present nuanced information to users. Such systems must follow up what they say to address clarification questions and repair misunderstandings. In this work, we explore this interactive strategy in a referential communication task. Using simulation, we analyze the communication trade-offs between initial presentation and subsequent followup as a function of user clarification strategy, and compare the performance of several baseline strategies to policies derived by reinforcement learning. We find surprising advantages to coherence-based representations of dialogue strategy, which bring minimal data requirements, explainable choices, and strong audit capabilities, but incur little loss in predicted outcomes across a wide range of user models. ## 1 Introduction Task-oriented dialogue systems have robust policies to make sure the system correctly captures user-specified parameters [6], but task-oriented interactions can also include points where the system's contributions are essential, such as information presentation [12], constraint satisfaction [6], and real-world coordination [1]. At such points, task success will typically require that the system work across turns to make sure that its contributions become common ground with users. To achieve common ground, the system may need to draw inferences about what the user understands based on what the user says and does [27], and act preemptively to resolve misunderstanding. At the same time, the system can expect users to work collaboratively to confirm their own understanding [4]. When they do so, the system must be able to play its part in users' grounding strategies. In fact, fielded systems rarely have such abilities--they typically cannot answer users' clarification questions, for example.
These findings are in line with Clark and colleagues' [5] principle of least _collaborative_ effort--that human speakers' and audiences' strategies are simple and mutually responsive, rather than systematically optimized. We are curious to explore this possibility in a wider range of conversational domains of practical interest. ## 2 Related Work There have been several efforts in the dialogue research literature to model initiative in situated conversations. In one influential prototype [1], a receptionist system manages conversation initiative to interact with different customers in a situated conversation. Other conversational systems in task-oriented settings address information queries [3] and collaborative problem solving [8]. In such systems, task initiative is generally defined as requiring systems to guide the conversation so that the user specifies the task-specific parameters according to the system's expectations, and contrasted with mixed-initiative systems where the user specifies parameters more freely.
User-initiated clarifications are not on the table, as confirmed by a survey of task-oriented conversational systems [14], which reveals that fielded systems are generally incapable of answering clarifications by the user. There is also work which aims to understand what role mixed initiative plays in human-human interaction; the goal is to understand the dynamics of human communication to help in building better conversation models. However, these efforts do not model the complex dynamics between the speakers involved in the conversation [7; 29; 9]. Reinforcement learning (RL) has also been used to optimize communication policies over handcrafted baselines [15; 17; 20]. For example, RL has been shown to enable adaptive and user-centric policies for initial information presentation [11]. But such work has not addressed the rephrasing required to respond to user-initiated clarification. ## 3 Problem Statement Here we first provide a summary of the task and the architecture of our dialogue system. We then lay out the basic building blocks of how the model of system behavior induces a learning problem for communicating with a user. We then summarize the mechanisms behind the simulation models and the reinforcement learning frameworks we utilize to solve the resulting optimization problem. ### Colors in Context We use an established referential communication task [22] to test the performance of our director model. The task involves showing two participants, a director and a matcher, a set of three color patches \(x_{1}\), \(x_{2}\), \(x_{3}\) in different permutations. The director knows which of the three patches is the target and has to identify the target to the matcher. The conversation data is collected in English through a text chat interface. A task example is shown in Figure 1. The human-human conversation data is collected in three task difficulties: i) _far_, ii) _split_, iii) _close_. The _far_ condition is the easiest, since the color patches in this case generally come from different color categories. The _split_ condition has two color patches which look similar, while all color patches look similar in the _close_ condition, which makes it the hardest. Subjects sometimes find it hard to identify the target color patch in a single turn, and the matcher regularly makes use of clarification questions to resolve any ambiguities in the director's explanations. Overall, human matchers are successful in selecting the correct target \(\sim 90\%\) of the time. Around \(\sim 97\%\) of the human conversations do not have clarifications: the matcher selects the target just using the description in the first turn. Most of the other conversations conclude after a clarification question and a single director response. This suggests that human directors are quite successful both in their initial descriptions and in their followup utterances. ### Director Communication Strategy Analyzing human-human conversations reveals a range of communication strategies that human directors use to describe the target color patch. For example, human directors sometimes make use of parallel descriptions for each color patch and then specify the target by repeating one description. However, human description strategies can be formalized as a sequence of descriptions of individual referents. Consequently, we structure RL to learn a composition of different color patch descriptions so it will consider complex human-like communication strategies. We explain this in detail in Section 4.
Figure 1: Figure shows two example interactions from the CIC dataset. ### Generating Color Patch Descriptions We use an existing cognitive model of color descriptions [19] to generate color descriptions. The model is based on the crowd-sourced collection of descriptions of color patches curated by Randall Munroe. The model offers several psychologically plausible methods for effectively describing color patches which output a probability distribution \(P(w_{t}|x_{t},C)\), where \(x_{t}\) is the target referent, \(C\) is the context consisting of non-target color patches and \(w_{t}\) is the color patch description at time \(t\). We utilize the conservative speaker to generate color descriptions (its output is most reliable) and use the expectation maximization model to estimate the user's understanding (its inferences are most human-like). ### Approximating a State Posterior We utilize the coherence approach [13] to model dialogue state. Each new utterance \(w_{t}\) is first translated into a logical form, which is obtained using a domain-specific NLU module. In this case, the module is a parser for a domain-specific probabilistic context-free grammar (P-CFG). The logical form is used to update the context state, represented as a knowledge graph of coherence relations. The logical form adds a new node to the knowledge graph through coherence-based attachment and is resolved in context through the use of a cognitive model which translates it to a probability distribution \(P(x_{i}|w_{t})\). Since the logical form for an utterance \(w_{t}\) attaches to a node representing a previous utterance \(w_{t-1}\), a posterior \(P(x_{i}|w_{t},w_{t-1},...,w_{1})\) can be obtained which summarizes all the contributions in a chain of attachments \(w_{t},w_{t-1},...,w_{1}\). ## 4 Reinforcement Learning Setup We use the deep Q-learning algorithm (DQN) as the RL approach and specify all its necessary components here: ### State Vector As stated earlier, each new speaker contribution is attached into a knowledge graph of discourse relations to obtain an updated context representation. A probability distribution over the color patches \(x_{i}\) is approximated using the cognitive models [19]. We serialize this knowledge graph into a vector to represent the state \(s_{t}\) for the RL algorithm. Our state vector \(s_{t}\) is given as: \[s_{t}=\{P(x_{i}|w_{1},..,w_{t})\ \forall\ i;P(x_{target}|w_{1},..,w_{t});a_{1};a_{2};...;a_{n};d_{min};d_{max};d_{avg};l_{conv};pt\}\] where \(a_{i}\in A\) (the set of matcher and director actions) indicates whether the \(i\)th action has occurred previously, \(d_{*}\) indicates the relevant distances between the color patches, \(l_{conv}\) indicates the conversation length so far, and \(pt\) is a flag indicating whether the previous speaker was the matcher or the director. \(pt\) is used by the RL model as an indicator of whether the director is continuing its turn. ### Director Actions Our analysis reveals that human directors use a variety of creative communication strategies. The three most common strategies employed by human directors are i) \(a\); ii) \(a\&a*\); iii) \(a\sim b*\); where \(a\) and \(a*\) represent two different descriptions for the target color patch and \(b*\) represents the description for the distractor closest to the target. These strategies can be created through composition of basic color descriptions about target or distractor color patches.
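As a concrete illustration of the chained posterior \(P(x_{i}|w_{t},...,w_{1})\) introduced above, the sketch below combines per-utterance distributions multiplicatively and renormalizes. This independence-style combination is a simplification of the coherence-based attachment described in the paper, so it should be read as illustrative rather than as the exact update used in the system.

```python
# Minimal sketch: chain per-utterance distributions over the three color patches
# into a posterior, treating each attached contribution as independent evidence
# (a simplifying assumption relative to the coherence-based attachment).
import numpy as np

def chain_posterior(per_utterance_dists):
    """per_utterance_dists: list of length-3 arrays, one per attached utterance."""
    posterior = np.ones(3)
    for p in per_utterance_dists:
        posterior *= np.asarray(p)
    return posterior / posterior.sum()

# e.g. an initial description followed by a follow-up that rules out patch 3:
print(chain_posterior([[0.6, 0.3, 0.1], [0.5, 0.45, 0.05]]))
```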
We structure RL such that the director agent keeps making decisions until it makes an _end turn_ decision, so it can learn to compose color patch descriptions. The _left_ side of Table 1 shows the basic color descriptions the RL director can choose from. Actions 3 and 5 on the _left_ side of Table 1 are only used as a response to a matcher clarification. Composing actions 1 and 2 results in \(a\&\sim(b|c)\) (called the extended referential strategy), which may be helpful in a _close_ difficulty case since it contrasts the target with the distractors, providing an additional signal for the matcher. Similarly, composing strategies 1 and 4 results in \(a\&\sim b\), which is one of the strategies human directors utilize and is a relaxed version of the extended strategy. ### Reward Function Drawing insights from the Paradise paradigm [28], we want the director to effectively describe the color patch in as few turns as possible. So, we formulate the reward function such that each new director term earns a penalty, each task success earns a large reward, and a failure earns a smaller (negative) one. \begin{table} \begin{tabular}{l l l l} & Basic Director Strategies & & & Definition \\ \hline 1 & a & Strategy & Definition \\ 2 & \(\sim\) b or c & Direct & \(a\) \\ 3 & Affirm a Clarification Term & Extended & \(a\&\sim(b|c)\) \\ 4 & Negate the Color Patch Closest to the Target & Mixed & Use _extended_ in close cases, \\ 5 & Negate a Clarification Term & & otherwise use _direct_. \\ 6 & End Turn & & \\ \end{tabular} \end{table} Table 1: i) The left table shows the basic description strategies used by the RL director to curate a target description. ii) The right table shows the logical forms for the different director baseline policies. This formulation summarizes our reward function: \[reward=r_{outcome}+(r_{term}*term\_count) \tag{1}\] where \(r_{outcome}\) specifies the reward for the task outcome, \(r_{term}\) specifies the penalty for each new color description the director model uses, and \(term\_count\) is the number of color descriptions used by the director. ### Matcher Simulation To train a conversational agent, it needs to interact with a companion so it can try out communication strategies and get the reward feedback it can learn from. Since interacting with humans is too expensive, this is accomplished through human simulations [26; 24; 25; 16; 23]. For the analysis of the learned RL policy we use two matcher simulations: * the matcher always selects the color patch most likely to be the target. * given a threshold, the matcher asks clarifications if the probability for the most-likely target is less than the threshold. Our analysis reveals that humans only ask clarifications around 3% of the time. In addition, humans tend to ask clarifications about the two most-likely color patches most of the time. For this reason, we opt for the use of clarifications about the two most-likely color patches in the matcher simulation. To adjust the rate of clarifications by the matcher simulation, we set the select-action threshold to 95% such that this holds true. Since human interactions are noisy, we also specify a small clarification error rate of 10%. The matcher asks problematic clarifications at this rate, and these provide no signal regarding the matcher's understanding of the conversation context. This allows DQN to learn a conversation policy in a noisy setting.
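The threshold-based matcher simulation just described can be sketched as follows. The function name and return format are illustrative assumptions, but the decision rule (select above the threshold, otherwise clarify about the two most-likely patches, with a small rate of uninformative clarifications) follows the description above.

```python
# Sketch of the matcher simulation: select the most likely patch when its
# posterior exceeds the threshold; otherwise ask a clarification about the two
# most-likely patches, which with a small error rate is uninformative.
import numpy as np

rng = np.random.default_rng(0)

def matcher_act(posterior, select_threshold=0.95, clarification_error=0.10):
    posterior = np.asarray(posterior)
    if posterior.max() >= select_threshold:
        return ("select", int(posterior.argmax()))
    if rng.random() < clarification_error:
        return ("clarify", None)            # problematic, uninformative question
    top_two = np.argsort(posterior)[-2:][::-1]
    return ("clarify", tuple(int(i) for i in top_two))

print(matcher_act([0.55, 0.35, 0.10]))      # -> a clarification about patches 0 and 1
```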
### DQN Formulation We make use of the deep Q-learning (DQN) algorithm [21] to train the RL agent. The algorithm uses a policy network \(Q^{p}_{\theta}\) and a target network \(Q^{\prime}_{\theta}\) to approximate the current and the future expected Q-values, respectively. Q-values for a given state and action are approximated using these two networks to compute the difference \(\delta\): \[\delta=Q^{p}_{\theta}(s_{t},a_{t})-(r(s_{t},a_{t})+\gamma max_{a^{\prime}\in A _{dir}}Q^{\prime}_{\theta}(s_{t+1},a^{\prime})) \tag{2}\] where \(A_{dir}\) is the action set for the director, \(\gamma\) is the discount factor, and \(r(s_{t},a_{t})\) is the reward for the action \(a_{t}\) in the state \(s_{t}\). The Adam optimizer is used to optimize the weights of \(Q^{p}_{\theta}\) and \(Q^{\prime}_{\theta}\) such that \(\delta^{2}\) is minimized. Similar to the traditional DQN, we make use of an experience replay memory to construct a dataset of state-action transitions and compute \(\delta\) using mini-batches sampled from this replay memory. ## 5 Baseline Formulation and Analysis Relying on the insights drawn from human conversations, we formulate three director baseline policies. The formal notation for the director policies is shown on the _right_ side of Table 1. * the first is a basic director which tries to identify the target without utilizing the distractor information in the task context--we call this the _direct baseline_ policy. * the second policy resembles a director which is always extra careful and tries to provide extensive information to identify the target color patch--we call this the _extended director_ policy. * the third baseline involves using the extended policy for the _close_ condition and using the basic policy the rest of the time--we call this the _mixed director_ policy. Our analysis reveals that answering matcher clarifications bridges the performance gap between the direct and extended strategies. This means that the RL model will be able to learn interesting communication strategies for the matcher who always selects the target given a description. ### Effect of Clarifications To discern the room for flexible and context-sensitive director communication strategies, we conduct a study on how changes in the threshold for the _select_ action of the matcher affect the task outcome. At _Left_ in Figure 2, we show the effect of this change on task success. We find that the different strategies show a difference in their success rates when the user does not ask clarifications (i.e., the threshold for the select action is low). However, clarifications by the matcher diminish this difference, which indicates that a rational matcher with the ability to clarify in case of ambiguity can make use of multiple ambiguous descriptions to arrive at the right answer. This also shows that the RL agent will have the most room to learn trade-offs when it is dealing with a matcher who does not ask clarifications. ### Tuning Noise for a Realistic Setting Human-human conversations show a success rate of \(\sim 90\%\), and human matchers use clarifications in a minority of the cases (\(\sim 3\%\)). As shown at _left_ in Figure 2, even when the threshold for the select action is low, the task success rate for the direct baseline strategy is \(\sim 92\%\). One of the reasons for this high success rate is that there is no noise in the way we are evaluating probability distributions. Human actions in the real world are noisy, so we use two noise-inducing methods in our matcher simulations.
**Noise Induction in Select Action**: We use a temperature-based noise-inducing parameter \(\tau\) to perform a noisy softmax operation on the matcher's probability distribution and induce noise at the time of selection [18]. We call this the _noisy finger_ method. Let \(p\) represent the probability distribution \(P(x_{i}|w_{t},..,w_{1})\); then the noisy distribution \(p_{\tau}\) can be formulated as: \[p_{\tau}=softmax(\tau*p) \tag{3}\] At _Right_ in Figure 2, we show the effect of using the \(\tau\)-based noisy distribution on the success rates of both the direct baseline and extended communication strategies. This method is successful for inducing noise because the temperature parameter \(\tau\) affects the highest probability, that of the target, disproportionately. **Noise Induction in Semantic Interpretation**: To induce noise in the semantic interpretation of a director contribution, we use a parameter \(\alpha\) to sample a distribution from the gamma distribution obtained using the product of the parameter \(\alpha\) and the probability distribution \(p\) [10]. Since the extended strategy involves the composition of multiple contributions, the noise is added to each probability distribution individually before obtaining the posterior. Thus, this operation affects the extended and direct baseline strategies differently. At _Left_ in Figure 3, we show the impact of varying the parameter \(\alpha\) on the success rates of the direct baseline and extended strategies. It reveals that, by combining information from multiple descriptions, the extended strategy is able to outperform the direct baseline strategy across the board. To make this concrete, for the noise parameter \(\alpha\approx 0.05\), where the extended strategy achieves a success rate of \(\sim 90\%\), the baseline (direct) strategy is only able to achieve a success rate of \(\sim 75\%\). Figure 2: i) _Left_ figure shows the disadvantage of using ambiguous communication strategies vanishes as the user asks clarifications. ii) _Right_ figure shows the effect of parameter \(\tau\) on the success rate for the direct baseline and extended strategies. Since the noise is added right before the select action, each strategy is affected equally. ### Communication Strategy Choice Analysis Since the extended strategy shows an improvement over the direct baseline, we further analyze this improvement to better understand the impact of different communication strategies. Using the analysis presented above, we adjust the noise-inducing parameters to \(\tau=4.5\) and \(\alpha=0.15\) such that each induces half of a realistic (human-human) error rate. Our analysis of the direct baseline and extended strategies, shown at _Right_ in Figure 3, highlights that most of the performance gains occur in the _close_ task setting when utilizing the extended strategy. This suggests that RL should be able to learn strategies which improve the success rate for the _close_ condition.
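For concreteness, the two noise-induction methods used to calibrate this setting can be sketched as below. The noisy-finger method is the temperature-scaled softmax of (3); the semantic-noise method is written here as sampling gamma variates with shapes \(\alpha\cdot p\) and renormalizing, which is one plausible reading of the description above rather than a verified reproduction of the implementation.

```python
# Sketches of the two noise-induction methods: the noisy-finger softmax of (3)
# and a gamma-based perturbation of a per-utterance distribution (our reading
# of the alpha-based semantic noise; treat it as an assumption).
import numpy as np

rng = np.random.default_rng(0)

def noisy_finger(p, tau=4.5):
    z = tau * np.asarray(p)
    e = np.exp(z - z.max())            # numerically stable softmax
    return e / e.sum()

def semantic_noise(p, alpha=0.15):
    g = rng.gamma(shape=alpha * np.asarray(p), scale=1.0)
    return g / g.sum()

p = np.array([0.7, 0.2, 0.1])
print(noisy_finger(p), semantic_noise(p))
```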
### Reward Function Analysis The extended director strategy, though very thorough in structure, requires more effort from the director, whereas the direct strategy does not utilize the external context information effectively. Our expectation for a DQN-based model is that it will learn a balance between some variations of the direct and extended communication strategies. However, since DQN policies get their signal from a reward function, we conduct a reward-space analysis of the three director strategies specified above to identify the parameters for the reward function specified in (1) that will allow DQN to learn a flexible policy. At _Left_ in Figure 4, we show the reward for the three hand-crafted director policies as a function of the penalty for the \(term\_count\) when \(r_{success}=1.0\) and \(r_{failure}=-0.8\). Since we know the number of conversations for each difficulty, we specify the \(term\_count\) to be the average number of terms used by a director utilizing each policy. As depicted at _left_ in Figure 4, there is a region in the space where the mixed strategy achieves better performance in terms of the reward. Choosing a term penalty in this region will allow the DQN to learn a flexible policy. Figure 3: i) _Left_ figure shows that \(\alpha\) impacts both strategies differently. We can see that the extended strategy outperforms the direct baseline across the board because it accumulates the contributions of multiple descriptions. ii) _Right_ figure shows that the performance gain for the extended strategy is observed in the _close_ task difficulty setting when \(\tau=4.5\) and \(\alpha=0.15\). ## 6 Analysis of a DQN Based Director Strategy In this section we first present the parameters we specified for the DQN learning process and then conduct a comparative analysis of the hand-crafted communication strategies and those learned by the DQN algorithm. Our analysis suggests that DQN is able to learn a flexible communication strategy which outperforms the extended hand-crafted strategy in terms of the reward when interacting with the always-selecting user, but does not offer an advantage in terms of the success rate. When interacting with the clarifying matcher, DQN learns a variation of the direct baseline strategy, which is in line with our expectations, since answering clarifications bridges the performance gap between communication strategies. In our experiments, \(Q_{\theta}^{p}\) and \(Q_{\theta}^{\prime}\) are represented using a 2-layered dense network with a ReLU activation in between. The learning rate is \(10^{-2}\) for the setting where the matcher model always selects the target and \(7.5\times 10^{-5}\) for the setting where the matcher model asks clarifications when appropriate. Following from the reward analysis presented above, we choose the penalty for additional color descriptions as \(r_{term}=-0.025\). The noise parameters are \(\tau=4.5\) and \(\alpha=0.15\) to make the conversation setting realistic. We used 5000 CIC task contexts (sets of three color patches) from the training set to generate simulation data for the experience replay memory used by the DQN algorithm. To test the policy we measure the success rate and the average reward over 1000 CIC contexts from the test set. ### Policy when the Matcher Always Selects In this setting the DQN model learns to describe the target color patch at the start of the conversation. The model proceeds to provide an additional description for the target if the probability of the target given the description is below a threshold of \(\sim 84\%\). When the target posterior probability is above this threshold, the DQN model proceeds to end the turn. Figure 4: i) _Left_ figure shows the reward as a function of the term-use penalty when the task success reward is \(1.0\) and the task failure reward is \(-0.8\).
The _Right_ panel of Figure 4 shows a visualization of the policy learned by the DQN model. The DQN policy outperforms the hand-crafted strategies by a slight margin in terms of the earned average reward, which is in line with our reward-space analysis presented above. A comparison of the learned policy and the hand-crafted director policies is shown in Table 2. The DQN policy outperforms the direct policy in terms of the success rate but fails to outperform the extended strategy.

### Policy when Matcher Clarifies Ambiguities

As described in Section 4.4, we specify additional parameters to tune the rate of clarifications and induce noise in the clarification questions. When interacting with this simulation, the DQN model learns a variation of the direct policy, which indicates that it understands that clarifications diminish the advantage of using extended descriptions. The learned policy in this case has the following characteristics:

* the DQN provides a target color patch description in the first turn.
* in case of a clarification, the director responds with one of the terms the matcher used to describe the target, or by negating both of the distractor color patches. This shows that the DQN agent understands it might have to re-describe the target if the probability distribution indicates that the question is not referring to the target.

The DQN policy achieves a success rate of 95.9% with an average reward of 0.901, whereas the direct policy achieves a success rate of 95.8% with a reward of 0.899.

## 7 Discussion and Conclusive Remarks

We present a detailed analysis of the trade-offs involved in trying to learn a director model using a coherence-based decision-theoretic approach in a referential communication setting. The coherence-based state tracking approach outlined in [13], coupled with RL, is able to successfully learn flexible and context-sensitive communication strategies. However, our analysis reveals that a director which can answer clarification questions to resolve matcher ambiguities can bridge the performance gap between brief and detailed communication strategies. For these reasons, using RL-based techniques to learn context-specific communication policies is not practical when a simple director policy can get the job done. A detailed reward-space analysis as presented above can help identify the utility of an RL approach. In our evaluation it is revealed that the strategies learned by the RL director and those crafted through analysis of human-human conversations are very similar. Any effect induced by these strategies with human subjects would be too small to measure reliably with feasible experiment sizes, so we do not conduct human evaluations.

\begin{table} \begin{tabular}{c|c c} Strategy & Success Rate & Reward \\ \hline DQN & 95.5\% & 0.891 \\ Direct & 94.7\% & 0.880 \\ Extended & 97.8\% & 0.874 \\ \end{tabular} \end{table} Table 2: This table presents a performance comparison between the hand-crafted and learned director policies.

### Future Work and Conclusion

One of the possible directions this work can go is to explore how our findings hold up in other domains, e.g. slot-filling domains like restaurant booking [2].
We hypothesize that, since the job of a director is to guide the user to fulfill a certain task and to answer any clarifications regarding task-specific parameters reliably, our insights should carry over to those domains. However, we suspect this to be true only for situations where the initiative is held by the system. Many conversation scenarios could be mixed-initiative, such that both the system and the user hold key pieces of information needed to complete a given task. In such a scenario, a model has to be able to answer clarifications reliably as well as clarify ambiguities. An example of such a scenario could be a conversation system deployed in a disaster-control domain, where the job of the system is to guide workers to help victims given the available information about the disaster site. In such a case the system will need to update its understanding based on the new findings workers report about the disaster site, e.g. that a pile of rubble requires machinery to clear. This requires two-way communication about the world state and so involves different trade-offs.

Most dialogue research involves conversation models performing a reactive role where a user specifies the necessary parameters of the interaction, as in e.g. a movie recommendation task. However, as these interfaces become more familiar and powerful, people will utilize them for more complex tasks, e.g. asking an automated agent to book a flight for them over the phone. This requires the agent to take initiative in its interactions and to assess the uncertainty in the user's state so that it is able to answer clarifications effectively. In this paper, we present the challenges and trade-offs encountered when trying to learn a communication policy in an environment where the system holds the initiative. Our findings suggest that systems can get away with simple descriptions as long as they are able to answer clarification questions from the user effectively. In addition, we find that empirical exploration of the reward and action space is able to highlight the possible trade-offs and the practicality of using RL.
2304.01974
Dialogue-Contextualized Re-ranking for Medical History-Taking
AI-driven medical history-taking is an important component in symptom checking, automated patient intake, triage, and other AI virtual care applications. As history-taking is extremely varied, machine learning models require a significant amount of data to train. To overcome this challenge, existing systems are developed using indirect data or expert knowledge. This leads to a training-inference gap as models are trained on different kinds of data than what they observe at inference time. In this work, we present a two-stage re-ranking approach that helps close the training-inference gap by re-ranking the first-stage question candidates using a dialogue-contextualized model. For this, we propose a new model, global re-ranker, which cross-encodes the dialogue with all questions simultaneously, and compare it with several existing neural baselines. We test both transformer and S4-based language model backbones. We find that relative to the expert system, the best performance is achieved by our proposed global re-ranker with a transformer backbone, resulting in a 30% higher normalized discount cumulative gain (nDCG) and a 77% higher mean average precision (mAP).
Jian Zhu, Ilya Valmianski, Anitha Kannan
2023-04-04T17:31:32Z
http://arxiv.org/abs/2304.01974v1
# Dialogue-Contextualized Re-ranking for Medical History-Taking ###### Abstract AI-driven medical history-taking is an important component in symptom checking, automated patient intake, triage, and other AI virtual care applications. As history-taking is extremely varied, machine learning models require a significant amount of data to train. To overcome this challenge, existing systems are developed using indirect data or expert knowledge. This leads to a training-inference gap as models are trained on different kinds of data than what they observe at inference time. In this work, we present a two-stage re-ranking approach that helps close the training-inference gap by re-ranking the first-stage question candidates using a dialogue-contextualized model. For this, we propose a new model, global re-ranker, which cross-encodes the dialogue with all questions simultaneously, and compare it with several existing neural baselines. We test both transformer and S4-based language model backbones. We find that relative to the expert system, the best performance is achieved by our proposed global re-ranker with a transformer backbone, resulting in a 30% higher normalized discount cumulative gain (nDCG) and a 77% higher mean average precision (mAP). As part of this work, we also release pre-trained checkpoints for bi-directional and autoregressive S4 models trained on Wikipedia and PubMed data. ## 1 Introduction History taking is a critical component of a medical encounter [15]. It involves collecting relevant patient-reported information such as presenting symptoms, patient concerns as well as the past medical, psychological and social history. This information forms the basis of subsequent patient triage, diagnosis, and care planning. While history taking is an important component of the medical encounter, it is also one of the most time-consuming components [37; 6] and when done incompletely can lead to triage, diagnostic, and treatment errors [15]. Creating tools for automating portions of history taking has been an on-going effort for more than five decades [45]. The simplest of such tools are static pre-visit questionnaires that are now used widely in US healthcare. However, static questionnaires tend to be long, ask not very relevant questions, and are not customized to patients' needs. More recently, there has been work on building intelligent systems that can adjust questions based on patient responses (see [41; 9] and citations therein). However, developing the medical reasoning necessary for these systems is difficult. Existing approaches include using reinforcement learning with simulated patients [38; 19], supervised learning on clinical notes [41], and expert systems [9]. In all of the previous works, the medical reasoning system was built on top of data that was a proxy for real doctor-patient interactions. This is because, on the one hand, there is little available data consisting of doctor-patient history-taking dialogue, on the other hand, the space of possible questions asked during history-taking is very large. Thus, training a history-taking model requires significant amounts of labeled interaction data. However, this data is more readily available from indirect sources such as medical notes, expert knowledge, or simulations. This creates a training-inference gap: the data that is used to train the model is not fully representative of the data that the model sees at inference time. 
This training-inference gap has a significant impact on the quality of history taking, especially since only a few questions can be asked in a given encounter. This calls for an approach that reconciles the difficulty of supporting a large set of potential questions to a small set of pertinent questions attuned to the patient's health issue, with only a small amount of direct training data. In this paper, we start with an expert system and show how to use a relatively small amount of real doctor-patient dialogue data to close this training-inference gap. We take inspiration from the information retrieval literature where "retrieve and re-rank" is a popular paradigm for computationally efficient retrieval of documents from a large corpus [28; 22]. In our case, the "retrieve" part is performed by the expert systems which retrieves a list of possible questions to ask the patient, and a dialogue-trained re-ranker then "re-ranks" the possible questions. Because the re-ranking model takes the original expert system's candidate questions, it does not need to predict over the space of all possible questions. Instead, it only needs to re-rank from a much smaller subset, which greatly simplifies the machine-learning task. Our model takes both the previous dialogue and the possible questions as free text entries, which means that the system can operate even if the underlying expert system is replaced with something else. Our contributions are as follows: 1. We propose a two-step approach to history-taking question selection where we use an expert system to retrieve a list of candidate questions and then use a machine-learned re-ranker to get the top question to ask. 2. We propose a novel "global re-ranker" which embeds both the preceding dialogue and candidate questions into a single long string. We then train long context language models to predict the relevance of each question simultaneously. 3. We perform a careful study of other re-rankers for this task. This includes different architectures such as bi-encoder, cross-encoder, and autoregressive re-rankers. We examine different long context models including S4 (bi-directional and autoregressive), Nystromformer (bi-directional, variants with Nystrom attention and with full attention), and LongT5 (autoregressive). We examine the effect of different loss functions from the pointwise, pairwise, and listwise families. Finally, we perform some ablation studies on the context length and the initial retrieval ordering. 4. We release checkpoints for S4 pre-trained both bidirectionally and autoregressively on the English subset of Wikipedia3 and Pubmed PMC Open Access Subset4 datasets. Footnote 3: [https://huggingface.co/datasets/wikipedia](https://huggingface.co/datasets/wikipedia) 5. We find that our global re-ranker approach performs better than other more traditional approaches. Furthermore, all re-rankers significantly improve the original expert system performance. We also find that S4-based, while worse than full-attention transformers, is competitive with the Nystrom-attention transformer. ## 2 Generalizable Insights about Machine Learning in the Context of Healthcare One of the main challenges in using deep learning for healthcare is the lack of large annotated datasets. Obtaining large amounts of annotated data is costly and time-consuming because annotations need to be provided by trained healthcare professionals. Recent works have successfully leveraged the progress in the development of large language models that are trained on web-scale data. 
In many tasks, including medical history taking discussed in this paper, this approach introduces a training-inference gap: the data used to train the model does not fully represent the data that the model sees at inference time. In this context, our approach of retrieving a candidate set of answers (we use an expert system as the base model to provide candidates, but this can come from a large language model, too) and then using a learned reranker based on small amounts of labeled data is a promising alternative. As we show in this paper, such a reranker can be trained from public data sources and then fined tuned to the task. ## 3 Related work We study re-ranking history-taking recommendations based on doctor-patient dialogue. These dialogues tend to be long and exceed the typical token-length limits of transformer models. As such, there are two bodies of literature relevant to this work. SS 3.1 discusses work on modern neural long-range language models that are able to encode the entire doctor-patient conversation. SS 3.2 discusses work on re-ranking algorithms that can take the encoded dialogue and use it to re-rank history-taking questions. ### Long-range transformers Transformers [42] have become the mainstream architecture for natural language processing. With the self-attention mechanism, transformers can attend to all tokens in a sequence simultaneously, thereby being more powerful than classical architectures like convolutional [21] or long short-term memory (LSTM) [16] networks. However, due to its \(O(n^{2})\) complexity, the original transformer cannot process long sequences efficiently. Popular pre-trained language models such as BERT [10] and RoBERTa [24] can only process up to 512 tokens. Efforts have been made to reduce the computational complexity of transformers, as a variety of efficient transformers that can process long text sequences have been proposed, such as Reformer [20], Linformer [44], Longformer [2], BigBird [49], Performer [8], Nystromformer [48], LongT5 [14], etc. In addition to transformers, alternative approaches have also been shown to be promising for processing very long sequences, notably state-space models [12; 13; 26]. The Structured State Space Sequence model (S4) [12] has significantly outperformed many long-range transformers in the Long Range Arena benchmark [39]. In this paper we utilize the Nystromformer[48] and the S4 model[12] as the possible non-autoregressive backbones and LongT5 as an autoregressive backbone. ### Re-ranking Modern information retrieval (IR) or question answering (QA) systems are usually divided into two stages [28; 22]. In the first stage, given a query, a large number of documents are retrieved using an efficient method. In the second stage, a computationally intensive but more accurate method is used to re-rank documents retrieved in the first stage. This is analogous to our problem statement where we use an expert system to retrieve a set of relevant history-taking questions, and then use a re-ranking algorithm as the second stage. Relevant related work addresses both the architectural choices that can be made for the re-rankers, as well as the loss functions used to train them. Architectures for second-stage re-ranking.There are three main type of architectures (1) bi-encoders [22; 29; 36; 40] (2) cross-encoders [22; 29; 17] and (3) autoregressive re-rankers [30; 33; 27]. 
In the bi-encoder architecture, the query and the candidate document are encoded into vector representations by two separate encoders (though these two encoders can share the same weights [36]). The relevance score is calculated as the distance between the two vector representations. In a cross-encoder, a query and a candidate document are concatenated together and fed into the cross-encoder in a single pass. In most use cases, cross-encoders outperform bi-encoders in document retrieval and ranking [40; 17]; however, bi-encoders are usually more efficient than cross-encoders, as all documents in a given corpus can be pre-computed and stored as dense embeddings for retrieval, thereby avoiding repeated computations [36]. Recent sequence-to-sequence models such as T5 [35] have also been applied to autoregressive re-ranking. In this approach, the query and the documents are usually encoded by the encoder and the decoder either predicts whether the document is relevant [30] or directly generates the retrieved text in response to the query [27]. Scoring functions for ranking.There are also several scoring paradigms possible for ranking: (1) "pointwise" scores where the relevance of a query is computed on a per-document basis (similar to cross-encoders) [29], (2) "pairwise" scores where documents are ranked relative to each other in pairs [29], and (3) "listwise" scores where a list of candidates are ranked simultaneously [23]. Prior studies show that pairwise and listwise approaches tend to outperform pointwise approaches [29; 33; 27; 50; 7]. In this study, we compare all three ('pointwise', 'pairwise', and 'listwise') approaches for re-ranking history-taking questions but mainly focus on listwise approaches. Connection to our proposed global re-ranker.In this paper we propose a novel re-ranker approach we call "global re-ranker." In this approach, all candidate documents are concatenated into a single input that is then processed by a long context language model. For schematic comparison between bi-encoder, cross-encoder, and global re-ranker please see Figure 1. For a more detailed description of the method see SS 5. Concatenating pairs of documents into a single string has been previously done both in bi-directional [29] and autoregressive [33] paradigms. These models only investigate pairwise scoring due to length constraints imposed by the pre-trained transformer. For listwise ranking with more than two documents, previous approaches focused on ranking only the extracted embeddings [7; 1], which doesn't model the deep semantic relationships between candidate documents. ## 4 Closing the train-inference gap with re-ranking An overview of our approach to closing the train-inference gap in an existing history-taking system can be seen in Figure 2. We first use an expert system to suggest relevant history-taking questions and then use a deep neural network contextualized by the entire doctor-patient dialogue to re-rank expert system suggestions. The goal of re-ranking is, given the prior dialogue context \(\mathbf{d}\) and a list of \(n\) candidate history-taking questions \(Q=[\mathbf{q}_{1},\mathbf{q}_{2},\dots,\mathbf{q}_{n}]\), to generate a new list \(Q^{\prime}\) which consists of (possibly reordered) Figure 1: (a) Bi-encoder, (b) the cross-encoder and (c) the proposed global reranker. Figure 2: Overview of the proposed two-stage history-taking workflow. An expert system suggests candidate questions based on relevant entities extracted from the dialogue. 
A machine-learned deep neural network re-ranker then re-ranks the candidate questions based on the dialogue text. questions from \(Q\) such that the higher relevance questions appear earlier in the sequence. In our case, the candidate questions are generated using an in-house Expert System, and the ground truth labels \(\mathbf{y}=[y_{1},y_{2},\dots,y_{n}],y_{i}\in\{0,1\}\) represent whether a doctor asked a given recommended question (\(1\) if the question was asked, \(0\) if the question was not asked). A doctor may ask multiple questions at the same time, thus multiple elements of \(\mathbf{y}\) can have a value of \(1\); see §6.1 for more details on how the ground truth is produced. Finally, in all of the models studied in this work, the re-ranking is achieved by assigning scores \(\mathbf{s}=[s_{1},s_{2},\dots,s_{n}]\) to each question in \(Q\), and then constructing \(Q^{\prime}\) by reordering using the scores in \(\mathbf{s}\).

## 5 Global re-ranker

We propose the global re-ranker, an accurate and efficient listwise re-ranking method. In this approach (see Figure 1(c) for a schematic), the history-taking dialogue and all candidate history-taking questions are concatenated into a single text input, using which the model then assigns ranking scores to all questions simultaneously. The global re-ranker directly encodes all texts through the language model, thereby ensuring deep semantic interactions not only between the dialogue and the candidate questions but also between all candidate questions. The input text to the global re-ranker is the concatenation of both the dialogue context and all the candidate questions: [CLS] \(\mathbf{d}\) [SEP] \(\mathbf{q}_{1}\) [MASK1] [SEP] \(\mathbf{q}_{2}\) [MASK2] [SEP] \(\dots\) \(\mathbf{q}_{n}\) [MASKn] [SEP], where the [SEP] token is used to mark the boundaries of the candidate questions. The [MASKi] token is the pooling token for the preceding question \(\mathbf{q}_{i}\). For each pooling token [MASKi], the global re-ranker predicts a score \(s_{i}\), which represents the relevance of \(\mathbf{q}_{i}\). We also added type embeddings to every input token to indicate whether it belongs to the dialogue or to the candidate questions. The actual number of candidate questions provided by the expert system ranged from 3 to 40. While self-attention itself does not assume any inherent order of the input sequence, pretrained transformer models usually encode the text sequentially due to the presence of positional embeddings. In the current task, it is expected that a language model learns the sequential relations between words within \(\mathbf{d}\) and \(\mathbf{q}_{i}\). From our ablation experiments (see §7.2), we found that the best performance is achieved when the model is agnostic to the order of the input questions \([\mathbf{q}_{1},\mathbf{q}_{2},\dots,\mathbf{q}_{n}]\). In order to remove the positional bias, we reset the positional embedding when each new question starts.
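A minimal sketch of the input construction described above, assuming a BERT-style special-token vocabulary; the token strings, segment ids, and variable names are illustrative rather than the exact implementation.

```python
def build_global_reranker_input(dialogue_tokens, question_token_lists):
    """Concatenate the dialogue and all candidate questions, each closed by a
    pooling [MASK] token and a [SEP]; positions restart at every question and
    type ids mark dialogue (0) versus question (1) tokens."""
    tokens = ["[CLS]"] + dialogue_tokens + ["[SEP]"]
    type_ids = [0] * len(tokens)               # 0 = dialogue segment
    positions = list(range(len(tokens)))
    mask_slots = []                            # indices of the pooling [MASK] tokens
    for q in question_token_lists:
        mask_slots.append(len(tokens) + len(q))
        tokens += q + ["[MASK]", "[SEP]"]
        type_ids += [1] * (len(q) + 2)         # 1 = candidate-question segment
        positions += list(range(len(q) + 2))   # reset positions for every question
    return tokens, type_ids, positions, mask_slots

# Example: the hidden state at each mask_slots[i] would be scored as s_i.
tokens, type_ids, positions, mask_slots = build_global_reranker_input(
    ["patient", "reports", "chest", "pain"],
    [["any", "fever", "?"], ["how", "long", "?"]])
```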
We selected three different neural architectures to implement the global re-ranker, all of which can process long textual sequences. The first two approaches are based on the Nystromformer [48], which was originally proposed as an efficient transformer. We experiment with the Nystromformer with Nystrom attention both turned on and turned off (in which case it uses full attention). We use the Nystromformer as the base of our "full attention" transformer because this enables us to leverage the pretrained Nystromformer checkpoints that had been trained on long texts while retaining the good performance of full attention. We learned from pilot experiments that other efficient transformers such as Longformer [2] failed to converge. The third neural architecture is a state-space model, S4, which has been shown to process long sequences more effectively than many transformers [12]. To train the global re-ranker, we compared a variety of loss functions across pointwise, pairwise and listwise approaches in the learning-to-rank framework [23]. The pointwise baseline was trained with binary cross-entropy. For pairwise loss functions, we tested RankNet [3] and LambdaRank [4]. The listwise loss functions we used were ListNet [5], ListMLE [47], ApproxNDCG [34] and NeuralNDCG [32], the latter two of which directly optimize the Normalized Discounted Cumulative Gain (NDCG) metric.

## 6 Experiments

### Data

The medical dialogue data was collected from a portion of real doctor-patient interactions collected on our text-based medical service platform. In a typical interaction, the physician asks a series of history-taking questions that can be entered either as free text or selected from a list of recommendations. These recommendations are made using the Expert System that forms the first stage of our proposed workflow. At each dialogue turn where recommended questions are asked, the doctor-selected questions are marked as relevant and the not-selected questions are marked as irrelevant. This forms a natural dataset of doctor-annotated selections on which we train our re-rankers. The dataset consists of 13071 encounters. We filtered non-history-taking dialogue turns using an in-house dialogue segmentation model, similar to [43]. The detailed statistics of our data are displayed in Table 1.

### Metrics

For evaluation, we adopted two common ranking metrics, normalized discounted cumulative gain (nDCG) [18] and mean average precision (mAP) [22]. The mAP assumes binary relevance whereas nDCG can work with both binary and continuous relevance. Specifically for global re-rankers, the average metrics over 5 repeated runs of evaluation were reported. In each run, the order of the candidate questions fed to the global re-ranker was randomly reshuffled to mitigate positional biases.

### Baseline approaches

In addition to the global re-ranker, we also implement three widely adopted baseline ranking approaches: bi-encoder, cross-encoder, and autoregressive re-ranker.

Bi-encoder. In the bi-encoder architecture (see Figure 1(a)), the dialogue query and the candidate questions are encoded by two separate encoders \(f_{D}\) and \(f_{Q}\), and the relevance score between the two resulting vector representations is computed with cosine similarity. The bi-encoder learns an embedding space where the dialogue representation is close to the most relevant questions while being distant from less relevant questions. The training objective is to minimize the InfoNCE loss function [31] through contrastive learning, with 7 negatives randomly sampled from the list of candidate questions recommended by the Expert System. The temperature parameter of the InfoNCE loss was set to 0.05 throughout the training [11].
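As a sketch of the bi-encoder objective just described (cosine similarity against one selected question and 7 sampled negatives, InfoNCE temperature 0.05), assuming precomputed embeddings; this is illustrative, not the production training code.

```python
import numpy as np

def info_nce_loss(dialogue_emb, positive_emb, negative_embs, temperature=0.05):
    """InfoNCE for one dialogue: cosine similarities scaled by the temperature,
    then a softmax cross-entropy with the selected (positive) question at index 0."""
    def cos(a, b):
        return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    sims = np.array([cos(dialogue_emb, positive_emb)] +
                    [cos(dialogue_emb, n) for n in negative_embs]) / temperature
    sims -= sims.max()                               # numerical stability
    log_probs = sims - np.log(np.exp(sims).sum())
    return -log_probs[0]

# Example with random stand-in embeddings (a real encoder would produce these).
rng = np.random.default_rng(0)
d, q_pos = rng.normal(size=768), rng.normal(size=768)
q_negs = [rng.normal(size=768) for _ in range(7)]
loss = info_nce_loss(d, q_pos, q_negs)
```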
Cross-encoder. In the cross-encoder architecture (see Figure 1(b)), the prior dialogue is concatenated with a candidate question. The cross-encoder \(f_{C}\) assigns a relevance score to this candidate question using a classification head on top of the contextual representation of the dialogue and the query. We consider transformers and S4-based models. For transformers, the [CLS] token is treated as the contextual representation. For the bi-directional S4 re-rankers, we use average pooling of the last layer to obtain the contextual representations. All cross-encoder variants are trained with the binary cross-entropy loss.

Autoregressive re-ranker. We also consider autoregressive re-rankers [30; 33]. For a transformer baseline, we use a pre-trained LongT5 [14]. The query and the document are concatenated together to form the input sequence: Query: \(d\) Document: \(q_{i}\) Relevant:, which is fed into the encoder. The decoder then predicts true for relevant documents or false for irrelevant documents. During inference, a softmax function is applied to the logits of the true and the false tokens to normalize the results across multiple queries. For autoregressive S4, when we followed the LongT5 method, we found it to be highly unstable and it failed to converge, similar to what was found in the literature [30] regarding the dependency on certain keywords, e.g., true/false. Therefore, we followed the same setting as in the cross-encoder, except that the underlying model is autoregressive rather than bi-directional. Here, the concatenated dialogue and a candidate question are fed into the S4 re-ranker and the average pooling of the last layer is classified as either relevant or irrelevant through a classification head.

\begin{table} \begin{tabular}{l c c c} \hline \hline & **Train** & **Dev** & **Test** \\ \hline **Num. Encounters** & 12105 & 311 & 655 \\ **Num. Samples** & 26106 & 626 & 1361 \\ **Avg. Length of Dialog.** & 287.2 & 374.7 & 288.8 \\ **Num. Selected Questions** & 4.0 & 3.8 & 4.1 \\ **Num. Candidate Questions** & 27.9 & 26.6 & 28.1 \\ **Avg. Length of Questions** & 8.0 & 8.1 & 8.0 \\ \hline \hline \end{tabular} \end{table} Table 1: Statistics of different data splits. Text lengths were calculated based on words.
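The scoring step of the autoregressive re-ranker described above can be sketched as follows; the logit values are placeholders and the helper name is ours, not part of the original codebase.

```python
import numpy as np

def relevance_from_true_false(true_logit, false_logit):
    """Normalise the decoder logits for the 'true' and 'false' tokens with a
    softmax and use the probability of 'true' as the question's relevance score."""
    z = np.array([true_logit, false_logit], dtype=float)
    z -= z.max()
    probs = np.exp(z) / np.exp(z).sum()
    return probs[0]

# Example: re-rank three candidate questions by their 'true' probability.
logits = [(2.1, -0.3), (0.2, 1.5), (1.0, 0.9)]          # hypothetical decoder logits
scores = [relevance_from_true_false(t, f) for t, f in logits]
order = sorted(range(len(scores)), key=scores.__getitem__, reverse=True)
```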
### Implementation

S4 model pretraining. The S4 model was based on the original implementation of the S4 language model [12], in which the S4 layers were used as a drop-in replacement for the self-attention layers in a typical transformer. We implemented 12-layer bidirectional and autoregressive S4 models. We set the hidden dimensions to 768 in order to match the parameter count of mainstream pretrained transformers (such as BERT-base [10]), and the number of state-space machines (SSM) to 128 with 64 states for each SSM. Both the bidirectional and autoregressive S4 models were pretrained on large-scale texts. The autoregressive S4 was pretrained with the causal language modeling task on the whole English subset of Wikipedia. The second iteration of pretraining, initialized with the pretrained Wikipedia checkpoint, was on the whole Pubmed PMC Open Access Subset. The bidirectional S4 models were pretrained on the same datasets but with the masked language modeling task, using the same masking settings as in BERT [10]. The maximum sequence length for pretraining was set to 8192 and the effective batch size was 256. All models were optimized with the AdamW optimizer with a learning rate of 1e-4, and the learning rate was dynamically adjusted using a cosine scheduler with a warm-up step of 1000. The pretraining took place on 8\(\times\)RTX 3090 GPUs with 24GB of memory. The training was stopped when the evaluation loss stopped decreasing (\(\sim\)12k steps for all models). The autoregressive and bi-directional checkpoints pre-trained on these datasets will be released together with this paper.

Transformer implementation. Transformer models were all implemented through the Transformers package [46] with default dimensions. The autoregressive model was LongT5 [14] initialized from the long-t5-tglobal-base checkpoint. Other transformers were based on the Nystromformer [48] with initialization from the public checkpoint uw-madison/nystromformer-4096.

Re-ranker training. For global re-rankers, the maximum input length was set to 4096 with an effective batch size of 32. For other models, the effective batch size was 64 and the maximum length was 2048, as this length was enough to cover almost all of the data samples. Models were trained for a maximum of 5 epochs and only the model with the best validation performance was kept. All models were trained using the AdamW optimizer [25] with a learning rate of 5e-5. We used a cosine scheduler with a warm-up step of 1000 to automatically adjust the learning rate during training. All ranking models were trained on a single V100 GPU with 16GB of memory.

## 7 Results

### Main results

Our main results are summarized in Table 2. All neural re-ranking models outperform the baseline Expert System in both metrics, suggesting that re-ranking does up-rank the more relevant history-taking questions. Among the neural baselines, the transformer-based cross-encoder outperforms the bi-encoder, which is consistent with previous findings [29]. Surprisingly, the LongT5 autoregressive re-ranker, despite having more parameters (220M parameters), also performs worse than the cross-encoder (\(\sim\)110M parameters). The best performance is achieved by the global re-ranker for both transformer and S4 architectures, regardless of the loss functions chosen. Among the various loss functions, the pointwise binary cross-entropy (BCE) performs the best. Our hypothesis is that since our ground truth relevance scores are binary rather than continuous, the current task does not make full use of the listwise loss functions. The effectiveness of the global re-ranker lies in the fact that it attends to the semantic interactions not only between the dialogue and the candidate questions but also between the candidate questions themselves. This allows the model to exploit the dependencies between history-taking questions, such as co-occurrence statistics, to improve ranking outcomes. It is also worth noting that, despite its outstanding performance in some long-sequence processing benchmarks [12], S4 still lags behind transformers in the current task. One reason could be that the S4 model here has only been pre-trained on a comparatively small amount of text, while transformers have been pre-trained on huge amounts of text. Furthermore, the text sequences in our task range from a few hundred to about three thousand words, which might not be long enough for S4 to reveal its full potential.

### Ablation analysis

We conducted ablation analyses on the global re-ranker to assess the impact of dialogue context length, the effect of type embeddings, and the effect of shuffling the candidate question order. The results are displayed in Table 3. Context length ablations. When ablating on context length, only the _last_ \(N\) tokens of the dialogue were considered (the full model uses 4096 tokens; the ablations use 3072, 2048, and 1024 tokens).
While most of the text sequences were shorter than 1000 tokens, truncating texts still decreases test performance on some text sequences that are particularly long (longer than 1024), as some important information could be removed. In general, the global re-ranker benefits from getting more dialogue contexts, though this benefit seems to diminish after expanding to more than 2048 tokens. Effect of position and type embeddings.We find that the removal of type embeddings (which are learned embeddings that differentiate whether the token is from dialogue or a candidate question) has almost no impact on the test performance. We reset the positional embeddings for each candidate \begin{table} \begin{tabular}{l c c c c} \hline \hline \multirow{2}{*}{**Model**} & \multicolumn{2}{c}{**Dev**} & \multicolumn{2}{c}{**Test**} \\ \cline{2-5} & **nDCG** & **mAP** & **nDCG** & **mAP** \\ \hline **Expert System** (Baseline) & 0.592 & 0.383 & 0.570 & 0.349 \\ \hline **Bi-encoder** (Transformer) & 0.690 & 0.548 & 0.677 & 0.531 \\ \hline **Cross-encoder** & & & & \\ \hline Transformer & 0.718 & 0.584 & 0.706 & 0.566 \\ Nystromformer & 0.653 & 0.496 & 0.654 & 0.497 \\ Bidirectional S4 (Wiki Pretraining) & 0.648 & 0.490 & 0.641 & 0.481 \\ Bidirectional S4 (Pubmed Pretraining) & 0.643 & 0.483 & 0.630 & 0.464 \\ \hline **Autoregressive Re-ranker** & & & & \\ \hline LongT5-base & 0.690 & 0.546 & 0.678 & 0.529 \\ Autoregressive S4 (Wiki Pretraining) & 0.658 & 0.502 & 0.648 & 0.490 \\ Autoregressive S4 (Pubmed Pretraining) & 0.654 & 0.498 & 0.642 & 0.484 \\ \hline **Global Re-ranker** & & & & \\ \hline Transformer & & & & \\ + Pointwise loss: BCE & **0.744** & **0.618** & **0.743** & **0.618** \\ + Pairwise loss: RankNet & 0.739 & 0.612 & 0.735 & 0.603 \\ + Pairwise loss: LambdaLoss & 0.739 & 0 616 & 0.739 & 0.612 \\ + Listwise loss: ListNet & 0.737 & 0.609 & 0.740 & 0.610 \\ + Listwise loss: ListMLE & 0.727 & 0.597 & 0.721 & 0.587 \\ + Listwise loss: ApproxNDCG & 0.701 & 0.555 & 0.697 & 0.550 \\ + Listwise loss: NeuralNDCG & 0.742 & 0.617 & 0.741 & 0.612 \\ Nystromformer & & & & \\ + Pointwise loss: BCE & 0.684 & 0.537 & 0.678 & 0.530 \\ Bidirectional S4 (Wiki Pretraining) & & & & \\ + Pointwise loss: BCE & 0.667 & 0.516 & 0.663 & 0.510 \\ Bidirectional S4 (PubMed Pretraining) & & & & \\ + Pointwise loss: BCE & 0.697 & 0.556 & 0.670 & 0.518 \\ \hline \hline \end{tabular} \end{table} Table 2: Results of reranking experiments. questions in the input sequence, as this might help the model learn to be agnostic to the order of questions. We trained a model that used sequential positional embeddings for the input sequence. It turned out that positional embeddings played a minor role in training the global re-ranker. Effect of shuffling.We tested the importance of permutation invariance with regard to the order of input candidate questions. The list of candidate questions \([\mathbf{q}_{1},\mathbf{q}_{2},\ldots,\mathbf{q}_{n}]\) were concatenated with the prior dialogue as an input to the model. We found that while the expert system should produce questions in order or relevance, performance was significantly higher when the model was trained with shuffled order. We believe that this forces the model to learn to re-rank the questions without falling back to the original order of the candidate questions. ## 8 Discussion In this work, we address an important problem of closing the training-inference gap for automated medical history-taking. 
Our approach, inspired by modern neural information retrieval systems, has two stages: (1) we use an expert system to suggest a list of candidate questions (out of possibly thousands), and (2) we train a machine-learned re-ranking model to re-rank the expert-system-suggested questions based on the free text of the doctor-patient dialogue. To perform re-ranking (stage 2), we introduce a new approach which we call the "global re-ranker", and compare it to existing neural baselines. We also explore several language model backbones, including various transformers and structured state-space (S4) models5. We find that while all neural re-ranking models outperform the original expert system, the global re-ranker with a full-attention transformer backbone performs the best, with a 30% increase in nDCG and a 77% increase in mAP over the first-stage recommendations. Footnote 5: As part of this publication, we release bi-directional and autoregressive S4 checkpoints pre-trained on the English Wikipedia and Pubmed PMC Open Access Subset.

While our results directly show the effectiveness of training a re-ranking model on top of an expert system for history taking, we believe this approach can also be applied to other decision support systems. The conditions under which this approach is beneficial are the following: (1) there exists a scoring system that has a training-inference gap, and (2) the space of possible predictions is very large, and as such would require a lot of data to machine-learn from scratch. One example beyond history-taking where we believe these conditions are satisfied is medical diagnosis prediction. There are many expert-system-derived diagnosis models, and training a diagnosis model from scratch can be difficult as the space of possible diagnoses is very large. Re-ranking could be used to close the gap between an off-the-shelf diagnostic expert system and the practice's actual patient population outcomes.

\begin{table} \begin{tabular}{c c c c c} \hline \multirow{2}{*}{**Ablation**} & \multicolumn{2}{c}{**Dev**} & \multicolumn{2}{c}{**Test**} \\ \cline{2-5} & **nDCG** & **mAP** & **nDCG** & **mAP** \\ \hline - Maximum length: 4096 (Full) & 0.744 & 0.618 & 0.743 & 0.618 \\ - Maximum length: 3072 & 0.741 & 0.614 & 0.739 & 0.611 \\ - Maximum length: 2048 & 0.746 & 0.622 & 0.747 & 0.622 \\ - Maximum length: 1024 & 0.747 & 0.623 & 0.737 & 0.609 \\ - No type embedding & 0.733 & 0.610 & 0.732 & 0.607 \\ - Sequential position embedding & 0.749 & 0.625 & 0.741 & 0.613 \\ - No random shuffling of questions & 0.515 & 0.313 & 0.523 & 0.319 \\ \hline \end{tabular} \end{table} Table 3: Results of ablation studies on the global re-ranker.

Limitations. This work is still limited in several ways. While our proposed global re-ranker exhibited the best overall performance among the ranking models, it is still computationally inefficient, as full-attention transformers have quadratic computational complexity when processing long sequences. This will become a more serious bottleneck as the dialogue gets longer or the number of candidate questions increases. Secondly, the global re-ranker only learns the association between history-taking questions and the dialogue contexts from language, but it does not have the underlying medical knowledge. It will be paramount to augment such models with real medical knowledge so that they make more informed decisions and are not biased against low-frequency, long-tail history-taking questions.
In the future, we plan to investigate more effective approaches to encode long textual contexts and to inject knowledge into language models.

### Ethics

This work was done as part of a quality improvement activity as defined in 45 CFR §46.104 (d)(4)(iii) - secondary research for which consent is not required for the purposes of "health care operations."
2305.13081
An FFT-based framework for predicting corrosion-driven damage in fractal porous media
Understanding fracture in cementitious materials caused by the deposition and growth of corrosion products requires scale-bridging approaches due to the large length-scale difference between the micro-pores, where deposition occurs, and the structure, where deterioration manifests. Cementitious materials bear a highly heterogeneous micro-structure owing to the fractal nature of micro-pores. Simultaneously, a corrosion-driven fracture is a multi-physics problem involving ionic diffusion, chemical reactions, and stress development. This multi-scale and multi-physical character makes scale-bridging studies computationally costly, often leading to the use of simplified fractal porous media, which has important consequences for the quantitative interpretation of the results. Recent advances in homogenization approaches using Fast-Fourier-Transform (FFT) based methods have raised interest due to their ease of implementation and low computational cost. This paper presents an FFT-based framework for solving corrosion-driven fractures within fractal porous media. We demonstrate the effectiveness of the Fourier-based spectral method in resolving the multiple corrosion-driven mechanisms such as ionic diffusion, stress development, and damage within a fractal porous microstructure. Based on the presented methodology, we analyze the impact of simplifying fractal porous media with simple Euclidean geometry on corrosion-driven fracture. Our results demonstrate the importance of preserving both the porosity and fractal nature of pores for precise and reliable modeling of corrosion-driven failure mechanisms.
Mohit Pundir, David S. Kammer, Ueli Angst
2023-05-22T14:51:19Z
http://arxiv.org/abs/2305.13081v1
# An FFT-based framework for predicting corrosion-driven damage in fractal porous media ###### Abstract Understanding fracture in cementitious materials caused by the deposition and growth of corrosion products requires scale-bridging approaches due to the large length-scale difference between the micro-pores, where deposition occurs, and the structure, where deterioration manifests. Cementitious materials bear a highly heterogeneous micro-structure owing to the fractal nature of micro-pores. Simultaneously, a corrosion-driven fracture is a multi-physics problem involving ionic diffusion, chemical reactions, and stress development. This multi-scale and multi-physical character makes scale-bridging studies computationally costly, often leading to the use of simplified fractal porous media, which has important consequences for the quantitative interpretation of the results. Recent advances in homogenization approaches using Fast-Fourier-Transform (FFT) based methods have raised interest due to their ease of implementation and low computational cost. This paper presents an FFT-based framework for solving corrosion-driven fractures within fractal porous media. We demonstrate the effectiveness of the Fourier-based spectral method in resolving the multiple corrosion-driven mechanisms such as ionic diffusion, stress development, and damage within a fractal porous microstructure. Based on the presented methodology, we analyze the impact of simplifying fractal porous media with simple Euclidean geometry on corrosion-driven fracture. Our results demonstrate the importance of preserving both the porosity and fractal nature of pores for precise and reliable modeling of corrosion-driven failure mechanisms. keywords: Corrosion-driven fracture, Concrete, Diffusion, Spectral method, Phase-field model + Footnote †: journal: Journal of Computational Mechanics , ## 1 Introduction Corrosion of steel reinforcement bars in concrete plays a significant role in a structure's durability and serviceability lifetime [1; 2]. Ferrous ions released at the steel-concrete interface diffuse through the pores in the concrete and undergo many chemical reactions leading to the precipitation of corrosion products, as schematically illustrated in Figure 1. These precipitates, which are confined in the pore space, grow over time, exerting pressure onto the pore walls, subsequently leading to internal cracking around the pore space and eventually to macroscopic cracks. Even though the underlying processes are generally well recognized for their contributions to the degradation of steel-reinforced concrete, a precise and quantitative description of this multi-physical process remains missing. One major reason is that the pore space in concrete is highly complex [3; 4]. The pore sizes range from the nanometer-scale to the micrometer-scale, and pore surfaces are fractal, _i.e._ features exhibit similar patterns across all of these length scales. The complex and concealed nature of pore spaces thus makes it difficult to assess the aforementioned corrosion-driven mechanisms and the induced damage from external examination of structures. Therefore, numerical simulations have been an important tool to study the corrosion-driven mechanisms at the pore scale and to analyze their effect on a structure's durability and its serviceability lifetime. 
These approaches often rely on synthetic representations of the pore space built from data derived from measurement techniques such as Mercury Intrusion Porosimetry (MIP), nitrogen-adsorption [5; 6; 7; 8; 9; 10; 11; 12] or tomography (\(\mu-\)CT, FIB-SEM) [13; 14]. These synthetic pore spaces are then employed in numerical frameworks, such as the finite-element method, pore network model, and Lattice Boltzmann method, to simulate various physical mechanisms within representative volumes. To bridge the physical mechanisms within the micro-pores to a structure's degradation at the macroscale, a multi-scale homogenization approach is followed. In these frameworks, the pore space is modelled either implicitly [15] or explicitly [16], where the complex pore space is often simplified using simple geometries such as spheres, cylinders and ellipsoids [17; 18]. The main purpose is to reduce the computational complexities (high computational cost, complex analytical formulations) associated with high-resolution simulations of diffusion, stress development and crack initiation in a fractal domain. Although this idealization of a fractal space into a smooth Euclidean space (cylinder/sphere) is necessary, it approximates the studied physical process. For example, idealizing a fractal surface with a smooth surface approximates the stress development and the effective diffusion path of ferrous ions in the porous material, which affects the precipitation of corrosion products and the initiation of cracks. Therefore, an accurate representation of the pore space is necessary to fully capture and understand the influence of corrosion-driven mechanisms at the structural scale. In this paper, we will answer the question if it is possible to preserve the actual representation of the pore space and simulate the corrosion-driven mechanisms in a computationally efficient manner. Thus, the aim is to present a single numerical framework for corrosion-driven processes that simultaneously includes the diffusion of chemical species, the stress development due to the growth of precipitates, and crack initiation, while being computationally less demanding than conventional approaches and being straightforward to implement for fractal spaces. Such a single framework allows for studying the interplay among all the mechanisms involved in this multi-physics-driven fracture process. The paper is organized as follows: In Section 2, we present the numerical framework based on the spectral method [19; 20; 21; 22] to simulate multiple corrosion-driven mechanisms simultaneously in a given microstructure: diffusion within the pore space, the stress development due to the growth of precipitates and the crack initiation in the matrix. In Section 3, we employ the presented framework to simulate the mechanisms above in fractal pore spaces reconstructed from tomographic scans of cementitious materials. We apply the proposed framework to a multi-scale setting to study the initiation and propagation of cracks over time. We analyze how the creation of micro-cracks changes a pore structure over time, influencing the pore structure's total porosity. In Section 4, we employ the presented methodology to compare an actual pore space to its approximated counterpart and highlight the effect approximation of pore space has on different physical mechanisms. We show that preserving the porosity as well as the pore shape is important for the reliable modelling of corrosion-driven failure mechanisms. 
Finally, in Section 6, we conclude our study by discussing the possible applications of the proposed numerical framework, especially in modelling reaction-diffusion-driven internal cracking in porous media. ## 2 Methodology This section presents the numerical framework for corrosion-driven mechanisms (see Figure 1) based on the Fourier-based spectral method. Applying the Fast-Fourier Transform (FFT) to solve partial differential equations makes spectral methods more efficient than conventional methods. Since an FFT-based approach requires a pixel-based representation of the structure, its application to tomographic scans of porous media is straightforward. We consider a porous structure \(\Omega\) in \(n-\)dimensional space, where \(n\in[1,2,3]\). The solid phase is denoted as \(\Omega_{s}\) and the pores as \(\Omega_{p}\). The volume fraction of the pore space is denoted as \(\eta=\Omega_{p}/\Omega\). The porous structure is subjected to an overall concentration gradient \(\mathbf{\nabla}c_{\rm mac}\) and an overall strain \(\varepsilon_{\rm mac}\). In a multi-scale setting, \(\mathbf{\nabla}c_{\rm mac}\) and \(\mathbf{\varepsilon}_{\rm mac}\) represent macroscale quantities and \(\Omega\) the underlying representative volume element Figure 1: **Corrosion-driven mechanisms. Schematic figure showing various corrosion-driven mechanisms that lead to fracture in porous media. The entire corrosion process is divided into 3 stages. Stage I : The ferrous ions released at the steel-concrete interface undergo diffusion through the pore space (shown in white). Stage II : Over time, the ferrous ions precipitate and the precipitates grow within these pores. Stage III : The growth of the precipitates results in the development of stresses within the solid (shown in grey) and, consequently, crack initiation and propagation.** (RVE) [16]. In the next few sections, we employ FFT-based methodology to calculate the micro-scale quantities, which are concentration of diffusive species, stresses and fracture. We choose the FFT-based Galerkin method [23] in which the gradients of concentration and displacement are the primary degrees of freedom. ### Diffusion process The diffusion of ions within the pore phase \(\Omega_{p}\) (see stage I in Figure 1) is simulated by solving the static diffusion equation over the whole domain \(\Omega\) until the average concentration gradient is equal to the applied overall concentration gradient, _i.e._\(\langle\mathbf{\nabla}c(\mathbf{x})\rangle=\mathbf{\nabla}c_{\rm mac}\), which is given by Fick's law as, \[\mathbf{\nabla}\cdot\mathbf{j}(\mathbf{x})=0,\quad\exists\;\langle\mathbf{\nabla}c(\mathbf{x}) \rangle=\mathbf{\nabla}c_{\rm mac} \tag{1}\] where \(\mathbf{j}\) is the flux defined as \(\mathbf{j}=-\mathbf{D}\cdot\mathbf{\nabla}c(\mathbf{x})\) and \(\mathbf{D}\) is the diffusion tensor at each spatial point \(\mathbf{x}\) defined as \[\mathbf{D}=\omega D_{\rm solid}\mathbf{I}+(1-\omega)D_{\rm pore}\mathbf{I},\quad\text{ where}\;\begin{cases}\omega=1&\forall\mathbf{x}\in\Omega_{s}\\ \omega=0&\forall\mathbf{x}\in\Omega_{p}\end{cases} \tag{2}\] To solve the diffusion equation using the spectral method, the weak form of the equation is reformulated such that the unknown quantity is \(\mathbf{\nabla}c(\mathbf{x})\). The derivation of the weak form for Equation (1) (identical to other elliptic problems such as for static equilibrium in solids) is covered in great detail in the literature [24; 20]. 
Therefore, only the main fundamentals essential to the proposed framework are discussed here. With \(\mathbf{\nabla}c(\mathbf{x})\) satisfying periodic conditions, the weak form is given as \[\int_{\Omega}\delta\mathbf{\nabla}c\cdot\mathbf{j}(\mathbf{x})d\Omega=0\, \tag{3}\] where \(\delta\mathbf{\nabla}c\) is the test function. A \(2^{\rm nd}\)-order projection operator \(\mathbf{G}\) imposes the compatibility conditions (periodic and curl vanishes) on \(\delta\mathbf{\nabla}c\). For details on the operator \(\mathbf{G}\), please refer to A. The projection is calculated through a convolution operation between \(\mathbf{G}\) and an arbitrary vector \(\delta\widetilde{\mathbf{\nabla}}c\), denoted as \(\big{(}\mathbf{G}*\delta\widetilde{\mathbf{\nabla}}c\big{)}(\mathbf{x})=\int_{-\infty}^{ \infty}\mathbf{G}(\mathbf{x}):\delta\widetilde{\mathbf{\nabla}}c(\mathbf{x}-\mathbf{y})\;\mathrm{d }\mathbf{y}\). The weak form is now given as \[\int_{\Omega}(\mathbf{G}*\delta\widetilde{\mathbf{\nabla}}c)(\mathbf{x})\cdot\mathbf{j}(\mathbf{ x})\;\mathrm{d}\Omega=\int_{\Omega}\delta\widetilde{\mathbf{\nabla}}c(\mathbf{x}) \cdot(\mathbf{G}*\mathbf{j})(\mathbf{x})\;\mathrm{d}\Omega=0 \tag{4}\] where the symmetry of operator \(\mathbf{G}\) is used. The domain is discretized into \(n\) grids along each direction with discretization length \(\Delta=l/n\), where \(l\) represents the side length. Employing the Galerkin approach [20; 24], the unknown continuous fields, _i.e._\(\mathbf{\nabla}c(\mathbf{x})\) and \(\delta\widetilde{\mathbf{\nabla}}c(\mathbf{x})\), are approximated by multiplying discrete values \(\mathbf{\nabla}c(\mathbf{x}_{k})\) and \(\delta\widetilde{\mathbf{\nabla}}c(\mathbf{x}_{k})\) with shape functions \(\mathcal{N}_{k}\) defined at \(n\) grid points, _i.e._\(\mathbf{\nabla}c(\mathbf{x})=\sum_{k=1}^{n}\mathbf{\nabla}c(\mathbf{x}_{k})\mathcal{N}(\mathbf{x}_ {k})\) and \(\delta\widetilde{\mathbf{\nabla}}c(\mathbf{x})=\sum_{k=1}^{n}\delta\widetilde{\mathbf{ \nabla}}c(\mathbf{x}_{k})\mathcal{N}(\mathbf{x}_{k})\). Upon applying the discretization, the weak form of Equation (4) can be written as: \[\int_{\Omega}\underbrace{\sum_{k}\mathcal{N}(\mathbf{x}_{k})\delta\widetilde{\bm {\nabla}}c(\mathbf{x}_{k})}_{[\delta\widetilde{\mathbf{\nabla}}c]^{\mathrm{T}}:[ \mathcal{N}]}\cdot(\mathbf{G}*\mathbf{j})(\mathbf{x}_{k})\;\mathrm{d}\Omega=0 \tag{5}\] where \([\star]\) represents a column vector ( \(n\times 1\) ) of quantity \(\star\) evaluated at each grid points. The above equation must hold for any \(\delta\widetilde{\mathbf{\nabla}c}\), therefore, \(\int_{\Omega}[\mathcal{N}]\cdot(\mathbf{G}\ast\mathbf{j})(\mathbf{x}_{k})\mathrm{d}\Omega=0\). A trapezoidal scheme, similar to [20] is chosen for the integration, whereby nodal points \(\mathbf{x}_{k}\) serve as integration points with equal weights. This simplifies the approximated weak form, which reads: \[\sum_{k=1}^{n}[\mathcal{N}](\mathbf{x}_{k})(\mathbf{G}\ast\mathbf{j})(\mathbf{x}_{k})=0. \tag{6}\] In the above equation, convolution is performed in Fourier space and then transformed back to the real space, which reads \[\mathcal{F}^{-1}\{\widehat{\mathbf{G}}(\mathbf{\xi}):\widehat{\mathbf{j}}(\mathbf{\xi})\}=\bm {0}\, \tag{7}\] where \(\widehat{f}(\mathbf{\xi})\) represents the Fourier transform \(\mathcal{F}\) of a field \(f\) and \(\mathcal{F}^{-1}\) inverse transform to real space. 
For a given overall concentration gradient \(\mathbf{\nabla}c_{\mathrm{mac}}\), the above equation is solved using an iterative solver such as a Conjugate Gradient solver. For a non-linear behaviour of \(\mathbf{j}(\mathbf{\nabla}c(\mathbf{x}))\), the solution of Equation (7) requires Newton-Raphson iterations in addition to a Conjugate gradient solver [20]. Algorithm 1 explains the numerical algorithm for the solution of Equation (7) using a Newton-Raphson iteration along with a linear iterative solver. Finally, the solution to Equation (7) yields the concentration gradient \(\mathbf{\nabla}c(\mathbf{x})\) within the microstructure. To estimate corrosion products' precipitation (discussed later in Section 2.2), one requires concentration values \(c(\mathbf{x})\) of diffusive species. Therefore, we now discuss the methodology to construct concentration \(c(\mathbf{x})\) from the computed \(\mathbf{\nabla}c(\mathbf{x})\). The concentration gradient in the representative volume is expressed as \[\mathbf{\nabla}c(\mathbf{x})=\mathbf{\nabla}c_{\mathrm{mac}}+\mathbf{\nabla}\phi(\mathbf{x}) \tag{8}\] where \(\mathbf{\nabla}\phi(\mathbf{x})\) is the periodic fluctuation of the concentration gradient due to the presence of heterogeneities in the micro-structure [15, 25]. Integrating the above equation gives the concentration of diffusive species within the microstructure, which reads: \[c(\mathbf{x})=\mathbf{\nabla}c_{\mathrm{mac}}\cdot\mathbf{x}+\phi(\mathbf{x})+\beta \tag{9}\] where \(\beta\) is the integration constant, and \(\phi(\mathbf{x})\) is the micro-fluctuations in the concentration of ions caused by the presence of heterogeneities. Both contributions are unknown. First, we determine \(\phi(\mathbf{x})\) by solving the derivative equation of Equation (8): \[\Delta\phi(\mathbf{x})-\mathbf{\nabla}\cdot(\mathbf{\nabla}c(\mathbf{x}))=0 \tag{10}\] where we used that \(\mathbf{\nabla}\cdot(\mathbf{\nabla}c_{\mathrm{mac}})=0\), and the local concentration gradient \(\mathbf{\nabla}c(\mathbf{x})\) is known (from the solution of Equation (1) and Equation (7)). Then, considering \(\beta\), we note from Equation (9) and Equation (10) that we can choose any arbitrary value for \(\beta\) without affecting the solution to Equation (7). We chose the value of \(\beta\), so the local concentration \(c(\mathbf{x})\) in the pores and the solid phase satisfy two conditions: (i) the average concentration in the solid phase must be zero _i.e._\(\langle c(\mathbf{x})\rangle_{\mathrm{solid}}=0\) and (ii) the average concentration in the pores must be equal to the macro-concentration of the ions divided by the porosity \(\eta\), _i.e._\(\langle c(\mathbf{x})\rangle_{\mathrm{pore}}=c(\mathbf{X})/\eta\)[15]. The above two conditions apply only to a system where the solid phase has a negligible diffusion coefficient compared to the pores. 
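The fluctuation field of Equation (10) is obtained with a single division in Fourier space once \(\mathbf{\nabla}c(\mathbf{x})\) is known. The sketch below is our own minimal illustration (2D periodic grid, numpy, and the same gradient symbol \(\mathbf{i}\mathbf{\xi}\) as in the previous sketch); the constant \(\beta\) is then fixed by the phase averages derived next.

```python
import numpy as np

def fluctuation_field(grad_c, k):
    """Solve Equation (10) for the periodic fluctuation phi(x) in Fourier space.

    grad_c : solved local concentration gradient, shape (2, n, n)
    k      : Fourier gradient symbol i*xi, shape (2, n, n)
    Since Delta phi = div(grad c), phi_hat = (k . grad_c_hat) / (k . k),
    with the zero-frequency mode set to zero (phi has zero mean).
    """
    grad_hat = np.fft.fftn(grad_c, axes=(1, 2))
    denom = np.einsum("i...,i...->...", k, k)      # k . k = -|xi|^2
    denom[0, 0] = 1.0                              # guard the xi = 0 mode
    phi_hat = np.einsum("i...,i...->...", k, grad_hat) / denom
    phi_hat[0, 0] = 0.0
    return np.fft.ifftn(phi_hat).real
```

With \(\phi(\mathbf{x})\) in hand, the reconstruction of \(c(\mathbf{x})\) in Equation (12) only requires the phase-wise constants \(\beta\) derived below.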
On averaging Equation (9) over each phase and then plugging in the required average concentration values for each phase, we find two different values of \(\beta\): \[\beta_{\text{pore}}=c(\mathbf{X})/\eta-\langle\nabla c_{\text{mac}}\cdot\mathbf{x} \rangle_{\text{pore}}-\langle\phi(\mathbf{x})\rangle_{\text{pore}} \tag{11a}\] \[\beta_{\text{solid}}=-\langle\nabla c_{\text{mac}}\cdot\mathbf{x}\rangle_{\text{ solid}}-\langle\phi(\mathbf{x})\rangle_{\text{solid}} \tag{11b}\] Substituting \(\beta_{\text{pore}}\) into Equation (9) yields the concentration within the pores as \[c(\mathbf{x})=\mathbf{\nabla}c_{\text{mac}}\cdot\mathbf{x}+\phi(\mathbf{x})+c(\mathbf{X})/\eta- \langle\nabla c_{\text{mac}}\cdot\mathbf{x}\rangle_{\text{pore}}-\langle\phi(\bm {x})\rangle_{\text{pore}},\quad\text{for }\mathbf{x}\in\Omega_{\text{pore}}. \tag{12}\] Thus, the derived ionic concentration \(c(\mathbf{x})\) can be employed to model various chemical reactions that will lead to the precipitation of corrosion products. The above methodology ensures that the average values of \(\langle\beta(\mathbf{x})\rangle\) and \(\langle\phi(\mathbf{x})\rangle\) are 0 such that the average of local concentration \(c(\mathbf{x})\) thus obtained is equal to the macroscopic concentration _i.e._\(\langle c(\mathbf{x})\rangle=c(\mathbf{X})\). In the next section, we present the methodology to estimate the corrosion product concentration caused by the ionic concentration \(c(\mathbf{x})\) (see stage II in Figure 1). This will later be used to compute the pressure developed due to their growth within the pores. ### Precipitation of corrosion products and pressurization of pores After acquiring the concentration of ferrous ions within the pore space, we estimate the precipitation of corrosion products. The complex chemical reactions and phase changes in this process require thermodynamically consistent modelling for cementitious materials. In the literature, various empirical relations or thermodynamic-consistent methods, namely the Gibbs energy minimization method or the law of mass-action method, exist for calculating precipitation. In principle, all approaches seek local chemical equilibrium between the chemical species to estimate the concentration of corrosion products. Since such approaches have been well-established, we do not discuss them here, and a reader can find a detailed description of various methodologies elsewhere [26; 27; 28]. For our methodology, one can choose any of these approaches to determine the precipitate concentration at each grid point; irrespective of the approach employed, the proposed methodology remains the same. For this paper, we employ a thermodynamically-consistent approach to estimate the precipitation of corrosion products. The growth of the corrosion products leads to the pressurization of pores. In this paper, we consider the pressurization of pores due to the expansion of the corrosion products. We calculate an isotropic eigenstrain due to the expansion of corrosion products at a spatial point \(\mathbf{x}\), which is given as: \[\mathbf{\varepsilon}_{\text{eig}}(\mathbf{x})=\frac{\big{(}V_{\text{ppt}}(\mathbf{x})-V( \mathbf{x})\big{)}_{+}}{V(\mathbf{x})}\mathbf{I},\quad\text{where }\left\{\begin{array}{ll}\big{(}\star\big{)}_{+}=\star&\text{if }\star>0\\ \big{(}\star\big{)}_{+}=0&\text{if }\star<0\end{array}\right. 
\tag{13}\] where \(V_{\text{ppt}}(\mathbf{x})\) is the precipitate's volume, \(V(\mathbf{x})\) is the volume associated with the grid point \(\mathbf{x}\) and \(\big(\star\big)_{+}\) represents the Macaulay brackets. Expressing the volume of the precipitate in terms of concentration simplifies the relation, which reads: \[\mathbf{\varepsilon}_{\text{eig}}(\mathbf{x})=\big(c(\mathbf{x})\mathcal{M}_{\text{ppt}}-1\big)_{+}\mathbf{I} \tag{14}\] where \(\mathcal{M}_{\text{ppt}}\) is the molar volume of the precipitate. In the next section, we apply these eigenstrains within the pore space and solve for the stresses developed. ### Stress development Under these eigenstrains, we solve for static mechanical equilibrium in \(\Omega\) until the average strain within the microstructure is equal to the overall applied macro strain \(\mathbf{\varepsilon}_{\text{mac}}\). The strong form equation for static mechanical equilibrium is thus given as \[\mathbf{\nabla}\cdot\underbrace{\mathbb{C}:(\mathbf{\varepsilon}(\mathbf{x})-\mathbf{\varepsilon}_{\text{eig}}(\mathbf{x}))}_{\mathbf{\sigma}}=\mathbf{0},\quad\ni\langle\mathbf{\varepsilon}(\mathbf{x})\rangle=\mathbf{\varepsilon}_{\text{mac}}. \tag{15}\] The stiffness tensor \(\mathbb{C}\) at a point \(\mathbf{x}\) is defined as \[\mathbb{C}(\mathbf{x})=\omega\mathbb{C}_{\text{solid}}+(1-\omega)\mathbb{C}_{\text{pore}},\quad\text{where}\ \begin{cases}\omega=1&\forall\mathbf{x}\in\Omega_{s}\\ \omega=0&\forall\mathbf{x}\in\Omega_{p}\end{cases} \tag{16}\] where, for a given phase, \(\mathbb{C}_{i}=\lambda_{i}\mathbf{I}\otimes\mathbf{I}+2\mu_{i}\mathbb{I}\), with \(\lambda_{i}\) and \(\mu_{i}\) being the Lame constants of phase \(i\). The reformulation of the mechanical problem using the spectral method is similar to the diffusion problem formulation described earlier. Thus, the discretized weak form is given as: \[\mathcal{F}^{-1}\{\widehat{\mathbb{G}}_{s}(\mathbf{\xi}):\widehat{\mathbf{\sigma}}(\mathbf{\xi})\}=\mathbf{0} \tag{17}\] where \(\mathbb{G}_{s}\) represents a \(4^{\text{th}}\)-order projection operator that imposes compatibility conditions on \(\mathbf{\varepsilon}(\mathbf{x})\) (for further details on \(\mathbb{G}_{s}\), please refer to A). Since the above equation is subjected to both the eigenstrains within the pores and an overall strain \(\mathbf{\varepsilon}_{\text{mac}}\), we solve the equation using a Newton-Raphson approach until the residual \(\mathbf{r}(\mathbf{x})=-\mathbf{\nabla}\cdot\mathbf{\sigma}(\mathbf{\varepsilon}(\mathbf{x})-\mathbf{\varepsilon}_{\text{eig}}(\mathbf{x}))\) approaches zero. Within each iteration step of the Newton-Raphson solver, Equation (17) is solved using a linear iterative solver. Algorithm 1 summarizes the numerical algorithm for solving the above equation subjected to eigenstrains within the pores and an overall strain. Next, we determine the initiation and propagation of cracks based on the strain state in the solid phase (see stage III in Figure 1). ### Crack initiation and propagation We chose a variational phase-field approach [29; 30] for modelling fracture within the solid phase. A sharp crack interface is regularized over a finite length \(l_{0}\), and the damage within this regularized length is represented by a variable \(d\) that varies from 0 (unbroken) to 1 (completely broken).
For this paper, we chose the hybrid anisotropic formulation [31] for the evolution of damage variable \(d\) whose strong form reads: \[-\frac{\mathcal{G}_{c}l_{0}}{2}\mathbf{\nabla}\cdot\mathbf{\nabla}d+\frac{\mathcal{G}_ {c}}{2l_{0}}d+\mathcal{H}^{+}d=\mathcal{H}^{+}. \tag{18}\] The term \(\mathcal{H}^{+}\) is the history field that stores the maximum strain energy at a point throughout a simulation, _i.e._\(\mathcal{H}^{+}=\max_{\{\tau\in[0,t]\}}\psi^{+}(\mathbf{\varepsilon}^{+}(\mathbf{x}, \tau))\). The total elastic energy \(\psi\) is decomposed to its positive \(\psi^{+}\) and negative \(\psi^{-}\) parts where only the strain energy associated with tension and shear, _i.e._\(\psi^{+}\) (see Equation (19)) contributes to the creation of cracks [31; 32]. The formulation thus prevents the formation of cracks in the compressed region and also, the penetration of cracks surface upon crack closure. The positive part of the strain energy density \(\psi^{+}\) is given as: \[\psi^{+}(\mathbf{\varepsilon})=\frac{1}{2}(\lambda+\mu)\big{[}\mathrm{tr}(\mathbf{ \varepsilon})\big{]}_{+}^{2}+\mu(\mathbf{\varepsilon}^{\mathrm{dev}}:\mathbf{ \varepsilon}^{\mathrm{dev}}) \tag{19}\] where \(\lambda,\mu\) are Lame's coefficients, \(\big{[}\star\big{]}_{+}=\frac{1}{2}(\star+\|\star\|)\) and \(\mathbf{\varepsilon}^{\mathrm{dev}}=\mathbf{\varepsilon}-\frac{1}{2}\mathrm{tr}(\mathbf{ \varepsilon})\mathbf{I}\). In Equation (18), the parameter \(l_{0}\) represents the regularized length scale, and \(\mathcal{G}_{c}\) represents the fracture energy of the material. We compute the Laplacian of \(d\), _i.e._\(\mathbf{\nabla}\cdot\mathbf{\nabla}d\) in Fourier space and transform it back to the real space, hence Equation (18) becomes \[-\frac{\mathcal{G}_{c}l_{0}}{2}\mathcal{F}^{-1}\{\mathbf{i}\mathbf{\xi}\cdot \mathbf{i}\mathbf{\xi}\ \widehat{d}(\mathbf{\xi})\}+\frac{\mathcal{G}_{c}}{2l_{0}}d(\mathbf{x})+\mathcal{H}^ {+}d(\mathbf{x})=\mathcal{H}^{+}\, \tag{20}\] which is then solved using a linear iterative solver such as GMRES [33]. The stiffness tensor \(\mathbb{C}\) in Equation (15) is updated to account for the reduction in stiffness due to micro-cracks within the solid. The degraded stiffness tensor at a point is given as \(((1-d)^{2}+\kappa)\mathbb{C}(\mathbf{x})\), where \(\kappa\) is a small artificial residual stiffness of the completely broken solid phase to keep Equation (15) well-posed as \(d\) approaches 1. ### Combined methodology for corrosion-driven fracture Algorithm 1 summarizes the entire methodology for the corrosion-driven fracture using an FFT-based method. We use a staggered solution scheme, which solves each mechanism in a sequential manner. We first solve the diffusion problem for given overall boundary conditions and then use the local concentrations obtained to estimate corrosion products' concentration. Similarly, the mechanical problem is solved for the obtained eigenstrains first, so strains \(\mathbf{\varepsilon}(\mathbf{x})\) are known when the phase-field problem is solved for the initiation and propagation of cracks. The staggered scheme thus requires a small time step to minimize numerical errors. Despite this restriction, the staggered scheme allows for a straightforward implementation of the multi-physics problems. 
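As a concrete illustration of a single sub-step within this staggered loop (the full loop is given in Algorithm 1 below), the damage update of Equation (20) reduces to a matrix-free linear solve in which the Laplacian is applied through the FFT and the system is handed to GMRES, as stated above. The sketch below is our own simplification (2D periodic grid, numpy/scipy, illustrative names), not the authors' released code.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres

def damage_update(H, Gc, l0, k):
    """One update of the damage field d from Equation (20), solved with GMRES.

    H  : history field H^+ on the grid, shape (n, n)
    k  : Fourier gradient symbol i*xi, shape (2, n, n)
    """
    lap_sym = np.einsum("i...,i...->...", k, k)            # i*xi . i*xi = -|xi|^2

    def apply_lhs(d_flat):
        d = d_flat.reshape(H.shape)
        lap_d = np.fft.ifftn(lap_sym * np.fft.fftn(d)).real
        return (-0.5 * Gc * l0 * lap_d + (0.5 * Gc / l0 + H) * d).ravel()

    A = LinearOperator((H.size, H.size), matvec=apply_lhs, dtype=float)
    d, _ = gmres(A, H.ravel(), atol=1e-10)                  # right-hand side is H^+
    return d.reshape(H.shape)
```

In the staggered loop this update follows the mechanical solve of Equation (17), after which the stiffness tensor is degraded as described above.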
``` 1: For a given overall concentration gradient \(\mathbf{\nabla}c_{\rm mac}\) and an overall strain \(\mathbf{\varepsilon}_{\rm mac}\) Diffusion of chemical species 2:\(r(\mathbf{x})=-\nabla\cdot\mathbf{j}(\mathbf{\nabla}c_{\rm mac})\), \(\mathbf{\nabla}c(\mathbf{x})^{i}=\mathbf{\nabla}c_{\rm mac}\)\(\triangleright\)\(\mathbf{j}(\mathbf{\nabla}c)=\mathbf{D}\cdot\mathbf{\nabla}c\) 3: solve : \(\mathcal{F}^{-1}\{\widehat{\mathbf{G}}(\mathbf{\xi}):\widehat{\mathbf{j}(\delta\mathbf{\nabla}c )}(\mathbf{\xi})\}=\mathcal{F}^{-1}\{\widehat{\mathbf{G}}(\mathbf{\xi}):\widehat{\mathbf{j}(r) }(\mathbf{\xi})\}\)\(\triangleright\) Linear iterative solver 4: update : \(\mathbf{\nabla}c(\mathbf{x})^{i+1}=\mathbf{\nabla}c(\mathbf{x})^{i}+\mathcal{F}^{-1}\{\widehat{ \mathbf{\delta\nabla}c}(\mathbf{\xi})\}\) 5: solve : \(\phi(\mathbf{\xi})=\mathbf{\nabla}(\mathbf{\xi})\mathbf{\nabla}c(\mathbf{\xi})/\mathbf{\nabla}(\mathbf{ \xi}).\mathbf{\nabla}(\mathbf{\xi}),\quad\phi(\mathbf{x})=\mathcal{F}^{-1}\{\widehat{\phi }(\mathbf{\xi})\}\)\(\triangleright\) Periodic fluctuations 6: solve : \(c(\mathbf{x})=\mathbf{\nabla}c_{\rm mac}\cdot\mathbf{x}+\phi(\mathbf{x})+c(\mathbf{X})/\eta-\langle \nabla c_{\rm mac}\cdot\mathbf{x}\rangle_{\rm pore}-\langle\phi(\mathbf{x})\rangle_{ \rm pore}\), for \(\mathbf{x}\in\Omega_{\rm pore}\)\(\triangleright\) Concentration Pressurization of pores 7: solve : precipitation at each spatial point \(\mathbf{x}\) 8: compute : eigen strains \(\mathbf{\varepsilon}_{\rm eig}(\mathbf{x})\) Development of stresses 9:\(\mathbf{r}(\mathbf{x})=-\nabla\mathbf{\sigma}(\mathbf{\varepsilon}_{\rm mac}),\ \mathbf{ \varepsilon}(\mathbf{x})^{i}=\mathbf{\varepsilon}_{\rm mac}\)\(\triangleright\)\(\mathbf{\sigma}(\mathbf{\varepsilon})=\mathbb{C}:\mathbf{\varepsilon}\) 10:while true do 11: solve : \(\mathcal{F}^{-1}\{\widehat{\mathbb{G}_{s}}(\mathbf{\xi}):\widehat{\mathbf{\sigma}( \delta\mathbf{\varepsilon})}(\mathbf{\xi})\}=\mathcal{F}^{-1}\{\widehat{\mathbb{G}_{s }}(\mathbf{\xi}):\widehat{\mathbf{\sigma}(\mathbf{r})}(\mathbf{\xi})\}\)\(\triangleright\) Linear iterative solver 12: update : \(\mathbf{\varepsilon}(\mathbf{x})^{i+1}=\mathbf{\varepsilon}(\mathbf{x})^{i}+\mathcal{F}^{-1} \{\widehat{\delta\mathbf{\varepsilon}}(\mathbf{\xi})\}\) 13:if\(\|\delta\mathbf{\varepsilon}(\mathbf{x})\|<\) tol then break 14:else 15:\(\mathbf{r}(\mathbf{x})=-\nabla\mathbf{\sigma}(\mathbf{\varepsilon}(\mathbf{x})^{i+1}-\mathbf{ \varepsilon}_{\rm eig}(\mathbf{x}))\) Crack initiation and propagation 16: solve : \(-\dfrac{\mathcal{G}_{c}l_{0}}{2}\mathcal{F}^{-1}\{\mathbf{i}\mathbf{\xi} \cdot\mathbf{i}\mathbf{\xi}\ \widehat{d}(\mathbf{\xi})\}+\dfrac{\mathcal{G}_{c}}{2l_{0}}d(\mathbf{x})+\mathcal{H}d (\mathbf{x})=\mathcal{H}\)\(\triangleright\) Linear iterative solver 17: update: stiffness tensor \(\mathbb{C}\)\(\triangleright\) Material degradation ``` **Algorithm 1** FFT-based algorithm for corrosion-driven fracture problems ### Gibbs ringing artefacts The spectral-based method employs trigonometric basis functions, which are continuous with global support. Therefore, any contrast in local properties leads to Gibbs ringing artefacts (results show high-frequency oscillations or checkerboard patterns around the discontinuities). In Figure 2, we show these artefacts for different corrosion-relevant mechanisms (details of specific simulation are provided in Section 3) when Fourier-based gradient operator \(\mathbf{i}\mathbf{\xi}\) is employed (represented by the blue curve in the sub-figures). 
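A common mitigation, elaborated in the following paragraph, is to replace the continuous Fourier symbol \(\mathbf{i}\mathbf{\xi}\) used above by a discrete-difference symbol with local support; the projection operators are then rebuilt from that symbol. As a preview, a minimal one-dimensional sketch of such symbols (our own illustration using numpy conventions; \(\Delta\) denotes the grid spacing) is:

```python
import numpy as np

def gradient_symbols_1d(n, delta):
    """Continuous and discrete gradient symbols in Fourier space (1D illustration)."""
    xi = 2.0 * np.pi * np.fft.fftfreq(n, d=delta)
    continuous = 1j * xi                                  # i*xi (prone to Gibbs ringing)
    central = 1j * np.sin(xi * delta) / delta             # central-difference symbol
    forward = (np.exp(1j * xi * delta) - 1.0) / delta     # forward-difference symbol
    return continuous, central, forward
```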
Various discrete operators with local support have been proposed in the literature to mitigate Gibbs ringing artefacts. A few such operators are the central difference operator \(\mathbf{i}\xi_{\alpha}=\mathbf{i}\sin(\xi_{\alpha}\Delta)/\Delta\), a higher-order (\(8^{\rm th}\)) central difference operator, and the forward difference operator \(\mathbf{i}\xi_{\alpha}=(\exp(\mathbf{i}\xi_{\alpha}\Delta)-1)/\Delta\). In the expressions described above, \(\Delta\) represents the grid size in real space, and \(\alpha\) represents the spatial direction. We employ these operators to simulate corrosion-related mechanisms and compare their effectiveness in reducing the Gibbs artefacts observed in fluxes, stresses and damage. Overall, the high-frequency oscillations or Gibbs artefacts are considerably suppressed by the aforementioned discrete operators (see Figure 2). However, we observe that the forward-difference gradient operator is the most effective among all the considered operators for mitigating Gibbs ringing artefacts. For all mechanisms considered, it leads to smooth transitions across the discontinuities. Therefore, in the following, we will use the forward-difference gradient operator for simulations of corrosion-driven fracture. Figure 2: **Effect of gradient operators on different corrosion-relevant mechanisms.** Two cross-sections along a microstructure are considered and computed with different gradient operators. The grey area represents the solid, and the white represents the pore. (a) Fluxes, stresses and damage values (from left to right) along a horizontal cross-section (marked by line in inset). (b) Fluxes, stresses and damage values (from left to right) along a vertical cross-section (marked by line in inset). The material contrast between the pores and the solid is \(10^{3}\), _i.e._\(D_{\rm pore}/D_{\rm solid}=10^{3}\) and \(E_{\rm solid}/E_{\rm pore}=10^{3}\). The results shown are for an overall concentration gradient \(\mathbf{\nabla}c_{\text{mac}}=1\) and an overall strain \(\mathbf{\varepsilon}_{\text{mac}}=10^{-2}\). ## 3 Corrosion-driven fracture in cementitious material This section employs the presented numerical framework to simulate corrosion-driven fracture in a cementitious material. We consider an actual 2D scan of a cementitious sample with a porosity of \(\eta=0.36\) obtained using focused ion beam scanning electron microscopy (see Figure 3a). The sample has dimensions of \([0,l]\times[0,l]\), where \(l=1\) mm, and it is discretized into \(199^{2}\) grid points. An odd number of grid points is chosen to maintain the compatibility of the concentration gradient and deformation gradient (see [24; 21] for details). The free diffusion coefficient of ferrous ions within the saturated pores is taken as \(D_{\text{pore}}=1\) mm\({}^{2}\)/s. For a porous micro-structure, the diffusion coefficient within the solid phase should be 0. However, a negligible value of \(D_{\text{solid}}\approx 0\) leads to numerical difficulty in convergence, and any finite value leads to non-physical diffusive fluxes and concentrations within the solid. Therefore, we conducted a sensitivity analysis to determine an optimal value for the free diffusion coefficient in the solid phase (see B). We note that a value of \(D_{\text{solid}}=D_{\text{pore}}/10^{6}\) leads to optimal results. The material parameters for the solid phase are chosen as: elastic modulus \(E=10\) MPa, Poisson ratio \(\nu=0.2\), critical strength \(\sigma_{c}=10\) MPa, and critical fracture energy \(\mathcal{G}_{c}=10\) J/m\({}^{2}\). The regularized length \(l_{0}=0.01\) mm for the phase-field simulation is chosen based on the relation \(\sigma_{c}=\frac{9}{16}\sqrt{E\mathcal{G}_{c}/(6l_{0})}\) [34]. Since pores are saturated with water, we consider the material parameters of water, \(E=1\) MPa and \(\nu=0.49\), for the pore space. To demonstrate the capabilities of our approach, we consider precipitation of Fe(OH)\({}_{2}\) only, but there are no limitations to the type of precipitation. At a constant pH of 8, as soon as the local concentration of ferrous ions exceeds the solubility limit of \(10^{-3}\) mol/L, chemical reactions lead to the precipitation of Fe(OH)\({}_{2}\) [28; 35]. Thus, the concentration of precipitated Fe(OH)\({}_{2}\) is computed by subtracting the solubility limit \(\mathcal{S}\)(pH) = \(10^{-3}\) mol/L from the total concentration of ferrous ions, _i.e._\(c(\mathbf{x})-\mathcal{S}\)(pH). Meanwhile, the concentration of ferrous ions saturates to \(\mathcal{S}\)(pH). The material parameters considered for Fe(OH)\({}_{2}\) are: density \(\rho=3.4\) g/cm\({}^{3}\) and molar volume \(\mathcal{M}=26\) cm\({}^{3}\)/mol. ### Different corrosion-mechanisms within a micro-structure We simulate the physical process within the microstructure under static conditions, _i.e._ at a certain instance in time. We assume that the sample is subjected to an overall concentration gradient of \(\mathbf{\nabla}c_{\mathrm{mac}}=10^{-6}\) mol/mm and an overall strain of \(\mathbf{\varepsilon}_{\mathrm{mac}}=\mathbf{0}\) at a constant pH of 8. Furthermore, we arbitrarily chose a macro-concentration \(\langle c\rangle\) value (superimposed on the gradient) to compute the micro-concentrations within the pore space (see Equation (12)). The values of \(\mathbf{\nabla}c_{\mathrm{mac}}\) and \(\langle c\rangle\) are chosen to ensure that the local concentration of ferrous ions \(c(\mathbf{x})\) reaches the solubility limit \(\mathcal{S}\)(pH) = \(10^{-3}\) mol/L (see Figure 3b) and Fe(OH)\({}_{2}\) precipitates within the pores (see Figure 3c), which leads to eigenstrains induced by the volume expansion of Fe(OH)\({}_{2}\) (see Figure 3d). Due to the fractal nature of the microstructure, we observe a concentration of stresses along the pore boundaries (see Figure 3e and Figure 3f). We observe the initiation of several micro-cracks along the pore boundaries (see Figure 3i), which result from the stress concentration along the pore interface. We note that this 2D simulation with 199\(\times\)199 grid points, performed on a single-core machine, takes approximately 5 seconds for the diffusion problem, 15 seconds for the elasticity problem, and 9 seconds for solving the phase-field equation, totalling 29 seconds. The majority of the simulation time (\(\approx 90\%\)) is spent in constructing the two projection operators \(\mathbf{G}\) and \(\mathbb{G}_{s}\). ### Evolution of micro-cracks within a micro-structure Next, we apply the proposed methodology to study how stresses and, subsequently, the micro-cracks develop within a micro-structure over time. To simulate the temporal evolution of the corrosion-driven mechanisms, we employ a multi-scale setting (see Figure 4). To this end, we consider a bar composed of cementitious material whose lower surface is subjected to a constant flux of ferrous ions (see Figure 4-left). The lower surface is representative of a Steel-Concrete Interface (SCI) where steel corrosion happens in cementitious materials. The fluxes are zero at other boundaries.
Each material point of the bar, represented as \(\mathbf{X}\), is coupled to a representative volume element (RVE), in this case, the fractal porous media considered previously. Here, we use the same microstructure for every point, but there are no technical limitations to generate different microstructures for each point. The diffusion coefficients (\(D_{\mathrm{pore}},D_{\mathrm{solid}}\)) for an RVE are considered as described previously. The diffusion process is simulated using a multi-scale approach Figure 3: **Corrosion-driven mechanisms in fractal pore space.** (a) The microstructure from a scan of a cementitious sample. (b) The concentration of ferrous ions (computed from Equation (12)) within the pore space assuming an average concentration of \(\langle c\rangle=2\times 10^{-5}\) mol/mm\({}^{3}\). Ferrous ions concentration is normalized by macro-concentration \(\langle c\rangle\). (c) The concentration of precipitate Fe(OH)\({}_{2}\) (\(c_{\rm{ppt}}\)) at a constant pH of 8 and solubility limit of \(10^{-9}\) mol/mm\({}^{3}\). Concentrations are normalized by \(\langle c\rangle\). (d) Eigenstrains \(\varepsilon_{eig}\) developed due to the expansion of Fe(OH)\({}_{2}\) within the pore space, computed from Equation (14). (e,f) Stress components \(\sigma_{xx}\) and \(\sigma_{yy}\), respectively, computed from Equation (15) and Equation (17). (g,h) The positive and negative components of the elastic energy, respectively. contributing to the crack initiation and propagation is computed from Equation (19). (i) Crack initiation and propagation within solid phase represented by phase-field variable \(d\). where macroscopic concentration gradients are applied to the RVEs, in which the effective diffusive flux _i.e._\(\langle\mathbf{j}(\mathbf{x})\rangle\) is computed and then upscaled to the macro scale, where it is used to compute the macroscopic concentration of ferrous ions in the bar. The multi-scale approach is only used for the diffusion process; all other processes are simulated only locally within the RVEs. At the macro-scale, we solve Equation (1) using a finite difference scheme. For a constant pH system, as considered in this study, the precipitation occurs as soon as the ferrous ion concentration reaches the solubility limit \(\mathcal{S}(\mathrm{pH})\). Finally, the stresses within the solid phase are computed assuming a zero overall strain _i.e._\(\mathbf{\varepsilon}_{\mathrm{mac}}=0\) at each time step. The material parameters (\(E,\nu,\sigma_{c},\mathcal{G}_{c}\), \(l_{0}\)) are considered as described previously. We analyze the results for the RVE located at a distance of 5 mm from the steel-concrete interface (see Figure 4). Unlike the corrosion products, the stresses within the solid phase of the RVE form after a delay (see Figure 5a and Figure 5b). The stresses develop after 1.5 \(\times 10^{-5}\) mol/mm\({}^{3}\) of corrosion products have precipitated. The delay is caused because locally, a precipitate \(V_{\mathrm{ppt}}(\mathbf{x})\) must expand to the associated cell volume \(V(\mathbf{x})\) to exert pressure (see Equation (13)). Therefore, a precipitate must reach a threshold concentration locally to exert pressure. The initial drop in the average stresses is due to micro-crack initiation (see Figure 5d-e) and the relaxation caused due to it. The initiation and propagation of micro-cracks also affect other mechanisms. 
For example, internal cracks increase the porosity of a solid phase (see Figure 5c) and thus facilitate the diffusion of chemical species further into the solid phase. To this end, we use the damage variable from the phase field to estimate the increase in the porosity of a fractured region, \(d=0\) being non-porous and \(d=1\) being completely porous (see Figure 5f). The total Figure 4: **Multiscale approach for diffusion process:** Schematic of employed multiscale approach for Section 3.2. (left) Each material point \(\mathbf{X}\) at the macroscale is coupled to a representative volume. The lower side of the bar is subjected to a constant flux of ferrous ions. The macroscale concentration gradients are passed to the coupled RVE to compute the diffusive flux (from Equation (7)). The local diffusive flux \(\mathbf{j}(\mathbf{x})\) is then averaged over an RVE and is passed to the macroscale to perform the diffusion at the macroscale. The macroscale concentration \(c(\mathbf{X})\) thus computed is passed to the RVE to compute local concentrations of precipitates within the pores. (middle) Approximated pore space generated using the pore-network algorithm that does not preserve porosity. A few pore spaces, shown in dark grey, are isolated from the boundaries (including the top boundary) and thus do not contribute to the diffusion process. (right) Pore space is generated from the modified pore-network algorithm that preserves total porosity. porosity of the RVE increases as the micro-cracks initiate and propagate within the solid phase of the RVE (shown by the blue curve in Figure 5c). Although in this study, we do not consider the effect of porosity change on other mechanisms (for example, diffusion), the micro-structure changes must be coupled to the other physical mechanism. Since the quantities, such as the concentration of ferrous ions and damage values per grid point, are stored in a standard vector or matrix format (no special data structures are required), the coupling of the process through data exchange is straightforward. Furthermore, a regularized description of fracture also allows for coupling the fracture process with the ingress of chemical species due to increased porosity, which makes the study of the interplay among mechanisms feasible [36]. Although the phase-field formulation employed in the numerical framework is robust in handling crack initiation and propagation and subsequently, coupling to other physical processes, it also leads to some non-physical observations. For example, a closer look at Figure 2 and Figure 6 shows that cracks also develop within the pores. This is mainly because of the diffusive representation of a sharp crack over a length \(2l_{0}\). When a crack originates within the solid phase along the interface, due to its finite width of \(2l_{0}\), a part of it also develops within pores. However, such cracks within the pores never further develop or lead to crack propagation within the pore phase (as seen in Figure 6). Therefore, the development of cracks within the pores is a numerical artefact of the phase-field formulation employed, which can be reduced by taking a smaller value of the length scale parameter \(l_{0}\). Finally, we note that the above simulation took approximately 4 hours when performed on a single-core machine. ## 4 Comparison with a Euclidean pore space The main advantage of an FFT-based methodology is that it preserves the actual microstructure. 
As previously mentioned, conventional numerical frameworks approximate the fractal pore space to reduce computational complexities. This considerably approximates the various physical processes that lead to corrosion-driven fracture. To demonstrate the effect of pore space approximation on corrosion-driven mechanisms, we simplify the 2D fractal pore space considered in Section 3 and repeat the multi-scale simulation with the same conditions as the actual pore space. We use the pore-network approach [37], employed primarily for studying flow or diffusion in porous media, to simplify the complex pore space using spheres and cylinders (see Figure 4). The approximation of the pore space results in lower total porosity (\(\eta\)=0.29) compared to the fractal pore space (\(\eta=0.36\)), and also the isolation of certain pore space from the boundaries (see dark gray in Figure 4-middle). Since only open pores contribute to the diffusion process, we set the diffusion coefficient of isolated pores to that of the solid. The effect of reduced pore space can be seen in the diffusion of ferrous ions along the bar and the precipitation of corrosion within the pores (see Figure 7a and Figure 7b). As expected, an approximated pore space, with reduced porosity, allows less diffusion of ferrous ions compared to the fractal pore space and consequently has a slower precipitation rate than the actual pore space (see Figure 7b for the RVE located at 5 mm from SCI). For an RVE located further away from the SCI, the concentration of diffused ferrous never reaches the solubility limit. Hence, precipitation never occurs (see Figure 7c for the RVE located at Figure 5: **Evolution of damage, average stress and total porosity.** Different corrosion-driven mechanisms are shown for the RVE located at 5 mm from the steel-concrete interface. (a) Evolution of average concentration of corrosion product over time. (b) Development of average stresses \(\langle\sigma\rangle_{xx}\) (red) and \(\langle\sigma\rangle_{yy}\) (green). (c) Increase in total porosity of the RVE due to internal cracking. (d-e) Evolution of internal cracks represented by the damage \(d\) within the solid phase of the RVE at two different time instances indicated by grey lines in (b). (f) State of porosity within the solid phase of the RVE at a time instance indicated by grey line in (c). Figure 6: **Numerical artefacts of phase-field formulation.** Damage variable is shown over the entire domain of the microstructure, _i.e._ in the solid and pores, for the fractal pore structure at times corresponding to (a) Figure 3d and (b) Figure 3e. 15 mm). Compared to a fractal pore space, the stresses develop much later in an approximated pore space due to the slow precipitate rate (see Figure 7d). The delay is significant as we go further away from the SCI (see Figure 7e where stresses have not developed for an approximated pore space). Furthermore, the maximum value of the average stress \(\bar{\sigma}_{xx}\) in the solid before the crack initiation is also different. The maximum stress increases by a factor two from the fractal pore space to the approximated pore space. Additionally, the way the micro-cracks initiate and propagate within the solid phase is entirely different in the two cases (see Figure 7g-h). In the previous example, we employed the pore-network algorithm, which inscribes spheres and cylinders within a pore space for approximation. 
As a result, the total porosity is not preserved, and the differences observed in the previous section are mainly due to the variation in porosity. To analyze the influence of pore shape, we again approximate the fractal pore space but preserve the porosity. To this end, we increase the radii of the inscribed spheres until the approximated pore space has the same porosity as that of fractal space _i.e._\(\eta=0.36\). Figure 4(right) shows the approximated pore space with the same porosity. Preserving the porosity leads to a similar concentration profile of diffused ions along the bar. Consequently, a similar precipitation rate within the RVEs located at \(Y=5\) mm and 15 mm (see Figure 7b and Figure 7c). Although the maximum stress value (\(\langle\sigma\rangle_{xx}\approx\)-200 MPa for RVEs at 5 mm and 15 mm) and the start of failure (\(t\approx 2000\) sec for RVE at 5 mm and \(t\approx 4000\) sec for RVE at 15 mm) are approximately the same, the evolution of stresses post-cracking is significantly different. The post-cracking stresses in approximated pore space (\(\eta=0.36\)) are much higher compared to actual pore space (see Figure 7d and Figure 7e). The undamaged solid phases (indicated by blue colour in Figure 7g-i), which are under compression as precipitates grow within the pores, contribute to the post-cracking stresses. The variation in micro-cracks propagation within the two spaces (see Figure 7g-i) significantly affects how the undamaged spaces are created and how the stresses develop locally within such undamaged areas. Given that the same percentage of solid phase has been damaged (\(\approx 30\%\) see Figure 7f) in the two spaces, the different post-cracking stresses highlight the necessity for accurate resolution of local stresses and micro-cracks. The effect of pore shape approximation is much more distinct for pore spaces with low porosity. Figure 8 compares mechanisms for a pore space with a porosity of \(\eta=0.17\). Unlike for high porous spaces, preserving the total porosity does not result in the same diffusion profile, possibly due to the isolation of pores from the boundaries. As expected, the differences in the diffusion of ions lead to different precipitation rates and the development of stresses within RVEs. Especially for the RVE located far from the steel-concrete interface at 15 mm (see Figure 8c). In the previous two sections, we employ the proposed methodology to show the effect that an approximation of the pore space and shape has on corrosion-driven mechanisms. A detailed study that characterizes the exact differences in various mechanisms that arise due to the approximation of pore space and shape is beyond the scope of this paper and will be considered for future work. Figure 7: **Comparison with Euclidean pore space.** (a) Ferrous ions concentration profile along the length of the bar after 3260 sec. The colours indicated the three different RVEs considered: actual pore space (blue), approximated pore space generated from the pore-network algorithm (red) and approximate pore space with the same porosity as actual pore space (black). (b-c) Amount of corrosion product precipitated within RVEs. The two RVEs considered are 5 mm and 15 mm from the steel-concrete interface. The location of the two RVEs is indicated with grey dotted lines in (a). The value shown for the concentration is averaged over an RVE. (d-e) Development of stresses \(\langle\sigma\rangle_{xx}\) at the same two RVEs. The stresses value shown are averages of stresses within the solid. 
(f) Percentage of damaged solids within the RVE located at 5 mm. A material point within an RVE is considered completely damaged once the damage variable value exceeds 0.9. (g-i) State of micro-cracks within the three types of RVE considered. All the states shown are at time = 3260 sec. Figure 8: **Comparison with Euclidean pore space at low porosity.** (a) Ferrous ions concentration profile along the length of the bar after 3260 sec. The colours indicated the three different types of RVE considered: actual pore space (blue), approximated pore space generated from the pore-network algorithm (red) and approximate pores space with the same porosity as actual pore space (black). (b-c) Evolution of stresses within the solids of an RVE. The results in (b) and (c) are for RVEs located 5 mm and 15 mm from the steel-concrete interface. The stresses shown are \(\langle\sigma\rangle_{xx}\), averaged within the solid of an RVE. The dotted lines in the figures indicate the precipitate’s concentration and evolution over time within an RVE. The values shown are average values. ## 5 Discussion Although cementitious materials have been around for centuries, little to no attention has been given to the influence of materials' micro-structure on the corrosion-driven propagation phase of failure. The focus has been mainly on macroscopic properties, such as overall porosity or electrical resistivity. As we showed in our analysis, having the same porosity but different microstructure (Figure 3 and Figure 8) leads to different stress evolution in the microstructure. This also directly affects how crack initiate and propagate at the micrometre scale, which in a later stage will impact other corrosion-driven mechanisms. Another measure often employed to include micro-scale features in predicting corrosion-related mechanisms is the pore-size distribution obtained experimentally. However, measurements obtained from common techniques assumes that pores are either cylindrical or spherical. As we showed, approximating a complex micro-structure with smoother features can lead to a different behaviour of corrosion-driven mechanisms. Furthermore, measures such as overall porosity or the pore size distribution lack information regarding the geometry and spatial arrangement of pores or, conversely, solid phases, which is critical for understanding local stress state and crack initiation (see Figure 5). Therefore, numerical analyses employing these measures provide limited insight into the underlying corrosion-driven mechanisms because they lack a fundamental consideration of the effect of microstructure. However, we consider such fundamental consideration important, especially in the context of the significant endeavors towards fabricating novel cementitious materials by chemically altering micro-structure [38]. The presently proposed framework will thus contribute to a better understanding of the microstructure-related features critical to corrosion-driven damage. Principally, determining the desired values of such critical features/measures could increase the serviceability lifetime of concrete structures in corrosive environments. We believe the proposed framework will thus help tailor or design sustainable cementitious materials. To identify the features of a microstructure that are critical to corrosion-driven damage, a detailed statistical analysis of three-dimensional RVEs is necessary. 
This remains difficult as it requires a complete analysis where both the diffusion and the stresses are solved in a multiscale setting. Moreover, since the features of a microstructure will evolve (due to the deposition of precipitates and the formation of crack channels as shown in Figure 3), both processes must be coupled. Even though the FFT framework, as proposed here, is computationally efficient (as can be seen from the computational cost reported in Section 3.1 and Section 3.2), its efficiency suffers when the property contrast between phases is significantly high. This calls for solvers or algorithms that are more computationally efficient than the ones employed here and capable of handling large differences in phase properties. The optimization of the algorithm with respect to computational efficiency is beyond the scope of this paper and will be considered for future work. Lastly, the proposed numerical framework could be improved by validating it with experimental data, although specifically designed experiments are needed to provide the required validation data. In the present study, we assume certain conditions or parameters largely due to a lack of experimental values. For example, the corrosion rate or diffusive flux value employed in Section 3.2 is assumed to be constant along the entire steel-concrete interface. However, under conditions representative of engineering structures, the corrosion rate is variable as it depends on time-variable exposure conditions, and ultimately the electrochemically active steel surface, _i.e._ the steel surface in contact with water. Recent studies [39] estimate this variability, which could be incorporated into the numerical framework for better predicting corrosion-driven mechanisms. Similarly, in the present framework, we assumed two distinct values of \(\beta\) in Equation (11) (_i.e._\(\beta_{\rm pore}\) and \(\beta_{\rm solid}\)) to compute local concentrations within the pores. This numerical trick ensures that the local concentration values satisfy certain macro-scale conditions. However, the physical significance of these \(\beta\) parameters remains unclear and should be verified by comparison with experimental observations. Furthermore, we also assume that the entire pore space of an RVE is saturated with water. A better estimation of the condensation of micro-pores would help to further improve the proposed numerical framework. We believe these assumptions or limitations of the proposed numerical framework can be surmounted by actively verifying the model against experimental results and by designing experiments to elucidate details of the microstructure of cementitious materials. ## 6 Conclusion This paper presents a numerical framework for fracture within fractal porous media where the interplay among different physical mechanisms drives the fracture. The proposed FFT-based spectral-integral methodology is computationally efficient and easy to implement for fractal pore spaces, and its extension to a multi-scale approach is straightforward. We highlight the capability and the robustness of the spectral-integral method to resolve different physical problems, _e.g._, diffusion and mechanics, within complex micro-structures, such as fractal pore spaces in cementitious materials. We show the significance of the actual pore space by drawing a comparison with an approximated pore space, as usually done in the literature. Our comparisons show that the approximation of the pore space may severely underestimate various physical mechanisms (e.g.
reactive transport) and fracture initiation and propagation. The present methodology thus opens a path for a better understanding of the effect of complex pore spaces, an aspect often neglected or crudely approximated, on corrosion-driven mechanisms in cementitious materials. Although this paper mainly focuses on corrosion-driven fracture in concrete, the presented methodology is general. It can be easily extended to other multi-physics-driven fractures in complex porous media. ## 7 Acknowledgements The authors would like to thank Thilo Schmid (Durability of Engineering Materials, ETH Zurich) for providing the FIB-SEM scan of the cementitious samples used in this study. ## 8 Data Availability The code [40] for the simulation is written in Python and is an extension of the code provided in [20]. The simulation data generated in this study have been deposited in the ETH Research Collection database under accession code ethz-b-000593923. ## Appendix A Projection operator The main objective of the projection operator (\(\mathbf{G}\) or \(\mathbb{G}\)) in the FFT Galerkin approach is to project an arbitrary tensor into a compatible one. For a diffusion problem, the compatible tensor is the concentration gradient (\(1^{\text{st}}\)-order) and for the elasticity problem, the compatible tensor is the strain tensor (\(2^{\text{nd}}\)-order). The convolution operations \(\mathbf{G}*\mathbf{\nabla}c\) and \(\mathbb{G}*\mathbf{\varepsilon}\) in Fourier space can be expressed as: \[\widehat{A_{i}}(\mathbf{\xi})=\widehat{g_{ij}}(\mathbf{\xi})\widehat{\nabla c_{j}}(\mathbf{\xi}), \tag{A.1a}\] \[\widehat{A_{ij}}(\mathbf{\xi})=\delta_{im}\widehat{g_{jl}}(\mathbf{\xi})\widehat{\varepsilon_{ml}}(\mathbf{\xi}) \tag{A.1b}\] where \(\widehat{A_{i}}(\mathbf{\xi})\) and \(\widehat{A_{ij}}(\mathbf{\xi})\) are the respective compatible tensors in Fourier space. For a diffusion problem, the concentration gradient \(\mathbf{\nabla}c(\mathbf{x})\) is simply the gradient of a scalar field, _i.e._ the concentration of diffusive species \(c(\mathbf{x})\). Similarly, for an elasticity problem, each row of the strain tensor is the gradient of a component of the displacement field \(\mathbf{u}(\mathbf{x})\). An arbitrary vector field \(\mathbf{f}(\mathbf{x})\) is the gradient of a scalar field \(g(\mathbf{x})\) only if its curl vanishes, _i.e._\(\nabla\times\mathbf{f}(\mathbf{x})=0\). Furthermore, \(\mathbf{f}(\mathbf{x})\) must be periodic. These two conditions thus form the compatibility conditions. Since the periodicity of \(\mathbf{f}(\mathbf{x})\) is inherently satisfied by the Fourier transform, the projection operator \(\widehat{g_{ij}}(\mathbf{\xi})\) is formulated such that the curl of \(\mathbf{f}(\mathbf{x})\) vanishes. Leute et al. [21] mathematically prove that \(\widehat{g_{ij}}(\mathbf{\xi})=\mathbf{i}\xi_{i}\cdot(\mathbf{i}\mathbf{\xi})_{j}^{-1}\) projects an arbitrary field into a compatible one in a least-squares sense, _i.e._ the residual vector \(\mathcal{R}=\mathbf{\nabla}g(\mathbf{x})-\mathbf{f}(\mathbf{x})\) is minimized. In the expression for the projection operator, \((\mathbf{i}\mathbf{\xi})^{-1}\) is the inverse of the Fourier representation of the gradient, given as \[(\mathbf{i}\mathbf{\xi})^{-1}=\frac{\mathbf{i}\mathbf{\xi}^{\star}}{\mathbf{i}\mathbf{\xi}\cdot\mathbf{i}\mathbf{\xi}^{\star}}\, \tag{A.2}\] where \(\mathbf{i}\mathbf{\xi}^{\star}\) is the conjugate of \(\mathbf{i}\mathbf{\xi}\).
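As an illustration, the operator \(\widehat{g_{ij}}(\mathbf{\xi})\) can be assembled on a discrete frequency grid in a few lines. The sketch below is our own (numpy, 2D grid); `k` stands for the chosen gradient symbol, either the continuous \(\mathbf{i}\mathbf{\xi}\) or one of the discrete-difference variants discussed in the Gibbs-ringing section, and the zero-frequency mode is treated as noted next.

```python
import numpy as np

def projection_operator(k):
    """Assemble Ghat_ij(xi) = k_i * (k^-1)_j with (k)^-1 = conj(k) / (k . conj(k)).

    k : gradient symbol (continuous i*xi or a discrete-difference variant),
        shape (dim, n, n). Returns Ghat with shape (dim, dim, n, n).
    """
    denom = np.einsum("i...,i...->...", k, np.conj(k)).real   # i*xi . i*xi^*
    safe = np.where(denom > 0.0, denom, 1.0)                  # guard the xi = 0 mode
    k_inv = np.conj(k) / safe                                 # Equation (A.2)
    G = np.einsum("i...,j...->ij...", k, k_inv)               # ghat_ij = (i*xi)_i (i*xi)^{-1}_j
    G[..., denom == 0.0] = 0.0                                # zero-frequency mode excluded
    return G
```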
Equation (A.2) applies only when \(\mathbf{i}\mathbf{\xi}\cdot\mathbf{i}\mathbf{\xi}^{\star}\neq 0\); for the zero-frequency mode the inverse is set to 0. ## Appendix B Sensitivity analysis
2310.07242
Textiverse: A Scalable Visual Analytics System for Exploring Geotagged and Timestamped Text Corpora
We propose Textiverse, a big data approach for mining geotagged timestamped textual data on a map, such as for Twitter feeds, crime reports, or restaurant reviews. We use a scalable data management pipeline that extracts keyphrases from online databases in parallel. We speed up this time-consuming step so that it outpaces the content creation rate of popular social media. The result is presented in a web-based interface that integrates with Google Maps to visualize textual content of massive scale. The visual design is based on aggregating spatial regions into discrete sites and rendering each such site as a circular tag cloud. To demonstrate the intended use of our technique, we first show how it can be used to characterize the U.S.\ National Science Foundation funding status based on all 489,151 awards. We then apply the same technique on visually representing a more spatially scattered and linguistically informal dataset: 1.2 million Twitter posts about the Android mobile operating system.
Caroline Berger, Hanjun Xian, Krishna Madhavan, Niklas Elmqvist
2023-10-11T07:16:05Z
http://arxiv.org/abs/2310.07242v1
# Textiverse: A Scalable Visual Analytics System for Exploring Geotagged and Timestamped Text Corpora ###### Abstract We propose Textiverse, a big data approach for mining geotagged timestamped textual data on a map, such as for Twitter feeds, crime reports, or restaurant reviews. We use a scalable data management pipeline that extracts keyphrases from online databases in parallel. We speed up this time-consuming step so that it outpaces the content creation rate of popular social media. The result is presented in a web-based interface that integrates with Google Maps to visualize textual content of massive scale. The visual design is based on aggregating spatial regions into discrete sites and rendering each such site as a circular tag cloud. To demonstrate the intended use of our technique, we first show how it can be used to characterize the U.S. National Science Foundation funding status based on all 489,151 awards. We then apply the same technique on visually representing a more spatially scattered and linguistically informal dataset: 1.2 million Twitter posts about the Android mobile operating system. **Keywords:** Text analytics, geospatial analytics, performance, large-scale, text visualization. ## 1 Introduction Real-world datasets are often both complex and massive. Representing such complex and large-scale data presents unique challenges in data mining and visual analytics. **Complexity** means that different data types each have their own ideal visual representations. [39] Techniques that are appropriate for visualizing quantitative, textual, or relational data are often not applicable in the geospatial context. **Massive scale** gives rise to challenges such as data reduction, parallel computing, high-resolution displays, and user interfaces [37]. Scientific visualization researchers have long treated scale as a core research problem, and have accordingly proposed a variety of architectures to enable high-speed data streaming, [3] adaptive compression and indexing, [24, 42] and parallel I/O. [33] However, visual analytics and information visualization research has lagged behind, particularly in the context of increasingly popular web-based interfaces, where visually delivering a large volume of information is further limited by browser capability and network bandwidth. Also, a large dataset often implies frequent updates and rapid growth in data, which increases the difficulty and cost of preprocessing the data. In this paper, we propose a novel visual analytics system called Textiverse for mining and visualizing large-scale, geotagged, and timestamped textual data. Our implementation has a web-based visual interface and allows for exploring large multidimensional and multimodal datasets. The intended use of Textiverse is to explore enormous collections of text entries, each of which has a geospatial reference, a timestamp, and a numerical value. To factor in these four attributes, we first plot markers of different sizes on a map to denote regional values. Each marker can be turned into a circular dynamic tag cloud with the most significant tags in the center. The temporal trends for geospatial sites are represented as an animation of marker resizing. Tag clouds can also be animated to show how some tags become more or less significant from one time period to another. To view the temporal trends for a particular tag, we adopt a design similar to SparkClouds [36] and overlay a sparkline on each tag.
To address the scalability challenge for the web interface implementation, we extract keyphrases in parallel based on the full text and then aggregate them to reduce query execution time. We use keyphrase extraction rather than traditional word-frequency solutions because phrases in general are more descriptive than single words. However, keyphrasing algorithms usually consume significantly more computational resources than dealing with single words. Also, we propose a refined tf-idf solution to tackle the frequently updating text. Circular markers are loaded by default for browsing with a large number of nodes on the map at a time, whereas the interactive SVG versions provide interactive detail and are presented upon user request. We describe in detail how we achieve this functionality given performance bottlenecks for processing large bodies of text. To demonstrate the typical use scenarios of our technique, we describe how it can be used to characterize the U.S. National Science Foundation (NSF) funding status based on all 489,151 awards over the years 1976 to 2012. Each award has associated information such as the proposal summary, funding amount, awarded institutions, and awarded date, that correspond to the four dimensions in our design. Our second application is used to visualize 1.3 million Twitter feeds about Android and iOS. Finally, we discuss limitations of our technique and the potential of using it in other applications. ## 2 Related Work Our work involves multidimensional, textual, and geospatial data, as well as large-scale data analytics. Below we review relevant work in these fields. ### Multidimensional Visualization _Multidimensional data_ is often used interchangeably with _multivariate_ data in visual analytics research. However, when these two terms were first proposed, multidimensional data referred to data that had multiple independent parameters and their relationships, whereas multivariate focused on dependent variables [7]. In Wong's review [57]of 30 years' multidimensional multivariate visualization (mdmv), he argued that such strict definitions had been discarded and both terms shifted towards a broader definition to study multiple variables regardless of their inter-dependency. The present study follows the modern and broad definition of these terms. Among multidimensional visualizations, a line of research focuses on multiple variables but concerns only one or two data types. Parallel coordinates [28] and its extensions [62, 60, 22] position variables as parallel axes and each data point is represented as a polyline that connects the corresponding points on each axis. As a classic and effective multivariate visualization, parallel coordinate plots are widely used to visualize data sets that have multiple quantitative values. Scatterplot matrices (SPLOMs) [20] also aim to visualize multiple quantitative variables, but the approach is different. The technique produces a collection of scatterplots between any two variables and organizes them in a matrix, where the diagram in a cell corresponds to the pairwise correlation between the row variable and the column variable. Wong and Bergeron [58] used the metric scaling technique to identify the inherent dissimilarities between quantitative attributes and accordingly reduced the data to low dimensionality. Dust & Magnet [59] uses a metaphor where each attribute is a magnet and each data point is a speck of iron dust that can be attracted or repelled by magnets. 
All these efforts try to reduce multiple data dimensions to fit 2D displays. However, they are all specifically designed for presenting quantitative variables and are therefore not capable of handling multiple data types. Attempts have been made to visualize a data model that involves several disparate data types. Chernoff [12] displayed cartoon-like faces on a map and used facial characteristics to represent a composite of nominal and quantitative values for demographics in given geospatial regions. Weber et al. [54] visualized nominal and quantitative time-series data by developing a spiral visualization that plotted the time line as a spiral curve. Different quantitative values, such as sunshine intensity, are mapped to the corresponding points on the spiral rendered in different colors, textures, line widths, or icons. World Explorer [2] helps users explore geo-referenced photos on Flickr with the map labelled with weighted tags associated with the photos. GeoTime [32] adds temporal information as the Z index in a 3D environment to visually track events in an interactive view. These studies integrate variables of different data types within a single view and generally offer more sophisticated interactions than single-type multidimensional visualizations. Figure 1: **Textiverse for U.S. research awards. Our Textiverse visual analytics application being used to explore 489,151 U.S. National Science Foundation awards from 1976 to 2012. Highlighted is a snapshot of Woods Hole, MA in 2009.** When even more data types are needed in visualizations, existing approaches tend to rely on analysis and exploration to create multiple low-dimensional views that address different aspects of the data. Polaris [45] (which later became the commercial tool Tableau) is a highly customizable visual interface for exploring large multidimensional data sets. It offers table-based queries and a diversity of visualizations for identifying patterns and trends between two variables. Software libraries such as prefuse, [26] D3, [9] and the InfoVis Toolkit [21] require coding, but give users more freedom to explore multivariate data and provide many different kinds of visualizations to suit various data models. However, the multiple views may cause loss of relationships between different contexts. In this work, we aim to reconcile the need for multiple data types with the desire to maintain context between views. ### Text Visualization Starting with the ubiquitous word clouds (or tag clouds) made popular by Flickr in 2001, [49] text visualization is now widespread on the web. The basic notion of text visualization is to summarize, highlight, and reduce potentially large bodies of text into compact visual representations, and has been a focus in information visualization research since its inception. [43, 56] We identify three different types of text visualization: frequency-based, relational, and composite, and review them below. _Frequency-based techniques_ focus on summarizing text primarily based on the frequency of words in a corpus. Tag clouds [49] are the canonical example, but several variations exist, such as Wordle, [50] ManiWordle, [35] and EdWordle. [52] _Relational text visualization_ shows not just the content but also the context and relations of words and phrases.
Several examples exist: WordTree [53] is a visual and interactive concordance for phrases, DocuBurst [13] shows a document in the context of a word ontology, and Parallel Tag Clouds (PTCs) [14] visualize the relations between words and concepts in a document collection. Finally, _composite text visualization_ often combines pure textual representations with other visualizations to convey additional data. Examples include SparkClouds, [36] which overlays a temporal trendline on the keywords in a tag cloud, TIARA, [55] ThemeRiver, [25] TextFlow, [16] context-preserving dynamic word clouds, [15] which integrate text with time-series charts, and WordBridge, [34] which replaces the nodes and links in a graph visualization with node clouds and edge clouds that show the content of the relation. The present study shares commonalities with composite text visualization, particularly with those combined with temporal data. However, instead of focusing on generating topics and providing an additional graph, we present an animated tag cloud to represent changes in tag significance over time. Also, none of the above studies has attempted extremely large datasets, which as mentioned earlier require new solutions in data preprocessing and visual analytics.

### Thematic Map Visualization

Thematic maps refer to maps that use only coastlines, boundaries, and places as points of reference for the phenomenon being mapped. [46] Visualization researchers have proposed a variety of techniques for representing a thematic map. One solution is choropleth mapping, [17] which aggregates data and renders regions based on pre-defined boundaries such as country and state. Similar to a choropleth map but without pre-defined boundaries, a dasymetric map [19] determines the spatial divide based on the underlying data's statistical distribution. A contour map [48, 5] links data points of the same value with continuous smooth curves and is often used to describe 3D surfaces. These visualizations have been applied to demonstrate the geospatial distribution of numerical data, but have rarely been applied to multidimensional and textual data. The proportional symbol technique has the potential to overlay more dimensions on a map. It communicates information associated with a location via the symbol content, shape, color, and size. Besides the capability of handling 1D numerical data [30] like the aforementioned techniques, it can represent composite datasets. [12, 10] For example, GeoVISER supports exploration of thematic attributes and geospatial data. [61] The present study aims to propose a visual representation for multidimensional data and therefore, we choose the proportional symbol map as the geovisualization technique.

### Large-Scale Visual Analytics

Big data is the next frontier in computing. Managing big data is one of the grand challenges of virtually any data-intensive computing discipline, visualization and visual analytics included. Accordingly, much work is focused on this topic. There are three major solutions to processing large-scale data. The first approach is to reduce data volume without compromising data precision. Pajarola [41] and Shen [42] both used a hierarchical tree structure to encode and index scientific data at different granularity levels. In information visualization, Guoqing et al., [24] Wong et al., [57] and Michaels et al. [40] attempted to cluster variables according to their correlations and therefore reduced unnecessary attribute coupling.
In contrast to variable clustering, Fua et al. [23] and Abello et al. [1] focused on clustering data points that share similar characteristics among all the attributes. Our study subsets a large dataset according to the zoom level and adaptively enables/disables interactions. The second way is to build the visual analytics system using parallel computing techniques. This technique is widely used in scientific visualization for rendering 3D objects, [4, 8] but is less common in visual analytics. In the present work, we attempt to deploy algorithms on a distributed computing environment for data transformation and visualization production. The third and last solution is to cache and stream data on demand. Ahrens et al. [3] discussed requirements for streaming scientific data: separable, mappable, and result invariant. In the present work, we distribute individual visualization files to multiple servers. These files contain all the information for a geographical region and are loaded only upon user request. ### Visual Analytics for Text Visual analytics systems provide a composite set of visual tools and weaves them together effectively to allow exploration of large-scale multivariate datasets. Java et al. [29] and MacEachren et al. [38] both propose temporal and geospatial visualizations based on Twitter data, but neither of them focuses on the textual content. SentenTree visualizes frequent sequential text patterns and supports detail exploration of tweets through interaction [27]. Visual Backchannel [18], Spatiotemporal Social Media Analytics [11], and I-SI [51] are visual analytic systems for analyzing social media data by topics, friendship, temporal trends, and geospatial distribution and can be used for event detection, topic exploration. They incorporate extraction of topics, high-performance computing, and visual interface interactions into a system. The present study shares similar motivations of analyzing large-scale multidimensional data and adopts a similar approach of using geospatial visualization and text visualization techniques. However, we propose a scalable solution for processing large-scale text corpora with a detailed performance evaluation for the preprocessing step. Also, we present the evolution of topics using animated tag clouds. ## 3 Textiverse: Large-Scale Spatial Text Analytics Textiverse is a novel text visual analytics system to mine and visualize large-scale geotagged and timestamped textual data on a map. The approach is based on incrementally extracting representative keywords and phrases from a large and dynamically updating dataset using parallel computation. The visual representation then aggregates spatial regions into discrete sites and renders each as a tag cloud. Tags are arranged in a circular layout with the most prominent text at center, and less significant ones towards the edge. From one time frame to another, the tag cloud is animated to reflect changes in text significance: increasingly significant tags are moved closer to the center, tags that experience a reduction in significance move to the edge, and new tags emerge with a "phosphor" effect. [6] To view the temporal trends of various tags, we overlay a sparkline on each tag. ### Architecture Textiverse takes any large-scale tabular-structure dataset such as CSV and SQL databases as input. 
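For concreteness, here is a minimal sketch of the kind of record Textiverse expects from such a source; the field names and example values are illustrative rather than the system's actual schema:

```python
from dataclasses import dataclass

@dataclass
class TextRecord:
    """One textual unit with the four dimensions Textiverse uses."""
    text: str        # unstructured text (abstract, tweet, job posting, ...)
    location: str    # geospatial reference, e.g. an address or zip code
    timestamp: str   # a date, or a (start, end) range encoded as a string
    value: float     # quantitative value, e.g. award amount or tweet count

# Hypothetical row from a CSV or SQL source:
example = TextRecord(
    text="We propose a scalable visualization of research funding ...",
    location="Woods Hole, MA",
    timestamp="2009-06-01",
    value=250000.0,
)
```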
As illustrated in Figure 4, the dataset includes geospatial information, a timestamp, and quantitative values for each textual unit such as a report, a Twitter feed, a job description, or a policy document. Other unused attributes are dropped and do not proceed to the next stage. Then we develop a mapper service to translate geospatial character strings into <latitude, longitude> values (step 1 in Figure 2). Next, we deploy and run a MapReduce job to aggregate input by latitude and longitude and extract keyphrases from each geospatial site (step 2). The resulting dataset is stored in a database, which can be queried by the web-based user interface (step 3). The web interface allows users to interact with the map and tag clouds by zooming, panning, and clicking so as to explore the temporal trend, geographical distribution, and main topics in the dataset (step 4). The following sections present how the underlying infrastructure supports large-scale data processing and what the front end is capable of representing visually.

### Data Management

The input data for Textiverse are text corpora, each of which is associated with a geospatial reference, a timestamp, and a quantitative value. Unstructured text is represented as a sequence of words with punctuation removed. There is no word limit or format requirement for a text corpus. All the word sequences are passed to our keyphrase extraction algorithm derived from GenEx. [47] Unlike many tf-idf-based solutions that use all documents to be analyzed as the text corpus, we use a fixed third-party word-frequency database from the American National Corpus (ANC). The ANC word-frequency statistics are based on a massive collection of generic written and spoken materials in American English. It contains 293,866 words and their frequencies, with the top word 'the' having 1,081,168 occurrences and many uncommon words having only one occurrence. We adopt the ANC database to avoid reliance on the target documents to be analyzed. As a result, newly updated documents will not be affected by the previous corpus, and hence the order of processing documents does not have an effect on the final result. Also, newly updated documents do not influence earlier results and do not require a full rerun to obtain the most accurate result. This opens a new opportunity for parallelization and a significant reduction of cost for incremental changes. We discuss this further in the section on Scalability Considerations.

The basic idea of our algorithm is to (1) identify single keywords based on adapted tf-idf scores, (2) include words before or after those keywords to form phrases, (3) compute a phrase's weight based on the sum of the tf-idf scores of all the words the phrase contains, with an adjustment, (4) sort phrases by weight, and (5) select the top n phrases as keyphrases. Keyphrase extraction is selected over word-frequency counting because phrases in general are more descriptive and retain the context of the original document better than single words. The resulting data set is an array of <phrase, weight> pairs. However, keyphrasing algorithms usually consume significantly more computational resources than those dealing with single words. This may become a performance bottleneck when processing a large corpus. We discuss this problem and other scalability issues below.
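The following is a minimal sketch of this weighting scheme, assuming a preloaded ANC rank table; it simplifies step (2) to plain n-gram enumeration, and the function names, length adjustment, and constants are illustrative rather than the exact GenEx-derived procedure:

```python
import re
from collections import Counter

def keyphrases(text, anc_rank, max_len=4, top_n=4):
    """Score candidate phrases by adapted tf-idf and return the top_n
    (phrase, weight) pairs for one document."""
    words = re.findall(r"[a-z']+", text.lower())
    tf = Counter(words)

    def word_score(w):
        # Frequent locally and rare in ANC (large rank number) => high score.
        rank = anc_rank.get(w, len(anc_rank) + 1)
        return tf[w] * rank

    scores = {}
    for i in range(len(words)):
        for n in range(1, max_len + 1):
            chunk = words[i:i + n]
            if len(chunk) < n:
                break
            phrase = " ".join(chunk)
            # Sum of word scores, adjusted upward for longer phrases.
            weight = sum(word_score(w) for w in chunk) * (1 + 0.1 * (n - 1))
            scores[phrase] = max(scores.get(phrase, 0), weight)
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)[:top_n]
```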
Geospatial data varies from addresses and postal codes to latitude-longitude pairs. We have implemented a preprocessing service to map geospatial data of various formats (such as "Seattle, WA", "IN 47907", "Yellowstone National Park") into latitude-longitude values. This includes (1) mapping U.S. zip codes to latitude-longitude values based on the Zip Code Tabulation Areas (ZCTA) from the U.S. Census Bureau; (2) mapping international cities to latitude-longitude pairs using data from [http://GeoNames.org](http://GeoNames.org); and (3) a general mapping of any address to coordinates using the Google Maps API.

Time-series data may be stored as a timestamp or a time range. To simplify, we model timestamps as a time range with the same start time and end time. A time range is divided into standardized units such as year, month, week, day, and hour. Quantitative values are distributed into these standardized units based on a function of time. Figure 3 illustrates the data management for the Textiverse system. After the preprocessing step above, input data are aggregated by geospatial location. As a result, for every spatial region, there is a list of <std_time_range, phrase, weight> triples. The weight here is adjusted based on the phrase weight and the quantitative value associated with a given time and a geospatial site.

### Animated Tag Clouds

The tag cloud layout is derived from WordBridge, [34] where the most significant tags are sized larger and placed in the center. First, tags are sorted by weight and sized accordingly. Starting from the largest tag placed in the center, the algorithm produces four available areas for the next tag. The next tag is placed in the area closest to the center that is large enough to fit it. The layout continues until all tags are displayed or no available areas are sufficiently large to hold tags.

Figure 4: Textiverse map visualization. Map view with less significant sites shown as small dots.

Figure 3: Textiverse data model. Data model and preprocessing for Textiverse.

Figure 2: Textiverse system architecture. An overview of the Textiverse architecture.

Besides the above visual design, our layout algorithm also considers the potential movement of tags. The motivation is to make the layout stable as it is animated. This means that the tag cloud layout is also determined by its previous layout so that the tag movement is minimized when transitioning from one tag cloud to another. When several available boxes of free space can all fit a tag, the tag will be placed in the box that has the lowest sum of distance to the canvas center and distance from its prior position. The time-series data associated with tags is rendered as a temporal trendline that overlays the text, as in SparkClouds. [36] Similar to SparkClouds, the space limit makes it impossible to provide more details such as labels and legends on the chart. Therefore, the sparkline only gives a rough picture of the changes.

### _Geospatial Layout_

Rendering tag clouds for all geospatial sites on a map is not only computationally infeasible [31] but also produces overplotting (text overlap) that makes it difficult to interpret. Geospatial visualizations often render nodes in different colors, shapes, and sizes to represent different data attributes. Textiverse follows the same rule and thus must manage overplotting in dense regions just like any other geospatial visualization. One common solution is spatial clustering, which aggregates data within a certain geographical distance and represents them as one point.
However, it has been reported [44] that such data aggregation misleads users about the actual location that the data originate from. To avoid this, our approach plots nodes in their precise locations and reduces less significant points to small marks so as to alleviate the occlusion problem, as shown in Figure 4. In the world view, we allow markers to occlude because markers at this zoom level convey the quantitative meaning more than delivering textual contexts. As users zoom in, markers are magnified at a lower rate than the map so that occluded ones start to separate. A marker is expanded to show a tag cloud corresponding to the given time and location when a user clicks on the marker. Figure 4 demonstrates the tag cloud that overlays Seattle, WA in 2000. As a different time period is selected, all markers are animated to the new sizes and tag clouds are updated as discussed in the earlier section.

### _Scalability Considerations_

Our goal with Textiverse is to update the dataset for the web-based data explorer in a very short time when new input data are given. There are two major performance bottlenecks in this process: keyphrase extraction and network traffic between server and client. As mentioned earlier, our keyphrase extraction algorithm is derived from GenEx [47] but uses the ANC corpus, rather than the analyzed documents themselves, for frequency calculation. Therefore, there is no limit regarding the number of documents to be analyzed at a time or the size of text assigned to the algorithm. There is also no difference in extraction results given a different execution order. This brings a huge advantage in parallelizing keyphrase extraction because we can divide a job and distribute the pieces to as many machines as possible without compromising the extraction quality. This is not like traditional tf-idf approaches, where the accuracy is improved as the number of documents increases. Also, big data often imply frequently updated records. When 100 new documents are to be added to the existing one billion documents, tf-idf solutions either extract tags from these new documents based on the existing large corpus or require a reconstruction of statistics of all documents including the new ones. Such incremental updates do not cause the same problem in our approach.

Our solution adopts the MapReduce paradigm to parallelize keyphrase extraction in documents. First, \(n\) documents are distributed evenly to \(m\) machines. On each machine, our mapper takes the assigned documents and the ANC database as input. Each document is then divided into sentences and finally words. Second, each word's weight is computed based on its local frequency and its rank in ANC. Similar to tf-idf, a high local frequency in a given document and a low rank in ANC indicate a high significance. Phrases that contain more words have a higher adjusted weight to offset the lower probability of occurrence. The mapper output is a key-value list where latitude-longitude values are keys and the other fields (tag, weight, time) are values. Third, our reducer aggregates the mapper output by coordinates and stores the result in a database.
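The two stages can be sketched as plain Python functions; this is an illustration of the structure rather than the deployed Hadoop job, and `geocode` and `extract_keyphrases` are assumed helpers standing in for the components described above:

```python
from collections import defaultdict

def map_documents(documents, anc_rank):
    """Mapper for one machine's share of documents: emit
    ((lat, lon), (phrase, weight, time)) pairs using only the fixed ANC statistics."""
    for doc in documents:
        lat, lon = geocode(doc["location"])                  # assumed helper
        for phrase, weight in extract_keyphrases(doc["text"], anc_rank):
            yield (lat, lon), (phrase, weight, doc["timestamp"])

def reduce_by_site(mapped_pairs):
    """Reducer: aggregate phrase weights per geospatial site and time unit."""
    sites = defaultdict(float)
    for (lat, lon), (phrase, weight, time) in mapped_pairs:
        sites[(lat, lon, time, phrase)] += weight
    return sites  # one row per <site, std_time_range, phrase>, ready for the database
```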
The second performance bottleneck is the network traffic required for transferring data to produce tag clouds and geographical information. We apply drill-down and caching techniques in presenting large-scale data. First, only the most significant sites are displayed and animated on the map while others are shrunk to a pixel. As users zoom in to a smaller region, one-pixel sites are restored to regular sizes. Second, animated tag clouds are produced only upon user request. Third, latitude, longitude, and quantitative values are truncated to a less precise form when the zoom level is low. Last but not least, the animation duration can be used to preload data for the next possible action.

## 4 Implementation

The intended use of the Textiverse visual analytics system is for very large datasets. Therefore, our implementation contains both server-side and client-side components. The server-side component uses a scalable data management pipeline that extracts keyphrases from online databases in a Hadoop cluster. We deploy our keyphrase extraction program in a Hadoop cluster that has about 800 nodes, each with two 2.33 GHz Quad-core Intel E5410 CPUs and 16GB memory. The keyphrase extraction algorithm is implemented in Python and it reads input data from a MySQL database. Each node processes an equal share of documents and stores the extracted keyphrases and weights in a CSV file and finally into a database. The web-based implementation is built in JavaScript and uses the Google Maps API to visualize textual content of massive scale. The client requests data from a JSON-RPC server implemented in PHP and renders the map and markers using Google Maps. The tag cloud is produced as an SVG object using RaphaelJS and is overlaid on top of the map. The SVG format is selected for its increasing popularity in web-native information visualization [58]. On average, each map update with the top 200 sites needs to acquire 2.4 KB of data from the server and each tag cloud update needs about 850 bytes.

## 5 Example 1: NSF Grant Data

To demonstrate the use of Textiverse, we give an example based on 489,151 U.S. National Science Foundation awards over the period 1976 to 2012. This data set is publicly available on [http://nsf.gov/](http://nsf.gov/) and it fits the desired data model for Textiverse: textual data are proposal abstracts with on average 1,829 characters per abstract; each award is associated with an institution or a company, mostly with a detailed address and a zip code; and any grant has an active period. We run our keyphrase extraction algorithm on proposal abstracts to compute up to 4 keyphrases (with a maximum of four words in a phrase) per abstract. The weight of a keyphrase is determined by the adapted tf-idf score, the adjustment based on the number of words in the phrase, and the grant's award amount; weights are aggregated by institution (we assume that each institution has a unique zip code).

We measure the execution time of our keyphrase extraction algorithm using 179 nodes in Hadoop to process 1, 10, 100, up to 100k, and finally all 489,151 NSF award abstracts. The documents are randomly selected from the NSF award data. We further test the difference between 2-gram, 3-gram, and 4-gram extraction. An n-gram keyphrase extraction refers to the maximum number of words allowed to form a phrase. For instance, a 3-gram algorithm may recognize "information visualization technique" as a keyphrase, while a 2-gram algorithm cannot. A higher n consumes more computational resources but is able to identify longer phrases. As shown in Fig. 5, the execution time remains almost the same for processing 10,000 or fewer documents. For 100k and above, the execution time increases slowly, with the longest processing time of 62.4 seconds for 4-gram extraction of all 489,151 awards. Also in Fig. 5, the difference in n has a very limited effect on execution time when the number of documents is 100k or fewer.
The time cost of processing only one document (about 26 seconds) is a clear indicator of the total overhead of starting/quitting the program, Hadoop scheduling, database I/O, file I/O, and building hashmaps for the ANC database. These efforts are mandated regardless of how many documents are processed. We also implement a sequential program following the same algorithm, and the execution time for processing all 489,151 documents is 6,172 seconds (almost 100 times longer). The sequential approach has a time advantage for processing fewer than 1,000 documents because it does not need to pay the overhead of the map-reduce procedure. However, as more documents are added, the execution time of the sequential solution increases almost linearly with the number of documents.

Figure 5: Example 1 performance. Performance of keyphrase extraction algorithm (Hadoop).

Figure 6: NSF investments. An overview of NSF investments in the U.S. in 2008.

Regarding the user interface, this example uses circle size to denote the numerical value associated with each location. The total award amount received by an institution is presented on a log scale. The top 200 sites are returned for the user's current view port. Fig. 6 shows an overview of the funding distribution across institutions in the U.S. in 2008. Based on the location of prominent nodes, regions near New York City, Washington D.C., Chicago, San Francisco, San Diego, Houston/San Antonio, Seattle, and Denver all seem to have institutions that receive significant amounts of NSF funding. Suppose a user is interested in what NSF has been investing in Texas and North Carolina; to investigate further, they click the two markers and open the tag clouds, as shown in Fig. 7. It becomes obvious that institutions in Houston have magnetosphere and REU as their primary research foci, whereas Duke University played an important role in polar research (R/V). From 2008 to 2009 in Fig. 7, a new topic, 'nanoparticle', suddenly becomes the most prominent in Houston, and magnetosphere and REU remain important but not as much as in 2008 (indicated by red). On the other hand, K-12 research becomes more significant at Duke University (shown in blue).

## 6 Example 2: Twitter Feeds

Our second experiment applies Textiverse to 1.2 million Twitter feeds about Android published during March 2013. Compared to NSF award data, Twitter feeds share the same multidimensional and large-scale nature. Tweet content, timestamps, and geotags are also available and therefore fit the data model of Textiverse. However, tweets are often shorter in length, come from a wider range of geospatial locations, and include a large amount of informal language and many external links. Therefore, we adjust a few parameters for this scenario, such as reducing the number of keyphrases extracted to no more than three, introducing the spoken American English database from ANC, and removing URLs from the tweets. Among all the tweets, only English-written ones are included in this example. If a tweet contains non-ASCII characters or has more than half of its words that cannot be matched to any word in ANC, it is considered non-English. Twitter feeds are recognized as related to Android if 'Android' appears at least once in the tweet content.
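A minimal sketch of these preprocessing rules follows; the regular expression, word-list lookup, and thresholds are illustrative assumptions rather than the exact filters used in the deployed system:

```python
import re

URL_RE = re.compile(r"https?://\S+")

def is_english(text, anc_words):
    """Heuristic described above: reject tweets containing non-ASCII characters
    or with more than half of their words missing from the ANC word list."""
    if not text.isascii():
        return False
    words = re.findall(r"[a-z']+", text.lower())
    if not words:
        return False
    unknown = sum(1 for w in words if w not in anc_words)
    return unknown <= len(words) / 2

def keep_tweet(tweet_text, anc_words):
    """Strip URLs, then keep only English tweets that mention Android."""
    cleaned = URL_RE.sub("", tweet_text)
    if "android" in cleaned.lower() and is_english(cleaned, anc_words):
        return cleaned
    return None
```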
Since the geotags included in tweets are formatted as city, state, and country, we use a different address-to-latitude-longitude mapper to translate them into coordinates. Tweets without geotags or timestamps are excluded in this example. Because the tweet content and sentences are much shorter than NSF awards, the total execution time for extracting keyphrases from 1,210,473 tweets is only 1,219 seconds for the sequential algorithm and 41.6 seconds for the parallel one when deployed in the same Hadoop cluster with the same configuration (n=4). Again, we subset tweets to different numbers (such as 1, 10, 100,..., 1 million, and all 1,210,473) and similar results as in the NSF case are found, as shown in Fig. 8. The execution times for processing 1 million or fewer tweets are largely influenced by the minor variations in overhead. A slight increase in time is observed for analyzing more Twitter feeds. Also, the difference in how many words per phrase (n-gram) contributes insignificantly to the total execution time. This means that the cluster and technique used in this paper are able to outpace new tweet updates.

In this example, the mark size indicates the number of tweets and each tweet is weighted equally. Fig. 9 shows the main topics discussed related to Android in March 2013 in two cities in Canada. Although Twitter is a social networking site without any geographical barrier in communication, the two cities differ significantly in their major topics. The left one has more conversations on a particular cell phone model and wallpapers, whereas Twitter users in the right one are more interested in discussing smartphones in general, comparing Android devices with the iPhone, and sharing Facebook news/links.

Figure 8: Example 2 performance. Performance of keyphrase extraction algorithm (Hadoop).

Figure 7: NSF investment changes. Changes of funded research topics in 2009.

## 7 Conclusion and Future Work

In this paper, we propose a novel visual analytics application called Textiverse for mining and representing geotagged timestamped textual data on a map. Spatial regions are aggregated into discrete sites and rendered as animated tag clouds on a map. Because our intended use of the Textiverse technique is for very large datasets, we present a practical web-based implementation of the tool that integrates with Google Maps to visualize textual content of massive scale. Also, we discuss the advantage of an adapted keyphrase extraction algorithm to increase parallelization. Our solution of using Hadoop in extracting keyphrases from a large number of documents obtains a huge performance gain when processing a large number of texts. To demonstrate the intended use of our technique, we show how it can be used to characterize NSF funding status based on all 489,151 awards over the years 1976 to 2012, as well as popular mobile topics worldwide based on 1.2 million Twitter feeds in March 2013.

There are many other potential uses for Textiverse. For example, Textiverse can be used to demonstrate the geographical distribution and temporal trend of the U.S. job market based on positions posted on LinkedIn, Monster, and Indeed. Another possible application of Textiverse is to compare and review policies based on policy documents issued by different countries, states, and cities. Also, it can be used to study how the same product is adopted and regarded differently around the world by analyzing customer reviews.

## Acknowledgments

This work was partially supported by the U.S.
National Science Foundation grant TUES-1123108. Any opinions, findings, and conclusions, or recommendations expressed here are those of the authors and do not necessarily reflect the views of the funding agencies.
2301.11361
Distributed Optimization Methods for Multi-Robot Systems: Part II -- A Survey
Although the field of distributed optimization is well-developed, relevant literature focused on the application of distributed optimization to multi-robot problems is limited. This survey constitutes the second part of a two-part series on distributed optimization applied to multi-robot problems. In this paper, we survey three main classes of distributed optimization algorithms -- distributed first-order methods, distributed sequential convex programming methods, and alternating direction method of multipliers (ADMM) methods -- focusing on fully-distributed methods that do not require coordination or computation by a central computer. We describe the fundamental structure of each category and note important variations around this structure, designed to address its associated drawbacks. Further, we provide practical implications of noteworthy assumptions made by distributed optimization algorithms, noting the classes of robotics problems suitable for these algorithms. Moreover, we identify important open research challenges in distributed optimization, specifically for robotics problem.
Ola Shorinwa, Trevor Halsted, Javier Yu, Mac Schwager
2023-01-26T19:17:41Z
http://arxiv.org/abs/2301.11361v1
# Distributed Optimization Methods for Multi-Robot Systems: Part II -- A Survey ###### Abstract Although the field of distributed optimization is well-developed, relevant literature focused on the application of distributed optimization to multi-robot problems is limited. This survey constitutes the second part of a two-part series on distributed optimization applied to multi-robot problems. In this paper, we survey three main classes of distributed optimization algorithms -- distributed first-order methods, distributed sequential convex programming methods, and alternating direction method of multipliers (ADMM) methods -- focusing on fully-distributed methods that do not require coordination or computation by a central computer. We describe the fundamental structure of each category and note important variations around this structure, designed to address its associated drawbacks. Further, we provide practical implications of noteworthy assumptions made by distributed optimization algorithms, noting the classes of robotics problems suitable for these algorithms. Moreover, we identify important open research challenges in distributed optimization, specifically for robotics problem. distributed optimization, multi-robot systems, distributed robot systems, robotic sensor networks ## I Introduction In this paper we survey the literature in distributed optimization, specifically with an eye toward problems in multi-robot coordination. As we demonstrated in the first paper in this two-part series [1], many multi-robot problems can be written as a sum of local objective functions, subject to a union of local constraint functions. Such problems can be solved with a powerful and growing arsenal of distributed optimization algorithms. Distributed optimization consists of multiple computation nodes working together to minimize a common objective function through local computation iterations and network-constrained communication steps, providing both computational and communication benefits by eliminating the need for data aggregation. Distributed optimization is also robust against the failure of individual nodes, as it does not rely on a central computation station, and many distributed optimization algorithms have inherent privacy-preserving properties, keeping the local data, objective function, and constraint function private to each robot, while still allowing for all robots to benefit from one another. Distributed optimization has not yet been widely employed in robotics, and there exist many open opportunities for research in this space, which we highlight in this survey. Although the field of distributed optimization is well-established in many areas such as computer networking and power systems, problems in robotics have a number of distinguishing features which are not often considered in the major application areas of distributed optimization. Notably, robots move, unlike their analogous counterparts in these other disciplines, which makes their networks time-varying and prone to bandwidth limitations, packet drops, and delays. Robots often use optimization within a receding horizon or model predictive control loop, so fast convergence to an optimal solution is essential in robotics. 
In addition, optimization problems in robotics are often constrained (e.g., with safety constraints, input constraints, or kinodynamic constraints in planning problems), and non-convex (for example, simultaneous localization and mapping (SLAM) is a non-convex optimization, as are trajectory planning and state estimation for any nonlinear robot model). Many existing surveys on distributed optimization do not address these unique characteristics of robotics problems.

This survey constitutes the second part of a two-part series on distributed optimization for multi-robot systems. In this survey, we highlight relevant distributed optimization algorithms and note the classes of robotics problems to which these algorithms can be applied. Noting the large body of work in distributed optimization, we categorize distributed optimization algorithms into three broad classes and identify the practical implications of these algorithms for robotics problems, including the challenges arising in the implementation of these algorithms on robotics platforms. This survey is aimed at robotics researchers who are interested in research at the intersection of distributed optimization and multi-robot systems, as well as robotics practitioners who want to harness the benefits of distributed optimization algorithms in solving practical robotics problems.

In this survey, we limit our discussion to optimization problems over real-valued decision variables. Although discrete optimization problems (i.e., integer programs or mixed integer programs) arise in some robotics applications, these problems are beyond the scope of this survey. However, we note that distributed algorithms for integer and mixed integer problems have been discussed in a number of different works [2, 3, 4]. Further, we limit our discussion to derivative-based methods, in contrast to derivative-free (zeroth-order) distributed optimization algorithms. We note that derivative-free optimization methods have been discussed extensively in [5, 6, 7, 8, 9, 10].

In many robotics applications, such as field robotics, communication with a central computer (or the cloud) might be infeasible, even though each robot can communicate locally with other neighboring robots. Consequently, we focus particularly on distributed optimization algorithms that permit robots to use local robot-to-robot communication to compute an optimal solution, rather than algorithms that require coordination by a central computer. We note that these methods yield a globally optimal solution for convex problems and, in general, a locally optimal solution for non-convex problems, producing the same quality solution that would be obtained if a centralized method were applied. Although many distributed optimization algorithms are not inherently "online," we note that many of these algorithms can be applied in online problems within the model predictive control (MPC) framework, where a new optimization problem is solved periodically from streaming data. Nevertheless, we highlight a number of distributed optimization algorithms specifically designed for online problems.

In this survey, we provide a taxonomy of the different algorithms for performing distributed optimization based on their defining mathematical characteristics. We identify three classes: distributed first-order algorithms, distributed sequential convex programming, and distributed extensions to the alternating direction method of multipliers (ADMM).
**Distributed First-Order Algorithms:** The most common class of distributed optimization methods is based on the idea of averaging local gradients computed by each computational node to perform an approximate gradient descent update [11], and in this work, we refer to them as Distributed First-Order (DFO) algorithms. DFO algorithms can be further sub-divided into distributed (sub)-gradient descent, distributed gradient tracking, distributed stochastic gradient descent, and distributed dual averaging algorithms, with each sub-category differing from the others based on the order of the update steps and the nature of the gradients used. In general, DFO algorithms utilize a consensus procedure to achieve agreement between the robots on a common solution for the optimization problem. A number of DFO algorithms are amenable to robotics problems with dynamic communication networks (including uni-directional and bi-directional networks) [12, 13] and limited computation resources [14]. However, many DFO algorithms are not suitable for constrained problems. **Distributed Sequential Convex Programming:** Sequential Convex Optimization is a common technique in centralized optimization that involves minimizing a sequence of convex approximations to the original (usually non-convex) problem. Under certain conditions, the sequence of sub-problems converges to a local optimum of the original problem. Newton's method and the Broyden-Fletcher-Goldfarb-Shanno (BFGS) method are common examples. The same concepts are used by a number of distributed optimization algorithms, and we refer to these algorithms as Distributed Sequential Convex Programming methods. Generally, these methods use consensus techniques to construct the convex approximations of the joint objective function. One example is the Network Newton method [15], which uses consensus to approximate the inverse Hessian of the objective to construct a quadratic approximation of the joint problem. The NEXT family of algorithms [16] provides a flexible framework, which can utilize a variety of convex surrogate functions to approximate the joint problem, and is specifically designed to optimize non-convex objective functions. Although many distributed sequential convex programming methods are not suitable for problems with dynamic communication networks, a few distributed sequential convex programming algorithms are amenable to these problems [16]. **Alternating Direction Method of Multipliers:** The last class of algorithms covered in this paper is based on the alternating direction method of multipliers (ADMM) [17]. ADMM works by minimizing the augmented Lagrangian of the optimization problem using alternating updates to the primal and dual variables [18]. The method is naturally amenable to constrained problems. The original method is distributed, but not in the sense we consider in this survey. Specifically, the original ADMM requires a central computation hub to collect all local primal computations from the nodes to perform a centralized dual update step. ADMM was first modified to remove this requirement for a central node in [19], where it was used for distributed signal processing. The algorithm from [19] has since become known as Consensus ADMM (C-ADMM), although that paper does not introduce this terminology. 
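For intuition, here is a minimal sketch of a C-ADMM-style iteration for local least-squares objectives \(f_i(x)=\frac{1}{2}\|A_ix-b_i\|^2\), written in a commonly used form with a penalty parameter \(\rho\) and local dual variables \(p_i\); it is an illustration under these assumptions, not the exact variant of any particular reference:

```python
import numpy as np

def c_admm_round(x, p, A, b, neighbors, rho):
    """One synchronous C-ADMM round over all robots: dual update from the current
    disagreement with neighbors, then a closed-form primal update for the
    quadratic local objective f_i(x) = 0.5 * ||A_i x - b_i||^2."""
    N, n = x.shape
    x_new, p_new = np.empty_like(x), np.empty_like(p)
    for i in range(N):
        nbrs = neighbors[i]
        p_new[i] = p[i] + rho * sum(x[i] - x[j] for j in nbrs)
        # Minimize f_i(z) + z^T p_i + rho * sum_j ||z - (x_i + x_j)/2||^2 in closed form.
        H = A[i].T @ A[i] + 2 * rho * len(nbrs) * np.eye(n)
        rhs = A[i].T @ b[i] - p_new[i] + rho * sum(x[i] + x[j] for j in nbrs)
        x_new[i] = np.linalg.solve(H, rhs)
    return x_new, p_new
```

Here `x` and `p` are N-by-n arrays holding each robot's primal and dual iterates, and `neighbors[i]` lists the one-hop neighbors of robot i; with the dual variables initialized to zero and a connected undirected graph, repeated rounds drive the local iterates toward a common minimizer of the summed objective.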
A number of other distributed variants have been developed to address many unique characteristics, including uni-directional communication networks and limited communication bandwidth [20, 21], which are often present in robotics problems.

Fig. 1: A motivation for distributed optimization: consider an estimation scenario in which a robot seeks to localize a target given sensor measurements. The robot can compute an optimal solution given _only its observations_, as represented in (a). By using distributed optimization techniques, each robot in a networked system of robots can compute the optimal solution _given all robots' observations_ without actually sharing individual sensor models or measurements with one another, as represented in (b).

### _Existing Surveys_

A number of other recent surveys on distributed optimization exist, and provide useful background when working with the algorithms covered in this survey. Some of these surveys cover applications of distributed optimization in distributed power systems [22], big-data problems [23], and game theory [24], while others focus primarily on first-order methods for problems in multi-agent control [25]. Other articles broadly address distributed first-order optimization methods, including a discussion on the communication-computation trade-offs [26, 27]. Another survey [28] covers exclusively non-convex optimization in both batch and data-streaming contexts, but again only analyzes first-order methods. Finally, [29] covers a wide breadth of distributed optimization algorithms with a variety of assumptions, focusing exclusively on convex optimization problems. Our survey differs from all of these in that it specifically targets applications of distributed optimization to multi-robot problems. As a result, this survey highlights the practical implications of the assumptions made by many distributed optimization algorithms and provides a condensed taxonomic overview of useful methods for these applications. Other useful background material can be found for distributed computation [30, 31], and on multi-robot systems in [32, 33].

### _Contributions_

This survey paper has three primary objectives:

1. Survey the literature across three different classes of distributed optimization algorithms, noting the defining mathematical characteristics of each category.
2. Highlight the practical implications of noteworthy assumptions made by distributed optimization algorithms and the challenges that arise in implementing these algorithms in multi-robot problems.
3. Propose open research problems in distributed optimization for robotics.

### _Organization_

In Section II we introduce mathematical notation and preliminaries, and in Section III we present the general formulation for the distributed optimization problem and describe the general framework shared by distributed optimization algorithms. Sections IV-VI survey the literature in each of the three categories, and provide details for representative algorithms in each category. Section VII provides notable existing applications of distributed optimization in the robotics literature. In Section VIII, we provide practical notes on implementing distributed optimization algorithms in robotics problems. In Section IX, we discuss open research problems in distributed optimization applied to multi-robot systems and robotics in general, and we offer concluding remarks in Section X.
## II Notation and Preliminaries In this section, we introduce the notation used in this paper and provide the definitions of mathematical concepts relevant to the discussion of the distribution optimization algorithms. We denote the gradient of a function \(f:\mathbb{R}^{n}\rightarrow\mathbb{R}\) as \(\nabla f\) and its Hessian as \(\nabla^{2}f\). We denote the vector containing all ones as \(\mathbf{1}_{n}\), where \(n\) represents the number of elements in the vector. Next, we begin with the definition of stochastic matrices which arise in distributed first-order optimization algorithms. **Definition 1** (Non-negative Matrix).: _A matrix \(W\in\mathbb{R}^{n\times n}\) is referred to as a non-negative matrix if \({w_{ij}\geq 0}\) for all \(i,j\in\{1,\cdots,n\}\)._ **Definition 2** (Stochastic Matrix).: _A non-negative matrix \(W\in\mathbb{R}^{n\times n}\) is referred to as a row-stochastic matrix if_ \[W\mathbf{1}_{n}=\mathbf{1}_{n}, \tag{1}\] _in other words, the sum of all elements in each row of the matrix equals one. We refer to \(W\) as a column-stochastic matrix if_ \[\mathbf{1}_{n}^{\top}W=\mathbf{1}_{n}. \tag{2}\] _Likewise, for a doubly-stochastic matrix \(W\),_ \[W\mathbf{1}_{n}=\mathbf{1}_{n}\text{ and }\mathbf{1}_{n}^{\top}W=\mathbf{1}_{n}. \tag{3}\] Now, we provide the definition of some relevant properties of a sequence. **Definition 3** (Summable Sequence).: _A sequence \(\{\alpha(k)\}_{k\geq 0}\), with \(k\in\mathbb{N}\), is a summable sequence if_ \[\sum_{k=0}^{\infty}\alpha(k)<\infty. \tag{4}\] **Definition 4** (Square-Summable Sequence).: _A sequence \(\{\alpha(k)\}_{k\geq 0}\), with \(k\in\mathbb{N}\), is a square-summable sequence if_ \[\sum_{k=0}^{\infty}\left(\alpha(k)\right)^{2}<\infty. \tag{5}\] We discuss some relevant notions of the connectivity of a graph. **Definition 5** (Connectivity of an Undirected Graph).: _An undirected graph \(\mathcal{G}\) is connected if a path exists between every pair of vertices \((i,j)\) where \(i,j\in\mathcal{V}\). Note that such a path might traverse other vertices in \(\mathcal{G}\)._ **Definition 6** (Connectivity of a Directed Graph).: _A directed graph \(\mathcal{G}\) is strongly connected if a directed path exists between every pair of vertices \((i,j)\) where \(i,j\in\mathcal{V}\). In addition, a directed graph \(\mathcal{G}\) is weakly connected if the underlying undirected graph is connected. The underlying undirected graph \(\mathcal{G}_{u}\) of a directed graph \(\mathcal{G}\) refers to a graph with the same set of vertices as \(\mathcal{G}\) and a set of edges obtained by considering each edge in \(\mathcal{G}\) as a bi-directional edge. Consequently, every strongly connected directed graph is weakly connected; however, the converse is not true._ In distributed optimization in multi-robot systems, robots perform communication and computation steps to minimize some global objective function. We focus on problems in which the robots' exchange of information must respect the topology of an underlying distributed communication graph, which could possibly change over time. This communication graph, denoted as \(\mathcal{G}(t)=(\mathcal{V}(t),\mathcal{E}(t))\), consists of vertices \(\mathcal{V}(t)=\{1,\ldots,N\}\) and edges \(\mathcal{E}(t)\subseteq\mathcal{V}(t)\times\mathcal{V}(t)\) over which pairwise communication can occur. For undirected graphs, we denote the set of neighbors of robot \(i\) as \(\mathcal{N}_{i}(t)\). 
For directed graphs, we refer to the set of robots which can _send_ information to robot \(i\) as the set of in-neighbors of robot \(i\), denoted by \(\mathcal{N}_{i}^{+}(t)\). Likewise, for directed graphs, we refer to the set of robots which can _receive_ information from robot \(i\) as the out-neighbors of robot \(i\), denoted by \(\mathcal{N}_{i}^{-}(t)\). **Definition 7** (Convergence Rate).: _Provided that a sequence \(\{x^{(k)}\}\) converges to \(x^{\star}\), if there exists a positive scalar \(r\in\mathbb{R}\), with \(r\geq 1\), and a constant \(\lambda\in\mathbb{R}\), with \(\lambda>0\), such that_ \[\lim_{k\to\infty}\frac{\|x^{(k+1)}-x^{\star}\|}{\|x^{(k)}-x^{\star}\|^{r}}=\lambda, \tag{6}\] _then \(r\) defines the order of convergence of the sequence \(\{x^{(k)}\}\) to \(x^{\star}\). Moreover, the asymptotic error constant is given by \(\lambda\)._ _If \(r=1\) and \(\lambda=1\), then \(\{x^{(k)}\}\) converges to \(x^{\star}\) sub-linearly. However, if \(r=1\) and \(\lambda<1\), then \(\{x^{(k)}\}\) converges to \(x^{\star}\) linearly. Likewise, \(\{x^{(k)}\}\) converges to \(x^{\star}\) quadratically if \(r=2\) and cubically if \(r=3\)._ **Definition 8** (Synchronous Algorithm).: _An algorithm is synchronous if each robot (computational node) has to wait at a predetermined point for a specific message from other robots (computational nodes) before proceeding. In general, the end of an iteration of the algorithm represents the predetermined synchronization point. Conversely, in an asynchronous algorithm, each robot completes each iteration at its own pace, without having to wait at a predetermined point. In other words, at any given time, the number of iterations of an asynchronous algorithm completed by each robot -- measured by its local clock -- could differ from the number of iterations completed by other robots._ ## III Problem Formulation We consider a general distributed optimization problem, which consists of a global objective function expresses as a sum over local component objective functions. Each robot only knows its own objective function. We call such an optimization problem separable. We also consider a set of global constraints consisting of a union over local constraints. Each robot only knows its own local constraints. The resulting optimization problem is given by \[\min_{x} \;\sum_{i\in\mathcal{V}}f_{i}(x)\] (7) subject to \[g_{i}(x)=0 \forall i\in\mathcal{V}\] \[h_{i}(x)\leq 0 \forall i\in\mathcal{V}\] where \(x\in\mathbb{R}^{n}\) denotes the optimization variable and \(f_{i}:\mathbb{R}^{n}\to\mathbb{R}\), \(g_{i}:\mathbb{R}^{n}\to\mathbb{R}\), and \(h_{i}:\mathbb{R}^{n}\to\mathbb{R}\) denote the local objective function, equality constraint function, and inequality constraint function of robot \(i\), respectively. The joint optimization problem (7) can be solved locally by each robot if all the robots share their objective and constraint functions with one another. Alternatively, the solution can be computed centrally, if all the local functions are collated at a central station. However, robots typically possess limited computation and communication resources, which precludes each robot from sharing its local functions with other robots, particularly in problems with high-dimensional problem data, such as images, lidar and other perception measurements. Distributed optimization algorithms enable each robot to compute a solution of (7) locally, without sharing its local objective and constraint functions, or its local data. 
These algorithms assign a copy of the optimization variable to each robot, enabling each robot to update its own copy locally and in parallel with other robots. Moreover, distributed optimization algorithms enforce consensus among the robots for agreement on a common solution of the optimization problem. Consequently, these algorithms solve a reformulation of the optimization problem in (7), given by \[\min_{\{x_{i},\;\forall i\in\mathcal{V}\}} \;\sum_{i\in\mathcal{V}}f_{i}(x_{i})\] (8) subject to \[x_{i}=x_{j} \forall(i,j)\in\mathcal{E}\] \[g_{i}(x_{i})=0 \forall i\in\mathcal{V}\] \[h_{i}(x_{i})\leq 0 \forall i\in\mathcal{V},\] where \(x_{i}\in\mathbb{R}^{n}\) denotes robot \(i\)'s local copy of the optimization variable. We note that the consensus constraints in (8) ensure agreement among all the robots, with the assumption that the communication graph is connected. Moreover, the consensus constraints are enforced between neighboring robots only, making it compatible with a point-to-point communication network, where robots can only communicate with their one-hop neighbors. In the following sections, we discuss three broad classes of distributed optimization methods, namely, distributed first-order methods, distributed sequential convex programming methods, and the alternating direction method of multipliers. We note that distributed first-order methods and distributed sequential convex programming methods implicitly enforce the consensus constraints in (8), while the alternating direction method of multipliers enforces these constraints explicitly. Before proceeding, we highlight the general framework that distributed optimization algorithms share. Distributed optimization algorithms are iterative algorithms in which each robot executes a number of operations over discrete iterations \(k=0,1,\dots\) until convergence, where each iteration consists of a communication and computation step. During each communication round, each robot shares a set of its local variables with its neighbors, referred to as its "communicated" variables \(\mathcal{Q}_{i}^{(k)}\), which we distinguish from its "internal" variables \(\mathcal{P}_{i}^{(k)}\), which are not shared with its neighbors. In general, each algorithm requires initialization of the local variables of each robot, in addition to algorithm-specific parameters, denoted by \(\mathcal{R}_{i}^{(k)}\). We note that some algorithms require coordination among the robots for initialization; however, these parameters can be initialized prior to deployment of the robots. ## IV Distributed First-Order Algorithms The optimization problem in (7) (in its unconstrained form) can be solved through gradient descent where the optimization variable is updated using \[x^{(k+1)}=x^{(k)}-\alpha^{(k)}\nabla f(x^{(k)}) \tag{9}\] with \(\nabla f(x^{(k)})\) denoting the gradient of the objective function at \(x^{(k)}\), given by \[\nabla f(x)=\sum_{i\in\mathcal{V}}\nabla f_{i}(x), \tag{10}\] given some scheduled step-size \(\alpha^{(k)}\). Inherently, computation of \(\nabla f(x^{(k)})\) requires knowledge of the local objective functions or gradients by all robots in the network which is infeasible in many problems. Distributed First-Order (DFO) algorithms extend the centralized gradient scheme to the distributed setting where robots communicate with one-hop neighbors without knowledge of the local objective functions or gradients of all robots. 
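As a minimal sketch of this shared structure from a single robot's perspective, the loop below is illustrative Python; the callables and variable names are placeholders, not an interface defined by any of the surveyed algorithms:

```python
def run_distributed_solver(init, exchange, local_update, max_iters=1000, tol=1e-6):
    """Generic per-robot loop: initialize internal variables P_i, communicated
    variables Q_i, and parameters R_i, then alternate a communication round
    (share Q_i, receive Q_j from one-hop neighbors) with a local computation step."""
    P, Q, R = init()                      # k = 0 initialization, possibly pre-deployment
    for _ in range(max_iters):
        received = exchange(Q)            # send Q_i; returns {j: Q_j for j in N_i}
        P, Q, change = local_update(P, Q, R, received)
        if change < tol:                  # simple local stopping criterion
            break
    return P, Q
```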
In DFO methods, each robot updates its local variable using a weighted combination of the local variables or gradients of its neighbors according to the weights specified by a stochastic weighting matrix \(W\), allowing for the dispersion of information on the objective function or its gradient through the network. The stochastic matrix \(W\) must be compatible with the underlying communication network, with a non-zero element \(w_{ij}\) when robot \(j\) can send information to robot \(i\). Many DFO algorithms use a doubly-stochastic matrix, a row-stochastic matrix [34], or a column-stochastic matrix, depending on the model of the communication network considered, while other methods use a push-sum approach. In addition, many methods further require symmetry of the doubly-stochastic weighting matrix with \(W=W^{\top}\). The weight matrix exerts a significant influence on the convergence rates of DFO algorithms, and thus, an appropriate choice of these weights are required for convergence of DFO methods. The order of the update procedures for the local variables of each robot and the gradient used by each robot in performing its local update procedures differ among DFO algorithms, giving rise to four broad classes of DFO methods: Distributed (Sub)-Gradient Descent and Diffusion Algorithms, Gradient Tracking Algorithms, Distributed Stochastic Gradient Algorithms, and Distributed Dual Averaging. While distributed (sub)-gradient descent algorithms require a decreasing step-size for convergence to an optimal solution, gradient tracking algorithms converge to an optimal solution without this condition. We discuss these distributed methods in the following subsections. ### _Distributed (Sub)-Gradient Descent and Diffusion Algorithms_ Tsitsiklis introduced a model for distributed gradient descent in the 1980s in [35] and [11] (see also [30]). The works of Nedic and Ozdaglar in [14] revisit the problem, marking the beginning of interest in consensus-based frameworks for distributed optimization over the recent decade. This basic model of distributed gradient descent consists of an update term that involves consensus on the optimization variable as well as a step in the direction of the local gradient for each node: \[x_{i}(k+1)=\sum_{j\in\mathcal{N}_{i}\cup\{i\}}w_{ij}x_{j}(k)-\alpha_{i}(k) \nabla f_{i}(x_{i}(k)) \tag{11}\] where robot \(i\) updates its variable using a weighted combination of its neighbors' variables determined by the weights \(w_{ij}\) with \(\alpha_{i}(k)\) denoting its local step-size at iteration \(k\). For convergence to the optimal joint solution, these methods require the step-size to asymptotically decay to zero. As proven in [36], if \(\alpha^{(k)}\) is chosen such that the sequence \(\{\alpha^{(k)}\}\) is square-summable but not summable, then the optimization variables of all robots converge to the optimal joint solution, given the standard assumptions of a connected network, properly chosen weights, and bounded (sub)-gradients. In contrast, the choice of a constant step-size for all time-steps only guarantees convergence of each robot's iterates to a neighborhood of the optimal joint solution. Algorithm 1 summarizes the update step for the distributed gradient descent method in [14] with the step-size \(\alpha^{(k+1)}=\frac{\alpha^{(0)}}{\sqrt{k}}\), with \(k>0\). 
``` Initialization:\(k\gets 0\), \(x_{i}^{(0)}\in\mathbb{R}^{n}\) Internal variables:\(\mathcal{P}_{i}^{(k)}=\emptyset\) Communicated variables:\(\mathcal{Q}_{i}^{(k)}=x_{i}^{(k)}\) Parameters:\(\mathcal{R}_{i}^{(k)}=(\alpha^{(k)},w_{i})\)do in parallel\(\forall i\in\mathcal{V}\) Communicate \(\mathcal{Q}_{i}^{(k)}\) to all \(j\in\mathcal{N}_{i}\) Receive \(\mathcal{Q}_{j}^{(k)}\) from all \(j\in\mathcal{N}_{i}\) \(x_{i}^{(k+1)}=\sum_{j\in\mathcal{N}_{i}\cup\{i\}}w_{ij}x_{j}^{(k)}-\alpha^{(k )}\nabla f_{i}(x_{i}^{(k)})\) \(\alpha^{(k+1)}=\frac{\alpha^{(0)}}{\sqrt{k}}\) \(k\gets k+1\) whilestopping criterion is not satisfied ``` **Algorithm 1**Distributed Gradient Descent (DGD) We note that the update procedure given in (11) requires a doubly-stochastic weighting matrix, which, in general, is incompatible with directed communication networks. Other distributed gradient descent algorithms [37, 38, 39, 40] utilize the _push-sum_ consensus protocol [41] in place of the consensus terms in (11), extending the application of distributed gradient descent schemes to problems with directed communication networks. In general, with a constant step-size, distributed (sub)-gradient descent algorithms converge at a rate of \(O(1/k)\) to a neighborhood of the optimal solution in convex problems [42]. With a decreasing step-size, some distributed (sub)-gradient descent algorithms converge to an optimal solution at \(O(\log k/k)\) using an accelerated gradient scheme such as the Nesterov gradient method [43]. ### _Distributed Gradient Tracking Algorithms_ Although distributed (sub)-gradient descent algorithms converge to an optimal joint solution, the requirement of a square-summable sequence \(\{\alpha^{(k)}\}\) -- which results in a decaying step-size -- reduces the convergence speed of these methods. Gradient tracking methods address this limitation by allowing each robot to utilize the changes in its local gradient between successive iterations as well as a local estimate of the average gradient across all robots in its update procedures, enabling the use of a constant step-size while retaining convergence to the optimal joint solution. The EXTRA algorithm introduced by Shi _et al._ in [44] uses a fixed step-size while still achieving exact convergence. EXTRA replaces the gradient term with the difference in the gradients of the previous two iterates. Because the contribution of this gradient difference term decays as the iterates converge to the optimal joint solution, EXTRA does not require the step-size to decay in order to converge to the exact optimal joint solution. EXTRA achieves linear convergence [42], and a variety of gradient tracking algorithms have since offered improvements on its linear rate [45], for convex problems with strongly convex objective functions. The DIGing algorithm [46, 47], whose update equations are shown in Algorithm 2, is one such similar method that extends the faster convergence properties of EXTRA to the domain of directed and time-varying graphs. DIGing requires communication of two variables, effectively doubling the communication cost per iteration when compared to DGD, but greatly increasing the diversity of communication infrastructure that it can be deployed on. 
``` Initialization:\(k\gets 0\), \(x_{i}^{(0)}\in\mathbb{R}^{n}\), \(y_{i}^{(0)}=\nabla f_{i}(x_{i}^{(0)})\) Internal variables:\(\mathcal{P}_{i}^{(k)}=\emptyset\) Communicated variables:\(\mathcal{Q}_{i}^{(k)}=\left(x_{i}^{(k)},y_{i}^{k}\right)\) Parameters:\(\mathcal{R}_{i}^{(k)}=(\alpha,w_{i})\) do in parallel\(\forall i\in\mathcal{V}\) Communicate \(\mathcal{Q}_{i}^{(k)}\) to all \(j\in\mathcal{N}_{i}\) Receive \(\mathcal{Q}_{j}^{(k)}\) from all \(j\in\mathcal{N}_{i}\) \[x_{i}^{(k+1)} =\sum_{j\in\mathcal{N}_{i}\cup\{i\}}w_{ij}x_{j}^{(k)}-\alpha y_{i }^{(k)}\] \[y_{i}^{(k+1)} =\sum_{j\in\mathcal{N}_{i}\cup\{i\}}w_{ij}y_{j}^{(k)}+\nabla f_{ i}(x_{i}^{(k+1)})-\nabla f_{i}(x_{i}^{(k)})\] \[k\gets k+1\] whilestopping criterion is not satisfied ``` **Algorithm 2**DIGing Many other gradient tracking algorithms involve variations on the variables updated using consensus and the order of the update steps, such as NIDS [48], Exact Diffusion [50, 51, 49], and [52]. These algorithms, which generally require the use of doubly-stochastic weighting matrices, have been extended to problems with row-stochastic or column-stochastic matrices [12, 53, 13, 54] and push-sum consensus [55] for distributed optimization in directed networks. To achieve faster convergence rates, many of these algorithms require each robot to communicate multiple local variables to its neighbors during each communication round. In addition, we note that some of these algorithms require all robots to use the same step-size, which can prove challenging in some situations. Several works offer a synthesis of various gradient tracking methods, noting the similarities between these methods. Under the canonical form proposed in [56, 57], these algorithms and others differ only in the choice of several constant parameters. Jakovetic also provides a unified form for various gradient tracking algorithms in [58]. Some other works consider accelerated variants using Nesterov gradient descent [59, 60, 59, 61]. Gradient tracking algorithms can be considered to be primal-dual methods with an appropriately defined augmented Lagrangian function [46, 62]. In general, gradient tracking algorithms address unconstrained distributed convex optimization problems, but these methods have been extended to non-convex problems [63] and constrained problems using projected gradient descent [64, 65, 66]. Some other methods [67, 68, 69, 70] perform dual-ascent on the dual problem of (7), where the robots compute their local primal variables from the related minimization problem using their dual variables. These methods require doubly-stochastic weighting matrices but allow for time-varying communication networks. In [71], the robots perform a subsequent proximal projection step to obtain solutions which satisfy the problem constraints. Optimization algorithms that use stochastic gradients have become widely used for problems where evaluating the underlying optimization objectives is a costly procedure, e.g., deep learning. In [72], stochastic gradients are used in place of gradients in the DGD algorithm, and the resulting algorithm is shown to converge. ### _Distributed Dual Averaging_ Dual averaging first posed in [73], and extended in [74], takes a similar approach to distributed (sub)-gradient descent methods in solving the optimization problem in (7), with the added benefit of providing a mechanism for handling problem constraints through a projection step, in like manner as projected (sub)-gradient descent methods. 
However, the original formulations of the dual averaging method requires knowledge of all components of the objective function or its gradient which is unavailable to all robots. The Distributed Dual Averaging method (DDA) circumvents this limitation by modifying the update equations using a doubly-stochastic weighting matrix to allow for updates of each robot's variable using its local gradients and a weighted combination of the variables of its neighbors [75]. Similar to distributed (sub)-gradient descent methods, distributed dual averaging requires a sequence of decreasing step-sizes to converge to the optimal solution. Algorithm 3 provides the update equations in the DDA algorithm, along with the projection step which involves a proximal function \(\phi(x)\), often defined as \(\frac{1}{2}\|x\|_{2}^{2}\). After the projection step, the robot's variable satisfies the problem constraints described by the constraints set \(\mathcal{X}\). Some of the same extensions made to distributed (sub)-gradient descent algorithms have been studied for DDA, including analysis of the algorithm under communication time delays [76] and replacement of the doubly-stochastic weighting matrix with push-sum consensus [77]. ## V Distributed Sequential Convex Programming Sequential Convex Programming is a class of optimization methods, typically for non-convex problems, that proceed iteratively by approximating the nonconvex problem with a convex surrogate computed from the current values of the decision variables. This convex surrogate is optimized, and the resulting decision variables are used to compute the convex surrogate for the next iterate. Newton's method is a classic example of a Sequential Convex Method, in which the convex surrogate is a quadratic approximation of the original objective function. Several methods have been proposed for distributed Sequential Convex Programming, as we survey here. ### _Approximate Newton Methods_ Newton's method, and its variants, are commonly used for solving convex optimization problems, and provide significant improvements in convergence rate when second-order function information is available [78]. While the distributed gradient descent methods exploit only information on the gradients of the objective function, Newton's method uses the Hessian of the objective function, providing additional information on the function's curvature which can improve convergence. To apply Newton's method to the distributed optimization problem in (7), the Network Newton-\(K\) (NN-\(K\)) algorithm [15] takes a penalty-based approach which introduces consensus between the robots' variables as components of the objective function. The NN-\(K\) method reformulates the constrained form of the distributed problem in (7) as the following unconstrained optimization problem: \[\min_{\{x_{i},\ \forall i\in\mathcal{V}\}}\ \alpha\sum_{i\in\mathcal{V}}f_{i}(x _{i})+x_{i}^{\top}\left(\sum_{j\in\mathcal{N}\cup\{i\}}\bar{w}_{ij}x_{j}\right) \tag{12}\] where \(\bar{W}=I-W\), and \(\alpha\) is a weighting hyperparameter. 
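To make the penalty-based reformulation (12) concrete, the sketch below evaluates it for a hypothetical four-robot line graph with quadratic local costs; the weighting matrix, the costs, and \(\alpha\) are illustrative assumptions. The comment on the Hessian of the penalty term indicates why an (approximate) Newton step can be computed with neighbour-to-neighbour communication only, which is what the NN-\(K\) update procedures that follow exploit.

```python
import numpy as np

rng = np.random.default_rng(1)
N, n, alpha = 4, 2, 0.1                  # four robots, 2-D variable (hypothetical sizes)

# Symmetric doubly-stochastic weights on a line graph 1-2-3-4 (illustrative choice)
W = np.array([[0.5, 0.5, 0.0, 0.0],
              [0.5, 0.0, 0.5, 0.0],
              [0.0, 0.5, 0.0, 0.5],
              [0.0, 0.0, 0.5, 0.5]])
W_bar = np.eye(N) - W                    # penalty matrix used in (12)

# Hypothetical local quadratic costs f_i(x) = 0.5 x^T Q_i x - b_i^T x
Q = [np.diag(rng.uniform(1, 2, n)) for _ in range(N)]
b = [rng.standard_normal(n) for _ in range(N)]

def penalised_objective(x):              # x has shape (N, n), one row per robot
    local = sum(0.5 * xi @ Qi @ xi - bi @ xi for Qi, bi, xi in zip(Q, b, x))
    coupling = sum(x[i] @ sum(W_bar[i, j] * x[j] for j in range(N))
                   for i in range(N))
    return alpha * local + coupling

# Since W_bar is symmetric here, the Hessian of the coupling term is
# 2 * kron(W_bar, I); its block sparsity matches the communication graph,
# so each robot's block row of the Hessian only involves its neighbours.
print("penalised objective at a random point:",
      penalised_objective(rng.standard_normal((N, n))))
print("non-zero blocks of the penalty Hessian follow the graph:\n",
      (np.abs(W_bar) > 1e-12).astype(int))
```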
``` Initialization:\(k\gets 0\), \(x_{i}^{(0)}\in\mathbb{R}^{n}\) Internal variables:\(\mathcal{P}_{i}^{(k)}=\left(g_{i}^{(k)},D_{i}^{(k)}\right)\) Communicated variables:\(\mathcal{Q}_{i}=\left(x_{i}^{(k)},d_{i}^{(k+1)}\right)\) Parameters:\(\mathcal{R}_{i}=(\alpha,\epsilon,K,\bar{w}_{i})\) do in parallel\(\forall i\in\mathcal{V}\) \(D_{i}^{(k+1)}=\alpha\nabla^{2}f_{i}(x_{i}^{(k)})+2\bar{w}_{ii}I\) Communicate \(x_{i}^{(k)}\) to all \(j\in\mathcal{N}_{i}\) \(g_{i}^{(k+1)}=\alpha\nabla f_{i}(x_{i}^{(k)})+\sum_{j\in\mathcal{N} :\cup\{i\}}\bar{w}_{ij}x_{j}^{(k)}\) \(d_{i}^{(0)}=-\left(D_{i}^{(k+1)}\right)^{-1}g_{i}^{(k+1)}\) for\(p=0\)to\(K-1\)do Communicate \(d_{i}^{(p)}\) to all \(j\in\mathcal{N}_{i}\) \(d_{i}^{(p+1)}=\left(D_{i}^{(k+1)}\right)^{-1}\left[\bar{w}_{ii}d_{i}^{(p)}-g_{ i}^{(k+1)}\right.\) \(\left.\hskip 142.26378pt-\sum_{j\in\mathcal{N}_{i}\cup\{i\}}\bar{w}_{ij}d_{j}^{( p)}\right]\) end for \(x_{i}^{(k+1)}=x_{i}^{(k)}+\epsilon\,d_{i}^{(K)}\) whilestopping criterion is not satisfied ``` **Algorithm 4**Network Newton-\(K\) (NN-\(K\)) However, the Newton descent step requires computing the inverse of the joint problem's Hessian which cannot be directly computed in a distributed manner as its inverse is dense. To allow for distributed computation of the Hessian inverse, NN-\(K\) uses the first \(K\) terms of the Taylor series expansion \((I-X)^{-1}=\sum_{j=0}^{\infty}X^{j}\) to compute the approximate Hessian inverse, as introduced in [79]. Approximation of the Hessian inverse comes at an additional communication cost, and requires an additional \(K\) communication rounds per update of the primal variable. Algorithm 4 summarizes the update procedures in the NN-\(K\) method in which \(\epsilon\) denotes the local step-size for the Newton's step. Selection of the step-size parameter does not require any coordination between robots. As presented in Algorithm 4, NN-\(K\) proceeds through two sets of update equations: an outer set of updates that initializes the Hessian approximation and computes the decision variable update and an inner Hessian approximation update; a communication round precedes the execution of either set of update equations. Increasing \(K\), the number of intermediary communication rounds, improves the accuracy of the approximated Hessian inverse at the cost of increasing the communication cost per primal variable update. A follow-up work optimizes a quadratic approximation of the augmented Lagrangian of the general distributed optimization problem (7) where the primal variable update involves computing a \(P\)-approximate Hessian inverse to perform a Newton descent step, and the dual variable update uses gradient ascent [80]. The resulting algorithm Exact Second-Order Method (ESOM) provides a faster convergence rate than NN-\(K\) at the cost of one additional round of communication for the dual ascent step. Notably, replacing the augmented Lagrangian in the ESOM formulation with its linear approximation results in the EXTRA update equations, showing the relationship between both approaches. In some cases, computation of the Hessian is impossible because second-order information is not available. Quasi-Newton methods like the Broyden-Fletcher-Goldman-Shanno (BFGS) algorithm approximate the Hessian when it cannot be directly computed. 
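Returning briefly to the series device used by NN-\(K\) above, the following self-contained sketch shows how the truncated expansion \(\sum_{j=0}^{K}X^{j}\) approaches \((I-X)^{-1}\) as \(K\) grows; the matrix \(X\) here is a random, hypothetical stand-in with spectral norm below one, not the actual Hessian splitting, and each additional term corresponds to one additional communication round per primal update.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 6
# Hypothetical symmetric matrix with spectral norm 0.4, standing in for the
# splitting term whose inverse NN-K approximates.
B = rng.standard_normal((n, n))
X = 0.4 * B @ B.T / np.linalg.norm(B @ B.T, 2)

exact = np.linalg.inv(np.eye(n) - X)
approx = np.zeros((n, n))
power = np.eye(n)
for K in range(6):
    approx += power                      # running sum of X^0 + X^1 + ... + X^K
    power = power @ X
    err = np.linalg.norm(approx - exact, 2)
    print(f"K = {K}: spectral-norm error {err:.2e}")
```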
The distributed BFGS (D-BFGS) algorithm [81] replaces the second-order information in the primal update in ESOM with a BFGS approximation (i.e., replaces \(D_{i}^{(k)}\) in a call to the Hessian approximation equations in Algorithm 4 with an approximation), and results in essentially a "doubly" approximate Hessian inverse. In [82] the D-BFGS method is extended so that the dual update also uses a distributed Quasi-Newton update scheme, rather than gradient ascent. The resulting primal-dual Quasi-Newton method requires two consecutive iterative rounds of communication doubling the communication overhead per primal variable update compared to its predecessors (NN-\(K\), ESOM, and D-BFGS). However, the resulting algorithm is shown by the authors to still converge faster in terms of required communication. In addition, asynchronous variants of the approximate Newton methods have been developed [83]. ### _Convex Surrogate Methods_ While the approximate Newton methods in [80, 81, 82] optimize a quadratic approximation of the augmented Lagrangian of (12), other distributed methods allow for more general and direct convex approximations of the distributed optimization problem. These convex approximations generally require the gradient of the joint objective function which is inaccessible to any single robot. In the NEXT family of algorithms [16] dynamic consensus is used to allow each robot to approximate the global gradient, and that gradient is then used to compute a convex approximation of the joint objective function locally. A variety of surrogate functions, \(U(\cdot)\), are proposed including linear, quadratic, and block-convex functions, which allows for greater flexibility in tailoring the algorithm to individual applications. Using its surrogate of the joint objective function, each robot updates its local variables iteratively by solving its surrogate for the problem, and then taking a weighted combination of the resulting solution with the solutions of its neighbors. To ensure convergence, NEXT algorithms require a series of decreasing step-sizes, resulting in generally slower convergence rates as well as additional hyperparameter tuning. The SONATA [84] algorithm extends the surrogate function principles of NEXT, and proposes a variety of non-doubly-stochastic weighting schemes that can be used to perform gradient averaging similar to the push-sum protocols. The authors of SONATA also show that several configurations of the algorithm result in already proposed distributed optimization algorithms including Aug-DGM [85], Push-DIG [47], and ADD-OPT [53]. 
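To illustrate the surrogate idea concretely before the NEXT pseudocode in Algorithm 5, the sketch below performs a single local surrogate minimisation in which a robot keeps its own cost exactly and linearises the rest of the network's objective through a tracked-gradient term; the quadratic cost, the proximal weight \(\tau\), and the tracked gradient are hypothetical placeholders, and this is only one of the surrogate choices \(U(\cdot)\) admitted by NEXT.

```python
import numpy as np

rng = np.random.default_rng(3)
n, tau = 3, 1.0

# Hypothetical local cost f_i(x) = 0.5 x^T Q x - b^T x of one robot
Q = np.diag(rng.uniform(1, 3, n))
b = rng.standard_normal(n)

def surrogate_step(x_i, pi_tilde):
    """One local surrogate minimisation:
       U(x) = f_i(x) + pi_tilde^T (x - x_i) + tau/2 * ||x - x_i||^2,
       i.e. the robot keeps its own cost exactly and linearises the rest of
       the network's contribution through the tracked gradient pi_tilde."""
    # For a quadratic f_i the minimiser is available in closed form:
    # (Q + tau I) x = b - pi_tilde + tau x_i
    return np.linalg.solve(Q + tau * np.eye(n), b - pi_tilde + tau * x_i)

x_i = np.zeros(n)
pi_tilde = rng.standard_normal(n)        # stand-in for the tracked gradient
print(surrogate_step(x_i, pi_tilde))
```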
``` Initialization:\(k\gets 0\), \(x_{i}^{(0)}\in\mathbb{R}^{n}\), \(y_{i}^{(0)}=\nabla f_{i}(x_{i}^{(0)})\), \(\tilde{\pi}_{i}^{(k+1)}=Ny_{i}^{(0)}-\nabla f_{i}(x_{i}^{(0)})\) Internal variables:\(\mathcal{P}_{i}=\left(x_{i}^{(k)},\tilde{x}_{i}^{(k)},\tilde{\pi}_{i}^{(k)}\right)\) Communicated variables:\(\mathcal{Q}_{i}^{(k)}=\left(z_{i}^{(k)},y_{i}^{(k)}\right)\) Parameters:\(\mathcal{R}_{i}^{(k)}=\left(\alpha^{(k)},w_{i},U(\cdot),\mathcal{K}\right)\)do in parallel\(\forall i\in\mathcal{V}\) \(\tilde{x}_{i}^{(k)}=\underset{x\in\mathcal{K}}{\operatorname{argmin}}\;U\left(x ;x_{i}^{(k)},\tilde{\pi}_{i}^{(k)}\right)\) \(z_{i}^{(k)}=x_{i}^{(k)}+\alpha^{(k)}\left(\tilde{x}_{i}^{(k)}-x_{i}^{(k)}\right)\) Communicate \(\mathcal{Q}_{i}^{(k)}\) to all \(j\in\mathcal{N}_{i}\) Receive \(\mathcal{Q}_{j}^{(k)}\) from all \(j\in\mathcal{N}_{i}\) \(x_{i}^{(k+1)}=\sum_{j\in\mathcal{N}_{i}\cup\{i\}}w_{ij}z_{j}^{(k)}\) \(y_{i}^{(k+1)}=\sum_{j\in\mathcal{N}_{i}\cup\{i\}}w_{ij}y_{j}^{(k)}\) \(\qquad\qquad+\left[\nabla f_{i}(x_{i}^{(k+1)})-\nabla f_{i}(x_{i}^{(k)})\right]\) \(\tilde{\pi}_{i}^{(k+1)}=N\cdot y_{i}^{(k+1)}-\nabla f_{i}(x_{i}^{(k+1)})\) \(k\gets k+1\) whilestopping criterion is not satisfied ``` **Algorithm 5**NEXT ## VI Alternating direction method of multipliers Considering the optimization problem in (8) with only agreement constraints, we have \[\min_{x_{i},\forall i\in\mathcal{V}}\sum_{i\in\mathcal{V}}f_{i}(x_ {i}) \tag{13}\] \[\text{subject to }x_{i}=x_{j}\qquad\forall(i,j)\in\mathcal{E}. \tag{14}\] The _method of multipliers_ solves this problem by alternating between minimizing the augmented Lagrangian of the optimization problem with respect to the primal variables \(x_{1},\ldots,x_{n}\) (the "primal update") and taking a gradient step to maximize the augmented Lagrangian with respect to the dual (the "dual update"). The augmented Lagrangian of (13) is given by \[\begin{split}\mathcal{L}_{a}(\mathbf{x},q)&=\sum_{ i=1}^{N}f_{i}(x_{i})\\ &+\sum_{i=1}^{N}\sum_{j\in\mathcal{N}_{i}}\left(q_{i,j}^{\top}(x_ {i}-x_{j})+\frac{\rho}{2}\|x_{i}-x_{j}\|_{2}^{2}\right),\end{split} \tag{15}\] where \(q_{i,j}\) represents a dual variable for the consensus constraints between robots \(i\) and \(j\), \(q=\left[q_{i,j}^{\top},\;\forall(i,j)\in\mathcal{E}\right]^{\top}\), and \(\mathbf{x}=\left[x_{1}^{\top},x_{2}^{\top},\cdots,x_{N}^{\top}\right]^{\top}\). The parameter \(\rho>0\) represents a penalty term on the violations of the consensus constraints. In the _alternating direction method of multipliers_ (ADMM), given the separability of the global objective function, the primal update is executed as successive minimizations over each primal variable (i.e., choose the minimizing \(x_{1}\) with all other variables fixed, then choose the minimizing \(x_{2}\), and so on). Most ADMM-based approaches do not satisfy our definition of distributed in that either the primal updates take place sequentially rather than in parallel or the dual update requires centralized computation [86, 87, 88]. However, the _consensus alternating direction method of multipliers_ (C-ADMM) provides an ADMM-based optimization method that is fully distributed: the nodes alternate between updating their primal and dual variable and communicating with neighboring nodes [19, 89]. 
In order to achieve a distributed update of the primal and dual variables, C-ADMM alters the agreement constraints between agents with an existing communication link by introducing auxiliary primal variables in (8) (instead of the constraint \(x_{i}=x_{j}\), we have two constraints: \(x_{i}=z_{ij}\) and \(x_{j}=z_{ij}\)). Considering the optimization steps across the entire network, C-ADMM proceeds by optimizing the original primal variables, then the auxiliary primal variables, and then the dual variables, as in the original formulation of ADMM. We can perform minimization with respect to the primal variables and gradient ascent with respect to the dual variables on an augmented Lagrangian that is fully distributed among the robots. Algorithm 6 summarizes the update procedures for the local primal and dual variables of each agent, where \(y_{i}\) represents the dual variable that enforces agreement between robot \(i\) and its neighbors. We have incorporated the solution of the auxiliary primal variable update into the update procedure for \(x_{i}^{(k+1)}\), noting that the auxiliary primal variable update can be performed implicitly (\(z_{ij}^{*}=\frac{1}{2}\left(x_{i}+x_{j}\right)\)). The parameter \(\rho\) that weights the quadratic terms in \(\mathcal{L}_{a}\) is also the step-size in the gradient ascent of the dual variable. We note that the update procedure for \(x_{i}^{(k+1)}\) requires solving an optimization problem which might be computationally intensive for certain objective functions. To simplify the update complexity, the optimization can be solved inexactly using a linear approximation of the objective function such as [90, 91, 92] or a quadratic approximation using the Hessian such as DQM [93]. The convergence rate of ADMM methods depends on the value of the penalty parameter \(\rho\). Several works discuss effective strategies for optimally selecting \(\rho\)[94]. In general, convergence of C-ADMM and its variants is only guaranteed when the dual variables sum to zero, a condition that could be challenging to satisfy in problems with unreliable communication networks. Other distributed ADMM variants which do not require this condition have been developed [95, 96]. However, these methods incur a greater communication overhead to provide robustness in these problems. Gradient tracking algorithms are related to C-ADMM, when the minimization problem in the primal update procedure is solved using a single gradient decent update. C-ADMM, as presented in Algorithm 6, requires each robot to optimize over a local copy of the global decision variable \(x\). However, many robotic problems have a fundamental structure that makes maintaining global knowledge at every individual robot unnecessary: each robot's data relate only to a subset of the global optimization variables, and each agent only requires a subset of the optimization variable for its role. For instance, in distributed SLAM, a memory-efficient solution would require a robot to optimize only over its local map and communicate with other robots only messages of shared interest. Other examples arise in distributed environmental monitoring by multiple robots [97]. The SOVA method [98] leverages the separability of the optimization variable to achieve orders of magnitude improvement in convergence rates, computation, and communication complexity over C-ADMM methods. 
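The following minimal NumPy sketch instantiates the C-ADMM updates of Algorithm 6 for hypothetical quadratic local costs on a four-robot line graph, in which case the primal argmin step has a closed form; the problem data, the graph, and the penalty parameter \(\rho\) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)
N, n, rho = 4, 2, 1.0
# Hypothetical local costs f_i(x) = 0.5 * ||A_i x - b_i||^2 on a line graph
A = [rng.standard_normal((3, n)) for _ in range(N)]
b = [rng.standard_normal(3) for _ in range(N)]
neighbours = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
x_star = np.linalg.solve(sum(Ai.T @ Ai for Ai in A),
                         sum(Ai.T @ bi for Ai, bi in zip(A, b)))

x = np.zeros((N, n))
y = np.zeros((N, n))
for _ in range(200):
    x_old = x.copy()
    for i in range(N):                   # primal update (closed form here)
        d_i = len(neighbours[i])
        rhs = A[i].T @ b[i] - y[i] + rho * sum(x_old[i] + x_old[j]
                                               for j in neighbours[i])
        x[i] = np.linalg.solve(A[i].T @ A[i] + 2 * rho * d_i * np.eye(n), rhs)
    for i in range(N):                   # dual (gradient-ascent) update
        y[i] = y[i] + rho * sum(x[i] - x[j] for j in neighbours[i])

print("max deviation from x*:", np.abs(x - x_star).max())
```

In this convex setting, with the dual variables initialised to zero on an undirected graph, the iterates converge to the joint least-squares solution without any step-size schedule; only the penalty parameter \(\rho\) needs to be chosen.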
``` Initialization:\(k\gets 0\), \(x_{i}^{(0)}\in\mathbb{R}^{n}\), \(y_{i}^{(0)}=0\) Internal variables:\(\mathcal{P}_{i}^{(k)}=y_{i}^{(k)}\) Communicated variables:\(\mathcal{Q}_{i}^{(k)}=x_{i}^{(k)}\) Parameters:\(\mathcal{R}_{i}^{(k)}=\rho\)doin parallel\(\forall i\in\mathcal{V}\) \(x_{i}^{(k+1)}=\operatorname*{argmin}_{x_{i}}\left\{f_{i}(x_{i})+x_{i}^{\top}y _{i}^{(k)}\cdots\right.\) \(\left.\hskip 14.226378pt+\rho\sum_{j\in\mathcal{N}_{i}}\left\|x_{i}-\frac{1}{2} \left(x_{i}^{(k)}+x_{j}^{(k)}\right)\right\|_{2}^{2}\right\}\) Communicate \(\mathcal{Q}_{i}^{(k)}\) to all \(j\in\mathcal{N}_{i}\) Receive \(\mathcal{Q}_{j}^{(k)}\) from all \(j\in\mathcal{N}_{i}\) \(y_{i}^{(k+1)}=y_{i}^{(k)}+\rho\sum_{j\in\mathcal{N}_{i}}\left(x_{i}^{(k+1)}-x _{j}^{(k+1)}\right)\) \(k\gets k+1\) whilestopping criterion is not satisfied ``` **Algorithm 6**C-ADMM In SOVA, each agent only optimizes over variables relevant to its data or role, enabling robotic applications in which agents have minimal access to computation and communication resources. SOVA introduces consistency constraints between each agent's local optimization variable and its neighbors, mapping the elements of the local optimization variables, given by \[\Phi_{ij}x_{i}=\Phi_{ji}x_{j}\quad\forall j\in\mathcal{N}_{i},\ \forall i\in \mathcal{V}\] where \(\Phi_{ij}\) and \(\Phi_{ji}\) map elements of \(x_{i}\) and \(x_{j}\) to a common space. C-ADMM represents a special case of SOVA where \(\Phi_{ij}\) is always the identity matrix. The update procedures for each agent reduce to the equations given in Algorithm 7. ``` Initialization:\(k\gets 0\), \(x_{i}^{(0)}\in\mathbb{R}^{n_{i}}\), \(y_{i}^{(0)}=0\) Internal variables:\(\mathcal{P}_{i}^{(k)}=y_{i}^{(k)}\) Communicated variables:\(\mathcal{Q}_{i}^{(k)}=x_{i}^{(k)}\) Parameters:\(\mathcal{R}_{i}^{(k)}=\rho\)doin parallel\(\forall i\in\mathcal{V}\) \(x_{i}^{(k+1)}=\operatorname*{argmin}_{x_{i}}\left\{f_{i}(x_{i})+x_{i}^{\top}y _{i}^{(k)}\cdots\right.\) \(\left.\hskip 14.226378pt+\rho\sum_{j\in\mathcal{N}_{i}}\left\|\Phi_{ij}x_{i}- \frac{1}{2}\left(\Phi_{ij}x_{i}^{(k)}+\Phi_{ji}x_{j}^{(k)}\right)\right\|_{2}^{ 2}\right\}\) Communicate \(\mathcal{Q}_{i}^{(k)}\) to all \(j\in\mathcal{N}_{i}\) Receive \(\mathcal{Q}_{j}^{(k)}\) from all \(j\in\mathcal{N}_{i}\) \(y_{i}^{(k+1)}=y_{i}^{(k)}+\rho\sum_{j\in\mathcal{N}_{i}}\Phi_{ij}^{\top}\left( \Phi_{ij}x_{i}^{(k)}-\Phi_{ji}x_{j}^{(k)}\right)\) \(k\gets k+1\) whilestopping criterion is not satisfied ``` **Algorithm 7**SOVA ## VII Applications of Distributed Optimization in Robotics Literature In this section, we discuss some existing applications of distributed optimization to robotics problems. To simplify the presentation, we highlight a number of these applications in the following notable problems in robotics: synchronization, localization, mapping, and target tracking; online and deep learning problems; and task assignment, planning, and control. We refer the reader to the first paper in this two-part series [1] for a case study on multi-drone target tracking, which compares solutions using a number of different distributed optimization algorithms. ### _Synchronization, Localization, Mapping, and Target Tracking_ Distributed optimization algorithms have found notable applications in robot localization from relative measurements [99, 100], including in networks with asynchronous communication [101]. 
More generally, distributed first-order algorithms have been applied to optimization problems on manifolds, including \(SE(3)\) localization [102, 103, 104, 105], synchronization problems [106], and formation control in \(SO(3)\)[107, 108]. In pose graph optimization, distributed optimization has been employed through majorization-minimization schemes, which minimize an upper bound of the objective function [109]; using gradient descent on Riemannian manifolds [110, 111]; and block-coordinate descent [112]. Other pose graph optimization methods have utilized distributed sequential programming algorithms using a quadratic approximation model of the non-convex objective function with Gauss-Seidel updates to enable distributed local computations among the robots [113]. Further, ADMM has been employed in bundle adjustment and pose graph optimization problems, which involve the recovery of the 3D positions and orientations of a map and camera [114, 115, 116]. However, many of these algorithms require a central node for the dual variable updates, making them semi-distributed. Nonetheless, a few fully-distributed ADMM-based algorithms exist for bundle adjustment and cooperative localization problems [117, 118]. Other applications of distributed optimization arise in target tracking [119], signal estimation [19], and parameter estimation in global navigation satellite systems [120]. ### _Online and Deep Learning Problems_ Distributed optimization has been applied in online, dynamic problems, where the objective function of each robot changes with time. In these problems, each robot gains knowledge of its time-varying objective function in an online fashion, after taking an action or decision. A number of distributed first-order algorithms have been designed for these problems [121, 122, 123]. Similarly, DDA has been adapted for online scenarios with static communication graphs [124, 125] and time-varying communication topology [126, 127]. The push-sum variant of dual averaging has also been used for distributed training of deep-learning algorithms, and has been shown to be useful in avoiding pitfalls of other synchronous distributed training frameworks, which face notable challenges in problems with communication deadlocks [128]. In addition, distributed sequential convex programming algorithms have been developed for a number of learning problems where data is distributed, including semi-supervised support vector machines [129], neural network training [130], and clustering [131]. Moreover, ADMM has been applied to online problems, such as estimation and surveillance problems involving wireless sensor networks [132, 133]. ADMM has also been applied to distributed deep learning in robot networks in [134]. ### _Task Assignment, Planning, and Control_ Distributed optimization has been applied to task assignment problems, posed as optimization problems. Some works [135] employ distributed optimization using a distributed simplex method [136] to obtain an optimal assignment of the robots to a desired target formation. Other works employ C-ADMM for distributed task assignment [137]. Further applications of distributed optimization arise in motion planning [138], trajectory tracking problems involving teams of robots using non-linear model predictive control [139], and collaborative manipulation [140, 141], which employ fully-distributed variants of ADMM.
## VIII Practical Notes on the Implementation of Distributed Optimization Algorithms Here, we highlight some relevant issues that arise in the application of distributed optimization algorithms in robotics problems. In particular, we provide alternative distributed algorithms that address these issues, often at the expense of algorithmic performance with respect to convergence. #### Vi-B1 Selection of a Stochastic Matrix Distributed first-order algorithms and distributed sequential convex programming algorithms require the specification of a stochastic matrix, which must be compatible with the underlying communication network. As a result, the nature of the communication network available to all robots influences the choice of a stochastic matrix. In general, generating compatible row-stochastic and column-stochastic matrices for directed communication networks does not pose a significant challenge. To obtain a row-stochastic matrix, each robot assigns a weight to all its in-neighbors such that the sum of all its weights equals one. Conversely, to obtain a column-stochastic matrix, each robot assigns a weight to all its out-neighbors such that the sum of all its weights equals one. In contrast, significant challenges arise when generating doubly-stochastic matrices for directed communication networks. Consequently, in general, algorithms which require doubly-stochastic matrices are unsuitable for problems with directed communication networks. A number of distributed first-order algorithms allow for the specification of row-stochastic or column-stochastic matrices, making this class of algorithms more amenable to problems with directed communication networks, unlike distributed sequential convex programming algorithms, which generally require the specification of a doubly-stochastic weighting matrix. Further, a number of distributed sequential convex programming algorithms require symmetry of the doubly-stochastic weighting matrix [15, 80, 82, 83], posing an even greater challenge in problems with directed networks. The specific choice of a doubly-stochastic weighing matrix may vary depending on the assumptions made on what global knowledge is available to the robots on the network. The problem of choosing an optimal weight matrix is discussed thoroughly in [142], in which the authors show that achieving the fastest possible consensus can be posed as a semidefinite program, which a computer with global knowledge of the network can solve efficiently. However, we cannot always assume that global knowledge of the network is available, especially in the case of a time-varying topology. In most cases, Metropolis weights facilitate fast mixing without requiring global knowledge, with the assumption that the communication network is undirected with bi-directional communication links. Each robot can generate its own weight vector after a single communication round with its neighbors. In fact, Metropolis weights perform only slightly sub-optimally compared to centralized optimization-based methods [143]: \[w_{ij}=\begin{cases}\frac{1}{\max\{|\mathcal{N}_{i}|,|\mathcal{N}_{j}|\}}&j \in\mathcal{N}_{i},\\ 1-\sum_{j^{\prime}\in\mathcal{N}_{i}}w_{ij^{\prime}}&i=j,\\ 0&\text{else}.\end{cases} \tag{16}\] Distributed algorithms based on the alternating direction method of multipliers do not require the specification of a stochastic weighting matrix. 
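As a brief aside on weight selection, the sketch below implements the Metropolis rule (16) exactly as stated for a hypothetical five-robot graph and checks that the resulting matrix is symmetric and doubly stochastic; each robot only needs its neighbours' degrees, i.e. a single round of communication.

```python
import numpy as np

def metropolis_weights(neighbours):
    """Metropolis weights of (16) for an undirected graph given as an
       adjacency list; robot i only needs the degrees of its neighbours."""
    N = len(neighbours)
    W = np.zeros((N, N))
    for i in range(N):
        for j in neighbours[i]:
            W[i, j] = 1.0 / max(len(neighbours[i]), len(neighbours[j]))
        W[i, i] = 1.0 - W[i].sum()
    return W

# Hypothetical 5-robot graph (a star with one extra edge)
neighbours = {0: [1, 2, 3, 4], 1: [0, 2], 2: [0, 1], 3: [0], 4: [0]}
W = metropolis_weights(neighbours)
print("row-stochastic:", np.allclose(W.sum(axis=1), 1.0))
print("column-stochastic:", np.allclose(W.sum(axis=0), 1.0))
print("symmetric:", np.allclose(W, W.T))
```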
However, C-ADMM and other distributed variants assume that the communication network between all robots is bi-directional, which makes these algorithms unsuitable for problems with directed communication networks. Nevertheless, a number of distributed ADMM algorithms for problems with directed communication networks have been developed [144, 20, 145]. Owing to the absence of bi-directional communication links between the robots, these algorithms utilize a dynamic average consensus scheme to update the slack variables at each iteration, which merges information from a robot and its neighbors using a stochastic weighting matrix. However, some of these distributed algorithms require the specification of a doubly-stochastic weighting matrix [145], which introduces notable challenges in problems with directed communication networks, while others allow for the specification of a column-stochastic weighting matrix [20]. #### V-B2 Initialization In general, distributed optimization algorithms allow for an arbitrary initialization of each robot's local solution in convex problems. However, these algorithms often place stringent requirements on the initialization of the algorithms' parameters. DFO methods require initialization of the step-size and often place conditions on the value of the step-size to guarantee convergence. Some distributed gradient tracking algorithms [47, 51] assume all robots use a common step-size, requiring coordination among all robots. Selecting a common step-size might involve the execution of a consensus procedure by all robots, with additional computation and communication overhead. In algorithms which utilize a fixed step-size, this procedure only needs to be executed once, at the beginning of the optimization algorithm. ADMM and its distributed variants require the selection of a common penalty parameter \(\rho\). Consequently, all robots must coordinate among themselves in selecting a value for \(\rho\), introducing some challenges, particularly in problems where the convergence rate depends strongly on the value of \(\rho\). Initialization of these algorithm-specific parameters has a significant impact on the performance of each algorithm. We refer the reader to the first paper in this series [1], where we empirically study the sensitivity of a number of distributed optimization algorithms to the choice of the algorithm-specific parameters. #### V-B3 Synchronization of the Robots' Local Clocks In general, distributed first-order, distributed sequential programming, and distributed ADMM algorithms require the synchronization of the local clock of each robot to a global clock to ensure that the local updates proceed in a synchronous fashion. The global clock keeps track of the current number of iterations executed by all robots. Synchronization of the robots' local clocks via a distributed scheme presents notable challenges, especially in situations where the time taken by each robot to complete an iteration varies widely. To address these issues, a number of asynchronous distributed first-order algorithms have been developed [146, 147]. Similarly, some asynchronous variants of distributed sequential programming algorithms exist [81, 83], which allow each robot to perform its local updates asynchronously, eliminating the need for synchronization of the local clocks. In addition, a few asynchronous distributed ADMM variants exist [118].
These asynchronous variants are guaranteed to converge to an optimal solution, provided that an integer \(T\in\mathbb{Z}\) exists such that each robot performs at least one iteration of the algorithm over \(T\) time-steps. #### V-B4 Dynamic Communication Networks In practical situations, the communication network between robots changes over time as the robots move, giving rise to a time-varying communication graph. Networked robots in the real world can also suffer from dropped message packets as well as failed hardware or software components. Generally, distributed first-order optimization algorithms are amenable to problems with dynamic communication networks and are guaranteed to converge to the optimal solution provided that the communication graph is \(B\)-connected for undirected communication graphs or \(B\)-strongly connected for directed communication graphs [47], which implies that the union of the communication graphs over \(B\) consecutive time-steps is connected or strongly-connected respectively. This property is also referred to as bounded connectivity. This assumption ensures the diffusion of information among all robots. Unlike DFO algorithms, many distributed sequential convex programming algorithms assume the communication network remains static. Nevertheless, a few distributed sequential programming algorithms are amenable to problems with dynamic communication networks [16, 84] and converge to the optimal solution of the problem under the assumption that the sequence of communication graphs is \(B\)-strongly connected. In general, distributed ADMM algorithms are not amenable to problems with dynamic communication networks. This is an interesting avenue for future research. ## IX Open Challenges in Distributed Optimization for Robotics In this section, we highlight some notable unresolved challenges in the application of distributed optimization to robotics problems. We note, however, that the following discussion does not represent an exhaustive enumeration of these challenges. ### _Constrained Robotics Problems_ Distributed optimization methods have primarily focused on solving unconstrained convex optimization problems, which constitute a notably limited subset of robotics problems. Generally, robotics problems involve non-convex objectives and constraints, which render these problems not directly amenable to many existing distributed optimization methods. For example, problems in multi-robot motion planning, SLAM, learning, distributed manipulation, and target tracking are often non-convex and/or constrained. Both DFO methods and C-ADMM methods can be modified for non-convex and constrained problems; however, few examples of practical algorithms or rigorous analyses of performance for such modified algorithms exist in the literature. Specifically, while C-ADMM is naturally amenable to constrained optimization, there are many possible ways to adapt C-ADMM to non-convex objectives, which have yet to be explored. One way to implement C-ADMM for non-convex problems is to solve each primal update step as a non-convex optimization (e.g., through a quasi-Newton method, or interior point method). Another option is to perform successive quadratic approximations in an outer loop, and use C-ADMM to solve each resulting quadratic problem in an inner loop. The trade-off between these two options has not yet been explored in the literature, especially in the context of non-convex problems in robotics. 
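As a sketch of the first option, the snippet below performs one robot's C-ADMM primal update for a hypothetical non-convex local cost by handing the subproblem to an off-the-shelf quasi-Newton solver; the cost, the neighbours' iterates, the dual variable, and \(\rho\) are all illustrative placeholders, and only a stationary point of the local subproblem is obtained.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(5)
n, rho = 2, 1.0

def f_i(x):                              # hypothetical non-convex local cost
    return np.sum(np.sin(x)) + 0.1 * np.sum(x ** 4)

y_i = rng.standard_normal(n)             # current dual variable of robot i
x_i = rng.standard_normal(n)             # robot i's current iterate
x_neigh = [rng.standard_normal(n) for _ in range(2)]   # neighbours' iterates

def primal_objective(x):
    penalty = sum(np.sum((x - 0.5 * (x_i + x_j)) ** 2) for x_j in x_neigh)
    return f_i(x) + x @ y_i + rho * penalty

# First option discussed above: solve the (non-convex) primal step with a
# local quasi-Newton method; only a local solution is guaranteed.
result = minimize(primal_objective, x_i, method="BFGS")
x_i_next = result.x
print(x_i_next, result.fun)
```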
### _Bandwidth-Constrained, Lossy Communication Networks_ In many robotics problems, each robot exchanges information with its neighbors over a communication network with a limited communication bandwidth, which effectively limits the size of the message packets that can be transmitted between robots. Moreover, in practical situations, the communication links between robots sometimes fail, resulting in packet losses. However, many distributed optimization methods do not consider communication between agents as an expensive, unreliable resource, given that many of these methods were developed for problems with reliable communication infrastructure (e.g., multi-core computing, or computing in a hard-wired cluster). Information quantization has been extensively employed in many disciplines to allow for efficient exchange of information over bandwidth-constrained networks. Quantization involves encoding the data to be transmitted into a format which uses fewer bits, often resulting in lower precision. Transmission of the encoded data incurs a lower communication overhead, enabling each robot to communicate with its neighbors within the bandwidth constraints. A few distributed optimization algorithms have been designed for these problems, including quantized distributed first-order algorithms. Some of these algorithms assume that all robots can communicate with a central node [148, 149], making them unsuitable for a variety of robotics problems, while others do not make this assumption [150, 151, 152, 153]. In addition, quantized distributed variants of ADMM exist [154, 21, 155]. Generally, quantization introduces error between each robot's solution and the optimal solution. However, in some of these algorithms, the quantization error decays during the execution of the algorithms under certain assumptions on the quantizer and the quantization interval [150, 151]. Nevertheless, quantization in distributed optimization algorithms generally results in slower convergence rates, which poses a challenge in robotics problems where a solution is required rapidly, such as model predictive control problems, highlighting the need for the development of more effective algorithms. Further, only a few distributed optimization algorithms consider problems with lossy communication networks [156, 157, 158]. ### _Limited Computation Resources_ Another valuable direction for future research is in developing algorithms specifically for computationally limited robotic platforms, in which the timeliness of the solution is as important as the solution quality. In general, many distributed optimization methods involve computationally challenging procedures that require significant computational power, especially distributed methods for constrained problems. These methods ignore the significance of computation time, assuming that agents have access to significant computational power. These assumptions often do not hold in robotics problems. Typically, robotics problems unfold over successive time periods with an associated optimization phase at each step of the problem. As such, agents must compute their solutions fast enough to proceed with computing a reasonable solution for the next problem, which requires efficient distributed optimization methods. Developing such algorithms specifically for multi-robot systems is an interesting topic for future work.
### _Hardware Implementation_ Finally, there are very few examples of distributed optimization algorithms implemented and running on multi-robot hardware. This leaves a critical gap in the existing literature, as the ability of these algorithms to run efficiently and robustly on robots has still not been thoroughly proven. As a notable exception, we provide empirical results of a hardware implementation of C-ADMM over XBee radios in the first paper in this series [1]. ## X Conclusion Despite the amenability of many robotics problems to distributed optimization, few applications of distributed optimization to multi-robot problems exist. In this work, we have categorized distributed optimization methods into three broad classes--distributed first-order methods, distributed sequential convex programming methods, and the alternating direction method of multipliers (ADMM)--highlighting the distinct mathematical techniques employed by these algorithms. In addition, we have provided practical notes on the implementation of distributed optimization algorithms, with a view towards advancing the application of distributed optimization in robotics. Further, we have identified a number of important open challenges in distributed optimization for robotics, which could be interesting areas for future research. In general, the opportunities for research in distributed optimization for multi-robot systems are plentiful. Distributed optimization provides an appealing unifying framework from which to synthesize solutions for a large variety of problems in multi-robot systems.
2310.11420
Revisiting Map Relations for Unsupervised Non-Rigid Shape Matching
We propose a novel unsupervised learning approach for non-rigid 3D shape matching. Our approach improves upon recent state-of-the-art deep functional map methods and can be applied to a broad range of different challenging scenarios. Previous deep functional map methods mainly focus on feature extraction and aim exclusively at obtaining more expressive features for functional map computation. However, the importance of the functional map computation itself is often neglected and the relationship between the functional map and point-wise map is underexplored. In this paper, we systematically investigate the coupling relationship between the functional map from the functional map solver and the point-wise map based on feature similarity. To this end, we propose a self-adaptive functional map solver to adjust the functional map regularisation for different shape matching scenarios, together with a vertex-wise contrastive loss to obtain more discriminative features. Using different challenging datasets (including non-isometry, topological noise and partiality), we demonstrate that our method substantially outperforms previous state-of-the-art methods.
Dongliang Cao, Paul Roetzer, Florian Bernard
2023-10-17T17:28:03Z
http://arxiv.org/abs/2310.11420v1
# Revisiting Map Relations for Unsupervised Non-Rigid Shape Matching ###### Abstract We propose a novel unsupervised learning approach for non-rigid 3D shape matching. Our approach improves upon recent state-of-the-art deep functional map methods and can be applied to a broad range of different challenging scenarios. Previous deep functional map methods mainly focus on feature extraction and aim exclusively at obtaining more expressive features for functional map computation. However, the importance of the functional map computation itself is often neglected and the relationship between the functional map and point-wise map is underexplored. In this paper, we systematically investigate the coupling relationship between the functional map from the functional map solver and the point-wise map based on feature similarity. To this end, we propose a self-adaptive functional map solver to adjust the functional map regularisation for different shape matching scenarios, together with a vertex-wise contrastive loss to obtain more discriminative features. Using different challenging datasets (including non-isometry, topological noise and partiality), we demonstrate that our method substantially outperforms previous state-of-the-art methods. ## 1 Introduction 3D shape matching is a fundamental problem in shape analysis, computer vision and computer graphics with a broad range of applications, including texture transfer [15], deformation transfer [64] and statistical shape analysis [20, 40, 44]. Even though 3D shape matching is a long-standing problem and has been studied for decades [65, 67], finding correspondences between two non-rigidly deformed 3D shapes is still a challenging problem, especially for shapes with large non-isometric deformation, topological noise, or partiality. Notably, in the case of 3D shapes represented by triangle meshes, the functional map framework [50] is one of the most dominant pipelines in this area and has been extended by many follow-up works due to its efficiency and well-justified theoretical properties [17, 49, 53, 57]. Meanwhile, with the recent rapid development in deep learning, many learning-based methods for non-rigid 3D shape matching are also based on the functional map framework, including both supervised [4, 16, 42] and unsupervised [10, 12, 18, 23, 30, 39, 59, 62] approaches. Most of them mainly focus on training the feature extraction module to obtain functional maps based on the extracted features and then rely on off-the-shelf post-processing [47] to obtain final point-wise correspondences. In contrast, the recent work by Cao et al. [12] explicitly models the relationship between functional maps and point-wise maps and thus leads to more robust matching in a broad range of challenging scenarios. However, the method only focuses on extracting more expressive features and ignores the importance of the functional map computation itself. Further, it lacks a discussion of the relationship between the functional map and the point-wise map. In this paper, we improve upon the recent work by Cao et al. [12] by proposing a novel functional map solver that is self-adaptive to different shape matching scenarios. Moreover, we systematically analyse the relationship between the functional map and the point-wise map and introduce a vertex-wise contrastive loss to obtain more discriminative features, leading to more accurate correspondences.
We summarise our main contributions as follows: * For the first time we propose a functional map solver that is self-adaptive for different challenging matching scenarios. * We introduce a vertex-wise contrastive loss to obtain more discriminative features that can be used directly for matching via nearest neighbour search. * We set the new state-of-the-art performance on numerous challenging benchmarks in diverse settings, including non-isometric, topologically noisy and partial shape matching, even compared to recent supervised methods. ## 2 Related work 3D shape matching is a long-standing problem that has been studied for decades. In the following we focus on reviewing those methods that are most relevant to our work. A more comprehensive overview can be found in [60, 65, 67]. ### Axiomatic shape matching methods Shape matching can be formulated as establishing point-wise correspondences between a given pair of shapes. A simple formulation for doing so is the linear assignment problem (LAP) [48]. However, the LAP cannot take geometric relations into account and thus leads to spatially non-smooth matchings. To compensate for this, several shape matching approaches [31, 58, 69] establish correspondences by explicitly incorporating geometric constraints. Some methods [6, 21, 26, 33] attempt to solve the problem based on non-rigid shape registration. Overall, directly establishing point-wise correspondences often leads to complex optimisation problems that are difficult to solve. In contrast, the functional map framework finds correspondences in the functional domain [50]. Here, the correspondence relationship can be encoded with a small matrix, namely the functional map. Due to its simple yet efficient formulation, the functional map framework has been extended by many follow-up works, e.g. in terms of improving the matching accuracy and robustness [25, 54], extending it to more challenging scenarios (e.g. non-isometry [22, 37, 45, 53, 56], partiality [43, 57]), considering multi-shape matching [13, 28, 32, 34], and matching with non-unique solutions [55]. Nevertheless, axiomatic functional map methods rely on handcrafted features (e.g. HKS [9], WKS [5], SHOT [61]), which limits their performance. In contrast, our method (among others) directly learns discriminative features from training data and achieves more accurate and robust matching performance on challenging settings. ### Deep functional map methods In contrast to axiomatic approaches, deep functional map methods aim to learn features directly from training data. The supervised FMNet [42] is the pioneer work that learns a non-linear transformation of SHOT feature [61] based on a point-wise MLP. Later works [30, 59] enable unsupervised training of FMNet by introducing isometry regularisation in the spatial and spectral domain, respectively. Instead of using simple point-wise MLPs, follow-up works [16, 62] replace FMNet by point-based networks [52, 66] and lead to better matching performance. More recently, Sharp et al. [63] introduces DiffusionNet with a learnable diffusion process and has set the new state-of-the-art matching performance for a broad range of shape matching scenarios, including near-isometry [3, 10], non-isometry [2, 18, 39], partiality [4, 12], as well as shapes represented as point clouds [11]. 
Despite the rapid progress of deep functional map methods, existing approaches mostly focus on learning more expressive features for functional map computation, while ignoring the importance of the functional map computation itself. In this work, we systematically investigate the functional map computation process and introduce a self-adaptive functional map solver to better regularise the functional map structure for different kinds of input shapes. ## 3 Background In this section we explain the background and introduce the notation used throughout the rest of the paper in Tab. 1. ### Functional map framework We consider a pair of 3D shapes \(\mathcal{X}\) and \(\mathcal{Y}\) represented as triangle meshes, with \(n_{\mathcal{X}}\) and \(n_{\mathcal{Y}}\) vertices, respectively. Here we summarise the common pipeline of the functional map framework.
1. Compute the associated positive semi-definite Laplacian matrices \(L_{\mathcal{X}},L_{\mathcal{Y}}\)[51]. The Laplacian matrix can be computed as \(L_{\mathcal{X}}=M_{\mathcal{X}}^{-1}W_{\mathcal{X}}\), where \(M_{\mathcal{X}}\) is the diagonal lumped mass matrix and \(W_{\mathcal{X}}\) is the cotangent weight matrix.
2. Compute the first \(k\) eigenfunctions \(\Phi_{\mathcal{X}},\Phi_{\mathcal{Y}}\) and the corresponding eigenvalues \(\Lambda_{\mathcal{X}},\Lambda_{\mathcal{Y}}\) of the respective Laplacian matrices (i.e. LBO eigenfunctions/eigenvalues).
3. Compute \(c\)-dimensional features \(F_{\mathcal{X}},F_{\mathcal{Y}}\) defined on each shape either from handcrafted feature descriptors or from a learnable feature extractor.
4. Compute the functional map \(C_{\mathcal{X}\mathcal{Y}}\) associated with the LBO eigenfunctions by solving (variants of) the least squares problem \[C_{\mathcal{X}\mathcal{Y}}=\operatorname*{argmin}_{C}\,E_{\operatorname*{data}}\left(C\right)+\lambda E_{\operatorname*{reg}}\left(C\right).\] (1) Here, minimising \(E_{\operatorname*{data}}=\left\|CA_{\mathcal{X}}-A_{\mathcal{Y}}\right\|_{F}^{2}\) enforces descriptor preservation, while minimising the regularisation term \(E_{\operatorname*{reg}}\) imposes some form of structural properties (e.g. Laplacian commutativity [50]).
5. Recover the point-wise map \(\Pi_{\mathcal{Y}\mathcal{X}}\) based on the relationship \(C_{\mathcal{X}\mathcal{Y}}=\Phi_{\mathcal{Y}}^{\dagger}\Pi_{\mathcal{Y}\mathcal{X}}\Phi_{\mathcal{X}}\), e.g. either by nearest neighbour search in the spectral domain or by other post-processing techniques [27, 47, 68].
We emphasise that most deep functional map methods mainly focus on the third step that aims to extract more expressive features for functional map computation while ignoring the importance of the other steps (i.e. the functional map computation and point-wise map conversion). However, we argue that this may lead to sub-optimal performance, since the three interrelated aspects (feature learning, functional map computation, and point-wise map conversion) are considered in an isolated rather than a joint manner. Therefore, in this paper we systematically investigate the functional map computation step and the relationship between the functional map and the associated point-wise map. ### Deep functional maps Instead of relying on handcrafted features [5, 9, 61] to compute functional maps, many deep functional map methods [59, 62] have been proposed. The common pipeline of those methods is shown in Fig. 2 (left).
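As a concrete reference for steps 4 and 5 of the pipeline above, the following NumPy sketch performs the regularised least-squares solve for \(C_{\mathcal{X}\mathcal{Y}}\) with Laplacian-commutativity weights, followed by nearest-neighbour recovery of \(\Pi_{\mathcal{Y}\mathcal{X}}\) in the spectral domain; all inputs are random placeholders rather than quantities computed from actual meshes, and the row-wise solver is one standard way of handling the diagonal regulariser.

```python
import numpy as np

rng = np.random.default_rng(6)
k, c, nX, nY = 20, 64, 500, 480          # hypothetical sizes

# Placeholder inputs standing in for steps 1-3: LBO bases, eigenvalues and
# per-vertex features (random here, not computed from real meshes).
Phi_X, Phi_Y = rng.standard_normal((nX, k)), rng.standard_normal((nY, k))
evals_X, evals_Y = np.sort(rng.uniform(0, 10, k)), np.sort(rng.uniform(0, 10, k))
F_X, F_Y = rng.standard_normal((nX, c)), rng.standard_normal((nY, c))

# Step 4: projected feature coefficients and a regularised least-squares solve.
A_X, A_Y = np.linalg.pinv(Phi_X) @ F_X, np.linalg.pinv(Phi_Y) @ F_Y
lam = 1e-2
C = np.zeros((k, k))
for i in range(k):                       # each row of C can be solved separately
    D_i = np.diag((evals_Y[i] - evals_X) ** 2)   # Laplacian-commutativity weights
    C[i] = np.linalg.solve(A_X @ A_X.T + lam * D_i, A_X @ A_Y[i])

# Step 5: point-wise map by nearest-neighbour search in the spectral domain,
# using Phi_Y C ~ Pi_YX Phi_X.
emb_Y, emb_X = Phi_Y @ C, Phi_X
Pi_YX = np.argmin(((emb_Y[:, None, :] - emb_X[None, :, :]) ** 2).sum(-1), axis=1)
print(C.shape, Pi_YX.shape)
```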
The common deep functional map framework mainly consists of two modules: a feature extractor and a functional map solver. The feature extractor is used to extract vertex-wise features and the functional map solver is used to compute functional maps based on the extracted features. To train the feature extractor, structural regularisation (e.g. orthogonality, bijectivity [59]) is imposed on the computed functional maps, i.e. \[L_{\operatorname*{fmap}}=\lambda_{\operatorname*{bij}}L_{\operatorname*{bij}}+\lambda_{\operatorname*{orth}}L_{\operatorname*{orth}}, \tag{2}\] where \[L_{\operatorname*{bij}}=\left\|C_{\mathcal{X}\mathcal{Y}}C_{\mathcal{Y}\mathcal{X}}-I\right\|_{F}^{2}+\left\|C_{\mathcal{Y}\mathcal{X}}C_{\mathcal{X}\mathcal{Y}}-I\right\|_{F}^{2}, \tag{3}\] \[L_{\operatorname*{orth}}=\left\|C_{\mathcal{X}\mathcal{Y}}^{\top}C_{\mathcal{X}\mathcal{Y}}-I\right\|_{F}^{2}+\left\|C_{\mathcal{Y}\mathcal{X}}^{\top}C_{\mathcal{Y}\mathcal{X}}-I\right\|_{F}^{2}. \tag{4}\] After training, off-the-shelf post-processing techniques [47, 68] are used to convert functional maps to point-wise maps. As pointed out by recent works [3, 12, 56], a major downside of this common pipeline is that the relation between the functional maps and associated point-wise maps is ignored, so that the performance is often sub-optimal, especially in the presence of large non-isometry, topological noise or partiality. To compensate for this, Cao et al. [12] proposed to directly obtain point-wise maps based on the extracted features and introduced a coupling loss \(L_{\operatorname*{couple}}\) to explicitly regularise the relation between the point-wise map \(\Pi_{\mathcal{Y}\mathcal{X}}\) and the corresponding functional map \(C_{\mathcal{X}\mathcal{Y}}\), i.e. \[L_{\operatorname*{couple}}=\left\|C_{\mathcal{X}\mathcal{Y}}-C_{\mathcal{X}\mathcal{Y}}^{\Pi}\right\|_{F}^{2}, \tag{5}\] where \(C_{\mathcal{X}\mathcal{Y}}^{\Pi}=\Phi_{\mathcal{Y}}^{\dagger}\Pi_{\mathcal{Y}\mathcal{X}}\Phi_{\mathcal{X}}\). By explicitly modelling the relationship between functional maps and point-wise maps, the method proposed by Cao et al. [12] substantially outperforms existing methods and is robust in different challenging scenarios. However, there are two major limitations:
* In their approach, one functional map \(C_{\mathcal{X}\mathcal{Y}}\) is computed from the functional map solver, while the other one \(C_{\mathcal{X}\mathcal{Y}}^{\Pi}\) is converted from the point-wise map based on the feature similarity. Yet, the underlying relationship between them is not well-understood.
* The coupling loss \(L_{\operatorname*{couple}}\) regularises the functional maps computed from the functional map solver based on the extracted features, while the functional map computation itself is not optimised in a data-driven manner.
In the following, we theoretically analyse the relationship between \(C_{\mathcal{X}\mathcal{Y}}\) and \(C_{\mathcal{X}\mathcal{Y}}^{\Pi}\), and revisit the map relations by introducing a self-adaptive functional map solver and a vertex-wise contrastive loss. ## 4 Theoretical analysis of map relations In this section, we analyse the underlying relation between the functional map computed from the functional map solver and the functional map converted from the point-wise map based on the deep feature similarity.
\begin{table} \begin{tabular}{l l} **Symbol** & **Description** \\ \hline \(\mathcal{X},\mathcal{Y}\) & 3D shapes with \(n_{\mathcal{X}}\), \(n_{\mathcal{Y}}\) vertices \\ \(L_{\mathcal{X}}\) & \(\mathbb{R}^{n_{\mathcal{X}}\times n_{\mathcal{X}}}\) Laplacian matrix of shape \(\mathcal{X}\) \\ \(\Lambda_{\mathcal{X}}\) & \(\mathbb{R}^{k\times k}\) eigenvalue matrix of Laplacian \(L_{\mathcal{X}}\) \\ \(\Phi_{\mathcal{X}}\) & \(\mathbb{R}^{n_{\mathcal{X}}\times k}\) LBO eigenfunctions of shape \(\mathcal{X}\) \\ \(\Phi_{\mathcal{X}}^{\dagger}\) & \(\mathbb{R}^{k\times n_{\mathcal{X}}}\) Moore-Penrose inverse of \(\Phi_{\mathcal{X}}\) \\ \(F_{\mathcal{X}}\) & \(\mathbb{R}^{n_{\mathcal{X}}\times c}\) vertex-wise features of shape \(\mathcal{X}\) \\ \(A_{\mathcal{X}}\) & \(\Phi_{\mathcal{X}}^{\dagger}F_{\mathcal{X}}\) projected feature coefficients of shape \(\mathcal{X}\) \\ \(C_{\mathcal{X}\mathcal{Y}}\) & \(\mathbb{R}^{k\times k}\) functional map between shapes \(\mathcal{X}\) and \(\mathcal{Y}\) \\ \(\Pi_{\mathcal{Y}\mathcal{X}}\) & point-wise map between shapes \(\mathcal{Y}\) and \(\mathcal{X}\) \\ \hline \end{tabular} \end{table} Table 1: Summary of the notation used in this paper. W.l.o.g. we assume that \(n_{\mathcal{Y}}\leq n_{\mathcal{X}}\). With that, a (partial or full) shape \(\mathcal{Y}\) is matched to a full shape \(\mathcal{X}\) and thereby the point-wise map \(\Pi_{\mathcal{Y}\mathcal{X}}\) should be a (partial) permutation matrix, i.e. \[\mathcal{P}:=\left\{\Pi\in\{0,1\}^{n_{\mathcal{Y}}\times n_{\mathcal{X}}}:\Pi \mathbf{1}_{n_{\mathcal{X}}}=\mathbf{1}_{n_{\mathcal{Y}}},\mathbf{1}_{n_{ \mathcal{Y}}}^{\top}\Pi\leq\mathbf{1}_{n_{\mathcal{X}}}^{\top}\right\}, \tag{6}\] where \(\Pi_{\mathcal{Y}\mathcal{X}}(i,j)\) indicates whether the \(i\)-th point in \(\mathcal{Y}\) corresponds to the \(j\)-th point in \(\mathcal{X}\). Firstly, we analyse the point-wise map computed based on the feature similarity. We note that \[\Pi_{\mathcal{Y}\mathcal{X}}=\operatorname*{argmin}_{\Pi\in\mathcal{P}}\left\| \Pi F_{\mathcal{X}}-F_{\mathcal{Y}}\right\|_{F}^{2}. \tag{7}\] **Lemma 4.1**.: _If there exists a unique solution to Eq. (7), then the rows of \(F_{\mathcal{X}}\) and \(F_{\mathcal{Y}}\) have non-repeated rows._ Proof.: If \(F_{\mathcal{X}}\) has repeated rows, we can find a (full) permutation matrix \(\Pi_{\mathcal{X}\mathcal{X}}\neq I\) that satisfies \(\Pi_{\mathcal{X}\mathcal{X}}F_{\mathcal{X}}=F_{\mathcal{X}}\). Therefore, any solution \(\Pi_{\mathcal{Y}\mathcal{X}}\) has an equivalent solution \(\Pi^{\prime}_{\mathcal{Y}\mathcal{X}}:=\Pi_{\mathcal{Y}\mathcal{X}}\Pi_{ \mathcal{X}\mathcal{X}}\) and is thus not unique. Due to the orthogonal invariance of the Frobenius norm, an analogous statement can be made for \(F_{\mathcal{Y}}\). **Discussion.** To obtain a valid point-wise map based on the feature similarity, the features \(F_{\mathcal{X}}\), \(F_{\mathcal{Y}}\) should have non-repeated rows. To this end, based on Lemma 4.1 we propose a vertex-wise contrastive loss to encourage more discriminative features. **Theorem 4.2**.: _Consider the following conditions:_ 1. \(\Pi_{\mathcal{Y}\mathcal{X}}F_{\mathcal{X}}=F_{\mathcal{Y}},\Pi_{\mathcal{Y} \mathcal{X}}\in\mathcal{P}\)_, where_ \(n_{\mathcal{Y}}\leq n_{\mathcal{X}}\)_._ 2. \(F_{\mathcal{X}}\) _is in the span of_ \(\Phi_{\mathcal{X}}\) _and_ \(F_{\mathcal{Y}}\) _is in the span of_ \(\Phi_{\mathcal{Y}}\)_._ 3. \(\lambda=0\) _in Eq. 
(1) and \(A_{\mathcal{X}}\in\mathbb{R}^{k\times c}\) (\(k\leq c\)) is full rank. _If conditions (i)-(iii) hold, then we have \(C_{\mathcal{X}\mathcal{Y}}=C_{\mathcal{X}\mathcal{Y}}^{\Pi}\), and \(\left\|C_{\mathcal{X}\mathcal{Y}}A_{\mathcal{X}}-A_{\mathcal{Y}}\right\|_{F}^{2}=0\)._ Proof.: By condition (i), we have \(\Pi_{\mathcal{Y}\mathcal{X}}F_{\mathcal{X}}=F_{\mathcal{Y}}\) and from condition (ii) we know that \(F_{\mathcal{X}}=\Phi_{\mathcal{X}}A_{\mathcal{X}}\) (since \(A_{\mathcal{X}}\) is the matrix of projected feature coefficients). The same holds for \(\mathcal{Y}\). Putting these together, \[\Pi_{\mathcal{Y}\mathcal{X}}\Phi_{\mathcal{X}}A_{\mathcal{X}}=\Phi_{\mathcal{Y}}A_{\mathcal{Y}}. \tag{8}\] Pre-multiplying Eq. (8) by \(\Phi_{\mathcal{Y}}^{\dagger}\) we obtain \[\Phi_{\mathcal{Y}}^{\dagger}\Pi_{\mathcal{Y}\mathcal{X}}\Phi_{\mathcal{X}}A_{\mathcal{X}}=C_{\mathcal{X}\mathcal{Y}}^{\Pi}A_{\mathcal{X}}=A_{\mathcal{Y}}, \tag{9}\] where the definition \(C_{\mathcal{X}\mathcal{Y}}^{\Pi}=\Phi_{\mathcal{Y}}^{\dagger}\Pi_{\mathcal{Y}\mathcal{X}}\Phi_{\mathcal{X}}\) is used. Thus \(\left\|C_{\mathcal{X}\mathcal{Y}}^{\Pi}A_{\mathcal{X}}-A_{\mathcal{Y}}\right\|_{F}^{2}=0\) and \(C_{\mathcal{X}\mathcal{Y}}=C_{\mathcal{X}\mathcal{Y}}^{\Pi}\) (\(C_{\mathcal{X}\mathcal{Y}}=\operatorname*{argmin}_{C}\left\|CA_{\mathcal{X}}-A_{\mathcal{Y}}\right\|_{F}^{2}\) achieves \(0\), and \(A_{\mathcal{X}}\) is full rank so that the solution is unique, implying \(C_{\mathcal{X}\mathcal{Y}}=C_{\mathcal{X}\mathcal{Y}}^{\Pi}\)). **Discussion.** Theorem 4.2 builds a connection between the functional map \(C_{\mathcal{X}\mathcal{Y}}\) computed from the functional map solver, i.e. Eq. (1), and the functional map \(C_{\mathcal{X}\mathcal{Y}}^{\Pi}\) converted from the point-wise map \(\Pi_{\mathcal{Y}\mathcal{X}}\). It explicitly shows that \(C_{\mathcal{X}\mathcal{Y}}\) and \(C_{\mathcal{X}\mathcal{Y}}^{\Pi}\) are equal under certain conditions. However, in practical situations, the assumptions are too restrictive and often **not** satisfied. For example, when computing functional maps using Eq. (1), structural regularisation \(E_{\mathrm{reg}}\) is typically needed to preserve the structure of the functional map (e.g. Laplacian commutativity for isometry). Furthermore, we often do not want to constrain the feature \(F_{\mathcal{X}}\) to lie in the span of the corresponding LBO eigenfunctions \(\Phi_{\mathcal{X}}\), which limits its discriminative power and expressiveness, since the first \(k\) LBO eigenfunctions correspond to the \(k\) smoothest orthonormal functions defined on the surface w.r.t. the Dirichlet energy [7]. Even though the conditions of the theoretical results are not strictly met, the results give insights into the relations between variables, which we transfer into soft constraints that approximate the conditions. Figure 2: **Left: Common pipeline of deep functional map methods.** A Siamese feature extractor computes vertex-wise features for each shape. The extracted features are used for functional map computation. During training, structural regularisation \(L_{\mathrm{fmap}}\) is imposed on the functional maps. During inference, the computed functional maps are typically converted to point-wise maps via map conversion. **Right: Our proposed shape matching pipeline.** Point-wise correspondences are obtained based on feature similarity. A coupling loss \(L_{\mathrm{couple}}\) regularises the relation between the point-wise map \(\Pi_{\mathcal{Y}\mathcal{X}}\) and the functional map \(C_{\mathcal{X}\mathcal{Y}}\)[12].
To better balance the functional map regularisation and the coupling relationship between the functional map and the point-wise map, we introduce a self-adaptive functional map solver (with learnable parameters \(\lambda\) and \(\gamma\)) to adjust the regularisation strength and structure, respectively. Additionally, a vertex-wise contrastive loss \(L_{\mathrm{contrast}}\) is introduced to improve the discriminative power of the features. For instance, we note that the functional map solver plays a crucial role in balancing the functional map regularisation and the coupling relation between the functional map and the point-wise map. On the one hand, the regularisation term \(E_{\mathrm{reg}}\) in Eq. (1) preserves the functional map structure. On the other hand, it may result in an invalid functional map (i.e. a functional map without an associated point-wise map). Therefore, it is important to adjust the functional map regularisation in a data-driven manner. ## 5 Revisiting the map relations In the previous section, we theoretically analyse the relationship between the functional map computed from the functional map solver and the functional map converted from the point-wise map based on deep feature similarity. Motivated by our analysis, we propose two simple yet efficient extensions of the existing framework, which we introduce in the following. We highlight the different parts in Fig. 2 (right) with red colour, compared to the common deep functional map pipeline shown in Fig. 2 (left). ### Self-adaptive functional map solver As discussed in Theorem 4.2, we only consider \(E_{\mathrm{data}}\) and ignore \(E_{\mathrm{reg}}\) in Eq. (1) with some additional assumptions (i.e. \(F_{\mathcal{X}},F_{\mathcal{Y}}\) in the span of \(\Phi_{\mathcal{X}},\Phi_{\mathcal{Y}}\), and \(A_{\mathcal{X}}\) is full rank). With that, the functional map \(C_{\mathcal{X}\mathcal{Y}}\) computed by the functional map solver is equal to the functional map \(C_{\mathcal{X}\mathcal{Y}}^{\Pi}\) converted from the point-wise map. However, as shown in Fig. 1, valid functional maps often exhibit certain structures that need to be imposed by the regularisation term \(E_{\mathrm{reg}}\). To this end, we propose a self-adaptive functional map solver that can optimise the regularisation strength and the regularisation structure based on the training data. Specifically, we use the regularisation term proposed by Ren et al. [54], which is an extension of the standard Laplacian commutativity. The standard Laplacian commutativity can be formulated as \[\begin{split} E_{\mathrm{lap}}&=\left\|C_{\mathcal{X}\mathcal{Y}}\Lambda_{\mathcal{X}}-\Lambda_{\mathcal{Y}}C_{\mathcal{X}\mathcal{Y}}\right\|_{F}^{2}\\ &=\sum_{ij}(\Lambda_{\mathcal{Y}}(i,i)-\Lambda_{\mathcal{X}}(j,j))^{2}\left[C_{\mathcal{X}\mathcal{Y}}\right]_{ij}^{2}\\ &=\sum_{ij}\left[M_{\mathrm{lap}}\right]_{ij}\left[C_{\mathcal{X}\mathcal{Y}}\right]_{ij}^{2}.\end{split} \tag{10}\] Ren et al.
[54] extended the standard mask \(M_{\mathrm{lap}}\) to a resolvent mask \(M_{\mathrm{res}}\) of the form \[\left[M_{\mathrm{res}}^{\gamma}\right]_{ij}=\left[M_{\mathrm{re}}^{\gamma}\right]_{ij}+\left[M_{\mathrm{im}}^{\gamma}\right]_{ij}, \tag{11}\] where \[\left[M_{\mathrm{re}}^{\gamma}\right]_{ij}=\left(\frac{\Lambda_{\mathcal{Y}}^{\gamma}(i,i)}{\Lambda_{\mathcal{Y}}^{2\gamma}(i,i)+1}-\frac{\Lambda_{\mathcal{X}}^{\gamma}(j,j)}{\Lambda_{\mathcal{X}}^{2\gamma}(j,j)+1}\right)^{2}, \tag{12}\] \[\left[M_{\mathrm{im}}^{\gamma}\right]_{ij}=\left(\frac{1}{\Lambda_{\mathcal{Y}}^{2\gamma}(i,i)+1}-\frac{1}{\Lambda_{\mathcal{X}}^{2\gamma}(j,j)+1}\right)^{2}. \tag{13}\] The parameter \(\gamma\) in the resolvent mask controls the regularisation structure of the functional map, as shown in Fig. 3. In general, \(\gamma\) is chosen in the range \((0,1]\) so that the funnel-like regularisation structure remains similar to that of the ground-truth functional map, i.e. more diagonal-dominant entries for smaller eigenvalues. Additionally, a larger \(\gamma\) imposes a larger penalisation on the non-zero off-diagonal entries, while a smaller \(\gamma\) provides more flexibility for the functional map (see Fig. 3). Instead of manually choosing the regularisation strength (i.e. \(\lambda\) in Eq. (1)) and structure (i.e. \(\gamma\) in Eq. (12), Eq. (13)), we propose to learn these parameters during training. To this end, the functional map solver is optimised on the input data to find a better balance between the data term \(E_{\mathrm{data}}\) and the regularisation term \(E_{\mathrm{reg}}\), and thus to better couple \(C_{\mathcal{X}\mathcal{Y}}\) and \(C_{\mathcal{X}\mathcal{Y}}^{\Pi}\). In the experiments, we show that this simple modification leads to better matching performance, especially in the most challenging scenarios. We also visualise the learned regularisation for each evaluated dataset in Fig. 9. ### Vertex-wise contrastive loss As discussed in Lemma 4.1, a valid point-wise map based on the feature similarity requires that both \(F_{\mathcal{X}}\) and \(F_{\mathcal{Y}}\) have distinct rows. To this end, we propose a vertex-wise contrastive loss to encourage more discriminative features. We first compute a point-wise map \(\Pi_{\mathcal{X}\mathcal{X}}\) that maps shape \(\mathcal{X}\) to itself. To make the computation differentiable, we use the softmax operator to approximate a soft point-wise map, i.e. \[\Pi_{\mathcal{X}\mathcal{X}}=\mathrm{Softmax}\left(F_{\mathcal{X}}F_{\mathcal{X}}^{T}/\tau\right), \tag{14}\] where the parameter \(\tau\) determines the softness of the point-wise map. Similar to [12], the computed point-wise map \(\Pi_{\mathcal{X}\mathcal{X}}\) is projected to the associated functional map \(C_{\mathcal{X}\mathcal{X}}\), i.e. \[C_{\mathcal{X}\mathcal{X}}=\Phi_{\mathcal{X}}^{\dagger}\Pi_{\mathcal{X}\mathcal{X}}\Phi_{\mathcal{X}}. \tag{15}\] Figure 3: **The resolvent mask \(M_{\mathrm{res}}^{\gamma}\) for different \(\gamma\).** The red region indicates a large penalty, while the blue region indicates a small penalty. We notice that the funnel-like structure changes w.r.t. the change of \(\gamma\) and reverses its direction for \(\gamma>1\). Our vertex-wise contrastive loss regularises the functional map \(C_{\mathcal{X}\mathcal{X}}\) to be an identity matrix, i.e. \[L_{\mathrm{contrast}}=\left\|C_{\mathcal{X}\mathcal{X}}-I\right\|_{F}^{2}. \tag{16}\] Similarly, we also apply the vertex-wise contrastive loss \(L_{\mathrm{contrast}}\) to the functional map \(C_{\mathcal{Y}\mathcal{Y}}\).
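As a concrete illustration of Eqs. (14)-(16), a minimal sketch of the vertex-wise contrastive loss for one shape is given below. The function and variable names, and the default temperature value, are illustrative assumptions rather than the authors' code.

```python
import torch

def contrastive_loss(feat_x, evecs_x, evecs_x_pinv, tau=0.07):
    """Vertex-wise contrastive loss of Eqs. (14)-(16) for a single shape.

    feat_x       : (n_x, c) learned vertex-wise features F_X.
    evecs_x      : (n_x, k) LBO eigenfunctions Phi_X.
    evecs_x_pinv : (k, n_x) Moore-Penrose inverse Phi_X^dagger.
    tau          : temperature controlling the softness of the self-map
                   (the value 0.07 is an illustrative choice).
    """
    # Eq. (14): soft self point-wise map from feature similarity.
    Pi_xx = torch.softmax(feat_x @ feat_x.T / tau, dim=-1)

    # Eq. (15): project the self-map into the spectral domain.
    C_xx = evecs_x_pinv @ Pi_xx @ evecs_x

    # Eq. (16): the self functional map should be the identity.
    I = torch.eye(C_xx.shape[0], device=C_xx.device)
    return ((C_xx - I) ** 2).sum()
```

The same function would be applied to shape \(\mathcal{Y}\), and the two terms summed before being weighted in the total loss.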
There are two main advantages of applying the regularisation on the functional map domain. The first advantage is to make \(L_{\mathrm{contrast}}\) comparable to other loss terms (i.e. Eq. (2) and Eq. (5)). The second advantage is to make the \(L_{\mathrm{contrast}}\) discretisation-agnostic. Overall, the total unsupervised loss can be expressed as \[L_{\mathrm{total}}=L_{\mathrm{fmap}}+\lambda_{\mathrm{couple}}L_{\mathrm{ couple}}+\lambda_{\mathrm{contrast}}L_{\mathrm{contrast}}. \tag{17}\] ## 6 Experimental results In this section we compare our method to previous methods on diverse benchmark shape matching datasets with different settings (including near-isometric, non-isometric, topological noisy, and partial shape matching). ### Near-isometric shape matching **Datasets.** We evaluate our method on three standard benchmark datasets, namely the FAUST [8], SCAPE [1] and SHREC'19 [46] datasets. Following prior works, we choose the more challenging remeshed versions from [16, 53]. The FAUST dataset consists of 100 shapes, where the train/test split is 80/20. The SCAPE dataset contains 71 shapes, where the last 20 shapes are used for evaluation. The SHREC'19 dataset is a more challenging dataset with significant variance in the mesh connectivity and shape geometry. It has a total of 430 pairs for evaluation. **Results.** The mean geodesic error [35] is used as quantitative measure. We compare our method with state-of-the-art axiomatic, supervised and unsupervised methods. The results are summarised in Tab. 2. Our method outperforms the previous state of the art, even in comparison to the supervised methods. Meanwhile, our method achieves substantially better cross-dataset generalisation ability compared to existing learning-based methods. ### Non-isometric shape matching **Datasets.** In the context of non-isometric shape matching, we consider the SMAL [70] dataset and the DT4D-H [45] dataset. The SMAL dataset contains 49 animal shapes of eight species. Following Donati et al. [18], five species are used for training and three different species are used for testing (i.e. 29/20 shapes for train/test split). The DT4D-H dataset based on DeformingThings4D [41] is introduced by Magnet et al. [45]. Following Li et al. [39], nine classes of humanoid shapes are used for evaluation, resulting in 198/95 shapes for train/test split. **Results.** Tab. 3 summarises the matching results on the SMAL and DT4D-H datasets. In the context of inter-class shape matching, our approach outperforms the ex \begin{table} \begin{tabular}{l c c c} \hline \hline Train & \multicolumn{2}{c}{**FAUST**} & \multicolumn{1}{c}{**SCAPE**} & \multicolumn{1}{c}{**FAUST + SCAPE**} \\ \cline{2-4} Test & **FAUST** & **SCAPE** & **SHREC’19** \\ \hline \multicolumn{4}{c}{Axiomatic Methods} \\ BCICP [53] & 6.1 & 11.0 & - \\ ZoomOut [47] & 6.1 & 7.5 & - \\ Smooth Shells [22] & 2.5 & 4.7 & - \\ DiscreteOp [56] & 5.6 & 13.1 & - \\ \hline \multicolumn{4}{c}{Supervised Methods} \\ FMNet [42] & 11.0 & 17.0 & - \\ 3D-CODED [29] & 2.5 & 31.0 & - \\ GeomFMaps [16] & 2.6 & 3.0 & 7.9 \\ \hline \multicolumn{4}{c}{Unsupervised Methods} \\ WSupFMNet [62] & 3.8 & 4.4 & - \\ Deep Shells [23] & 1.7 & 2.5 & 21.1 \\ DUO-FMNet [18] & 2.5 & 2.6 & 6.4 \\ AttentiveFMaps [39] & 1.9 & 2.2 & 5.8 \\ AttentiveFMaps-Fast [39] & 1.9 & 2.1 & 6.3 \\ URSSM [12] & 1.6 & 1.9 & 4.6 \\ Ours & **1.5** & **1.8** & **3.4** \\ \hline \hline \end{tabular} \end{table} Table 2: **Near-isometric shape matching and cross-dataset generalisation on FAUST, SCAPE and SHREC’19. 
The best results in each column are highlighted. Our method outperforms previous axiomatic, supervised and unsupervised methods and demonstrates better cross-dataset generalisation ability.** \begin{table} \begin{tabular}{l c c c} \hline \hline **Geo. error (\(\times\)100)** & **SMAL** & \multicolumn{2}{c}{**DT4D-H**} \\ \cline{2-4} & \multicolumn{2}{c}{**intra-class**} & \multicolumn{1}{c}{**inter-class**} \\ \hline \multicolumn{4}{c}{Axiomatic Methods} \\ ZoomOut [47] & 38.4 & 4.0 & 29.0 \\ Smooth Shells [22] & 36.1 & 1.1 & 6.3 \\ DiscreteOp [56] & 38.1 & 3.6 & 27.6 \\ \hline \multicolumn{4}{c}{Supervised Methods} \\ FMNet [42] & 42.0 & 9.6 & 38.0 \\ GeomFMaps [16] & 8.4 & 2.1 & 4.1 \\ \hline \multicolumn{4}{c}{Unsupervised Methods} \\ WSupFMNet [62] & 7.6 & 3.3 & 22.6 \\ Deep Shells [23] & 29.3 & 3.4 & 31.1 \\ DUO-FMNet [18] & 6.7 & 2.6 & 15.8 \\ AttentiveFMaps [39] & 5.4 & 1.7 & 11.6 \\ AttentiveFMaps-Fast [39] & 5.8 & 1.2 & 14.6 \\ URSSM [12] & 3.9 & **0.9** & 4.1 \\ Ours & **3.6** & 1.0 & **4.0** \\ \hline \hline \end{tabular} \end{table} Table 3: **Non-isometric matching on SMAL and DT4D-H. Our method outperforms all existing methods for challenging non-isometric inter-class shape matching on both SMAL and DT4D-H datasets and shows comparable performance on intra-class shape matching on DT4D-H dataset.** isting state of the art on both challenging non-isometric datasets, even in comparison to supervised methods. Meanwhile, our method demonstrates comparable and near-perfect matching results for intra-class matching on the DT4D-H dataset. Fig. 4 shows the PCK curves and the corresponding AUC of our method compared to existing state-of-the-art methods. Fig. 5 demonstrates the qualitative results of our method applied on the challenging SHREC'20 dataset [19]. ### Matching with topological noise **Datasets.** The mesh topology is often degraded due to self-intersections of separate parts of real-world scanned objects. Such topological noise presents a large challenge to matching methods based on the functional map framework as it distorts the intrinsic shape geometry [38]. To evaluate our method for matching with topologically noisy shapes, we use the TOPKIDS dataset [38]. Due to the small amount of data (26 shapes), we consider only axiomatic and unsupervised methods for comparison. **Results.** We compare our method with state-of-the-art axiomatic methods and unsupervised methods. The quantitative results are summarised in Tab. 4. Our method outperforms the existing methods substantially, even in comparison to methods relying on additional extrinsic alignment information. We show the PCK curves of our method in Fig. 6 (left) and qualitative results in Fig. 7. animals). Each class has a complete shape to be matched by the other partial shapes. The dataset is divided into two subsets, namely CUTS (missing a large part) with 120/200 train/test split, and HOLES (missing many small parts) with 80/200 train/test split. Results.We summarise the quantitative results on the SHREC'16 datasets in Tab. 5 and the corresponding PCK curve in Fig. 6 (right). Compared to existing methods, our approach is more robust to partiality. We qualitatively compare our method to existing approaches in Fig. 8. ### Analysis of self-adaptive functional map solver We summarise the learned parameters of the functional map solver for different kinds of datasets to better understand the learned regularisation strength and structure. Fig. 9 visualises the different regularisation strength (i.e. 
\(\lambda\)) and different regularisation structure (i.e. \(\gamma\)) for different datasets. We observe that the regularisation strength (i.e. \(\lambda\)) for near-isometric shape matching (FAUST, SCAPE) is stronger than the strength for non-isometric shape matching (SMAL, DT4D-H), since in theory functional maps for isometric shape matching are diagonal matrices. In the context of regularisation structure, the funnel-like structure is narrower for topologically noisy (TOPKIDS) and partial shapes (CUTS, HOLES). ## 7 Limitation and future work We build upon the existing state-of-the-art method [12] by introducing the self-adaptive functional map solver and the vertex-wise contrastive loss, and thereby achieve the new state of the art on a wide range of benchmark datasets. Yet, there are also some limitations that give rise to interesting future research directions. Our unsupervised method is applicable in various settings. However, it cannot be used for partial-to-partial shape matching. Therefore, it is interesting to investigate how to extend the current framework to partial-to-partial shape matching. For functional map computation, we optimise the two parameters (i.e. \(\gamma,\lambda\)) that control the regularisation strength and structure. Meanwhile, the number of LBO eigenfunctions is also an important parameter for functional map computation. How to automatically select the best number of LBO eigenfunctions is thereby another interesting direction for future work. ## 8 Conclusion We theoretically analyse the relationship between the functional map from the functional map solver and the functional map from the point-wise map. Based on our theoretical analysis, we extend the current state-of-the-art methods. We evaluate our proposed method on diverse shape matching benchmark datasets with different settings and demonstrate new state-of-the-art performance. We believe a more accurate and robust non-rigid 3D shape matching method would be beneficial for the shape analysis community to better explore shape relationships. Figure 8: **Qualitative results on SHREC’16 dataset.** Compared to existing methods, our method is more robust to partiality. \begin{table} \begin{tabular}{l c c c c} \hline \hline Train & \multicolumn{2}{c}{**CUTS**} & \multicolumn{2}{c}{**HOLES**} \\ \cline{2-5} Test & **CUTS** & **HOLES** & **CUTS** & **HOLES** \\ \hline \multicolumn{5}{c}{Axiomatic Methods} \\ PFM [57] & 9.7 & 23.2 & 9.7 & 23.2 \\ FSP [43] & 16.1 & 33.7 & 16.1 & 33.7 \\ \hline \multicolumn{5}{c}{Supervised Methods} \\ GeomFMaps [16] & 12.8 & 20.6 & 19.8 & 15.3 \\ DPFM [4] & 3.2 & 15.8 & 8.6 & 13.1 \\ \hline \multicolumn{5}{c}{Unsupervised Methods} \\ DPFM-unsup [4] & 9.0 & 22.8 & 16.5 & 20.5 \\ ConsistFMaps [10] & 8.4 & 23.7 & 15.7 & 17.9 \\ URSSM [12] & 3.3 & **13.7** & 5.2 & 9.1 \\ Ours & **2.3** & 15.2 & **5.1** & **6.9** \\ \hline \hline \end{tabular} \end{table} Table 5: **Partial shape matching on SHREC’16 dataset.** Our method substantially outperforms state-of-the-art methods and shows comparable cross-dataset generalisation ability, even in comparison to the supervised approaches. Figure 9: **Different regularisation strength and structure for different datasets.** The self-adaptive functional map solver enables adjusting the regularisation based on the training data.
2301.08630
Evaluating approaches for on-the-fly machine learning interatomic potential for activated mechanisms sampling with the activation-relaxation technique nouveau
In the last few years, much effort has gone into developing universal machine-learning potentials able to describe interactions for a wide range of structures and phases. Yet, as attention turns to more complex materials, including alloys and disordered and heterogeneous systems, the challenge of providing a reliable description for all possible environments becomes ever more costly. In this work, we evaluate the benefits of using specific versus general potentials for the study of activated mechanisms in solid-state materials. More specifically, we test three machine-learning fitting approaches using the moment-tensor potential to reproduce a reference potential when exploring the energy landscape around a vacancy in a Stillinger-Weber silicon crystal and a silicon-germanium zincblende structure using the activation-relaxation technique nouveau (ARTn). We find that a targeted on-the-fly approach, specific to and integrated with ARTn, generates the highest precision on the energetics and geometry of activated barriers, while remaining cost-effective. This approach expands the range of problems that can be addressed with high-accuracy ML potentials.
Eugène Sanscartier, Félix Saint-Denis, Karl-Étienne Bolduc, Normand Mousseau
2023-01-20T15:25:00Z
http://arxiv.org/abs/2301.08630v2
Evaluating approaches for on-the-fly machine learning interatomic potential for activated mechanisms sampling with the activation-relaxation technique nouveau ###### Abstract In the last few years, much effort has gone into developing universal machine-learning potentials able to describe interactions for a wide range of structures and phases. Yet, as attention turns to more complex materials, including alloys and disordered and heterogeneous systems, the challenge of providing a reliable description for all possible environments becomes ever more costly. In this work, we evaluate the benefits of using specific versus general potentials for the study of activated mechanisms in solid-state materials. More specifically, we test three machine-learning fitting approaches using the moment-tensor potential to reproduce a reference potential when exploring the energy landscape around a vacancy in a Stillinger-Weber silicon crystal and a silicon-germanium zincblende structure using the activation-relaxation technique nouveau (ARTn). We find that a targeted on-the-fly approach, specific to and integrated with ARTn, generates the highest precision on the energetics and geometry of activated barriers, while remaining cost-effective. This approach expands the range of problems that can be addressed with high-accuracy ML potentials. ## I Introduction As computational materials scientists turn their attention to ever more complex systems, they are faced with two major challenges: (i) how to correctly describe their physics and (ii) how to reach the appropriate size and time scale to capture the properties of interest. The first challenge is generally solved by turning to _ab initio_ methods,[1] which allow the solution of Heisenberg's equation with reasonably controlled approximations. These approaches, however, suffer from \(N^{4}\) scaling, which limits their application to small system sizes and short time scales. The second challenge is met by a variety of methods that cover different scales. Molecular dynamics[2], for example, which directly solves Newton's equation, accesses typical time scales between picoseconds and microseconds, at the very best. Other approaches, such as lattice[3; 4] and off-lattice kinetic Monte-Carlo[5; 6], by focusing on physically relevant mechanisms, can extend this time scale to seconds and more, as long as the diffusion takes place through activated processes. Even though these methods are efficient, each trajectory can require hundreds of thousands to millions of force evaluations, which becomes too costly with _ab initio_ approaches, forcing modellers to use empirical potentials in spite of their inability to correctly describe complex environments. Building on _ab initio_ energy and forces, machine-learned potentials[7; 8; 9; 10] open the door to lifting some of these difficulties, by offering much more reliable physics at a small fraction of the cost of _ab initio_ evaluations. Since their introduction, ML potentials have largely been coupled with MD, focusing on the search for universal potentials able to describe a full range of structures and phases for a given material[11; 12; 13]. As we turn to more complex systems such as alloys and disordered and heterogeneous systems, it becomes more and more difficult to generate such universal potentials, since the number of possible environments grows rapidly with this complexity.
In this context, the development of specific potentials, with on-the-fly learning that makes it possible to adapt to new environments, becomes a strategy worth exploring. In this work, we focus on the construction of machine-learned potentials adapted to the sampling of energy landscapes dominated by activated mechanisms, i.e., solid-state systems with local activated diffusion and evolution. A correct computational sampling, using methods such as the activation-relaxation technique (ART)[14] and its revised version (ART nouveau or ARTn)[15; 16], requires a precise description of local minima and of the landscape surrounding the first-order saddle points that characterize diffusion according to the transition-state theory (TST)[17]. These barriers can be high, reaching many electron-volts, and involve strained configurations that can be visited only very rarely with standard molecular dynamics. More specifically, we compare three machine learning procedures in which we change the context in which on-the-fly learning occurs to train a Moment Tensor Potential (MTP)[10; 18] that describes the diffusion of a vacancy in Stillinger-Weber silicon[19] and silicon-germanium[20] as sampled with ARTn. The first one uses a pure MD learning procedure, fitted at various temperatures, in a procedure that echoes the work of Novoselov _et al.[21]_, a second one adds an on-the-fly adjustment during an ARTn run and the third one focuses on purely OTF-ARTn potential adjustment. Results underline the efficiency gain in developing targeted ML potentials for specific applications. Comparing the cost of fitting Si with SiGe also shows the rapid increase in computational complexity associated with moving from elemental to alloy systems, which emphasizes the usefulness of a specific approach such as the one applied here to activated processes. ## II Methodology ### ML Potential The Moment Tensor Potential (MTP) [10; 18] is a linear model of functions \(B_{\alpha}(\mathbf{r}_{i})\) built from contractions of moment tensor descriptors defined by the local neighborhood relative position \(\mathbf{r}_{i}\) of atom \(i\) within a sphere of influence of radius \(r_{c}\), respecting a set of invariances. This model has been shown to be fast while giving accuracy on the order of \(\sim\)meV/atom and requiring a few hundred to a few thousand reference potential calls [22] on-the-fly. MTP has been used on a wide variety of problems including on-the-fly MD simulation [18; 21; 23], search and minimization of new alloys [24; 25] and diffusion processes [21] on systems containing one or multiple species. MTP approximates the atomic configuration energy as a sum of local contributions. A local contribution is obtained through a sum over the included basis \(\{B_{\alpha}(\mathbf{r}_{i})\}\), as a linear combination with coefficients \(\xi_{\alpha}\), \[V(\mathbf{r}_{i})=\sum_{\alpha=1}^{m}\xi_{\alpha}B_{\alpha}(\mathbf{r}_{i}) \tag{1}\] The "level" of a potential sets the number of different possible tensor descriptors \(M_{\mu,\nu}\left(\mathbf{r}_{i}\right)\). The \(\{B_{\alpha}(\mathbf{r}_{i})\}\) functions of Eq. 1 are constructed by tensorial contractions of different \(M_{\mu,\nu}\left(\mathbf{r}_{i}\right)\), and the level therefore determines the number \(m\) of different contractions entering Eq. 1. More information on MTP is available in Ref. [18].
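To make the linear structure of Eq. 1 concrete, the following sketch fits the coefficients \(\xi_{\alpha}\) of a linear-in-parameters potential by least squares on reference energies, anticipating the total-energy and fitting expressions introduced below. The descriptor function is a toy radial stand-in, not the actual moment-tensor basis, and the function names and cutoff value are illustrative assumptions rather than the MLIP implementation; force contributions are omitted for brevity.

```python
import numpy as np

def toy_descriptors(positions, m=8, r_c=5.0):
    """Placeholder per-atom descriptors B_alpha(r_i): radial moments of the
    neighbour distances inside the cutoff r_c. The true MTP basis is built
    from contractions of moment tensors; this is only a schematic stand-in."""
    positions = np.asarray(positions, dtype=float)
    n = len(positions)
    B = np.zeros((n, m))
    for i in range(n):
        d = np.linalg.norm(positions - positions[i], axis=1)
        d = d[(d > 0) & (d < r_c)]
        for a in range(m):
            B[i, a] = np.sum((1.0 - d / r_c) ** (a + 1))
    return B

def fit_linear_potential(configs, energies):
    """Least-squares fit of the coefficients xi in E = sum_i sum_a xi_a B_a(r_i),
    using reference energies only."""
    # One row per configuration: global descriptor = sum of local descriptors.
    G = np.stack([toy_descriptors(pos).sum(axis=0) for pos in configs])
    xi, *_ = np.linalg.lstsq(G, np.asarray(energies), rcond=None)
    return xi

def predict_energy(positions, xi):
    """Predicted total energy of one configuration as a linear combination."""
    return toy_descriptors(positions).sum(axis=0) @ xi
```

Because the model is linear in \(\xi_{\alpha}\), refitting after each training-set update reduces to solving a small linear least-squares problem, which is what makes repeated on-the-fly updates cheap compared to the reference evaluations themselves.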
The total energy of an N-atom configuration (\(\mathfrak{R}\)) is then given by the sum of N local contributions \[E(\mathfrak{R})=\sum_{i=1}^{N}V(\mathbf{r}_{i})=\sum_{i=1}^{N}\sum_{\alpha=1}^{m}\xi_{\alpha}B_{\alpha}(\mathbf{r}_{i}) \tag{2}\] and the forces are obtained by taking the gradient of this quantity \[\mathbf{F}(\mathfrak{R})=-\nabla\sum_{i=1}^{N}\sum_{\alpha=1}^{m}\xi_{\alpha}B_{\alpha}(\mathbf{r}_{i}) \tag{3}\] The parameters \(\xi_{\alpha}\) are obtained by minimizing the loss function: \[\sum_{\mathfrak{R}\in\mathrm{A}}\left[w_{e}\left|E(\mathfrak{R})-\hat{E}(\mathfrak{R})\right|_{2}^{2}+w_{f}\sum_{i}^{N}\left|\mathbf{f}_{i}(\mathfrak{R})-\mathbf{\hat{f}}_{i}(\mathfrak{R})\right|_{2}^{2}\right]\rightarrow\min_{\xi} \tag{4}\] Here \(\mathrm{A}\) is the training set made of configurations with known energy and forces. The goal is to minimize the difference between \(E(\mathfrak{R})\), \(\mathbf{f}_{i}(\mathfrak{R})\) (real values) and \(\hat{E}(\mathfrak{R})\), \(\mathbf{\hat{f}}_{i}(\mathfrak{R})\) (predicted by the model), respectively, for all elements in \(\mathrm{A}\). The weights on the energy and force contributions (\(w_{e}\) and \(w_{f}\)) are set to one. ### Learning On-The-Fly Tools On-the-fly (OTF) learning of an atomic machine-learning potential involves the repeated training of the model potential as new atomic environments are generated through various procedures. Following the work of Shapeev and collaborators [18], the reliability of the potential to describe a given configuration is evaluated using the D-optimality criterion to grade to which extent a configuration extrapolates. This grade is used along with a selection algorithm (MaxVol) to assess whether the new configuration should be added to the training set or replace a configuration already in it. While a detailed description can be found in Ref. [23], we provide here a brief summary of the retained approach. The selection and extrapolation-grade algorithm can be applied using either a local-energy or a global-energy descriptor. The local-energy descriptor is presented as a rectangular matrix \(\mathrm{G}_{\mathrm{m}\times N}\) formed by the basis elements \(\{B_{\alpha}(\mathbf{r}_{i})\}\) associated with the neighborhood \(\mathbf{r}_{i}\) of all \(N\) atoms: \[\mathrm{G}=\left(\begin{array}{ccc}B_{1}(\mathbf{r}_{1})&\ldots&B_{m}(\mathbf{r}_{1})\\ \vdots&\ddots&\vdots\\ B_{1}(\mathbf{r}_{N})&\ldots&B_{m}(\mathbf{r}_{N})\end{array}\right)^{\mathrm{T}}\] For a given configuration, the global-energy description reduces this information to a vector \(\mathbf{g}\) \[\mathbf{g}=\left[\begin{array}{ccc}b_{1}(\mathfrak{R})&\ldots&b_{m}(\mathfrak{R})\end{array}\right]\] where each term \(b_{\alpha}(\mathfrak{R})\) is a sum over all neighborhoods for a specific basis element \(B_{\alpha}(\mathbf{r}_{i})\): \[b_{\alpha}(\mathfrak{R})=\sum_{i=1}^{N}B_{\alpha}(\mathbf{r}_{i})\] For the global-energy descriptor, evaluating the overlap of a new configuration with the training set \(A\) is done by solving for \(c_{j}\) in \[\mathrm{A}\left[\begin{array}{ccc}c_{1}&\ldots&c_{m}\end{array}\right]=\mathbf{g}, \tag{5}\] The coefficients \(\{c_{j}\}\) can be understood as expressing \(\mathbf{g}\) through \(\mathrm{A}\). The extrapolation grade, \(\gamma\), is then defined as the largest component of \(\{c_{j}\}\), \[\gamma(\mathfrak{R})=\max\left|c_{j}\right|. \tag{6}\] The same approach is used for the local-energy description, applying Eq.
5 with the rows of matrix \(\mathrm{G}\) rather than the vector \(\mathbf{g}\) and solving for a matrix of \(c_{j,k}\); Eq. 6 then becomes \(\gamma(\mathfrak{R})=\max\left|c_{j,k}\right|\). For \(\gamma(\mathfrak{R})\) below a certain threshold \(\gamma_{0}\), the new configuration is considered to overlap sufficiently with the training set to allow the model to interpolate with confidence. For \(\gamma_{0}<\gamma(\mathfrak{R})<\gamma_{max}\), the model cannot be applied with confidence, but can be adapted by adding this configuration to the training set. When \(\gamma(\mathfrak{R})>\gamma_{max}\), the configuration is too far from the training set and it is rejected as the model cannot be adapted with confidence. In this work, we set \(\gamma_{0}=1.1\) and \(\gamma_{max}=2.2\), unless specified otherwise. ### On-The-Fly Learning Cycle Workflow Our workflow is similar to that of Ref. [18], with the main differences discussed in Section II.6. We follow the same general machine-learning on-the-fly workflow for all sampling approaches tested here. We split each simulation into one or multiple sequences of atomic configurations generated using either MD or ARTn. Each run unrolls as follows (see fig. 1): 1. Launch a sequence during which configurations are generated according to a sampling algorithm (MD or ARTn). At each iteration step the extrapolation-grade \(\gamma\) is evaluated. 1. If \(0<\gamma<\gamma_{max}\), the energy and forces of the configuration are evaluated with MTP; 2. if \(\gamma_{0}<\gamma<\gamma_{max}\), the configuration is set aside for an update of the MTP parameters; 3. else if \(\gamma>\gamma_{max}\), the energy and forces of the configuration are not evaluated with MTP and the configuration is not kept for update. The sequence is stopped and we go directly to the update step (step 3). 2. Move on to the next iteration in the sequence (step 1). 3. The model is updated if at least one configuration has been set aside for an update of MTP (i) at the end of a sequence or (ii) at any moment during the sequence if \(\gamma>\gamma_{max}\). 4. If there is an update, restart a new sequence (go to step 1); else stop if no configurations with \(\gamma>\gamma_{0}\) have been set aside during the predefined maximum length of the sequence. The moment tensor potential model update is defined as follows (see Fig. 1, right-hand side): 1. A selection is made from the set-aside configurations (with \(\gamma>\gamma_{0}\)) using MaxVol [23]. 2. Each selected configuration is evaluated by the reference model. 3. The training set is updated with the newly evaluated configurations. 4. The moment tensor potential is fitted on the new training set according to Eq. 4. More details of this procedure can be found in Ref. [23]. ### MD and ARTn Two sampling approaches are used to generate a sequence of configurations: (1) molecular dynamics (MD) as implemented within LAMMPS [26] and (2) the activation-relaxation technique nouveau (ARTn) algorithm developed by Mousseau and collaborators [14; 15; 27]. Since MD is well known, we only give below a brief summary of ARTn. ARTn is designed to explore the potential energy landscape of atomic systems through the identification of local transition states connecting nearby local minima. Its workflow can be summarized in three main steps (for a recent in-depth discussion of the ARTn version used in this work, see Ref. [27]): 1.
_Leaving the harmonic well:_ starting from an energy minimum, an atom and its neighbours are moved iteratively in a direction selected at random until a direction of negative curvature on the potential energy surface, \(\mathbf{d}(\lambda_{\mathrm{min}})\), emerges, with \(\lambda_{\mathrm{min}}\), the lowest eigenvalue of the Hessian matrix, smaller than zero; this indicates the presence of a nearby first-order saddle point; 2. _Converging to a first-order saddle point_: the system is then pushed in the direction of negative curvature \(\mathbf{d}(\lambda_{\mathrm{min}})\) while the force is minimized in the perpendicular plane, until the total force \(F\) passes below a threshold near \(F_{0}\), which indicates the saddle point has been reached; 3. _Relaxing into a new minimum_: the system is then pushed over the saddle point and relaxed into a connected new minimum. Figure 1: On-the-fly machine learning workflow used with MD and ARTn (on the left). A potential update can take place at two points: when the sequence ends or when \(\gamma>\gamma_{max}\). The updating procedures are given in the box on the right. At each step \(\lambda_{\min}\) and \(\mathbf{d}(\lambda_{\min})\) are found using an iterative Lanczos method [28; 29; 16]. Perpendicular relaxation during activation and global minimization are done using the Fast Inertial Relaxation Engine (FIRE) algorithm [30]. Generated events are accepted or rejected according to the Metropolis algorithm, where the acceptance probability \(p\) is given by \[p=\min\left(1,e^{-\beta\Delta E}\right) \tag{7}\] with \(\Delta E=E_{\text{saddle}}-E_{\text{minimum}}\), the energy difference between the saddle and a connected minimum, and \(\beta=1/k_{B}T\), where \(k_{B}\) is the Boltzmann constant and \(T\) is a fictitious temperature, since thermal deformations are not taken into account. Potential energy landscape exploration consists of generating a number of events. ### Systems studied The fitting approaches are tested on two physical systems: (i) a Si diamond structure with Stillinger-Weber as a reference potential [19]; and (ii) a SiGe zincblende structure using the Stillinger-Weber potential with parameters from Ref. [20]. Both models contain 215 atoms and a vacancy. The Si system is fitted with an ML potential set at level 16, with 92 moment tensor functions (\(B(\mathfrak{R})\), Eq. 1). For SiGe, a potential at this level (16) generates errors on the barrier of the order of 0.5 eV, which indicates that a richer set of parameters is needed to describe the chemical diversity, and a level 20 is chosen for this system, with 288 moment tensor functions. The relation between the number of moment tensor functions and the energy error for Si is presented in Supplemental Fig. 1. ### Fitting approaches To evaluate the reliability of the various on-the-fly approaches to reproduce the reference potential on configurations of interest for complex materials, the training set is limited to structures visited during MD or ARTn simulations within the conditions described below. No additional information regarding alternative crystalline structures, defects, surfaces, pressure, etc. is provided. For each of these two systems, we compare the following approaches: 1. ML-MD: The MTP potential is trained OTF on MD simulations. The potential is then evaluated, _without further update_, in ARTn simulations. 2. OTF-MDART: Starting from the ML-MD generated potential, the MTP is re-trained following the OTF procedure during ARTn simulations. 3.
OTF-ART: Training of the potential is done uniquely during ARTn runs with OTF. The ML-MD approach is in line with [21], where a potential is trained OTF during MD. However, while the potential is trained with MD, its accuracy is evaluated during ARTn activated process searches. #### iii.6.1 ML-MD: simulation details Nine sets of MTP ML-MD potentials are developed and trained independently during NVT MD simulations. Each set is trained at one specific simulation temperature ranging from 300 K to 2700 K by steps of 300 K and starting from the same 215-atom crystalline structure with a vacancy. Each set consists of ten independently constructed MTP potentials for statistical purposes. Training takes place on a series of sequences, each run for a maximum of 100 ps, with steps of 1 fs, with an average of 75 ps per cycle. MTP potentials require about \(34\pm 14\) and \(93\pm 43\) learning cycles for Si and SiGe to be converged: the MTP potential is considered to have learned the potential when no configuration generated during a 100 ps sequence is found in the extrapolating zone of the potential (with \(\gamma>\gamma_{max}\)). As long as this is not the case, the sequence is restarted from the same initial structure with different initial velocities. To facilitate convergence, ML-MD potentials are fitted over three sets of progressively more restrictive reliability extrapolation parameters \(\gamma_{0}\). Moreover, because MD leads to global deformations, the extrapolation is computed using global descriptors (see tab. 1). The final potential is then evaluated, in a fixed form, in ARTn simulations. #### iii.6.2 OTF ARTn simulation details Each ARTn simulation is launched for 1500 events, with 24 parallel independent searches, for a total of 36 000 generated events. For ARTn, a sequence is either a search for a saddle point (successful or failed) or a minimization from the saddle to a minimum. At each point, 24 sequences are generated in parallel, and the selection of configurations for an update of the potential is made on the combined set of configurations to generate one training set. Sequences are restarted from the last accepted position or, in the case of the vacancy in Si, the ground state. When an activation step generates a configuration with \(\gamma(\mathfrak{R})>\gamma_{max}\), it is relaunched with the same initial deformation. \begin{table} \begin{tabular}{l r r r} \hline \hline approach: & \(\gamma_{0}\) & \(\gamma_{max}\) & \begin{tabular}{c} grade- \\ mode \\ \end{tabular} \\ \hline ML-MD & 5.5/3.3/1.1 & 60/10/2.2 & global \\ OTF-MDART & 1.1 & 2.2 & local \\ \hline \hline \end{tabular} \end{table} Table 1: Extrapolation and selection hyper-parameter values for the three on-the-fly approaches used in this work.
### Analysis Following the standard approach, the error is computed on the energy and force differences between the MLP and reference potentials computed on the same structures. Here, however, this error is only measured on configurations generated during the ARTn procedure. For the energy: \[\Delta E=|E_{MLP}(X_{MLP})-E_{ref}(X_{MLP})|, \tag{8}\] For the forces: \[\Delta F=\frac{1}{N}\sum_{i=0}^{N}\sqrt{\|\mathbf{f}_{MLP}^{(i)}(X_{MLP})- \mathbf{f}_{ref}^{(i)}(X_{ref})\|^{2}}, \tag{9}\] where the positions \(X_{MLP}\) are obtained from a simulation run with the machine-learned potential and the energy on this exact configuration is computed with the reference and the machine-learned potentials. The same is done for the error on forces. Since this work is focused on the correct description of first-order transition states, we also compute the minimum and saddle barrier positions and energy convergence errors(\(\Delta X_{\text{conv}}\), \(\Delta E_{\text{conv}}\)) as \[\Delta X_{\text{conv}} =\sqrt{\sum_{i=0}^{N}\|\mathbf{x}_{MLP}^{(i)}-\mathbf{x}_{ref}^{ (i)}\|^{2}}, \tag{10}\] \[\Delta E_{\text{conv}} =|E_{MLP}(X_{MLP})-E_{ref}(X_{ref})|, \tag{11}\] where \(X_{MLP}\) and \(X_{ref}\) are the positions corresponding to minimum or saddle point as defined by the MLP and the reference potentials respectively, with \(E_{MLP}(X_{MLP})\) and \(E_{ref}(X_{ref})\) the corresponding energies; by definition, forces are zero at these points defined by the respective potentials. While \(X_{MLP}\) and \(E_{MLP}(X_{MLP})\) are obtained on the ARTn trajectories, \(X_{ref}\) and \(E_{ref}(X_{ref})\) are obtained after reconverging the minima or the saddle point using the reference potential starting from \(X_{MLP}\) and following the ARTn procedure. From an energy barrier \(\delta E(X)\), the energy barrier error \(\Delta\delta E_{barrier}\) is given by, \[\Delta\delta E_{barrier}=|\delta E_{MLP}(X_{MLP})-\delta E_{ref}(X_{ref})| \tag{12}\] If no trend is observed between the different temperatures where potentials are trained, we calculate their average and deviation in order to to effectively compare them with other approach. ## III Results In this section, we first examine results for a vacancy in c-Si to establish the methods then consider the same approaches on the more complex SiGe alloy. ### ML-MD The ML-MD approach serves as a benchmark to assess the efficiency of the various approaches in sampling energy barriers and diffusion mechanisms. Here, ten independent ML potentials are generated through on-the-fly MD simulations at 9 different target temperatures ranging from 300 to 2700 K by step of 300 K and require between \(253\pm 60\), at 300 K, and \(369\pm 85\) evaluations of the reference potential, at 2700 K, to complete learning cycles (see Fig. 2). For the purpose of this work, the quality of the ML-MD potential is evaluated on configurations generated with ARTn as local activated events associated with vacancy in a crystalline environment are generated. To avoid non-physical results, when a ARTn-generated configuration shows a \(\gamma>200\), the configuration is rejected, the event search is stopped and a new event search is launched from the same initial minimum. Figure 2: Number of calls to the reference potential for each of the machine-learned potentials developed for Si as function of the temperature referring to the one used during MD training. Since configurations are relaxed to zero K in ARTn simulations, there is no associated temperature for this procedure. Fig. 
3 shows the standard validation error on energy and forces calculated over _all_ configurations generated along pathways for the 36 000 successful events and 10 080 failed saddle searches (a success rate of 78 %). The error on energy increases almost exponentially with the sampling temperature, ranging from \(0.44\pm 0.36\) meV/atom at 300 K to \(5.1\pm 1.7\) meV/atom at 2700 K. The error on forces is essentially constant at 0.0123 eV/A, on average, between 300 and 1800 K, and increases rapidly at high temperature, to reach 0.0256 eV/A at 2700 K. Since the focus of this work is on transition states, Fig. 4 displays the error on the energy barriers as a function of MD-fitting temperature, computed with Eq. 12 and averaged over all generated barriers. This error is relatively uncorrelated with the MD simulation temperature, with an average of \(0.056\pm 0.022\) eV, a minimum error of \(0.024\pm 0.01\) eV at 2400 K and a maximum of \(0.08\pm 0.03\) eV at 1200 K. This error is lower than that for a general point on the energy landscape (Fig. 3), in part because it is computed as a difference between saddle and initial minimum. Overall, the ML-MD approach thus captures the energy landscape around a vacancy in crystalline silicon at a low computational cost (263 to 369 evaluations of the reference potential). ### Revisiting ML-MD potential in ARTn: the OTF-MDART adjusting approach To evaluate the possibility of improving on ML-MD potentials for activated events, potentials are on-the-fly re-trained during ARTn learning cycles (OTF-MDART). Fig. 2 gives the number of calls to the reference potential for this procedure during the ARTn runs (dashed orange line) as well as the total number of calls, including those made during ML-MD fitting (solid orange line). The number of calls during ARTn learning cycles ranges from \(979\pm 153\) at 300 K to \(136\pm 38\) at 2700 K for a total of \(1232\pm 177\) to \(505\pm 109\), respectively, when including ML-MD calls. The error on energy and forces remains correlated with the ML-MD temperature: it is higher when the error at the ML-MD training temperature is higher. This correlation is particularly strong when retraining MD potentials fitted between 1500 and 2700 K (Fig. 3, solid orange line). The error on energy for OTF-MDART is almost constant between 300 and 2400 K, at 0.22 meV/atom, rising to 1.9 meV/atom at 2700 K, lower by 50 to 63 % than ML-MD. A similar improvement is observed on the forces, which range from 0.0103 eV/A, on average, between 300 and 1800 K, increasing to 0.0173 eV/A at 2700 K, representing a 16 % to 32 % decrease in error. Between 300 and 1500 K, retrained potentials with OTF-MDART show more constant energy barrier errors than pure ML-MD models (Fig. 4), with an error of about 0.036 eV (OTF-MDART) vs an average of 0.064 eV (ML-MD), a 44 % improvement. At the highest temperatures (1800 to 2700 K), however, as OTF-MDART calls for fewer learning cycles, errors and fluctuations are not reduced with respect to ML-MD. Interestingly, though, improvements on the saddle position are observed at all temperatures for OTF-MDART (Fig. 5) with an average error of \(0.072\pm 0.010\) A. Overall, by retraining the ML-MD potential in ARTn, errors are reduced and results are more consistent, i.e., error distributions are narrower, irrespective of the temperature used in the initial MD training. This additional retraining leads to a 50 % to 96 % decrease in energy error (Fig. 3), a 29 % improvement for average energy barrier errors (Tab.
2) and a 37 % reduction in the mean saddle position error, but with the number of additional calls to the reference potential increasing by 37 to 490 %. These results can be understood by looking at the fraction of MD-generated configurations that remain in the training set at the end of the simulation (Fig. 6): for training temperatures between 300 and 1200 K, none of the ML-MD configurations remain in the final training set; this proportion goes from 1.3 to 38 % between 1500 and 2700 K (left-hand axis, blue line). At these temperatures, the system melts and generates a wider range of configurations. Since these configurations are far from ARTn-generated configurations, the selection algorithm keeps them in the set even though they do not help reduce errors for the configurational space of interest with ARTn. ### The OTF-ART adjusting approach Given the results for OTF-MDART, we now turn to an OTF approach entirely integrated in ARTn, in an attempt to increase accuracy, and reduce the cost and waste of evaluations of the reference potential. Ten independent on-the-fly ML potentials are generated entirely in ARTn for a total of 36 000 events starting from the same initial minimum. Each potential is trained initially from the same single configuration (the initial minimum) in the training set. Each parallel event search goes through a learning cycle if needed and, as the simulation progresses, learning cycles become rarer. The reported values are averaged over the ten simulations. With an average total of \(628\pm 283\) reference potential evaluations, the cost of OTF-ART is between that of ML-MD and OTF-MDART. \begin{table} \begin{tabular}{c c c c} \hline Errors & ML-MD & OTF-MDART & OTF-ART \\ \hline Energy (eV) & 0.056\(\pm\)0.022 & 0.039\(\pm\)0.008 & 0.040\(\pm\)0.012 \\ Position (Å) & 0.114\(\pm\)0.029 & 0.072\(\pm\)0.006 & 0.072\(\pm\)0.010 \\ \hline \end{tabular} \end{table} Table 2: Average energy barrier error and mean position error on all saddle points for Si. The average error for ML-MD and OTF-MDART training is taken over all temperature sets. Figure 6: Fraction of original MD configurations (left scale) and total number of MD configurations (right scale) remaining in the final training set (TS) for Si. Temperature refers to the one used during MD training. Along pathways, the average energy error for these potentials is \(0.22\pm 0.03\) meV/atom, on par with OTF-MDART potentials based on low-temperature ML-MD fitting, and 49 % lower than the 300 K ML-MD potential. Errors on forces, at 0.011\(\pm\)0.001 eV/A, are in between ML-MD (0.012 eV/A) and OTF-MDART (0.010 eV/A) at low training temperatures. Comparing with the 2700 K potential fitted in MD, the OTF-ART error is 57 % lower than ML-MD (0.026 eV/A) and 36 % lower than OTF-MDART (0.017 eV/A). Focusing on barrier energy, the average error is \(0.039\pm 0.008\) eV (see Fig. 4), about 2.5 % lower than OTF-MDART and 30.3 % better than ML-MD. The error of \(0.072\pm 0.006\) A on the converged saddle position is similar to the \(0.072\pm 0.010\) A obtained with OTF-MDART and 37 % lower than with ML-MD (0.114 A). ### Reproducing the dominant diffusion mechanism The exploration of the energy landscape around the vacancy leads to the generation of a wide range of activated mechanisms and associated barriers (both forward, associated with the diffusion of the vacancy, and backward, from the final minima back to the saddle point). Fig.
7 presents the complete distribution of generated direct and inverse barriers connected to the ground state. The peak near 0 eV (around 10\({}^{-2}\) to 10\({}^{-1}\) eV) is associated with the inverse barriers to the direct saddles at 2.38, 2.70 eV and higher (up to 5.5 eV), except for the inverse 0.45 eV barrier which is linked to the 2.87 eV direct barrier. Direct barriers at 0.51 eV represent symmetric first-neighbor vacancy diffusion while barriers at 2.38 and 2.70 eV are associated with more complex vacancy diffusion mechanisms [32]. Events with barriers at 2.38, 2.70 eV, for example, involve vacancy diffusion through complex bond exchanges. Spectator events [33], where the diamond network around the vacancy is transformed by a bond switching, are also generated. This mechanism was proposed by Wooten, Winer, and Weaire (WWW) to describe the amorphization of silicon [34]. The main spectator event occurs as two neighbors of the vacancy are pushed together, allowing the creation of a bond associated with the 2.87 eV barrier. Other mechanisms involve strong lattice distortion and bond formation not involving direct neighbors of the vacancy, with very high energy barriers [32] of between 3.2 and 4.0 eV. Since vacancy diffusion for this system is dominated by a single 0.51 eV barrier mechanism, with the next barrier at 2.35 eV, an accurate description of the dominant mechanism is essential to correctly capture defect kinetics in Si. Tab. 3 presents the error on this barrier for the three approaches described above. With an error of 0.019\(\pm\)0.005 eV, a relative error of 3.7 %, OTF-ART offers the closest reproduction of the reference barrier, followed by OTF-MDART and ML-MD, with respective errors of 0.022\(\pm\)0.011 eV (relative error of 4.3 %) and 0.026\(\pm\)0.015 eV (5.1 %). Overall, the error on the energy barrier is lower than that on the total energy presented above (0.046\(\pm\)0.006 eV for OTF-ART, for example), due to a partial error cancellation associated with the energy difference taken to measure the barrier. The validity of the barrier is also measured by the precision on the saddle geometry. For the 0.51 eV barrier, ML-MD converges with an error on the position of 0.088\(\pm\)0.036 A, with OTF-MDART and OTF-ART giving errors almost 50 % lower, at 0.040\(\pm\)0.017 A and 0.047\(\pm\)0.018 A, respectively. ### SiGe system Having shown the interest of developing a specific potential by applying on-the-fly learning directly to activated events on a simple system such as c-Si with a vacancy, we test this approach with a more complex alloy with the same overall reference potential to facilitate comparison. Starting from an ordered zincblende structure, the diffusion of a vacancy creates chemical disorder that complexifies the landscape visited, as shown by the continuous distribution of activated barriers, including both direct and inverse barriers, found as the vacancy diffuses (Fig. 8); we note that the lowest barrier for a diffusing vacancy is around 0.6 eV, with lower barriers associated, as for Si, with reverse jumps from metastable states. \begin{table} \begin{tabular}{l c c c} \hline \hline Errors & ML-MD & OTF-MDART & OTF-ART \\ \hline Energy (eV) & 0.026\(\pm\)0.015 & 0.022\(\pm\)0.011 & 0.019\(\pm\)0.005 \\ Position (Å) & 0.088\(\pm\)0.036 & 0.040\(\pm\)0.017 & 0.047\(\pm\)0.018 \\ \hline \hline \end{tabular} \end{table} Table 3: Average energy barrier errors and mean saddle position error on the 0.51 eV vacancy diffusion for Si. The average error for ML-MD and OTF-MDART training is taken over all temperature sets.
Figure 7: ARTn-generated energy barrier distributions for vacancy-diffusion events in Si, including direct barriers (from the ground state) and inverse barriers (from excited states), as generated with the MTP model (orange) and re-converged using the reference model (blue) from the saddle and minima positions originally found with the MTP model. The energy barrier distribution for a vacancy diffusing in SiGe (Fig. 8) is much more complex than for Si due to the chemical disorder that builds as the vacancy diffuses. As stated in the methodology, the additional complexity of the system imposes a richer machine-learning potential, with a larger set of parameters to encompass the greater diversity in the components and the configurations, due to chemical disorder. Combined, these two levels of complexity (set of parameters and configurational) result in an overall higher number of calls to the reference potential as compared to Si, irrespective of the approach used (see Fig. 9 (SiGe) vs. Fig. 2 (Si)): while ML-MD requires between 380 evaluations of the reference potential at 300 K and 1549 at 2700 K, OTF-MDART needs a total of around 3465 calculations of the reference potential, irrespective of the temperature, as original ML-MD configurations are progressively removed from the training set. \begin{table} \begin{tabular}{c c c c} Errors & ML-MD & OTF-MDART & OTF-ART \\ \hline Energy (eV) & 0.082\(\pm\)0.024 & 0.072\(\pm\)0.014 & 0.066\(\pm\)0.015 \\ Position (Å) & 0.091\(\pm\)0.020 & 0.076\(\pm\)0.013 & 0.070\(\pm\)0.014 \\ \hline \end{tabular} \end{table} Table 4: Average energy barrier errors and mean saddle position error on all barriers for SiGe. The average error for ML-MD and OTF-MDART training is taken over all temperature sets. Figure 8: SiGe barrier histogram, including direct barriers (from the ground state) and inverse barriers (from excited states), as found on-the-fly by the MTP model (orange) and re-converged by the reference model (blue) from the saddle and minima positions originally given by MTP. Figure 10: Average energy (top) and mean absolute force (bottom) errors for SiGe measured over all configurations generated along pathways in ARTn for the three approaches. Temperature refers to the one used during MD training. Figure 9: Number of calls to the reference potential for each of the OTF machine-learned potentials developed for SiGe as a function of the temperature, referring to the one used during MD training. Since configurations are relaxed to zero K in ARTn simulations, there is no associated temperature for this procedure. This effort results in a number of calls to the reference potential for OTF-MDART 4 % higher than with OTF-ART (3329 on average). To reduce computational costs, we omit the 1500 K run, as the statistical behavior is smooth in this temperature region. To disentangle the two contributions, we compare with the cost of fitting a Si potential with the same level 20 potential as used for SiGe. Following the full OTF-ART procedure, creating a Si MLP requires 2926 calls to the reference potential. The intrinsic complexity of the landscape therefore contributes about a 14 % increase to the Si baseline call count.
In terms of accuracy, the Si MLP at level 20 leads to an average error on energy of 0.1 meV/atom, about 50 % lower than with the level 16 potential described above (0.22 meV/atom). For SiGe, this error is 0.42 meV/atom, two times higher than for the Si MLP at level 16 and four times that of the Si MLP at level 20. This can be understood by the number of different configurations visited: as opposed to the Si system, where each initial minimum is identical (as the vacancy moves in an otherwise perfect elemental crystal), the binary system is transformed as the vacancy diffuses and the chemical order is slowly destroyed. Each of the 24 ARTn parallel trajectories used to define the potential over 1500 events evolves independently according to a probability given by the Metropolis algorithm with a fictitious temperature (since the network itself is structurally at 0 K) of 0.5 eV (Eq. 7), providing a rich range of local environments. Fitting a potential is clearly harder: with the parameters used -- when a configuration graded at \(\gamma>200\) is encountered, the ARTn event search is stopped -- not enough events could be generated using the ML-MD potential at 300 K and 600 K, which explains the absence of data for these temperatures in Fig. 10 and Tab. 4. For SiGe, the error on energy (see Fig. 10) with ML-MD at 900 K and above ranges from 0.5 meV/atom to 1.4 meV/atom, as a function of temperature. On average, these errors are between 14 % and 69 % lower with OTF-MDART or OTF-ART, at around 0.43 meV/atom. The OTF-ART approach gives an error on the energy barrier of 0.066\(\pm\)0.015 eV, which represents a 19.5 % and 8.3 % lower error than ML-MD (0.082\(\pm\)0.024 eV) and OTF-MDART (0.072\(\pm\)0.014 eV), respectively (Tab. 4). The errors on the converged saddle position for OTF-ART and OTF-MDART are similar, at \(0.070\pm 0.014\) Å and \(0.076\pm 0.013\) Å, respectively, and represent a 23 % lower error than with ML-MD (0.092 Å). This accuracy is similar to that obtained with Si, in contrast to the total energy and energy barrier errors. We note that the advantage of ML-MD for SiGe is overstated, as shown by the proportion of events generated with the ML-MD potential that are interrupted due to a too-large extrapolation grade, \(\gamma>200\), for both SiGe and Si (Fig. 11): for SiGe, between 85 % and 30 % of events are aborted between 300 K and 1200 K, respectively. This proportion falls to zero at 1800 K. ## IV Discussion We compare three approaches aimed at the construction of potentials with on-the-fly machine learning for the exploration of activated mechanisms of the potential energy landscape. We evaluate these by computing their efficiency at reproducing the energy landscape around a vacancy in two systems, a relatively simple Si diamond system (Fig. 7) and a more complex SiGe zincblende system that disorders under vacancy diffusion (Fig. 8). The first approach, which sets the comparison level, constructs a more general machine-learned potential with molecular dynamics (ML-MD); the second adjusts this generated potential on the fly during the search for activated events using ARTn (OTF-MDART), while the third constructs a potential trained specifically on the fly during the search for activated events (OTF-ART). The efficiency of these three procedures is measured by the quality of the reproduction of the reference potential during the search for activated events. The baseline, defined by ML-MD, is competitive with previously published work. 
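Eq. 7, the Metropolis acceptance rule mentioned above, is not reproduced in this excerpt; purely for orientation, a standard Metropolis criterion with the quoted 0.5 eV fictitious temperature would look like the sketch below (function name and energies are ours, not the authors' code).

```python
import math
import random

KT_FICT = 0.5  # fictitious "temperature" in eV used to accept or reject ARTn events

def metropolis_accept(e_initial: float, e_final: float, kt: float = KT_FICT) -> bool:
    """Accept a proposed event (move to the new minimum) with the Metropolis rule.

    Downhill moves are always accepted; uphill moves are accepted with probability
    exp(-dE / kT), which lets a trajectory climb out of basins and progressively
    build chemical disorder around the diffusing vacancy."""
    d_e = e_final - e_initial
    if d_e <= 0.0:
        return True
    return random.random() < math.exp(-d_e / kt)

# Example: a 0.3 eV uphill move is accepted roughly 55 % of the time at kT = 0.5 eV.
print(metropolis_accept(-1085.33, -1085.03))
```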
Energy errors for the more standard ML-MD approach with a level 16 potential range from \(0.44\pm 0.36\) meV/atom at 300 K to \(5.1\pm 1.7\) meV/atom at 2700 K (Fig. 3), an order of magnitude lower than, or similar to, the 4 meV/atom of an MTP potential of level 24 for Si obtained by Zuo _et al._[22], with the difference explained by the fact that activated events involve local deformations from a zero-temperature crystal with a vacancy and that DFT potentials are more difficult to fit than empirical ones [18]. Similarly, the relative energy error on the dominant 0.51 eV diffusion barrier for SW Si is 5.1 % (0.026 eV) Figure 11: Percentage of search interruptions during ML-MD potential evaluation in ARTn (\(\gamma>200\)) for Si and SiGe as a function of ML-MD training temperature.
2302.11631
Decoding probabilistic syndrome measurement and the role of entropy
In realistic stabiliser-based quantum error correction there are many ways in which real physical systems deviate from simple toy models of error. Stabiliser measurements may not always be deterministic or may suffer from erasure errors, such that they do not supply syndrome outcomes required for error correction. In this paper, we study the performance of the toric code under a model of probabilistic stabiliser measurement. We find that, even under a completely continuous model of syndrome extraction, the threshold can be maintained at reasonably high values of $1.69\%$ by suitably modifying the decoder using the edge-contraction method of Stace and Barrett (Physical Review A 81, 022317 (2010)), compared to a value of $2.93\%$ for deterministic stabiliser measurements. Finally, we study the role of entropic factors which account for degenerate error configurations for improving on the performance of the decoder. We find that in the limit of completely continuous stabiliser measurement any advantage further provided by these factors becomes negligible in contrast to the case of deterministic measurements.
João F. Doriguello
2023-02-22T20:12:48Z
http://arxiv.org/abs/2302.11631v1
# Decoding probabilistic syndrome measurement and the role of entropy ###### Abstract In realistic stabiliser-based quantum error correction there are many ways in which real physical systems deviate from simple toy models of error. Stabiliser measurements may not always be deterministic or may suffer from erasure errors, such that they do not supply syndrome outcomes required for error correction. In this paper, we study the performance of the toric code under a model of probabilistic stabiliser measurement. We find that, even under a completely continuous model of syndrome extraction, the threshold can be maintained at reasonably high values of 1.69% by suitably modifying the decoder using the edge-contraction method of Stace and Barrett (Physical Review A 81, 022317 (2010)), compared to a value of 2.93% for deterministic stabiliser measurements. Finally, we study the role of entropic factors which account for degenerate error configurations for improving on the performance of the decoder. We find that in the limit of completely continuous stabiliser measurement any advantage further provided by these factors becomes negligible in contrast to the case of deterministic measurements. ## I Introduction To achieve scalable quantum computation, quantum error correction is required to address unavoidable noise on physical qubits. Quantum error-correcting codes [1; 2] can encode quantum information and, combined with a decoder, can enable fault-tolerant computation despite the existence of errors on physical qubits. There are useful benchmarking methods to analyse the performance of error-correcting codes, such as using simple toy error models which abstract away many of the details of a physical system that would actually be used to implement such a code. However, to make progress towards quantum error correction in practice, it is important to analyse the performance of codes and decoders when features of a hardware system are reintroduced. Here we isolate and analyse one such feature which deviates from a simple toy error model: asynchronous measurement. To perform quantum error correction, parity check measurements are repeatedly made on the qubits of the code. A usual setting in topological codes is that these measurement are made deterministically in 'rounds', i.e., on demand (not necessarily error-free), such that in each round every qubit of the code is involved in one parity check. However, when any of the operations that are used to perform a parity check are non-deterministic, this assumption does not apply. Parity checks may be inherently probabilistic, as is the case when they depend on ancillary states from non-deterministic entanglement generation or distillation procedures, e.g. in modular quantum-computing architectures [3; 4; 5]. In other systems, parity checks may be subject to measurement erasure, where measurement outcomes are not always returned, e.g. in photonic quantum computing [6] where single-photon detectors suffer optical loss [7; 8; 9]. In this work we study a model of asynchronous parity check measurement in the toric code. In this model the stabiliser measurements are attempted at discrete times and each attempt provides a parity outcome with probability \(s\), called the _synchronicity_ parameter. We push this to the limit \(s\to 0\) where parity checks are performed continuously. For an independent and identical distributed (i.i.d.) 
error model and a minimum-weight perfect matching (MWPM) decoder [10; 11; 12], the toric code exhibits a threshold of 2.93% when parity checks are entirely synchronous [13]. We show that, by marking unsuccessful parity checks as erased in the syndrome graph (the 'history' of stabiliser measurement outcomes), it is possible to contract erased edges in the syndrome graph into multi-edges following the method described by Stace and Barrett [14]. This gives a clear framework on how to properly include non-identical error probabilities arising from asynchronism into a MWPM decoder which, when appropriately modified, can maintain the threshold at a reasonably high value of 1.69% in the completely continuous regime. A secondary motivation for studying this model is to explore the role of degeneracy in the MWPM decoder under asynchronous measurements. It is known [14; 15] that accounting for degeneracy, i.e., the number of shortest paths that are consistent with a matching, can improve the usual MWPM decoder's threshold: for an i.i.d. error model with faultless, fully synchronous (\(s=1\)) stabiliser measurements, path counting boosts the MWPM decoder threshold from 10.3% to 10.65% [14]. Moreover, degeneracy has also been used to close the gap between minimum-weight perfect matching and optimal methods [16], as well as to compare different variants of the toric code with a comparable number of qubits [17]. Here we study how to introduce degeneracy into the MWPM decoder under asynchronism by considering multi-path counting on top of the edge-contraction method, and we observed a mild improvement from 1.69% to 1.70% on the decoder's threshold. We argue, and provide numerical evidence, that the presence of asynchronism increases the predominance, i.e., the relative probability, of the most likely error configuration over all the others, thus diminishing the role of degeneracy on decoding. Sec. II reviews the toric code and introduces our toy model of asynchronism. Sec. III discusses the approach to decoding and the way in which degeneracy appears. It also introduces our proposed decoding algorithms. Their performance is then benchmarked in Sec. IV. We further discuss our results and conclude in Sec. V. ## II Asynchronism in the toric code ### The toric code The toric code [1] is a topological code defined on an \(L\times L\) square lattice with periodic boundary condition, where a qubit is located on each edge of the lattice. There is an operator \(X_{v}\) and \(Z_{f}\) associated with each vertex \(v\) and each face \(f\) of the lattice, respectively. The code space is defined as the simultaneous '\(+1\)' eigenstate of the operators \(X_{v}\) and \(Z_{f}\). \(X_{v}\) is the product of the Pauli-\(X\) matrices acting on edges incident to \(v\), i.e., \(X_{v}=\prod_{e\ni v}X_{e}\), while \(Z_{f}=\prod_{e\in f}Z_{e}\) is the product of the Pauli-\(Z\)s acting on all edges of the face \(f\). These operators, and any product of them, form the stabiliser group, \(S\). Logical operators are made up of \(X\) and \(Z\) operators acting on a string of qubits that span the lattice, giving rise to logical operators \(\overline{X}_{1}\) and \(\overline{Z}_{2}\) along one direction, and \(\overline{X}_{2}\) and \(\overline{Z}_{1}\) along the other direction. To achieve fault tolerance the stabiliser operators \(X_{v}\) are measured. If there is an error \(E_{Z}\in\{I,Z\}^{\otimes n}\), any stabiliser \(X_{v}\) that anticommutes with the error returns a '\(-1\)' outcome. 
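To make the stabiliser bookkeeping just described concrete, here is a minimal sketch (our own indexing convention, not code from the paper) that lists the four edges touching each vertex check of an \(L\times L\) toric code and evaluates which checks a given set of \(Z\) errors violates.

```python
import itertools

L = 4  # linear lattice size; 2*L*L physical qubits live on the edges

def vertex_stabiliser(x, y):
    """Edges (qubits) touching vertex (x, y): its two outgoing edges plus the
    horizontal edge from the left neighbour and the vertical edge from below."""
    return [
        (x, y, "h"),                 # horizontal edge leaving (x, y)
        (x, y, "v"),                 # vertical edge leaving (x, y)
        ((x - 1) % L, y, "h"),       # horizontal edge arriving from the left
        (x, (y - 1) % L, "v"),       # vertical edge arriving from below
    ]

def syndrome(z_errors):
    """Vertices whose X_v check anticommutes with the given set of Z errors."""
    return {
        (x, y)
        for x, y in itertools.product(range(L), repeat=2)
        if sum(e in z_errors for e in vertex_stabiliser(x, y)) % 2 == 1
    }

# A single Z error on one edge violates exactly the two vertex checks it touches.
print(syndrome({(1, 1, "h")}))   # -> {(1, 1), (2, 1)}
```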
To account for the fact that the stabiliser measurements themselves can also be subject to error, the stabilisers are measured multiple times, and parity check operators are defined as the product of two subsequent measurements of the same stabiliser generator. If no error occurs during both measurements, then the parity check will return a '\(+1\)' outcome. If a Pauli error occurs between the first and second measurement, or if there is a measurement error in one of the measurements, then the parity check will return a '\(-1\)' outcome, which can be seen as a quasi-particle and is called _anyon_. The subset of parity checks with '\(-1\)' measurement outcomes is called the _syndrome_\(\sigma\). Given a syndrome \(\sigma\), a decoder can then be applied to find a correction operator \(\mathcal{C}(\sigma)\) such that \(\mathcal{C}(\sigma)E_{Z}\in S\). That is, if the correction operator is applied to the code, the error is corrected up to a stabiliser. We note that in quantum computation it is not necessary to physically apply any correction operator to the qubits, rather the correction can be thought of as a reference frame through which the measurement outcomes can be interpreted. ### Asynchronous stabiliser measurement We now introduce a model of asynchronous stabiliser measurement. This model is designed to isolate the effects of measurement asynchronicity while leaving all other features of the system the same. But it is worth highlighting that there could be many things about this model that could be changed depending on the physical system. Consider a square toric code of size \(L\times L\). The toric code is subject to repeated measurements for a time \(T\). Each attempted stabiliser measurement provides a parity outcome with probability \(s\), a parameter we called the _synchronicity_ of the system. Otherwise, with probability \(1-s\), no outcome is obtained, which is marked as a '\(0\)' outcome, i.e., erased. Parity measurements are successfully recorded at a rate \(1\) per unit time on average, meaning that \(1/s\) measurements are attempted in one unit of time. We define two measures of errors on qubits: the _simulation error_\(p_{\Delta}\) and the _physical error_\(p\). The simulation error is the probability that a qubit suffers an error between two consecutive parity check attempts. The physical error is the probability that a qubit suffers an error per unit time, i.e., after \(1/s\) parity check attempts. The physical and simulation errors are related as follows: the probability that a qubit suffers an error after \(n\) measurement rounds equals the probability that during these \(n\) rounds its state is flipped an odd number of times (each with probability \(p_{\Delta}\)), i.e., \[\sum_{m\text{ odd}}^{n}\binom{n}{m}p_{\Delta}^{m}(1-p_{\Delta})^{n-m}=\frac{1 }{2}\left(1-(1-2p_{\Delta})^{n}\right). \tag{1}\] Since a time unit represents \(1/s\) measurement rounds on average, both quantities \(p\) and \(p_{\Delta}\) are related via \[p =\frac{1}{2}(1-(1-2p_{\Delta})^{1/s}), \tag{2a}\] \[p_{\Delta} =\frac{1}{2}(1-(1-2p)^{s}). \tag{2b}\] Finally, successful measurements are subject to measurement errors, which flip the outcome value with probability \(q=p\). When \(s=1\), measurements are fully synchronous. When \(s\to 0\), measurements are completely continuous. By fixing the rate \(p\) of physical errors and the rate of successful parity checks (set to \(1\)), we are able to probe the behavior of the code with respect to the parameter \(s\). 
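The relation in Eq. (2) is easy to sanity-check numerically; below is a minimal sketch (our own helper functions, not part of the paper) converting between the physical error \(p\) per unit time and the per-attempt simulation error \(p_{\Delta}\).

```python
def simulation_error(p: float, s: float) -> float:
    """Eq. (2b): per-attempt flip probability given the physical error rate p
    per unit time and the synchronicity s (1/s attempts per unit time)."""
    return 0.5 * (1.0 - (1.0 - 2.0 * p) ** s)

def physical_error(p_delta: float, s: float) -> float:
    """Inverse relation, Eq. (2a)."""
    return 0.5 * (1.0 - (1.0 - 2.0 * p_delta) ** (1.0 / s))

p, s = 0.0169, 0.25                 # e.g. four measurement attempts per unit time
p_delta = simulation_error(p, s)
print(p_delta)                      # ~0.0043: flip probability between attempts
print(physical_error(p_delta, s))   # recovers p up to floating-point rounding
```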
We consider three distinct regimes, which are illustrated in Fig. 1: 1. **Synchronous measurement (\(s=1\)).** This corresponds to the error model with fully synchronous parity checks. 2. **Discrete asynchronous measurement (\(0<s<1\)).** Measurements are performed in discrete rounds, but are not deterministic and occur with probability \(s\). Measurement rounds are performed at a rate \(1/s\) such that the overall rate of successful stabiliser measurement remains at 1 per unit time. 3. **Continuous measurement (\(s=0\)).** Measurements are not performed in rounds, but are received continuously at a rate 1. Similarly, Pauli errors are treated as continuous. The times of the successful measurements and Pauli errors are modelled as arising from a Poisson distribution, with the resulting distribution obtained from the binomial distribution in the limit \(s\to 0\) (see Appendix A). One can see from Fig. 1 the effect of the probabilistic nature of parity checks. Successful stabiliser measurements are separated in time, thus creating a block-like structure. Every stabiliser operator \(X_{v}\) has an ordered list of measurement times for successful parity checks (\(t_{1}^{v},t_{2}^{v},\dots\)), where \(t_{1}^{v}<t_{2}^{v}<\dots\). Two consecutive measurement times define a _parity block_. More specifically, the \(i\)th parity block associated with \(v\) is defined by the pair of time coordinates \((t_{i-1}^{v},t_{i}^{v})\). If the measurement outcomes differ from each other at consecutive times \(t_{i-1}^{v}\) and \(t_{i}^{v}\), then we refer to this block as an _anyon block_. In the fully synchronous regime (\(s=1\)), two consecutive measurements with differing outcomes lead to an anyon well defined in time. On the other hand, for \(s<1\), such anyons (now anyon blocks) are spread over time. Defining their time position is one of the main issues in constructing the decoding problem and correcting for errors. ### Constructing the decoding problem To analyse fault tolerance in this system we first want to formulate the error model and structure of the code as a _syndrome graph_. In the syndrome graph, vertices represent fault-tolerant parity checks and edges represent the potential errors in the system, e.g. physical and measurement errors. This representation is the most useful way to analyse the performance of decoding algorithms as it fully describes the system, capturing both space and time behavior. Each edge in the syndrome graph is assigned a bit that indicates whether or not an error has occurred. Vertices are assigned a parity value which is computed as the parity of the values of all edges incident to that vertex. If there are no errors, all vertices will have an even parity. If an error occurs, the two vertices connected to the corresponding edge will have their values flipped. For a fault-tolerant system there may be multiple possible syndrome graph representations that capture the same error model. We consider first the _simple syndrome graph_ that is most naturally derived from the parity check structure. We then study the _contracted syndrome graph_. #### ii.3.1 Simple syndrome graph When all parity measurements are performed synchronously, the syndrome graph has a cubic structure. Time-like edges represent the possible measurement errors on parity checks with error probability \(q=p\), while space-like edges represent potential Pauli errors on the physical qubits with error probability \(p_{\Delta}=p\). 
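A minimal Monte Carlo sketch of the discrete asynchronous regime listed above (regime 2) is shown below; the array layout and names are ours, not the authors' implementation. Each attempted time-like check is erased with probability \(1-s\) or flipped with probability \(q\) if it succeeds, and each space-like edge flips with probability \(p_{\Delta}\) between attempts.

```python
import numpy as np

rng = np.random.default_rng(7)
L, s, p, q = 8, 0.5, 0.02, 0.02            # lattice size, synchronicity, physical and measurement error
n_attempts = int(2 * L / s)                # measurement attempts, ~2L successful rounds on average
p_delta = 0.5 * (1 - (1 - 2 * p) ** s)     # Eq. (2b): per-attempt qubit flip probability

# Space-like edges: 2*L*L qubits, one possible flip between consecutive attempts.
qubit_flips = rng.random((n_attempts, 2, L, L)) < p_delta

# Time-like edges: one attempted parity check per stabiliser per attempt.
erased = rng.random((n_attempts, L, L)) >= s      # unsuccessful attempts -> erased edges
flipped = rng.random((n_attempts, L, L)) < q      # measurement error on successful attempts
flipped &= ~erased                                # an erased check reports no outcome to flip

print("erased fraction:", erased.mean(), "(expect ~", 1 - s, ")")
print("sampled qubit flips:", int(qubit_flips.sum()))
```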
As previously mentioned, the set of all odd parity checks defines the syndrome and two consecutive parity checks with differing outcomes define an anyon in between both measurement times. In our model of asynchronous measurement, we have to modify this representation since not all parity checks return an outcome. This is done by marking an edge of the graph as erased when there is a corresponding measurement erasure. The net result is that multiple sequential erasures in time lead to a 'block' of marked edges. The formulation of this system into a syndrome graph, named simple syndrome graph, is illustrated in Fig. 2. Space-like edges are still associated with error probability \(p_{\Delta}\) and non-erased time-like edges with error probability \(q=p\). As previously mentioned, anyons are no longer well defined in time, as the 'blocks' of erased edges can now have variable time lengths. We note that the graph structure, i.e., its cubic structure, is the same in every instance, the only difference being the position of the erased edges. Figure 1: Illustration of three regimes of asynchronous stabiliser measurement in the square toric code. (a) Fully synchronous (\(s=1\)). Repeated measurement on the code creates a 3-dimensional block of parity check outcomes over time. Outcomes are measured deterministically in layers at discrete time intervals. An anyon (a violated syndrome bit) is identified when a parity check measurement changes from one round to the next (dark grey blocks). A physical error will result in two anyons separated in space. A measurement error will result in two anyons separated in time. (b) Discrete asynchronous measurement (\(0<s<1\)). Measurements are performed in discrete rounds, but an outcome is only returned with probability \(s\), where in the figure \(s=\frac{1}{3}\). Physical or measurement errors result in a pair of anyons, but these are now identified in intervals of varying size as indicated in the figure. (c) Continuous asynchronous (\(s=0\)): parity check measurements can happen at any time, at a rate 1 per unit time. #### ii.1.2 Contracted syndrome graph Given a simple syndrome graph with a set of erased edges as shown in Fig. 2(b), we find an alternative representation without erased edges. When erasure is present, fault-tolerant parity checks are only complete for each cluster of erased edges [14]. In our case this means simply treating all the vertices between two successful measurements as one vertex, i.e., considering a parity block as a vertex. By contracting the graph around the erased edges, we arrive at the contracted syndrome graph. An example is shown in Fig. 2(c). The contraction resolves the problem of defining the anyons. These are now placed at contracted vertices that are between two consecutive parity checks with differing outcomes, or, in other words, at contracted vertices associated with anyon blocks. Carrying out the contraction will often result in multi-edges in the graph, where two erased components were connected by multiple edges in the simple syndrome graph. These correspond to multiple possible errors that could cause the same syndrome. An equivalent representation that is more convenient for decoding is to instead represent these as a single edge with modified error probability. This modified error probability is related to the physical error \(p\) and the time overlap of erased components in the simple syndrome graph, as illustrated in Fig. 2, via the same reasoning that relates \(p_{\Delta}\) and \(p\) in Eq. (2). 
More specifically, let \(\omega_{ij}=\min(t_{i},t_{j})-\max(t_{i-1},t_{j-1})\) be the time overlap between two adjacent parity blocks \((t_{i-1},t_{i})\) and \((t_{j-1},t_{j})\). The number of merged edges is therefore just \(\omega_{ij}/s\). The probability \(\bar{p}(\omega_{ij})\) of a Pauli error occurring on the merged edge of the contracted syndrome graph is equal to the probability of a Pauli error occurring an odd number of times on the corresponding edges from the simple syndrome graph, which is given by Eq. (1): \[\bar{p}(\omega_{ij})=\frac{1-(1-2p_{\Delta})^{\omega_{ij}/s}}{2}=\frac{1-(1-2p )^{\omega_{ij}}}{2}. \tag{3}\] A merged edge between two adjacent parity blocks with time overlap \(\omega(e)\) has thus an associated error probability \(\bar{p}(\omega(e))\). Time-like (vertical) edges continue to represent possible measurement errors with probability \(q=p\). The resulting contracted syndrome graph thus offers a simple and compact framework in which decoding techniques can be straightforwardly used. We will show how to apply such decoding techniques in the following section. #### ii.1.3 Continuous stabiliser measurement In the case of continuous stabiliser measurement when \(s=0\), there is no way to construct the simple syndrome graph. In this case we build the contracted syndrome graph directly by recording the parity check measurement times. Given the locations of successful parity checks, a vertex is identified with each parity block. An edge is then placed between vertices whose adjacent parity blocks overlap in time and has an associated probability according to Eq. (3). Figure 2: Constructing the contracted syndrome graph in the case of discrete asynchronous stabiliser measurement. We follow the method described first in [14]. (a) Collection of all parity check attempts through time, including successful and unsuccessful measurements. Two consecutive successful parity checks in time define a parity block. (b) Simple syndrome graph representation. Horizontal edges represent a possible physical Pauli error and vertical edges represent an attempted stabiliser measurement outcome. A vertical edge with unsuccessful parity check is marked as erased (bold edge). (c) The contracted syndrome graph representation. All vertices and vertical edges within a parity block are contracted into a single vertex. Horizontal edges connecting adjacent parity blocks are contracted into a single edge and new edge weights are calculated in order to reflect the degeneracy of the new contracted edges. ## III Decoding The job of the decoder is to identify a _correction_ for a given syndrome graph, in the form of a predicted set of flipped edges. An optimal decoder identifies corrections that minimise the chance of logical errors. For practical use however, efficient decoding algorithms are required that approximate optimal decoding while being computationally tractable [15; 18; 19]. In this section, we describe the decoding strategies for probabilistic syndrome measurement that will be analysed in Sec. IV. ### Anyon-pairing decoders A correction in the toric code can be expressed as a pairing of anyons (odd-parity check vertices). Any two error chains that produce the same syndrome but differ by trivial cycles have the same effect on the logical state. The task that should be performed by a decoder can be understood as matching anyons in a way that minimises the chance of a logical error. 
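Before turning to the decoders, here is a sketch of the contracted-graph bookkeeping and of Eq. (3) above, assuming each stabiliser's successful measurement times are already known (names and data layout are ours; time-like measurement-error edges between consecutive blocks of the same stabiliser are omitted for brevity).

```python
import networkx as nx

def parity_blocks(success_times):
    """Consecutive successful measurement times (t_{i-1}, t_i) of one stabiliser."""
    return list(zip(success_times[:-1], success_times[1:]))

def merged_edge_probability(overlap, p):
    """Eq. (3): Pauli-error probability on a contracted edge with given time overlap."""
    return 0.5 * (1 - (1 - 2 * p) ** overlap)

def contracted_graph(blocks_by_site, neighbours, p):
    """blocks_by_site: dict site -> list of parity blocks.
    neighbours: pairs of adjacent stabiliser sites sharing a qubit."""
    g = nx.Graph()
    for a, b in neighbours:
        for i, (a0, a1) in enumerate(blocks_by_site[a]):
            for j, (b0, b1) in enumerate(blocks_by_site[b]):
                overlap = min(a1, b1) - max(a0, b0)
                if overlap > 0:   # blocks overlap in time -> merged space-like edge
                    g.add_edge((a, i), (b, j),
                               p_err=merged_edge_probability(overlap, p))
    return g

# Two neighbouring stabilisers with asynchronous measurement times:
blocks = {"u": parity_blocks([0.0, 0.9, 2.1]), "v": parity_blocks([0.0, 1.4, 2.0])}
print(contracted_graph(blocks, [("u", "v")], p=0.02).edges(data=True))
```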
A large range of decoders can be defined as performing minimum-weight perfect matching (MWPM) [20] on _matching graphs_ derived from syndrome graphs. A matching graph is specified for any syndrome graph as a _complete_ graph where the vertices correspond to the anyons, and the edge weights between vertices correspond to distances in the syndrome graph. It is then necessary to properly weight the edges in the syndrome graph, which we now describe. When constructing a matching graph, we would ideally like to compute each _anyon pairing probability_, i.e., the probability that _any_ error created an observed anyon pair. Performing decoding on a matching graph with these probabilities then reveals the _most likely pairing_ of anyons, and is independent of the type of syndrome graph (e.g. simple or contracted). In other words, we would like to compute the pairing probability between anyons \(i\) and \(j\) as given by \[P_{ij}=\sum_{E\in\mathbb{E}}P_{E}=P_{0}+P_{1}+P_{2}+\dots, \tag{4}\] where \(E\) is an error chain (a set of odd parity outcomes) whose boundaries are the vertices \(i\) and \(j\), \(P_{E}\) is its probability, and \(\mathbb{E}\) is the set of all such error chains. The error chains are indexed \(E=0,1,2,\dots\) from most to least likely. Consider now an error model where each edge \(e\) in the syndrome graph represents an independent (but not necessarily identical) error occurring with probability \(p_{e}\). The probability for each error chain can be expressed as \[P_{E}=\prod_{e\in E}p_{e}\prod_{e\notin E}(1-p_{e})=C\prod_{e\in E}\frac{p_{e}}{1-p_{e}},\] where \(C=\prod_{e}(1-p_{e})\) is a constant for a given syndrome graph. Commonly, a simplified MWPM decoder is used that identifies only the most likely errors for the correction operator, corresponding to approximating \(P_{ij}\) by \(P_{0}\) for each anyon pairing in Eq. (4). Since \[\ln P_{0}=\max_{E}\ln P_{E}=\ln C-\min_{E}\sum_{e\in E}\ln\left(\frac{1-p_{e}}{p_{e}}\right), \tag{5}\] the error chains that are used must minimise \(\sum_{e\in E}\ln((1-p_{e})/p_{e})\). The distance between any pair of anyons to be input into the matching graph can be found by using Dijkstra's algorithm [21] on the syndrome graph with edges weighted by \(\ln((1-p_{e})/p_{e})\). The minimisation itself, i.e., the anyon pairing with overall minimum additive weight, can be found via Edmonds' minimum-weight perfect-matching algorithm [10]. A MWPM decoder can be improved by considering more terms \(P_{E}\) in the anyon pairing probability. In the usual fully synchronous (\(s=1\)) i.i.d. error model with \(q=p\) (equal physical and measurement error probabilities), \(P_{E}=Cp^{|E|}\), where \(|E|\) is the length of the error chain. It is then possible that two or more paths have the same probability \(P_{E}\), meaning they are degenerate. The question is thus reduced to counting the number of paths with a given length between two anyons. The introduction of degeneracy for the shortest path into the MWPM decoder, i.e., considering all terms \(P_{E}\) equal to \(P_{0}\) in the pairing probability, was examined in [14]. When erasure is present and we work with the contracted syndrome graph, it is important to introduce a suitable notion of degeneracy for error chains when estimating anyon pairing probabilities. Recall that the probability \(p_{e}\) of an edge \(e\) in the contracted syndrome graph is given by \(p_{e}=\frac{1}{2}(1-(1-2p)^{\omega(e)})\), where \(\omega(e)\) is the time overlap between the blocks defining \(e\) (see Eq. (3)). 
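As an aside, the weighting-plus-matching pipeline of Eq. (5) can be prototyped directly with off-the-shelf graph routines; the sketch below is ours (not the authors' implementation): it weights syndrome-graph edges by \(\ln((1-p_e)/p_e)\), takes Dijkstra distances between anyons, and obtains a minimum-weight perfect matching by negating weights in a maximum-weight matcher.

```python
import math
import networkx as nx

def most_likely_pairing(syndrome_graph, anyons):
    """syndrome_graph: nx.Graph whose edges carry an error probability 'p_err'.
    anyons: list of odd-parity vertices. Returns the MWPM pairing, i.e. the
    approximation of each pairing probability by its leading term P_0."""
    for _, _, data in syndrome_graph.edges(data=True):
        p = data["p_err"]
        data["weight"] = math.log((1 - p) / p)      # -log-likelihood per edge

    matching_graph = nx.Graph()
    for i, a in enumerate(anyons):
        dist = nx.single_source_dijkstra_path_length(syndrome_graph, a, weight="weight")
        for b in anyons[i + 1:]:
            # negate so that maximum-weight matching minimises the total distance
            matching_graph.add_edge(a, b, weight=-dist[b])

    return nx.max_weight_matching(matching_graph, maxcardinality=True)

# Toy example: four anyons on a path of syndrome-graph edges with equal error probability.
g = nx.Graph()
g.add_edge(0, 1, p_err=0.05)
g.add_edge(1, 2, p_err=0.05)
g.add_edge(2, 3, p_err=0.05)
print(most_likely_pairing(g, [0, 1, 2, 3]))   # matches 0 with 1 and 2 with 3 (tuple order may vary)
```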
By approximating \(p_{e}\approx p\cdot\omega(e)\) and assuming \(p_{e}\) is small, we find \[P_{ij} =C\sum_{E\in\mathbb{E}}\prod_{e\in E}\frac{p_{e}}{1-p_{e}}\] \[\approx C\sum_{E\in\mathbb{E}}\prod_{e\in E}\frac{p\omega(e)}{1-p \omega(e)}\] \[\approx C\sum_{E\in\mathbb{E}}\prod_{e\in E}(p\omega(e)+(p\omega(e) )^{2})\] \[=C\sum_{E\in\mathbb{E}}\left(\prod_{e\in E}p\omega(e)\right) \left(\prod_{e\in E}(1+p\omega(e))\right)\] \[\approx C\sum_{E\in\mathbb{E}}p^{|E|}\delta_{E}, \tag{6}\] where in Eq. (6) we defined the quantity \(\delta_{E}=\prod_{e\in E}\omega(e)\) (which is the product of the \(\omega(e)\) values alone the error chain), and we approximated \(\prod_{e\in E}(1+p\omega(e))\approx 1\). This last approximation can be justified as follows. Time overlaps \(\omega(e)\) are expected to be smaller than \(1\) on average (since parity blocks have length \(1\) on average), so we can write \(\prod_{e\in E}(1+p\omega(e))\lessapprox e^{p|E|}\). For large chains, if the lattice size \(L\) is sufficiently large so that \(L\) is comparable to \(p^{-1}\), \(e^{p|E|}\approx e^{pL}\) might be considerable, and the actual probability of large chains is underestimated. However, for small chains, the approximation \(\prod_{e\in E}(1+p\omega(e))\approx 1\) is fairly accurate, and these are the ones that are relevant to the decoder since the most likely error configurations are typically composed of small error chains. The probability underestimation for large chains is thus ignored by the decoder. By grouping terms for which error chains have the same number of edges, we obtain the following expression for the pairing probability: \[P_{ij}\propto\sum_{l\geq l_{0}}p^{l}\sum_{E\in\mathbb{E}_{l}}\delta_{E}, \tag{7}\] where \(l_{0}\) denotes the length of the shortest error chain connecting \(i\) and \(j\), and \(\mathbb{E}_{l}\) is the set of error chains connecting \(i\) and \(j\) of length \(l\). We define the \((k+1)\)th-order _degeneracy factor_ for each term in \(p^{l}=p^{l_{0}+k}\) to be \[\Omega_{k}:=\sum_{E\in\mathbb{E}_{l_{0}+k}}\delta_{E}. \tag{8}\] These factors are closely related to counting the number of paths with the same _number_ of errors, i.e., edges. Indeed, for the i.i.d. error model with synchronicity \(s=1\), \(\delta_{E}=1\) for all paths, and so \(\Omega_{k}\) equals exactly the number of paths with \(l_{0}+k\) edges (see more in Appendix B). ### Decoding algorithms In this section we propose several decoders based on different approximations for \(P_{ij}\). #### ii.2.1 Contracted Syndrome Graph decoders We first consider a MWPM decoder on the contracted syndrome graph with the approximation \(P_{ij}=P_{0}\), which we name Contracted Graph (CG) decoder. The probability \(P_{0}\) is given by Eq. (5) with \(p_{e}=\bar{p}(\omega(e))=\frac{1}{2}(1-(1-2p)^{\omega(e)})\). Finding the most likely error chain is equivalent to \(\min_{E}\sum_{e\in E}\ln\left((1-p_{e})/p_{e}\right)\) and, hence, the CG decoder weights each edge by \(\ln((1-p_{e})/p_{e})\) and proceeds to find the path with the minimum additive weight. In other words, this weight assignment defines a metric \(d_{C}\) in the contracted syndrome graph. Therefore, the weight between two anyon blocks \(i\) and \(j\) is set as the shortest distance between them, \[w_{ij}=d_{C}(i,j). \tag{9}\] The CG decoder can be enhanced by keeping more terms in Eq. (4). From Eq. 
(7) we can keep the first two groups of terms with shortest lengths (\(\mathbb{E}_{l_{0}}\) and \(\mathbb{E}_{l_{0}+1}\)) with their corresponding degeneracy terms \(\Omega_{0}\) and \(\Omega_{1}\). Similarly to Eq. (5), the CG decoder should now optimise \(\max_{E}\ln P_{ij}\propto\max_{E}\ln(p^{|E|}\Omega_{0}+p^{|E|+1}\Omega_{1})=-\min_{E}\left[|E|\ln p^{-1}-\ln(\Omega_{0}+p\cdot\Omega_{1})\right]\). Therefore, the weight assignment between a pair of anyons \(i\) and \(j\) is \[w_{ij}=l_{0}(i,j)\ln p^{-1}-\tau\ln\left(\Omega_{0}+p\cdot\Omega_{1}\right) \tag{10}\] up to an additive constant, and where we included a parameter \(\tau\), named _degeneracy parameter_, that can be tuned in order to improve the decoder performance. Efficient computation of degeneracies \(\Omega_{0}\) and \(\Omega_{1}\) can be done via Dijkstra's algorithm, as explained in Appendix C. #### ii.2.2 Approximated decoders One of the drawbacks of the CG decoder is the lack of a closed-form expression for the distance between two anyons in the metric \(d_{C}\), since erased time-like edges are randomly distributed. This means that we must use Dijkstra's algorithm to compute such distances, which can be too slow for the situation at hand. It is thus interesting to propose heuristic approximations to the CG decoder that do not require the use of Dijkstra's algorithm and have a closed-form expression for the distance between two anyon blocks given their coordinates. In order to do so, we work with the simple syndrome graph given its cubic structure. For our first approximation, we treat the anyon blocks as defined anyons in a fully synchronous simple syndrome graph, thus ignoring erased edges. This means taking a Manhattan distance between two anyon blocks as their weight into the matching graph. More specifically, consider two anyon blocks with coordinates \((x_{i},y_{i},t_{i1},t_{i2})\) and \((x_{j},y_{j},t_{j1},t_{j2})\). Their spatial distance is \(\Delta(x_{i},x_{j})+\Delta(y_{i},y_{j})\), where \(\Delta(x_{i},x_{j})=\min(x_{i}-x_{j}\ (\text{mod}\ L),x_{j}-x_{i}\ (\text{mod}\ L))\) is the \(x\) horizontal distance on the lattice (and similarly for the \(y\) coordinate). Moreover, if both blocks overlap in time, then their time distance is zero, since there is an error chain with minimal length with no time-like edges connecting both blocks (see Fig. 3). If the blocks do not overlap in time, we take the average number of non-erased time edges between them. Suppose, e.g., that \(t_{i1}>t_{j2}\). There are \((t_{i1}-t_{j2})/s\) time-like edges between both blocks, out of which \(t_{i1}-t_{j2}\) are non-erased on average. We can thus approximate their time distance as \(\max(t_{i1}-t_{j2},t_{j1}-t_{i2},0)\) (note this is \(0\) when the blocks overlap in time, since \(t_{i1}-t_{j2}\) and \(t_{j1}-t_{i2}\) are negative). Given these considerations, we propose the Block Graph (BG) decoder which, instead of finding the actual distances within the contracted syndrome graph using Dijkstra's algorithm, sets the distance (and thus the weight) between two anyon blocks with coordinates \((x_{i},y_{i},t_{i1},t_{i2})\) and \((x_{j},y_{j},t_{j1},t_{j2})\) as \[w_{ij}=\Delta(x_{i},x_{j})+\Delta(y_{i},y_{j})\\ +w_{\text{time}}^{\text{BG}}\max(t_{i1}-t_{j2},t_{j1}-t_{i2},0), \tag{11}\] where we introduced a tunable parameter \(w_{\text{time}}^{\text{BG}}\). The weight between anyons in the matching graph then becomes a function of their coordinates only. 
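Because Eq. (11) only needs the block coordinates, the BG weight can be written as a small closed-form helper; a sketch with our own conventions (w_time stands for the tunable parameter \(w_{\text{time}}^{\text{BG}}\) of the text):

```python
def torus_distance(a: int, b: int, L: int) -> int:
    """Shortest horizontal distance between two coordinates on a periodic L x L lattice."""
    return min((a - b) % L, (b - a) % L)

def bg_weight(block_i, block_j, L: int, w_time: float) -> float:
    """Eq. (11). Each anyon block is given as (x, y, t_start, t_end)."""
    xi, yi, ti1, ti2 = block_i
    xj, yj, tj1, tj2 = block_j
    space = torus_distance(xi, xj, L) + torus_distance(yi, yj, L)
    time_gap = max(ti1 - tj2, tj1 - ti2, 0.0)   # zero when the blocks overlap in time
    return space + w_time * time_gap

# Two anyon blocks that overlap in time: only the spatial part contributes.
print(bg_weight((0, 1, 2.0, 3.5), (2, 1, 3.0, 4.0), L=8, w_time=1.0))   # -> 2.0
```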
Our second approximation within the simple syndrome graph is to reduce the analysis back to a cubic syndrome graph by defining a specific time coordinate for each anyon. More specifically, each anyon is identified at a time location in the middle of its corresponding anyon block. For example, if an anyon block is defined by times \(t_{i-1}\) and \(t_{i}\), then the corresponding anyon is given a location \((t_{i}+t_{i-1})/2\). Our proposed Average Position (AP) decoder treats these anyons as existing in a cubic syndrome graph, and computes the weight between two anyons \(i\) and \(j\) with coordinates \((x_{i},y_{i},t_{i})\) and \((x_{j},y_{j},t_{j})\) using the Manhattan distance, \[w_{ij}=\Delta(x_{i},x_{j})+\Delta(y_{i},y_{j})+w_{\text{time}}^{\text{AP}}|t_{i }-t_{j}|, \tag{12}\] where we introduced a tunable parameter \(w_{\text{time}}^{\text{AP}}\). ## IV Results ### Simulation methods To study the performance of each decoding algorithm we perform Monte Carlo simulations of the system where errors are sampled and the resulting system is decoded and analysed to determine whether or not an error is introduced. Since we consider a model of independent \(X\) and \(Z\) errors we directly simulate only phase-flip errors and \(X\)-type parity checks, as by symmetry the performance will be the same for bit-flip errors. For each decoding algorithm we simulate its performance for a range of stabiliser synchronicity \(s\in[0,1]\). Our simulations capture both discrete probabilistic measurements, where stabiliser measurements are made at discrete time steps with varying success probability, and continuous measurement for which we sample errors and measurements over a continuous range. Here we briefly outline the simulation methods for both for these cases, and more details on simulation techniques can be found in Appendix A. #### iv.1.1 Threshold performance **Discrete Measurement.** To model discrete probabilistic stabiliser measurements we sample measurements and errors on the simple cubic syndrome graph. We define a time scale such that stabiliser measurements are obtained at a rate \(1\) after \(1/s\) time steps on average. In other words, each measurement round is performed after a time interval \(s\). At each time step each physical qubit (space-like edge) suffers a flip with probability \(p_{\Delta}\), the simulation error, where \(p_{\Delta}\) is related to the physical error rate \(p\) (error probability after \(1/s\) time steps) via \(p_{\Delta}=\frac{1}{2}(1-(1-2p)^{s})\) (Eq. (2)). Each stabiliser measurement (time-like edge) is sampled and is successfully measured with probability \(s\). If the measurement does not succeed then the edge is marked as erased, otherwise, if it does succeed, then its value is flipped with probability \(q\), the measurement error. We take \(q=p\). **Continuous measurement.** To model continuous stabiliser measurement (\(s=0\)) we cannot directly sample stabiliser measurements as probabilistic events. Instead we sample error events and measurement events over a continuous time period, aiming to keep all the error parameters equivalent to the discrete measurement case. We set a time interval \(T\) and a physical error \(p\) per unit time. For each qubit we sample the number of bit-flips it suffers in the time interval \(T\) from a Poisson distribution with parameter \(\frac{T}{2}\ln(1/(1-2p))\) (see Appendix A for a justification). Given the number of events we then sample their time coordinate from a uniform distribution along the time interval \((0,T)\). 
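A sketch of the continuous-measurement sampling just outlined is given below (our own variable names, not the authors' code): the number of flips per qubit is Poisson with mean \((T/2)\ln(1/(1-2p))\), the number of successful checks per stabiliser is Poisson with mean \(T\), and all event times are uniform on \((0,T)\).

```python
import numpy as np

rng = np.random.default_rng(0)
L, T, p = 8, 16.0, 0.0169            # lattice size, total time, physical error per unit time
n_qubits, n_stabs = 2 * L * L, L * L

# Pauli flips: Poisson-distributed count per qubit, uniform times in (0, T).
flip_rate = 0.5 * np.log(1.0 / (1.0 - 2.0 * p))           # per unit time (Appendix A)
flip_counts = rng.poisson(T * flip_rate, size=n_qubits)
flip_times = [np.sort(rng.uniform(0.0, T, size=k)) for k in flip_counts]

# Successful parity checks: Poisson(T) per stabiliser, uniform times in (0, T).
check_counts = rng.poisson(T, size=n_stabs)
check_times = [np.sort(rng.uniform(0.0, T, size=k)) for k in check_counts]

print("mean flips per qubit:", flip_counts.mean())         # ~ T * flip_rate
print("mean checks per stabiliser:", check_counts.mean())  # ~ T
```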
This gives us a set of space-time coordinates of qubit flip events. For each stabiliser we sample the number of successful measurements from a Poisson distribution with parameter \(T\) and distribute these measurements uniformly at random along the time interval \((0,T)\). This gives us a set of space-time coordinates for stabiliser measurements. A parity check is done by counting the number of errors of the adjacent qubits prior to the measurement time. If the number is even (odd), the measurement outcome is \(+1\) (\(-1\)). For faulty measurements this outcome is flipped with probability \(q=p\). Given the locations of parity check measurements we then directly construct the contracted syndrome graph by identifying a vertex with each successive pair of parity checks, and edges between neighboring check locations where parity blocks have a non-zero time overlap. Each edge has an associated error probability \(p_{e}=\bar{p}(\omega(e))\), where \(\omega(e)\) is the time overlap of the parity blocks defining the edge \(e\) (Eq. (3)). **Parameter optimisation.** The AP decoder, the BG decoder and the CG decoder augmented with \(\Omega_{0}\) and \(\Omega_{1}\) have tunable parameters, namely the time and degeneracy parameters \(w_{\text{time}}^{\text{AP}}\), \(w_{\text{time}}^{\text{BG}}\) and \(\tau\), respectively. For a given value of \(s\), we probe their dependence on these parameters and pick the optimum value when comparing the threshold performance between different decoders. Their dependence on \(w_{\text{time}}^{\text{AP}}\), \(w_{\text{time}}^{\text{BG}}\) and \(\tau\) is explored in Appendix B. Figure 3: Two anyon blocks, highlighted in red, that overlap in time can be matched by a minimum-length error chain with no time-like edges, shown in green. The time distance between both blocks is zero. #### iv.1.2 Analysing entropic contributions In addition to gauging the decoders' performance via their threshold, we want to understand how good an approximation is being made to the anyon pairing probability. To do this we compute the average magnitude \(\langle P_{E}/P_{0}\rangle\) of the first few terms \(P_{E}\) from Eq. (4) relative to the zeroth-order term \(P_{0}\). If the higher-order terms have small values, then we expect the proposed decoders to perform well. To obtain these ratios we perform a further series of numerical experiments. We fix the synchronicity \(s\), a physical error \(p\) and a measurement error \(q=p\), and, by sampling errors via Monte Carlo simulation as previously described, we obtain a syndrome and identify all pairs of anyons that _are matched by the decoder_ (not every pair of anyons). The ratio \(P_{E}/P_{0}\) is then computed for each such _matched_ pair using Yen's algorithm [22], which is a generalisation of Dijkstra's algorithm for computing the \(k\)-shortest loopless paths in a graph with non-negative edge cost. We average this value over all the matched anyon pairs and, finally, over further random contracted syndrome graphs. ### Threshold performance Fig. 4 shows our main results, the threshold performance with synchronicity \(s\in[0,1]\) for the three main decoders introduced in the previous section. Fig. 4(a) compares all decoders. On the other hand, Fig. 4(b) specifically compares the CG decoder with and without the degeneracy terms \(\Omega_{0}\) and \(\Omega_{1}\). At \(s=1\) we have the usual MWPM decoder for the toric code with faulty measurements and i.i.d. error model [13], hence all decoders perform identically. 
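Before discussing the results, here is a sketch of the \(\langle P_{E}/P_{0}\rangle\) estimate described in the entropic-contributions paragraph above, using networkx's `shortest_simple_paths` (an implementation of Yen's algorithm) on a graph whose edges carry error probabilities; the toy graph and names are ours, not the authors' code.

```python
import math
from itertools import islice
import networkx as nx

def pairing_probability_ratios(syndrome_graph, anyon_a, anyon_b, k=10):
    """Return P_E / P_0 for the k most likely error chains joining two anyons.

    Each edge carries 'p_err'; a chain's probability is proportional to
    prod p_e / (1 - p_e), i.e. exp(minus the summed edge weights used below)."""
    for _, _, data in syndrome_graph.edges(data=True):
        data["weight"] = math.log((1 - data["p_err"]) / data["p_err"])

    paths = islice(nx.shortest_simple_paths(syndrome_graph, anyon_a, anyon_b,
                                            weight="weight"), k)
    neg_logs = [sum(syndrome_graph[u][v]["weight"] for u, v in zip(path, path[1:]))
                for path in paths]
    return [math.exp(neg_logs[0] - w) for w in neg_logs]   # first entry is 1.0

# Small 3x3 grid with mildly non-identical edge probabilities (mimicking asynchronism).
g = nx.grid_2d_graph(3, 3)
for i, (_, _, data) in enumerate(g.edges(data=True)):
    data["p_err"] = 0.02 + 0.001 * i
print(pairing_probability_ratios(g, (0, 0), (2, 2), k=5))
```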
As the synchronicity \(s\) decreases, the performance of all decoders decreases, as expected. Nonetheless, even at the limit of continuous stabiliser measurement (\(s=0\)), the threshold can be maintained at a reasonably high level, e.g. \(1.688\%\pm 0.001\%\) for the CG decoder. On the other hand, the simplification of the syndrome graph structure by the BG and AP decoders, while speeding up the decoding procedure, leads to a decrease in threshold values, e.g. \(1.20\%\pm 0.01\%\) (BG decoder) and \(1.32\%\pm 0.01\%\) (AP decoder) at \(s=0\). Interestingly, the AP decoder, even though inferior to the BG decoder for high values of \(s\), outperforms it for high asynchronism, a fact for which we do not have an explanation. In Appendix B we show more information on the AP and BG decoders, e.g. their dependence on the time parameters \(w_{\text{time}}^{\text{AP}}\) and \(w_{\text{time}}^{\text{BG}}\) at \(s=0\) and the optimal values for \(w_{\text{time}}^{\text{AP}}\) and \(w_{\text{time}}^{\text{BG}}\) as a function of asynchronicity. In addition, we also show how the inclusion of degeneracy terms like \(\Omega_{0}\) and \(\Omega_{1}\) into the BG decoder can lead to a substantial threshold increase. Something that stands out from Fig. 4(b) is the fact that, while the introduction of high-order degeneracy like \(\Omega_{1}\) does give higher threshold values compared to the base case of the CG decoder, this improvement becomes very small in the limit \(s\to 0\). Even by \(s=0.9\) the reduction is significant. While at \(s=1\) the threshold increases from \(2.937\%\) to \(3.050\%\) (an \(\sim 0.11\%\) additive improvement), at \(s=0\) it only increases from \(1.688\%\) to \(1.699\%\) (an \(\sim 0.01\%\) additive improvement). This feature is not entirely surprising, given the following. If one assumes that the set of possible error probabilities \(\{p_{e}\}_{e}\) on each edge is very diverse, e.g. consider the case of continuous asynchronism where \(p_{e}=\bar{p}(\omega(e))\) and \(\omega(e)\) can be any real number in \((0,T)\), then it becomes very unlikely to have two degenerate error chains. Therefore, for completely different error probabilities \(\{p_{e}\}_{e}\), we expect most of the terms in Eq. (4) to be different from each other. This is in contrast to the fully synchronous regime (\(s=1\)), where most first terms are equal (see more in Appendix B). Consequently, the leading term \(P_{0}\) plays a more prominent role in the sum, and any truncation to it is less disruptive to its original value when \(s=0\) compared to when \(s=1\). In order to support the above claim, we shed some light on the relative size between the first \(P_{E}\) terms and \(P_{0}\) which underlies the decrease in threshold values. Fig. 5 explores how much smaller the first few terms in \(\sum P_{E}\) are in comparison with \(P_{0}\) both in the fully synchronous (\(s=1\)) and continuous asynchronous (\(s=0\)) regimes. The average ratio \(\langle P_{E}/P_{0}\rangle\) is obtained for \(E=1,\ldots,10\). One can see that most of high-order contribution is coming from \(P_{1}\) on average, which is also where the discrepancy between high and low synchronicity regimes lies: \(P_{1}\)'s contribution is more than double in the regime \(s=1\) compared to its contribution when \(s=0\). On the other hand, asynchronism has a much smaller impact on the average \(\langle P_{E}/P_{0}\rangle\) for high-order terms \(E>1\). 
The relative importance of \(P_{1}\) over other high-order terms \(P_{E}\) is explicitly shown in Appendix B for the fully synchronous i.i.d. error model. ### Advantage in logical gate time Direct handling of asynchronous stabiliser measurement in decoding can also provide an advantage in the time needed to execute logical gates. In Ref. [23], probabilistic stabiliser measurements arise in a scheme for quantum computing using networked ion traps. In this situation the physical errors occur only during successful stabiliser measurement, and so there is no penalty to the threshold for lower measurement probability. Ref. [23] handles the probabilistic nature of the measurements by waiting for as many attempts as necessary to get to \(99\%\) success across all stabiliser sites, and abandoning the remaining \(1\%\), whose impact on the threshold is negligible. This essentially redefines a 'round' of stabiliser measurement to be made up of \(N_{R}\) rounds, such that \(1-s^{\prime}=(1-s)^{N_{R}}<0.01\). Once a stabiliser site is successfully measured it idles and waits until the round is completed. The cost to this approach is in the time taken to execute logical operations. In the limit of small success probability many attempts must be taken to complete a renormalised round, and during this time many of the sites spend a significant time idling. We can define the logical gate overhead, \(R_{L}\), as the ratio between the number of rounds needed to complete a renormalised round and the number needed to measure a stabiliser on average, as \[R_{L}=\frac{N_{R}}{1/s}=\frac{\log{(1-s^{\prime})}}{\log{(1-s)}}s\] for \(s<0.99\), and \(R_{L}=1\) for \(s\geq 0.99\). By using asynchronous decoding, each stabiliser is measured independently of the others, allowing stabiliser measurement information to be gathered at a faster rate, and giving \(R_{L}=1\). Fig. 6 shows that, for high asynchronism, the bundling method from [23] takes on average more than four times as long to perform a round as our asynchronous decoding approach. Moreover, their bundling method, which presupposes that physical errors occur due to successful stabiliser measurements, would probably not be viable in a more stringent error model like ours where physical errors can happen in between stabiliser measurements. ## V Conclusions We have shown how asynchronism can be incorporated into MWPM decoders while still maintaining a high threshold. We considered a simple error model where a stabiliser measurement outputs an outcome with probability \(s\) called synchronicity. The limit \(s\to 0\) represents a continuous asynchronous regime where stabilisers are measured completely at random in time. Figure 4: Threshold comparison between all decoders. The points were calculated using an \(L\times L\) lattice with \(L\in\{10,12,14\}\) and \(N_{s}\) measurement rounds with \(N_{s}=\lceil 2/s\rceil\,L\) for \(s\in(0,1]\), and \(N_{0}=T=2L\) for \(s=0\). Figure 5: Average ratio \(\langle P_{E}/P_{0}\rangle\) as a function of \(E\) in the fully synchronous (\(s=1\)) and continuous asynchronous (\(s=0\)) regimes for an \(L\times L\times 2L\) lattice with \(L=14\). The inner figure is in logarithmic scale. The ratios were averaged over anyon pairs matched by the CG decoder and over random contracted syndrome graphs given a physical error \(p\) and a measurement error \(q=p\). We chose \(p=2.937\%\) and \(p=1.688\%\) for \(s=1\) and \(s=0\), respectively, which are the thresholds of the CG decoder from Fig. 4. 
We tackled asynchronism by marking unsuccessful stabiliser measurements as erased in the simple syndrome graph, followed by contracting each cluster of erased edges using an edge-contraction method from Stace and Barrett [14]. The resulting graph was named the contracted syndrome graph and, in contrast to the simple one, offers an easy framework for taking non-identical error probabilities into consideration when decoding. We then proposed a MWPM-like decoder, named Contracted Graph (CG), using a properly weighted contracted syndrome graph. We benchmarked the CG decoder via Monte Carlo simulations and observed that the threshold values do decrease as the synchronicity tends to zero, but a significant level can be maintained even under a completely continuous model of syndrome extraction, e.g. the CG decoder holds a threshold of \(1.69\%\) at \(s=0\). While our results were obtained with a simple error model, they show that erasure errors suffered by measurements can be efficiently handled by decoders. Studying the performance of the CG decoder under more realistic error models is a point to be considered in the future. The CG decoder is relatively simple: being a MWPM decoder, it only requires running Dijkstra's algorithm on a suitably weighted graph. However, even though it is a polynomial-time algorithm, it could be too slow for practical applications. Indeed, running Dijkstra's algorithm requires time \(O(|E|+|V|\log|E|)\), where \(|E|\) and \(|V|\) are the number of edges and vertices of the syndrome graph, respectively. The syndrome graph is fairly sparse (\(|E|=O(|V|)\)), meaning that each application of Dijkstra's algorithm takes \(\widetilde{O}(|V|)=\widetilde{O}(L^{3})\) time in the contracted syndrome graph (the notation \(\widetilde{O}\) hides polylog factors). Since Dijkstra's algorithm must be used once for each of the \(O(pL^{3})\) anyons, this leads to the overall time complexity \(\widetilde{O}(pL^{6})\). In order to remedy this, we proposed the Block Graph (BG) and Average Position (AP) decoders that skip any use of Dijkstra's algorithm by approximating the distance between two anyons, which can be calculated in constant time. The overall time complexity improves to \(O(p^{2}L^{6})\). However, the price is a decrease in threshold value down to \(1.32\%\) at \(s=0\), which is still reasonably high. Given the simple structure of these decoders, especially the AP, it might be possible to borrow previous techniques used to improve the basic MWPM decoder [11, 24] (some of these ideas could possibly be applied to the CG decoder as well). Finally, the AP and BG decoders allow for the introduction of auxiliary parameters like the time weights \(w_{\rm time}^{\rm AP}\) and \(w_{\rm time}^{\rm BG}\), which must be tweaked depending on the error model. Understanding their performance as a function of \(w_{\rm time}^{\rm AP}\) and \(w_{\rm time}^{\rm BG}\) with more mathematical rigour is something that we did not tackle and should be considered in the future. Another aspect we explored was the role of degeneracy terms under asynchronous measurements and how they could be included in the decoder. More specifically, we studied the inclusion of the first and second-order degeneracy terms \(\Omega_{0}\) and \(\Omega_{1}\) into the CG decoder. 
Such inclusion only produced a mild improvement in threshold, from \(1.69\%\) to \(1.70\%\) in the limit \(s\to 0\), which hints at the fact that the role of degeneracy becomes less important in the continuous asynchronous regime and considering only the lowest-weight error configuration becomes an increasingly better approximation. This was further backed up by our numerical results on the relative size between the most likely error configurations. We showed that, as the synchronicity decreases, the probability of the most likely error configuration becomes relatively higher than the probability of the subsequent ones. It might be interesting to understand this behaviour in a more qualitative manner, although it might be a hard task given the similarity to the problem of counting trails. ###### Acknowledgements. We would especially like to thank Hugo Cable and Naomi Nickerson for the initial project proposal, many ideas and discussions throughout the project and initial contributions to the manuscript. We also thank Noah Linden, Ryan Mann, Ashley Montanaro, and Ronald de Wolf for useful discussions and helpful comments on the manuscript. This work was supported by the National Research Foundation, Singapore and A*STAR under the CQT Bridging Grant and the Quantum Engineering Programme Award number NRF2021-QEP2-02-P05, and by the Bristol Quantum Engineering Centre for Doctoral Training, EPSRC Grant No. EP/L015730/1, while at the University of Bristol where most of this project was conducted. This work was carried out using the computational facilities of the Advanced Computing Research Centre, University of Bristol -- [http://www.bris.ac.uk/acrc/](http://www.bris.ac.uk/acrc/) -- and the computational facilities of the National University of Singapore. Figure 6: Comparing logical gate execution times of the bundling method described in [23] with a renormalised synchronicity \(s^{\prime}=0.99\) to our asynchronous decoding approach.
2307.04384
Neural Causal Graph Collaborative Filtering
Graph collaborative filtering (GCF) has gained considerable attention in recommendation systems by leveraging graph learning techniques to enhance collaborative filtering (CF). One classical approach in GCF is to learn user and item embeddings with Graph Convolutional Network (GCN) and utilize these embeddings for CF models. However, existing GCN-based methods are insufficient in generating satisfactory embeddings for CF models. This is because they fail to model complex node dependencies and variable relation dependencies from a given graph, making the learned embeddings fragile to uncover the root causes of user interests. In this work, we propose to integrate causal modeling with the learning process of GCN-based GCF models, leveraging causality-aware graph embeddings to capture complex causal relations in recommendations. We complete the task by 1) Causal Graph conceptualization, 2) Neural Causal Model parameterization and 3) Variational inference for Neural Causal Model. Our Neural Causal Model, called Neural Causal Graph Collaborative Filtering (NCGCF), enables causal modeling for GCN-based GCF to facilitate accurate recommendations. Extensive experiments show that NCGCF provides precise recommendations that align with user preferences. We release our code and processed datasets at https://github.com/Chrystalii/CNGCF.
Xiangmeng Wang, Qian Li, Dianer Yu, Wei Huang, Guandong Xu
2023-07-10T07:43:05Z
http://arxiv.org/abs/2307.04384v2
# Causal Neural Graph Collaborative Filtering ###### Abstract Graph collaborative filtering (GCF) has gained considerable attention in recommendation systems by leveraging graph learning techniques to enhance collaborative filtering (CF) models. One classical approach in GCF is to learn user and item embeddings by modeling complex graph relations and utilizing these embeddings for CF models. However, the quality of the embeddings significantly impacts the recommendation performance of GCF models. In this paper, we argue that existing graph learning methods are insufficient in generating satisfactory embeddings for CF models. This is because they aggregate neighboring node messages directly, which can result in incorrect estimations of user-item correlations. To overcome this limitation, we propose a novel approach that incorporates causal modeling to explicitly encode the causal effects of neighboring nodes on the target node. This approach enables us to identify spurious correlations and uncover the root causes of user preferences. We introduce _Causal Neural Graph Collaborative Filtering (CNGCF)_, the first causality-aware graph learning framework for CF. CNGCF integrates causal modeling into the graph representation learning process, explicitly coupling causal effects between node pairs into the core message-passing process of graph learning. As a result, CNGCF yields causality-aware embeddings that promote robust recommendations. Our extensive experiments demonstrate that CNGCF provides precise recommendations that align with user preferences. Therefore, our proposed framework can address the limitations of existing GCF models and offer a more effective solution for recommendation systems. Graph Representation Learning, Causal Inference, Structural Causal Model, Recommendation System ## I Introduction Recommendation system (RS) has been a core in many web-based services, e.g., e-commerce, to facilitate information filtering for users from overwhelming data. Benefiting from the capability to learn from relational graph data, an emerging RS paradigm built on graph learning [1], i.e., graph collaborative filtering (GCF), has been studied extensively in recent years [2]. GCF enhances traditional collaborative filtering [3, 4] by modeling complex user-item interactions in a graph as well as auxiliary side information, e.g., user and item attributes. Thus, GCF has shown great potential in deriving knowledge (e.g., user behavior patterns) embedded in graphs. Existing GCF can be categorized as random walk-based and graph representation learning-based methods. The first branch of random walk-based methods [5, 6] uses user and item similarities to build random walk models that produce user-item co-occurrence information for downstream CF models. For instance, ItemRank [5] performs label propagation within an interaction graph and utilizes a probability model to compute inter-user and inter-item similarities. The similarities are then defined as transition probabilities of a random walk model, which produces item importance to enhance a CF model. However, the random walk model is conceptually isolated from the CF model, since it does not include model parameters to be optimized with the CF learning objective. An alternative category of graph representation learning methods utilizes graph neural networks to analyze graph connections and construct representations, commonly known as embeddings. 
The fundamental concept behind these methods is to acquire vectorized user and item embeddings through the application of graph neural networks, which can subsequently be utilized to optimize the collaborative filtering model. For instance, NGCF [7] exploits a graph convolutional network (GCN) to propagate neighboring node messages in the interaction graph to obtain user and item embeddings. The learned embeddings capture user collaborative behavior and are used to predict user preference scores for CF optimization. Following this paradigm, subsequent works [8, 9, 10, 11] also achieve favorable performance in different tasks, e.g., sequential recommendation [11], by using auxiliary information such as interaction timestamps [11] for user sequential behavior modeling. Despite the efforts, we argue that existing graph representation learning methods are not sufficient to yield satisfactory embeddings to enhance CF models. The main reason is that they learn user and item embeddings by directly aggregating neighboring node messages, while these messages are simple correlation signals of node pairs. Take Figure 1 (a) as a toy example. Given an interaction graph, existing graph representation learning generally learns user embeddings by sampling and aggregating users' correlated neighbors.

Fig. 1: Toy example of learning user embeddings through a) correlation-based graph representation learning; b) causation-based graph representation learning.

Considering that user \(u_{1}\) has a neighbor set \(\{i_{1},i_{2},a_{1},i_{3},a_{2}\}\), which highly overlaps with user \(u_{2}\)'s neighbor set \(\{i_{1},i_{2},a_{1},i_{4},a_{3}\}\), the resulting embeddings of \(u_{1}\) and \(u_{2}\) would be very similar compared with those of other users. The CF model takes the inner product between \(u_{1}\)'s embedding and the embeddings of items from the item set as \(u_{1}\)'s preference scores over items. Similarly, \(u_{2}\)'s preference scores are estimated based on \(u_{2}\)'s embedding and item embeddings. For item \(i_{3}\), as \(u_{1}\) and \(u_{2}\)'s embeddings are similar, the preference scores of \(u_{1}\) and \(u_{2}\) on item \(i_{3}\) would be similar too. Assuming that user \(u_{1}\) has previously interacted with item \(i_{3}\), thereby indicating a significant preference score for \(i_{3}\), the CF model would recommend \(i_{3}\) to user \(u_{2}\) based on this high preference score. However, we may infer that user \(u_{2}\) is truly interested in item attribute \(a_{3}\), which belongs to item \(i_{4}\) that the user has interacted with. Consequently, item \(i_{3}\), recommended on the basis of attribute \(a_{2}\), may not align with the personal preferences of user \(u_{2}\) and may fail to meet the user's expectations. 
Given the condition that a causal effect above \(0.9\) indicates strong causation between cause and effect nodes, we thus conclude that \(a_{3}\) and \(a_{1}\) attract \(u_{2}\)'s personal interest. As such, we can use this causation signal to refine the user embedding of \(u_{2}\) towards favoring items with \(a_{3}\) and \(a_{1}\) and finally enhance the CF model for user interest modeling. Following the above intuition, we propose to inject causal modeling into graph representation learning to explicitly encode the crucial causal relations within node pairs into embeddings. Causal modeling identifies the intrinsic cause-effect relations between a node and true user preferences [12]. Considering that the message-passing mechanism suffers from ambiguous correlations of node relations within calculated messages [13], modeling node-level causal relations could help estimate the true user preferences to obtain causality-aware messages. For instance, we can estimate how a user's preference (i.e., effect) is affected by the item brand (i.e., cause). As such, by coupling with causal modeling, we could enable graph learning to uncover the true interests under user interactions, i.e., the root causes that trigger users' interests to interact with the item. We therefore propose the first causality-aware graph representation learning framework for collaborative filtering. We focus on a special class of neural networks for graph learning, namely the graph convolutional network (GCN), to inject the causal relations between nodes into the core message-passing process in the GCN computation. The underlying idea is to establish a connection between the structural causal model (SCM) and the message-passing mechanism of graph convolutional network (GCN) computation, which enables the messages to encapsulate the causal relationships between the adjacent nodes and the target node. Specifically, we construct a causal graph that induces a SCM to describe the recommendation generation process of graph representation learning that incorporates causality. Using the SCM, we formulate the recommendation process as a generative model, in which each component in the generative model describes a structural equation. We propose a novel _Causal Neural Graph Collaborative Filtering (CNGCF)_, which utilizes variational inference to quantify the components of the generative model. The CNGCF framework explicitly integrates causal relationships, as defined by the structural causal model (SCM), into the message-passing mechanism of graph convolutional network (GCN)-based graph learning. This integration facilitates the generation of accurate recommendations that uncover the true user preferences. The contributions of this work are: * We introduce a novel approach that leverages causal model-based graph representation learning for recommendation systems. Our proposed CNGCF is the first of its kind to explore causal relationships underlying the graph with the aim of generating causality-aware graph embeddings. * Our CNGCF utilizes a unified framework based on variational inference, which is driven by a causal graph encoder to model the graph topology of the causal graph and a collaborative filtering decoder to reconstruct user interactions. * We validate the effectiveness of our proposed framework through extensive experimentation. Our experimental results demonstrate that our approach outperforms existing methods in achieving satisfactory recommendation performance. 
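To make this intuition concrete, the following minimal NumPy sketch contrasts plain (correlation-style) neighbor averaging with aggregation re-weighted by causal effects. The feature vectors and most effect scores are entirely hypothetical (only the values for \(a_{3}\) and \(a_{1}\) echo the toy example above); the sketch illustrates the idea only and is not the CNGCF algorithm developed later.

```python
import numpy as np

# Hypothetical 4-dimensional features for user u2's neighbors from the toy
# example (items i1, i2, i4 and attributes a1, a3); values are made up.
neighbor_features = {
    "i1": np.array([0.2, 0.1, 0.0, 0.3]),
    "i2": np.array([0.1, 0.4, 0.1, 0.2]),
    "a1": np.array([0.0, 0.9, 0.1, 0.0]),
    "i4": np.array([0.3, 0.2, 0.5, 0.1]),
    "a3": np.array([0.0, 0.1, 0.9, 0.0]),
}

# Hypothetical causal effects of each neighbor on u2's preference; the scores
# for a3 (0.96) and a1 (0.91) follow the toy example, the rest are assumed.
causal_effect = {"i1": 0.35, "i2": 0.40, "a1": 0.91, "i4": 0.55, "a3": 0.96}

features = np.stack(list(neighbor_features.values()))

# Correlation-style aggregation: every neighbor contributes equally.
plain_embedding = features.mean(axis=0)

# Causation-aware aggregation: neighbors are re-weighted by their causal
# effect, so a1 and a3 (the true drivers of u2's interest) dominate.
weights = np.array([causal_effect[n] for n in neighbor_features])
causal_embedding = (weights[:, None] * features).sum(axis=0) / weights.sum()

print("plain  :", np.round(plain_embedding, 3))
print("causal :", np.round(causal_embedding, 3))
```

In the causation-aware aggregate, the dimensions dominated by \(a_{1}\) and \(a_{3}\) carry noticeably more weight than under plain averaging, which is exactly the kind of signal the framework developed below aims to inject into graph learning.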
## II Related Work ### _Graph Collaborative Filtering_ Collaborative filtering (CF) [14] dominates recommendation research due to its simplicity and effectiveness. Early CF models including latent factor models [15] and neural-based CF [4] use descriptive features (e.g., IDs) to calculate user similarities, assuming that users with similar historical behaviors have similar future preferences. For example, Bayesian personalized ranking (BPR) [16] learns user and item latent vectors from the interaction matrix built by implicit user feedback, e.g., clicks. The inner products between latent vectors are used as user-item similarities to predict user preference scores. Neural collaborative filtering (NCF) [4] uses a Multi-layer perceptron (MLP) to learn a user behavior similarity function based on simple user/item one-hot encodings. Graph CF (GCF) leverages advances in graph learning [1] to model user-item interaction graphs as well as rich auxiliary data (e.g., text, image), thus boosting the recommendation by augmenting complex semantics under user-item interactions. Relevant approaches can be categorized as random walk-based and graph representation learning-based methods. The first line of random walk-based methods builds random walk models with calculated similarities among users and items from probability models. The learned random walk models give probability distributions over items to produce auxiliary user-item co-occurrence information for CF models. For instance, ItemRank [5] computes the stationary distribution of a random walk model based on estimating inter-user and inter-item similarities from a user-item interaction graph. The random walk model provides item importance for a CF model, in which the final ranking of items is based on the calculated item importance. BiRank [6] extends ItemRank to incorporate both item features and user preferences in recommendations. BiRank computes a joint stationary distribution over users and items in the graph, where the probability of transitioning from an item node to a user node is based on user ratings on items. These methods are inferior to optimization-based CF methods since they do not include model parameters that can be optimized together with the CF training. Another line of graph representation learning-based methods usually uses deep neural networks (e.g., graph convolution network) to scrutinize complex graph relations and produce user and item representations for recommendation tasks. Neural graph collaborative filtering (NGCF) [7] is one of the most representative graph representation learning-based CF approaches, which incorporates two graph convolutional networks (GCNs) to learn the collaborative signal of user interactions from a user-item interaction graph. GC-MC [17] uses a GCN-based auto-encoder to learn latent features of users and items from an interaction graph and reconstructs the rating links for matrix completion. Later, LightGCN [18] simplifies the application of the GCN in recommendations by only including neighborhood aggregation for calculating user and item representations, which further boosts the efficiency of subsequent GCF approaches, e.g., [8, 9, 10, 19]. Despite the great effort, existing GCF methods only capture correlation signals of user behaviors by modeling neighboring node messages. This would result in the limited ability of GCF models to capture the true user preferences in the presence of spurious correlations. 
On the contrary, we abandon the modeling of spurious correlations to pursue the intrinsic causal relations between nodes, which estimate the causal effect of a specific item on user preferences to uncover true user interests. ### _Causal Learning for Recommendation_ Recent recommendation research has largely favored causality-driven methods. A burst of relevant papers is proposed to address critical issues in RS, such as data bias and model explainability with causal learning. Among them, two representative causal frameworks are largely adopted, i.e., the potential outcome framework (POF) from Rubin et al. [20] and the structural causal model (SCM) from Pearl et al. [21]. POF-based recommendation directly estimates the causal effect of a treatment (e.g., item feature) on the outcome, i.e., recommendation results. Inverse propensity weighting (IPW) [22] is wildly adopted in POF-based recommendations. Tobias et al. [23] adopt IPW to learn unbiased matrix factorization models, in which propensity scores are estimated by a separately learned propensity model. Zhang et al. [24] integrate the learning of the propensity model and the recommendation model into a multi-task learning framework. However, POF-based recommendation is less intuitive since it does not include graphical models to describe causal relations. Besides, POF-based recommendation largely relies on the quality of propensity score estimation. The estimator usually suffers from the "propensity overfitting" [25] due to the uncertainty of unseen variables, limiting the performance of POF-based recommendations. SCM-based recommendation directly builds a graphical causal graph by extracting structural equations on causal relations between deterministic variables in recommendations. It aims to use the causal graph to conduct causal reasoning for causal effect estimation. Using the causal graph, most relevant approaches pursue mitigating the bad effects of different data biases, e.g., exposure bias [26, 27], popularity bias [28, 29]. For instance, Wang et al. [26] mitigate exposure bias in the partially observed user-item interactions by regarding the bias as the confounder in the causal graph. They propose a deconfonded model that performs Poisson factorization on substitute confounders (i.e., an exposure matrix) and partially observed user ratings. Zheng et al. [28] relate the user conformity issue in recommendations with popularity bias, and use a causal graph to guide the disentangled learning of user interest embeddings. Other approaches also achieve explainable recommendations. Wang et al. [30] define a causal graph that shows how users' true intents are related to item semantics, i.e., attributes. They propose a framework that produces disentangled semantics-aware user intent embeddings, in which each model component corresponds to a specific node in the causal graph. The learned embeddings are able to disentangle users' true intents towards specific item semantics, which explains which item attributes are favored by users. ## III Preliminaries We provide key preliminaries, including the definition of graph-based recommendations utilizing graph convolutional networks, as well as basic concepts under causal inference. ### _Recommendation with Graph Convolutional Network_ Let \(\mathcal{U}\) and \(\mathcal{I}\) denote the sets of users and items, respectively. 
Graph-based recommendation formulates users and items with their features into a graph \(G=(\mathcal{V},\mathcal{E})\), where \(\mathcal{V}\) is the node set absorbs all user and item nodes with \(|\mathcal{V}|=|\mathcal{U}\cup\mathcal{I}|\) and \(\mathcal{E}\) is the edge set denoting the connections among nodes. \(G\) induces an adjacency matrix \(\mathbf{A}\in[0,1]^{N\times N}\) and a node feature matrix \(\mathbf{D}\in\mathbb{R}^{N\times d}\), where \(N=|\mathcal{V}|\) is the number of nodes and \(d\) is the dimension of node features. Each \(\mathbf{d}_{i}\in\mathbb{R}^{d}\) is the vector-valued sample of a specific node \(i\in\mathcal{V}\) containing descriptive information of the node, e.g., user/item IDs. Using \(G\), most graph-based recommendation models rely on graph representation learning [31] to scrutinize complex graph relations and produce dense vectors (a.k.a embeddings) for recommendation tasks, e.g., rating prediction. Graph convolutional network (GCN) [32] is a typical method for graph representation learning. It employs multiple graph convolutional layers to obtain the graph representation \(\mathbf{E}\) of \(G\), where \(\mathbf{E}\in\mathbb{R}^{|\mathcal{V}|\times d^{\prime}}\) absorbs user and item node representations as \(d^{\prime}\)-dimensional dense vectors. Based on \(\mathbf{E}\), the model then infers the interaction probabilities of users over items to make recommendations. In particular, a graph convolutional layer \(g\left(\mathbf{D},\mathbf{A}\right)\) calculates each representation \(\mathbf{e}_{i}\) of a user/item node \(i\) based on its feature \(\mathbf{d}_{i}\in\mathbf{D}\) and node neighbors \(\mathcal{N}_{i}\) through the following equation 1: Footnote 1: We present the wildly-used inductive graph representation learning setting with the GCN. An inductive setting can abandon the reliance on full graph Laplacian v.s. the transductive setting. For the comparison between inductive and transductive learning, refer to [33]. \[\mathbf{e}_{i}=\phi\left(\mathbf{d}_{i},\bigoplus_{j\in\mathcal{N}_{i}}\psi \left(\mathbf{d}_{i},\mathbf{d}_{j}\right)\right) \tag{1}\] where \(\mathbf{e}_{i}\) denotes the representation of a user/item node \(i\), which is calculated by aggregating (\(\oplus\)) the messages \(\psi\) from its neighbors within \(\mathcal{N}_{i}\). \(\mathcal{N}_{i}\) is the neighbor set of \(i\) established by visiting the adjacency matrix \(\mathbf{A}\) and \(\mathbf{d}_{j}\) is the node feature of the neighboring node \(j\). The calculation of messages \(\psi\) in Eq (1) is known as message-passing [33], which is the _de facto_ of a class of GCN variants, e.g., graph attentional networks [34]. The aggregation operator \(\oplus\) may take various forms, e.g., element-wise mean [34], max-pooling [35]. ### _Causal Inference_ **Definition 1** (Causal Graph): _A causal graph [36, Def. 13] is a directed acyclic graph (DAG) \(\tilde{G}=(\{\mathcal{V},Z\},\mathcal{E})\) represents causal relations among endogenous and exogenous variables. Here, \(\mathcal{V}\) is a set of endogenous variables of interest, e.g., user and item nodes in the graph learning, and user preference variables. \(Z\) is a set of exogenous variables outside the model, e.g., item exposure. \(\mathcal{E}\) is the edge set denoting causal relations among \(\tilde{G}\). Each directed edge \((j\to i)\in\mathcal{E}\) represents a causal relation from \(j\) to \(i\), where \(i\in\mathcal{V}\) and \(j\) is a parent node of \(i\), i.e., \(j\in pa\left(i\right)\). 
\(\tilde{G}\) induces a user causal adjacency vector \(\tilde{\mathbf{A}}_{u}\) and an item causal adjacency vector \(\tilde{\mathbf{A}}_{v}\), which specify the adjacent neighbors of a user node \(u\) and an item node \(v\), respectively. Each element \(\tilde{\mathbf{A}}_{u}^{j}=1\) if \(j\in pa(u)\), otherwise, \(\tilde{\mathbf{A}}_{u}^{j}=0\). Similarly, \(\tilde{\mathbf{A}}_{v}^{j}=1\) if \(j\in pa(v)\)._ **Definition 2** (Structural Causal Model): _A structural causal model (SCM) [37, Ch. 7]\(\mathcal{M}=\langle\mathcal{V},Z,\mathcal{F},P(Z)\rangle\) is the mathematical form of the causal graph \(\hat{G}\) that includes a collection of structural equations \(\mathcal{F}\) on endogenous variables \(\mathcal{V}\) and a distribution \(P(Z)\) over exogenous variables \(Z\). Each structural equation \(f_{i}\in\mathcal{F}\) for a variable \(i\in\mathcal{V}\) is a mapping from \(i\)'s parents and connected exogenous variables to \(i\):_ \[i\gets f_{i}\left(pa(i),Z_{i}\right),Z_{i}\sim P(Z) \tag{2}\] _where \(pa(i)\subseteq\mathcal{V}\backslash i\) is \(i\)'s parents from the causal graph \(\tilde{G}\). \(Z_{i}\in Z\) is a set of exogenous variables connected with \(i\)._ **Definition 3** (Intervention): _An intervention [36, Def. 2] is operated with the do-operator \(do(i=x)\), which forces a variable \(i\in\mathcal{V}\) to take the value \(x\). \(do(i)\) introduces an independence of the intervened node \(i\) to its causal parents. i.e., \(i\perp\!\!\!\perp pa(i)\)._ Intervention lies at the core of causal modeling as suggested by Rubin et al. [38]. Given a SCM \(\mathcal{M}\), an intervention is to force a variable \(i\in\mathcal{V}\) to take a specific value \(x\) in order to observe the effect on another variable. Through intervention, we can determine the causal relationship between endogenous variables. For instance, in the recommendation, we want to determine the effect of a particular recommendation (e.g., a video) on user behavior (e.g., click). We can intervene by assigning this recommendation to users, and observe users' behaviors before and after interventions. If users who received the recommendation are more likely to click, we can conclude that the recommendation has a positive causal effect on user behaviors. As such, interventions allow us to determine the true causal effect by intervening to recommend items, instead of passively observing user-item correlations in training data. ## IV Problem Formulation We put forward the causal graph for causality-aware graph-based recommendations. We then formulate the generation process of recommendations based on structural equations under the causal graph. ### _A Causal View of Recommendation_ Early CF resorts to user-item associative matching by assuming the causal graph in Figure 2 (a). They typically assume \(P(Y=1\mid u,v)\propto\mathbf{u}^{\top}\mathbf{v}\), where \(\mathbf{u}\) and \(\mathbf{v}\) are user and item latent factors. Graph CF (GCF), as shown in Figure 2 (b), considers auxiliary data \(Z_{u}\) and \(Z_{v}\) (could be hidden) and the inner connections of users and items from their neighbors to model more complex user behavior patterns. They first derive dense embedding vectors (i.e., \(E\)) for users and items, then use these embeddings to infer user preferences. 
They assume \(P(Y=1\mid u,v)\propto E=\operatorname{NN}\left(\operatorname{agg}\left(u,z_{u}, \operatorname{msg}\left(\mathcal{N}_{u}\right)\right),\operatorname{agg}\left( v,z_{v},\operatorname{msg}\left(\mathcal{N}_{v}\right)\right)\right)\), where \(\mathcal{N}_{u}\) and \(\mathcal{N}_{v}\) are neighbor sets for users and items, respectively; \(\operatorname{NN}\) is the representation learning network (e.g., GCN), \(\operatorname{agg}\) and \(\operatorname{msg}\) are the aggregation and message-passing operations, respectively. Both Figure 2 (a) and (b) assume the co-occurrence of users and items is independent in the observational data, i.e., there is no edge \(U\to V\) or \(V\to U\). However, this assumption is unrealistic in the real world because user behaviors are influenced by the recommended items for various reasons. For instance, users may be more likely to click the items if they are recommended [26]. Besides, the Fig. 2: Paradigms of user preference modeling in a class of CF models: (a) Early CF (b) Graph CF, and (c) Causality-aware graph CF. exposure of items is determined by user preferences estimated from the recommendation model [8]. Thus, it is necessary to model the influence of users on items and vice versa, as shown in Figure 2 (c), to achieve better user preference modeling. We thus use the causal graph defined in Figure 2 (c) for user preference modeling. The causal graph induces a structural causal model, with structural equations defined as: \[\mathcal{F}(\mathcal{V},Z):=\left\{\begin{array}{l}U\gets f_{U}\left(U,V,Z_{u}\right)\\ V\gets f_{V}\left(U,V,Z_{v}\right)\\ E\gets f_{E}(U,V)\\ Y\gets f_{Y}(E)\end{array}\right. \tag{3}\] where \(\{U,V,E,Y\}\in\mathcal{V}\) are endogenous variables in the recommendation. \(f_{U}\), \(f_{V}\), \(f_{E}\) and \(f_{Y}\) are the structural equations that specify the causal modeling of \(U\) (i.e., user), \(V\) (i.e., item), \(E\) (i.e., representation) and \(Y\) (i.e., recommendation), respectively. For example, user node \(u\) whose causal mechanism is modeled by \(f_{U}\) is characterized by the structural equation \(f_{u}\). Such a structural equation models the direct causal relation from the set of causes \(pa(u)\) to user node \(u\) accounting for the effects of \(Z_{u}\) as indicated by Eq. (2). The ability to perform interventions lays a foundation for Eq. (3), as interventions enable estimating the causal effects between endogenous variables. For example, by using the do-operation \(do(\cdot)\) on users, we can estimate the causal effect of user influence on items (i.e., \(U\to V\)) by modeling \(P(y\mid v,do(u))\). Also, we can estimate the influence of items on users (i.e., \(V\to U\)) using the \(u\)-specific causal effect \(P(y\mid u,do(v))\), instead of fitting users' historical interactions by modeling \(P(y\mid u,v)\) without accounting for user-item causal relations. As such, we could model user-item causal relations to allow causality-aware graph-based recommendations. ### _Causality-aware Recommendation Generative Process_ We now present the generative process of causality-aware graph-based recommendations. The generative process is guided by the structural equations under the causal graph (cf. Eq. (3)) to capture causal relations in graph-based recommendations. In particular, we first assume the unobserved exogenous variables of users and items in Eq. 
(3) are drawn from a standard Gaussian prior, denoted as \(d\)-dimension latent vectors \(\mathbf{Z}_{u}\) and \(\mathbf{Z}_{v}\) for exogenous variables \(Z_{u}\) and \(Z_{v}\), respectively. For each user \(u\), we calculate the user representation \(\mathbf{u}\) based on latent vectors of user exogenous variables \(\mathbf{Z}_{u}\) and neighbor information \(f_{\varphi}(U\mid U,V)\) propagated by its connected users and items. Note that we enable the neighbor information \(f_{\varphi}(U\mid U,V)\) to capture the causal relations between neighboring nodes and the target node, and thus propose a causality-aware message passing operation that defines \(f_{\varphi}\) as a feedforward neural network with parameter \(\varphi\). \(f_{\phi}\) is a sum-aggregator for message aggregation to give the distribution of \(\mathbf{u}\). Analogously, item representation \(\mathbf{v}\) is given by aggregating \(\mathbf{Z}_{v}\) and neighbor information \(f_{\varphi}(V\mid U,V)\) through \(f_{\phi}\). The latent representation \(\mathbf{u}\) and \(\mathbf{v}\) are transformed via a non-linear function \(f_{\theta_{3}}\in\mathbb{R}^{I}\). The output of \(f_{\theta_{3}}\) is normalized via a softmax function to produce a preference probability vector \(\mathbf{e}\in\mathbb{S}^{I-1}\), where \(\mathbb{S}^{I-1}\) is an \((I-1)\)-simplex with \((I-1)\) as the size of \(\mathbf{e}\) and \(I\) is the total item number. Given the total number of interactions \(N=\sum_{i}y_{ui}\) from user \(u\), the observed user interaction vector \(\mathbf{y}\) follows multinomial priors based on the distribution of \(\mathbf{e}\). Formally, \[\left\{\begin{array}{l}\mathbf{Z}_{u}\sim\mathcal{N}\left(0,\mathbf{I}_{K} \right),\mathbf{Z}_{v}\sim\mathcal{N}\left(0,\mathbf{I}_{K}\right),\\ \mathbf{u}\propto f_{U}=\left\{f_{\phi}\left(\mathbf{Z}_{u},f_{\varphi}(U \mid U,V)\right)\right\}_{\theta_{1}},\\ \mathbf{v}\propto f_{V}=\left\{f_{\phi}\left(\mathbf{Z}_{v},f_{\varphi}(V \mid U,V)\right)\right\}_{\theta_{2}},\\ \mathbf{e}\propto f_{E}=\operatorname{softmax}\left(f_{\theta_{3}}\left( \mathbf{u},\mathbf{v}\right)\right),\\ \mathbf{y}\sim f_{Y}=\operatorname{Mult}\left(N,\mathbf{e}\right)\end{array}\right. \tag{4}\] The generative process in Eq. (4) ensures the causality-aware graph learning for recommendations by modeling causal relations induced by structural equations in Eq. (3). Later, we will use this generative process to guide our model framework design for robust recommendations. ## V Methodology We now introduce our _Causal Neural Graph Collaborative Filtering (CNGCF)_ framework that delivers causality-aware graph-based recommendations. We follow Eq. (4) to design each of the components in CNGCF, i.e., implementing \(f_{U}\), \(f_{V}\), \(f_{E}\) and \(f_{Y}\), respectively. We use variational autoencoders (VAEs) [39] to approximate the intractable posterior distributions of parameters from the four structural equations. In particular, as shown in Figure 3, CNGCF devises two major components based on the VAE structure: 1) The causal graph encoder includes a semi-implicit generative model, a user encoder and an item encoder. The semi-implicit generative model implements a causality-aware message passing to model causal relation dependencies between nodes. The user encoder and item encoder implement \(f_{U}\) and \(f_{V}\) to output user representation \(\mathbf{u}\) and item representation \(\mathbf{v}\), respectively. 
2) The collaborative filtering decoder implements \(f_{E}\) to construct the user preference vector \(\mathbf{e}\) through collaborative filtering, from which the user's interactions \(f_{Y}\) are sampled. ### _Semi-implicit Inference for Causal Graph Encoder_ Our causal graph encoder aims to learn user and item representations \(\mathbf{u}\) and \(\mathbf{v}\) by using a user encoder \(q_{\theta_{1}}\left(\mathbf{u}\mid\mathbf{Z}_{u},\mathbf{d}_{u},\tilde{\mathbf{A}}_{u}\right)\) and an item encoder \(q_{\theta_{2}}\left(\mathbf{v}\mid\mathbf{Z}_{v},\mathbf{d}_{v},\tilde{\mathbf{A}}_{v}\right)\). However, modeling \(q_{\theta_{1}}\) and \(q_{\theta_{2}}\) is not easy, since there are inherent causal relation dependencies between a user/item node and its adjacent neighbors. Besides, as indicated by Eq. (4), those causal relations should be modeled with a neural network \(f_{\varphi}\) as dependency terms of structural equations. Thus, the true posteriors of \(q_{\theta_{1}}\) and \(q_{\theta_{2}}\) do not follow Gaussian distributions due to the existence of complex causal relation dependencies parameterized by an additional neural network. As a result, traditional variational inference [39] that directly parameterizes user and item representations as simple, tractable Gaussian random vectors is not applicable in our setting. To approximate complex posteriors, we use semi-implicit variational inference (SIVI) [40], which models complex distributions through the use of implicit distributions. #### Iii-B1 Semi-implicit Generative Model SIVI approximates additional implicit posteriors with a generative model and integrates them with variational encoders to enable flexible mixture modeling of complex posteriors. Inspired by SIVI, we devise a semi-implicit generative model on top of the user and item encoders to model implicit posteriors. Notably, our semi-implicit generative model includes a causality-aware message passing to handle neighboring node dependencies of user and item nodes in the causal graph. As a result, our causal graph encoder not only captures causal relation dependencies, but also naturally allows the mixture modeling of complex posterior distributions. Formally, the semi-implicit generative model \(f_{\{\varphi,\phi\}}\) equips causality-aware message passing with a neural network \(f_{\varphi}\) and an aggregation operator \(f_{\phi}\) to learn hidden factors \(\mathbf{h}_{u}\) and \(\mathbf{h}_{v}\) for a user \(u\) and an item \(v\). Then, the user encoder \(q_{\theta_{1}}\) takes \(\mathbf{h}_{u}\) as input and outputs \(\mu_{u}\) and \(\sigma_{u}\), from which the user representation \(\mathbf{u}\) is sampled. Analogously, the item encoder uses \(\mathbf{h}_{v}\) for \(q_{\theta_{2}}\) to calculate the item representation \(\mathbf{v}\): \[\begin{split}\mathbf{h}_{u}&\sim f_{\{\varphi,\phi\}},\quad\mathbf{u}\sim q_{\theta_{1}}(\mathbf{u}\mid\mathbf{h}_{u})=\mathcal{N}\left(\mathbf{u}\mid\mu_{u},\operatorname{diag}\left(\sigma_{u}^{2}\right)\right)\\ \mathbf{h}_{v}&\sim f_{\{\varphi,\phi\}},\quad\mathbf{v}\sim q_{\theta_{2}}(\mathbf{v}\mid\mathbf{h}_{v})=\mathcal{N}\left(\mathbf{v}\mid\mu_{v},\operatorname{diag}\left(\sigma_{v}^{2}\right)\right)\end{split} \tag{5}\] where \(\{\varphi,\phi\}\) parameterize the semi-implicit generative model, and \(\theta_{1}\) and \(\theta_{2}\) are the parameters of the user encoder and the item encoder, respectively. 
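To illustrate Eq. (5), the PyTorch sketch below builds a Gaussian encoder over semi-implicit hidden factors using the reparameterization trick, with the \(\mu\)/\(\sigma\) heads following the one-layer form used later in Eq. (10). The stand-in for \(f_{\{\varphi,\phi\}}\) is a plain MLP over the node feature, an averaged neighbor message and the exogenous variable; the actual causality-aware message passing (Eqs. (6)-(8)) is detailed next, and all dimensions and helper names here are illustrative assumptions.

```python
import torch
import torch.nn as nn

class GaussianEncoder(nn.Module):
    """Maps a semi-implicit hidden factor h to N(mu, diag(sigma^2)), as in Eq. (5)."""
    def __init__(self, hidden_dim: int, latent_dim: int):
        super().__init__()
        self.mu_head = nn.Linear(hidden_dim, latent_dim)
        self.sigma_head = nn.Linear(hidden_dim, latent_dim)

    def forward(self, h: torch.Tensor):
        mu = torch.relu(self.mu_head(h))                    # mu = ReLU(W h + b)
        sigma2 = torch.exp(torch.relu(self.sigma_head(h)))  # sigma^2 = exp(ReLU(W h + b))
        # Reparameterization: sample a representation ~ N(mu, diag(sigma^2)).
        sample = mu + torch.sqrt(sigma2) * torch.randn_like(mu)
        return sample, mu, sigma2

def semi_implicit_hidden(node_feat, neighbor_feats, exogenous, mlp):
    """Toy stand-in for the semi-implicit generative model: averaged neighbor message + MLP."""
    msg = neighbor_feats.mean(dim=0, keepdim=True)              # aggregated neighbor message
    return mlp(torch.cat([node_feat, msg, exogenous], dim=-1))  # hidden factor h

feat_dim, exo_dim, hidden_dim, latent_dim = 8, 4, 16, 64
mlp = nn.Sequential(nn.Linear(2 * feat_dim + exo_dim, hidden_dim), nn.ReLU())
user_encoder = GaussianEncoder(hidden_dim, latent_dim)

d_u = torch.randn(1, feat_dim)        # user features d_u
neighbors = torch.randn(5, feat_dim)  # features of 5 causal neighbors of u
Z_u = torch.randn(1, exo_dim)         # exogenous variable Z_u
h_u = semi_implicit_hidden(d_u, neighbors, Z_u, mlp)
u, mu_u, sigma2_u = user_encoder(h_u)  # user representation u ~ q_{theta_1}(u | h_u)
```

An item encoder \(q_{\theta_{2}}\) would be constructed in exactly the same way from the item hidden factor \(\mathbf{h}_{v}\).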
Next, we detail the semi-implicit generative model that learns \(\mathbf{h}_{u}\) and \(\mathbf{h}_{v}\) by using two key components: * Causality-aware message passing: Causality-aware message passing models each of the dependency terms \(f_{\varphi}(i,j)\) for a node \(i\) and its neighbor \(j\) within a structural equation, such that the learned messages themselves become a descriptor of the causal relation for \((i\gets j)\). In particular, we define \(f_{\varphi}(i,j)\) as a learnable multi-layer perception (MLP) to capture the causal relations. Formally, for a user \(u\), given its features \(\mathbf{d}_{u}\) and its causal adjacency vector \(\tilde{\mathbf{A}}_{u}\), the messages from \(u\)'s neighbors \(j\) within \(\tilde{\mathbf{A}}_{u}\) is given by: \[\begin{split}\mathbf{m}_{u}^{(l-1)}=f_{\varphi}(u,j)=\sum_{j\in \mathcal{N}_{u}\tilde{\mathbf{A}}_{u}}\mathbf{h}_{j}^{(l-1)}\cdot\mathrm{MLP}^ {(l)}\left(\|\mathbf{h}_{u}^{(l-1)},\mathbf{h}_{j}^{(l-1)}\right)\\ =\mathrm{ReLU}\left(\mathbf{W}_{\varphi}^{(l)}\left(\|\mathbf{h} _{u}^{(l-1)},\mathbf{h}_{j}^{(l-1)}\right)\right),\text{ for }l\in\{1,\cdots,L\} \end{split}\] (6) where \(\mathbf{m}_{u}^{(l-1)}\) is the neighbor message calculated for user \(u\) at the \(l-1\)-th graph learning layer 2. \(\mathcal{N}_{u}\) is a set of neighbors adjacent to user \(u\) within \(u\)'s causal adjacency vector \(\tilde{\mathbf{A}}_{u}\). \(\mathbf{h}_{j}^{(l-1)}\) and \(\mathbf{h}_{u}^{(l-1)}\) are hidden factors for a neighbor \(j\) and the user \(u\) at the \(l-1\)-th layer 3. \(\mathbf{W}_{\varphi}\) is the learnable weight matrix for \(f_{\varphi}\) and \(\|\) denotes column-wise concatenation. Analogously, we can calculate the neighbor message \(\mathbf{m}_{v}\) for an item \(v\) follows Eq. (6). Footnote 2: The neighbor message at the 0-th layer, i.e., \(\mathbf{m}_{u}^{(0)}\), is initialized from a normal distribution. * Aggregation: At each graph learning layer \(l\), we perform aggregation operation on the messages \(\mathbf{m}_{u}\) and user exogenous variables \(\mathbf{Z}_{u}\) to obtain the hidden factor \(\mathbf{h}_{u}^{(l)}\) for \(u\): \[\mathbf{h}_{u}^{(l)}=\sigma\left(\mathbf{W}_{\phi}^{(l)}\left(\mathbf{h}_{u}^ {(l-1)}\|\mathbf{m}_{u}^{(l-1)},\mathbf{Z}_{u}\right)\right)\] (7) where \(\mathbf{h}_{u}^{(l)}\) is the learned hidden factor for \(u\) at the \(l\)-th graph learning layer. \(\sigma(\cdot)\) is the aggregation function chosen as \(\mathrm{sum}\), following [41]; \(\|\) is the concatenation operation. \(\mathbf{W}_{\phi}\) is the weight for aggregation. At the \(0\)-th layer, \(u\)'s hidden factors \(\mathbf{h}_{u}^{(0)}\) are initialized as the user features \(\mathbf{d}_{u}\). Similarly, we can calculate the hidden factors \(\mathbf{h}_{v}^{(l)}\) for an item \(v\) at the \(l\)-th graph learning layer follows Eq. (7). Having obtained the hidden factors \(\mathbf{h}_{u}^{(l)}\) for user \(u\) and \(\mathbf{h}_{v}^{(l)}\) for item \(v\) at each graph learning layer \(l\in\{1,\cdots,L\}\), we adopt layer-aggregation mechanism [42] to concatenate vectors at all layers into a single vector: \[\mathbf{h}_{u}=\mathbf{h}_{u}^{(1)}+\cdots+\mathbf{h}_{u}^{(L)},\quad\mathbf{h}_ {v}=\mathbf{h}_{v}^{(1)}+\cdots+\mathbf{h}_{v}^{(L)}\] (8) Fig. 
3: CNGCF framework: Causal graph construction prepossess a user-item interaction graph by using the causal relations under our defined causal graph; Causal graph encoder models the causal relations under the causal graph-structured data using a semi-implicit generative model, and outputs user and item representations with a user encoder and an item encoder; Collaborative filtering decoder uses collaborative filtering to construct preference vectors based on user and item representations. By performing layer aggregation, we capture higher-order connectivities of node pairs across different graph learning layers. Finally, our semi-implicit generative model outputs \(\mathbf{h}_{u}\) and \(\mathbf{h}_{v}\) from Eq. (8) as the semi-implicit posteriors of users and items for the latter variational encoders. #### Iii-A2 User and Item Encoder Given semi-implicit posterior \(\mathbf{h}_{u}\) for a user \(u\), the user encoder outputs the mean and variance in \(\mathcal{N}\left({{{\mu}_{u}},{\mathop{\mathrm{diag}}\nolimits}\left({{ \sigma_{u}^{2}}}\right)}\right)\), from which user representation \(\mathbf{u}\) is sampled: \[{{q_{\theta_{1}}}}\left({\mathbf{u}}\mid{\mathbf{h}_{u}}\right)=\mathcal{N} \left({\mathbf{u}}\mid{{{\mu}_{u}},{\mathop{\mathrm{diag}}\nolimits}\left({{ \sigma_{u}^{2}}}\right)}\right) \tag{9}\] where \({{{\mu}_{u}}}\) and \({\mathop{\mathrm{diag}}\nolimits}\left({{\sigma_{u}^{2}}}\right)\) are the mean and variance for user \(u\), which are obtained by sending \(u\)'s hidden factors \(\mathbf{h}_{u}\) to a one-layer neural network with activation function \({\mathop{\mathrm{ReLU}}\nolimits}(x)=\max(0,x)\): \[{{{\mu}_{u}}}={\mathop{\mathrm{ReLU}}\nolimits}\left({{{\mathbf{W}}_{{{ \theta_{1}}}}^{{{{\mu}_{u}}}}}{{\mathbf{h}_{u}}}+b}\right),\quad{{\sigma_{u}^{ 2}}}=\exp\left({{\mathop{\mathrm{ReLU}}\nolimits}\left({{{\mathbf{W}}_{{{ \theta_{1}}}}^{{{{\sigma}_{u}}}}}{{\mathbf{h}_{u}}}+b}\right)}\right) \tag{10}\] where \(\mathbf{W}_{{{\theta_{1}}}}=\{\mathbf{W}_{{{\theta_{1}}}}^{{{{\mu}_{u}}}},{{ \mathbf{W}}_{{{\theta_{1}}}}^{{{{\sigma}_{u}}}}}\}\) is a hidden-to-output weight matrix for the user encoder \({{{q_{{\theta_{1}}}}}}\). Analogously, the item encoder follows the same paradigm as the user encoder to generate the mean and variance for item \(v\) based on \(v\)'s hidden factors \(\mathbf{h}_{v}\): \[{{q_{{\theta_{2}}}}}\left({\mathbf{v}}\mid{\mathbf{h}_{v}}\right)=\mathcal{N} \left({\mathbf{v}}\mid{{{\mu}_{v}},{\mathop{\mathrm{diag}}\nolimits}\left({{ \sigma_{u}^{2}}}\right)}\right), \tag{11}\] where \(\mathbf{W}_{{{\theta_{2}}}}=\{\mathbf{W}_{{{\theta_{1}}}}^{{{{\mu}_{u}}}},{{ \mathbf{W}}_{{{\theta_{1}}}}^{{{{\sigma}_{v}}}}}\}\) is the weight matrix for the item encoder \({{{q_{{\theta_{2}}}}}}\). ### _Collaborative Filtering Decoder_ Collaborative filtering is largely dominated by latent factor models, as evidenced by Koren et al. [43]. These models involve mapping users and items into latent factors in order to estimate the preference scores of users towards items. We extend latent factor-based collaborative filtering into our decoder for modeling the user preference \(\mathbf{e}\), which is a probability vector over the entire item set for recommendations. The predicted user interaction vector \(\mathbf{y}\) is assumed to be sampled from a multinomial distribution with probability \(\mathbf{e}\). 
Formally, we define a generative function \({{{f_{{\theta_{3}}}}}}(\mathbf{u},\mathbf{v})\) recovering classical latent factor-based CF to approximate user preference vector \(\mathbf{e}\): \[\mathbf{e}={{{f_{{\theta_{3}}}}}}(\mathbf{u},\mathbf{v})=\mathbf{u}^{\top} \mathbf{v} \tag{12}\] where \(\mathbf{u}\) and \(\mathbf{v}\) are latent factors drawn from our user and item encoder in Eq. (9) and Eq. (11), respectively. Then, the decoder \({{p_{{{\theta_{3}}}}}}\left({\mathbf{e}}\mid{\mathbf{u}},\mathbf{v}\right)\) produces interaction probability \(\mathbf{y}\) by approximating a logistic log-likelihood: \[{{\log{p_{{{\theta_{3}}}}}}}\left({\mathbf{y}}\mid{\mathbf{e}}\right)=\sum \limits_{v}{{{y_{uv}}}\log\sigma\left({\mathbf{e}}\right)}+\left(1-{{y_{uv}}} \right)\log\left(1-\sigma\left({\mathbf{e}}\right)\right) \tag{13}\] where \({{y_{uv}}}\) is the historical interaction between \(u\) and \(v\), e.g., click. \(\sigma(\mathbf{e})=1/(1+\exp(-\mathbf{e}))\) is the logistic function. ### _Optimization with Counterfactual Instances_ We wish our CNGCF to be robust to unseen (unknown) user preference shift to further enhance our recommendation robustness. Catching user preferences is at the core of any recommendation model [44]; however, user preferences are dynamic and may change over time [25, 45]. For example, a user may once love items with the brand been _Nike_ but changes his taste for liking _Adidas_. Such a user preference shift can be captured by actively manipulating user preference through interventions on the user preference vector \(\mathbf{e}\), i.e., \(do(\mathbf{e}=\mathbf{e}^{\prime})\). The data after interventions is termed as counterfactual instances [46] that, if augmented to original training instances, increase the model robustness to unseen interventions. Following this intuition, we optimize our CNGCF by considering two different data scenarios, i.e., the clean data scenario in which our CNGCF accesses the data without interventions, and the counterfactual data scenario in which the data is generated by known interventions on user preference vectors. Formally, for the clean data scenario, assuming that CNGCF observes clean data \(\mathbf{D}\) only during training. In this case, we retain the original value \(\mathbf{o}\) of user preference \(\mathbf{e}\) by \(do(\mathbf{e}=\mathbf{o})\). Then, CNGCF is trained by maximizing the likelihood function \(\log{{p_{{{\theta_{3}}}}}}\left({\mathbf{y}}\mid{\mathbf{e}},do(\mathbf{e}= \mathbf{o})\right)\). Since this marginal distribution is intractable [39, 47], we instead maximize the intervention evidence lower-bound (ELBO) with \(do(\mathbf{e}=\mathbf{o})\), i.e. \(\max_{{{\theta_{1}}},{{\theta_{2}}},{{\theta_{3}}}}\mathop{\mathrm{ELBO}} \nolimits(\mathbf{D},do(\mathbf{e}=\mathbf{o})\). 
In particular, \[\begin{split}&\mathop{\mathrm{ELBO}}\nolimits(\mathbf{D},do(\mathbf{e}= \mathbf{o}))=\\ &\mathop{\mathbb{E}}\nolimits_{\theta}\left[\log\frac{{{p_{{{ \theta_{3}}}}}\left({\mathbf{y}}\mid{\mathbf{e}},do(\mathbf{e}=\mathbf{o}) \right)}p(\mathbf{u})p(\mathbf{v})}{{{q_{{\theta_{1}}}}\left({\mathbf{u}}\mid{ \Xi},do(\mathbf{e}=\mathbf{o})\right){{q_{{\theta_{2}}}}}\left({\mathbf{v}} \mid{\Xi},do(\mathbf{e}=\mathbf{o})\right)}\right]\\ =&\mathbb{E}_{\theta}\left[\log{{p_{{{\theta_{3}}}}} \left({\mathbf{y}}\mid{\mathbf{e}},do(\mathbf{e}=\mathbf{o})\right)}\right]\\ &-\mathop{\mathrm{KL}}\nolimits\left({{{q_{{\theta_{1}}}}}\left({ \mathbf{u}}\mid{\Xi}\right)\left\|{{{p}}\left(\mathbf{u}\right)}\right.,{{q_{{ \theta_{2}}}}}\left({\mathbf{v}}\mid{\Xi}\right)\left\|{{{p}}\left(\mathbf{v} \right)}\right)}\end{split} \tag{14}\] where \(\Xi\) represents required conditions for the conditional probability distributions of \({{q_{{\theta_{1}}}}}\), \({{q_{{\theta_{2}}}}}\) and \({{p_{{{\theta_{3}}}}}}\), i.e., \(\Xi=\{\mathbf{Z}_{u},\mathbf{d}_{u},\mathbf{\hat{A}}_{u}\}\) for \({{q_{{\theta_{1}}}}}\), \(\Xi=\{\mathbf{Z}_{v},\mathbf{d}_{v},\mathbf{\hat{A}}_{v}\}\) for \({{q_{{\theta_{2}}}}}\) and \(\Xi=\{\mathbf{u},\mathbf{v}\}\) for \({{p_{{{\theta_{3}}}}}}\). \(\theta=\{{{{\theta_{1}}}},{{\theta_{2}}},{{\theta_{3}}}\}\) is a set of model parameters to be trained and \(\mathop{\mathrm{KL}}\nolimits(Q\|P)\) is KL-divergence between distributions \(Q\) and \(P\). For the counterfactual data scenario, we assume CNGCF accesses counterfactual data \(\mathbf{D}^{\prime}\) generated by known interventions \(do(\mathbf{e}=\mathbf{e}^{\prime})\) on user preference vectors. The counterfactual vectors \(\mathbf{e}^{\prime}\) hold the same dimension with \(\mathbf{e}\) and are drawn from a random distribution. Then, the ELBO of CNGCF with the counterfactual data is, \[\begin{split}&\mathop{\mathrm{ELBO}}\nolimits(\mathbf{D}^{ \prime},do(\mathbf{e}=\mathbf{e}^{\prime}))=\mathop{\mathbb{E}}\nolimits_{ \theta}\left[\log{{p_{{{\theta_{3}}}}}\left({\mathbf{y}}\mid{\mathbf{e}},do( \mathbf{e}=\mathbf{e}^{\prime})}\right)\right]\\ &-\mathop{\mathrm{KL}}\nolimits\left({{{q_{{\theta_{1}}}}}\left({ \mathbf{u}}\mid{\Xi}\right)\left\|{{{p}}\left(\mathbf{u}\right)}\right.,{{q_{{ \theta_{2}}}}}\left({\mathbf{v}}\mid{\Xi}\right)\left\|{{{p}}\left(\mathbf{v} \right)}\right)\end{split}\] ( ## VI Experiments We thoroughly evaluate the proposed CNGCF for the recommendation task to answer the following research questions: * **RQ1:** How does CNGCF perform as compared with state-of-the-art recommendation methods? * **RQ2:** How do different components impact CNGCF's performance? * **RQ3:** How do parameters in the causal graph encoder affect CNGCF? ### _Experimental Settings_ We conduct our experiments on three real-world and one synthetic datasets to evaluate the effectiveness of CNGCF. #### Vi-A1 Datasets We use three benchmark recommendation datasets from Amazon Product Reviews 4[49] and Epinions 5[50] Footnote 4: [https://nijianmo.github.io/amazon/index.html](https://nijianmo.github.io/amazon/index.html) Footnote 5: [http://www.cse.msu.edu/](http://www.cse.msu.edu/) tangilitrust.html * **Amazon-Beauty** and **Amazon-Appliances**: two sub-datasets selected from Amazon Product Reviews, which record large crawls of user reviews and product metadata (e.g., _brand_). Following [51], we use _brand_ and _price_ to build item features since other features (e.g., _category_) are too sparse and contain noisy information. 
We build item neighbors based on co-purchased and co-viewed information from the product metadata. The co-purchased and co-viewed information records item-to-item relationships, i.e., a user who bought/viewed item A also bought/viewed item B, reflecting the relations between item A and B. We build user neighbors based on similar interactions from the review data, i.e., users who reviewed the same item are neighbors for each other. * **Epinions**: a social recommendation dataset recording social relations between users. We convert user/item features from the dataset into one-hot embeddings. We use social relations to build user neighbors, i.e., a user's social friends are the neighbors of the user. Besides, items bought by the same user are neighbors to each other. We follow [30] to build the synthetic dataset, which assumes that synthetic user-item interactions follow the causal relations in a causal graph. In particular, given the causal graph in Figure 2(c), we construct the **Synthetic** dataset in four steps: 1. Feature generation: We simulate \(|\mathcal{U}|=1,000\) users and \(|\mathcal{I}|=1,000\) items, where each user has one discrete feature (_gender_) and one continuous feature (_income_), while each item has three discrete features, i.e., _type_, _brand_ and _location_. For discrete features, their values in \(\{0,1\}\) are sampled from Bernoulli distributions. We sample continuous features from random sampling, in which random feature values are chosen from the minimum (i.e., \(0\)) and the maximum (i.e., \(1000\)) feature values. For both users and items, we assume four exogenous variables (i.e., \(Z_{u}\) and \(Z_{v}\)) drawn from Gaussian distribution \(\mathcal{N}(0,1)\). 2. Causal neighbor sampling: As the causal graph gives causal relations \(U\to U\) and \(V\to V\), we synthesize the causal relations by building user/item causal neighbors, i.e., the connected users/items, for the target user/item. In particular, we set the causal neighbor number \(N_{c}=10\). We sample user causal neighbors (\(U\to U\)) through random sampling, in which a user's causal neighbors are randomly chosen from the user set \(\mathcal{U}\). For item causal neighbor sampling (\(V\to V\)), we first convert items with their features generated in the first step into dense vectors through item2vec [52], then calculate the Euclidean distances between two items. Those items that have the \(N_{c}\) smallest Euclidean distances with the target item are chosen as causal neighbors for the target item. 3. User preference estimation: For each user \(u\) and item \(v\), the user preference \(\mathbf{u}\in\mathbb{R}^{d}\) towards item property \(\mathbf{v}\in\mathbb{R}^{d}\) is generated from a multi-variable Gaussian distribution \(\mathcal{N}(0,\mathbf{I})\), where \(d\) and \(\mathbf{I}\) represent the vector size and unit matrix, respectively. Then, the preference score \(y_{uv}\) between user \(u\) and item \(v\) is calculated by the inner product of \(\mathbf{u}\) and \(\mathbf{v}\). 4. User interaction sampling: Once we obtain a user \(u\)'s preference scores for all items (i.e., \(\mathcal{I}\)), we normalize these preference scores by \(\frac{\exp(r_{i})}{\sum_{i^{\prime}\in\mathcal{I}}^{d}\exp(r_{i^{\prime}})}\). We select items with \(k\)-top scores as the interactions for the user \(u\in\mathcal{U}\), where \(k\) is a constant chosen randomly from range \([20,100]\). For the three real-world datasets, we regard user interactions with overall ratings above \(3.0\) as positive interactions. 
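As a concrete illustration, the NumPy sketch below walks through the four synthetic-generation steps above. The stand-in for the item2vec embedding, the feature dimensions and the random seed are assumptions for illustration rather than the exact construction behind the reported dataset.

```python
import numpy as np

rng = np.random.default_rng(0)
n_users, n_items, d, n_causal = 1000, 1000, 16, 10

# Step 1 -- feature generation: discrete features ~ Bernoulli(0.5), continuous
# income sampled from [0, 1000]; four exogenous variables ~ N(0, 1) per node.
user_gender = rng.binomial(1, 0.5, size=n_users)
user_income = rng.uniform(0, 1000, size=n_users)
item_discrete = rng.binomial(1, 0.5, size=(n_items, 3))  # type, brand, location
Z_u = rng.normal(0, 1, size=(n_users, 4))
Z_v = rng.normal(0, 1, size=(n_items, 4))

# Step 2 -- causal neighbor sampling: random user neighbors; item neighbors are
# the N_c items with the smallest Euclidean distance between item vectors
# (random vectors here stand in for item2vec embeddings of item features).
user_neighbors = np.stack([rng.choice(n_users, n_causal, replace=False) for _ in range(n_users)])
item_vec = rng.normal(size=(n_items, d))
sq_norm = (item_vec ** 2).sum(axis=1)
dist2 = sq_norm[:, None] + sq_norm[None, :] - 2 * item_vec @ item_vec.T
np.fill_diagonal(dist2, np.inf)
item_neighbors = np.argsort(dist2, axis=1)[:, :n_causal]

# Step 3 -- preference estimation: u, v ~ N(0, I); score y_uv = <u, v>.
U = rng.multivariate_normal(np.zeros(d), np.eye(d), size=n_users)
V = rng.multivariate_normal(np.zeros(d), np.eye(d), size=n_items)
scores = U @ V.T

# Step 4 -- interaction sampling: softmax-normalize each user's scores and
# keep the top-k items, with k chosen randomly from [20, 100].
probs = np.exp(scores - scores.max(axis=1, keepdims=True))
probs /= probs.sum(axis=1, keepdims=True)
interactions = {u: np.argsort(-probs[u])[: rng.integers(20, 101)] for u in range(n_users)}
```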
For the synthetic dataset, we regard all user-item interactions as positive as they are top items selected based on users' preferences. We adopt a \(10\)-core setting, i.e., retaining users and items with at least ten interactions. The statistics of the four datasets are shown in Table I. For model training, we split both datasets into training, validation, and test sets by the ratio of 70%, 10%, and 20%. #### Vi-A2 Baselines We compare CNGCF with eight competitive recommendation methods. * **BPR**[16]: a well-known matrix factorization-based model with a pairwise ranking loss to enable recommendation learning from implicit feedback. * **NCF**[4]: extends the CF to neural network architecture. It maps users and items into dense vectors, then feeds user and item vectors into an MLP to predict user preference scores. * **MultiVAE**[47]: extends the CF to VAE architecture for implicit feedback modeling. It converts the CF learning process into a generative model and uses variational inference to model the distribution of the generative model. * **NGCF**[7]: a graph CF that incorporates two GCNs to learn user and item representations. The learned representations are passed to a matrix factorization to capture the collaborative signal for recommendations. * **VGAE**[39]: a representative graph learning method that extends VAE to handle graph-structured data. We use VGAE \begin{table} \begin{tabular}{l|c|c|c|c} \hline \hline Dataset & Synthetic & Amazon-Beauty & Amazon-Appliances & Epinions \\ \hline \# Users & 1,000 & 271,036 & 446,774 & 116,260 \\ \# Items & 1,000 & 29,735 & 27,888 & 41,269 \\ \# Interactions & 12,813 & 311,791 & 522,416 & 181,394 \\ \# Density & 0.0128 & 0.0039 & 0.0041 & 0.0038 \\ \hline \hline \end{tabular} \end{table} TABLE I: Statistics of the datasets. to obtain user and item representations and inner product those representations to predict user preference scores. * **GC-MC**[17]: a graph-based auto-encoder framework for matrix completion. The encoder is a GCN that produces user and item representations. The learned representations reconstruct the rating links through a bilinear decoder. * **LightGCN**[18]: a SOTA graph-based recommendation model that simplifies the GCN component. It includes the essential part in GCNs, i.e., neighbor aggregation, to learn user and item representations for collaborative filtering. * **CACF**[53]: a method that learns attention scores from individual treatment effect estimation. The attention scores are used as user and item weights to enhance the CF model. #### Iv-A3 Evaluation Metrics We use three Top-\(K\) recommendation evaluation metrics, i.e., Precision@\(K\), Recall@\(K\) and Normalized Discounted Cumulative Gain(NDCG)@\(K\). The three evaluation metrics measure whether the recommended Top-\(K\) items are consistent with users' preferences in their historical interactions. We report the average results with respect to the metrics over all users. The Wilcoxon signed-rank test [54] is used to evaluate whether the improvements against baselines are significant. #### Iv-A4 Parameter Settings We implement our CNGCF using Pytorch. The latent embedding sizes of neural networks for all neural-based methods are fixed as \(d=64\). The in-dimension and out-dimension of the graph convolutional layer in CNGCF, NGCF, VGAE, GC-MC and LightGCN is set as \(32\) and \(64\), respectively for graph learning. We apply a dropout layer on top of the graph convolutional layer to prevent model overfitting for all GCN-based methods. 
The Adam optimizer is applied to all methods for model optimization, where the batch size is fixed as 1024. The hyper-parameters of all methods are chosen by the grid search, including the learning rate \(l_{r}\) in \(\{0.0001,0.0005,0.001,0.005\}\), \(L_{2}\) norm regularization in \(\left\{10^{-5},10^{-4},\cdots,10^{1},10^{2}\right\}\), and the dropout ratio \(p\) in \(\{0.0,0.1,\cdots,0.8\}\). We set the maximum epoch for all methods as \(400\) and use the early stopping strategy, i.e., terminate model training when the validation Precision@10 value does not increase for 20 epochs. ### _Recommendation Performance (RQ1)_ We show the recommendation performance of our CNGCF and all baselines on the four datasets in Table II. By analyzing Table II, we have the following findings. * CNGCF consistently outperforms the strongest baselines on both synthetic and real-world datasets, achieving the best recommendation performance across all three evaluation metrics. In particular, CNGCF outperforms the strongest baselines by 23.4%, 7.0%, 34.3% and 5.7% in terms of Precision@10 on Synthetic, Amazon-Beauty, Amazon-Appliances and Epinions, respectively. Additionally, CNGCF improves Recall@10/NDCG@10 by 2.5%/3.8%, 8.4%/22.1%, 13.3%/35.9% and 10.6%/2.8% on the four datasets, respectively. The superiority of CNGCF can be attributed to two factors: the power of neural graph learning and the modeling of causality. Firstly, graph learning explicitly models the interactions between users and items as a graph, and uses graph convolutional networks to capture the non-linear relations from neighboring nodes. This allows graph learning to capture more complex user behavior patterns. Secondly, modeling causal relations allows us to identify the causal effects of different items on users, thus capturing true user preferences on items. By injecting causal modeling into graph representation learning, our CNGCF captures more precise user preferences to produce robust recommendations against baselines. * CNGCF achieves the most notable improvements (e.g., 35.9% for NDCG@10 and 43.8% for NDCG@20) on the Amazon-Appliances dataset, which is a large-scale dataset with a considerable amount of user behavior data that may be noisy and challenging to model. CNGCF's ability to inject causality into graph learning enables the model to surpass merely capturing spurious correlations among noisy data, leading to more accurate and reliable modeling of true user preferences. * NGCF that uses graph representation learning outperforms NCF without graph learning. This is because NGCF models user-item interactions as a graph, and uses graph convolutional networks to capture more complex user-user collaborative behavior to enhance recommendations. In contrast, NCF uses a multi-layer perception to learn user and item similarities, which captures only linear user-item correlations from the interaction matrix. Moreover, GC-MC and LightGCN outperform other graph learning-based baselines (i.e., NGCF, VGAE) in most cases. This is because GC-MC and LightGCN aggregate multiple embedding propagation layers to capture higher-order connectivity within the interaction graph. Similarly, our CNGCF incorporates layer aggregation within our causal graph encoder, enabling us to capture higher-order connectivity and produce better graph representations for improved recommendation performance. * CNGCF outperforms all graph learning-based baselines, including NGCF, VGAE, GC-MC and LightGCN. 
This is because CNGCF models causal relations within the graph learning process. Guided by the causality-aware recommendation generative process, CNGCF is able to inject causal relations under the structural causal model into the learning process of the graph convolutional network. This allows CNGCF to uncover the causal effect of items on users and capture user behavior patterns more accurately. ### _Study of CNGCF (RQ2)_ We start by exploring how replacing our causal graph encoder with other graph representation learning methods, i.e., naive GCN [32], Graphsage [33] and Pinsage [55], impact CNGCF's performance. We then analyze the influences of core components, including causality-aware message passing and counterfactual instance-aware ELBO. #### Iv-C1 **Effect of Causal Graph Encoder** The causal graph encoder plays a pivotal role in CNGCF to model the causal relations of nodes. To investigate its effectiveness, we replace our causal graph encoder with different encoders built by other graph learning methods. In particular, we use GCN [32], Graphsage [33] and Pinsage [55] to produce user and item embedding vectors for the decoder learning phase, and compare the performance of CNGCF before and after the replacements. We present the experimental results in Table III. We find that both GCN [32], Graphsage [33] and Pinsage [55]-based encoders downgrade the performance of CNGCF compared with CNGCF equiped with our proposed causal graph encoder. For instance, CNGCF with a GCN-based encoder downgrades the NDCG@10 by 28.68% on the Amazon-Beauty. This is because GCN, Graphsage and Pinsage cannot capture the causal relations of nodes in the interaction graph, leading to insufficient representations of users and items. On the contrary, our causal graph encoder captures the intrinsic causal relations between nodes using the causality-aware message passing; thus learns causality-aware user and item representations to better serve the later decoder learning. Moreover, the GCN-based encoder downgrades the CNGCF performance most severely compared with GraphSage and Pinsage-based encoders. This is because naive GCN performs transductive learning requiring full graph Laplacian, whereas GraphSage and Pinsage perform inductive learning without requiring full graph Laplacian to handle large-scale graph data well. We thus conclude that an inductive learning setting is more desired for our CNGCF, especially when facing large-scale graph data. #### Iv-A2 **Effect of Causality-aware Message Passing** The causality-aware message passing models the dependency terms between each of the structural equations as the causal relations between nodes. We present CNGCF's performance after removing the causality-aware message passing in Table IV. We observe that removing the component downgrades CNGCF's performance, indicating the importance of causality-aware message passing in helping CNGCF to achieve favorable recommendation performance. We thus conclude that modeling the causal relations between nodes within the graph-structured data is essential for graph learning-based models to uncover true user preferences for improved recommendations. #### Iv-A3 **Effect of Counterfactual Instance-aware ELBO** The counterfactual instance-aware ELBO augments counterfactual instances for CNGCF optimization. 
We present CNGCF's \begin{table} \begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|} \hline \hline Dataset & \multicolumn{3}{c|}{Synthetic} & \multicolumn{3}{c|}{Amazon-Beauty} & \multicolumn{3}{c|}{Amazon-Appliances} & \multicolumn{3}{c|}{Epinions} \\ \cline{2-11} Method & Precision@10 & Recall@10 & NDCG@10 & Precision@10 & Recall@10 & NDCG@10 & Precision@10 & Recall@10 & NDCG@10 \\ \hline \hline BRR & 0.5214 & 0.4913 & 0.6446 & 0.3555 & 0.3319 & 0.4111 & 0.3720 & 0.3574 & 0.4356 & 0.3022 & 0.2895 & 0.4889 \\ NCF & 0.6120 & 0.6293 & 0.7124 & 0.3618 & 0.3659 & 0.4495 & 0.3871 & 0.3789 & 0.4771 & 0.3551 & 0.3364 & 0.5432 \\ MikolVAE & 0.6248 & 0.5999 & 0.8101 & 0.4418 & 0.4112 & 0.4616 & 0.4454 & 0.4428 & 0.5998 & 0.4229 & 0.3888 & 0.5331 \\ NGCF & 0.5990 & 0.5681 & 0.7477 & 0.4512 & 0.4003 & 0.5188 & 0.4271 & 0.3778 & 0.5555 & 0.4018 & 0.3912 & 0.5012 \\ VGAI & 0.5446 & 0.5572 & 0.7778 & 0.3499 & 0.3821 & 0.4466 & 0.3681 & 0.4014 & 0.5019 & 0.3590 & 0.3460 & 0.4913 \\ GC-GC & 0.6115 & 0.6276 & 0.8116 & 0.4666 & 0.4615 & 0.5042 & 0.4718 & 0.4158 & 0.5677 & 0.4666 & 0.4218 & 0.5112 \\ LightCN & 0.6429 & 0.6719 & 0.5222 & 0.4810 & 0.4723 & 0.5351 & 0.4844 & 0.4652 & 0.6023 & 0.4717 & 0.4544 & 0.5146 \\ CACF & 0.4424 & 0.4518 & 0.5555 & 0.3101 & 0.3005 & 0.3888 & 0.3222 & 0.3188 & 0.4215 & 0.2599 & 0.2766 & 0.3445 \\ **CNGCF** & **2.7952** & **0.6889** & **0.8538** & **0.5148** & **0.5138** & **0.6858** & **0.6519** & **0.5271** & **0.8193** & **0.4990** & **0.6800** & **0.5889** \\ **Impw.5** & \(\sim\)23.4\% & \(\sim\)2.5\% & \(\sim\)3.8\% & \(\sim\)4.70\% & \(\sim\)8.4\% & \(\sim\)2.21\% & \(\sim\)34.3\% & \(\sim\)13.3\% & \(\sim\)35.9\% & \(\sim\)5.7\% & \(\sim\)10.6\% & \(\sim\)2.8\% \\ \hline Precision@20 & Recall@20 & NDCG@20 & Precision@20 & Recall@10 & NDCG@20 & Precision@20 & Recall@20 & NDCG@20 & Precision@20 & Recall@20 & NDCG@20 \\ \hline \hline BRR & 0.6111 & 0.5516 & 0.6318 & 0.3561 & 0.3420 & 0.4062 & 0.3941 & 0.3599 & 0.4322 & 0.3332 & 0.3322 & 0.4689 \\ NCF & 0.6678 & 0.6466 & 0.7003 & 0.3699 & 0.3691 & 0.4330 & 0.3999 & 0.4033 & 0.4519 & 0.3719 & 0.3614 & 0.5255 \\ MultiVAI & 0.679 & 0.6126 & 0.8006 & 0.4496 & 0.4200 & 0.4558 & 0.4819 & 0.4716 & 0.5911 & 0.4465 & 0.4055 & 0.5133 \\ NGCF & 0.6233 & 0.5999 & 0.7312 & 0.4612 & 0.4112 & 0.5081 & 0.4666 & 0.4258 & 0.5499 & 0.4223 & 0.4210 & 0.4811 \\ VGAI & 0.5847 & 0.5687 & 0.7613 & 0.3551 & 0.3999 & 0.4410 & 0.3771 & 0.4228 & 0.4761 & 0.3667 & 0.3598 & 0.4781 \\ GC-MC & 0.6645 & 0.6317 & 0.3091 & 0.4781 & 0.4771 & 0.5382 & 0.4892 & 0.4881 & 0.5514 & 0.4815 & 0.4515 & 0.4999 \\ LightGCN & 0.6904 & 0.6819 & 0.3108 & 0.5032 & 0.4869 & 0.5306 & 0.4919 & 0.4781 & 0.5613 & 0.4915 & 0.4718 & 0.5211 \\ CACF & 0.4547 & 0.4266 & 0.5348 & 0.3186 & 0.3211 & 0.3678 & 0.3418 & 0.3271 & 0.4103 & 0.2747 & 0.27910 & 0.3368 \\ **CNGCF** & **0.3081** & **0.6844** & **0.8603** & **0.5153** & **0.5106** & **0.7123** & **0.6367** & **0.5085** & **0.8501** & **0.5902** & **0.5034** & **0.5667** \\ **Impw.5** & \(\sim\)17.0\% & \(\sim\)4.3\% & \(\sim\)6.1\% & \(\sim\)2.5\% & \(\sim\)4.8\% & \(\sim\)27.6\% & \(\sim\)49.4\% & \(\sim\)3.5\% & \(\sim\)43.8\% & \(\sim\)17.7\% & \(\sim\)6.6\% & \(\sim\)7.3\% \\ \hline \hline \end{tabular} \end{table} TABLE II: Recommendation performance comparison: The best results are highlighted in bold while the second-best ones are underlined. All improvements against the second-best results are significant at \(p<0.01\). 
\begin{table} \begin{tabular}{|c|c|c|c|} \hline \hline Variants & Precision@10 & Recall@10 & NDCG@10 \\ \hline CNGCF & 0.7852 & 0.7889 & 0.88538 \\ \(-\) Causality-aware message passing & 0.5806–\(\frac{31.9}{2.719}\) & 0.5804–\(\frac{20.5}{0.7179}\) & \(\frac{1-16.05}{0.725}\) \\ \(-\) Counterfactual instance-aware ELBO & 0.7781–\(\frac{2.17}{2.179}\) & \(\frac{40.6644–\frac{20.5}{4.75}}\) & 0.7573–\(\frac{11.27}{2.25}\) \\ \hline CNCGF & 0.5138 & 0.5138 & performance after removing the counterfactual instance-aware ELBO in Table IV. Apparently, removing the counterfactual instance-aware ELBO leads to the downgraded performance of CNGCF on both datasets. This is because our counterfactual instance-aware ELBO augments counterfactual instances, i.e., the intervened data on user preference vectors, thus facilitating better model optimization to capture user preference shifts. ### _Parameter Analysis of Causal Graph Encoder (RQ3)_ We analyze CNGCF's performance under different embedding sizes \(n\) of the semi-implicit generative model in the causal graph encoder. We also investigate the node dropout ratios \(p\) of the dropout layer applied in the causal graph encoder. #### Vi-D1 **Effect of Embedding Size** Figure 4 (a) (b) (c) report the parameter sensitivity of our CNGCF w.r.t. embedding size \(n\) with \(n=\{16,32,64,128,256,512,1024,2048\}\). Apparently, the performance of CNGCF on Amazon-Beauty, Amazon-Apilances and Epinions demonstrates increasing trends from \(n=16\), then reaches the peak when \(n=512\), \(n=64\) and \(n=256\), respectively. This is reasonable since \(n\) controls the number of latent vectors of users and items from the semi-implicit generative model, and low-dimensional latent vectors cannot retain enough information for the encoder learning phrase. After reaching the peaks, the performance of CNGCF degrades slightly and then becomes stable. The decrease in performance is due to the introduction of redundant information as the embedding size becomes too large, which can affect the model. Additionally, we observe the largest Amazon-Apiliances dataset requires the smallest embedding size of \(n=64\) to reach its peak performance compared to the other two datasets. This is because a larger embedding size brings large-scale datasets a higher computational burden, thus limiting the model's performance. #### Vi-D2 **Effect of Dropout Ratio** We employ a node dropout layer in the causal graph encoder to prevent model overfitting. We show the influence of node dropout ratio \(p\) on the three datasets in Figure 4 (d) (e) (f). We observe that the performance of CNGCF on both Amazon-Beauty, Amazon-Apilances and Epinions exhibits a decreasing trend as we increase the node dropout ratio \(p\) from \(0.0\) to \(0.3\), but recovers at \(p=0.4\). After \(p=0.4\), the performance of CNGCF decreases as the dropout ratio increases. We believe that the reduced performance could be attributed to the removal of crucial information that the model needs to learn from the data, thus impairing the CNGCF's performance. Nevertheless, the recovered performance at \(p=0.4\) indicates that CNGCF is robust to balance the loss of information and overfitting. ## VII Conclusion We propose CNGCF, the first causality-aware graph representation learning framework for collaborative filtering. Our CNGCF injects causal relations between nodes into GCN-based graph representation learning to derive satisfactory user and item representations for the CF model. 
We craft a causal graph to describe the causality-aware graph representation learning process. Our CNGCF quantifies each of the structural equations under the causal graph, with a semi-implicit generative model enabling causality-aware message passing for graph learning. Finally, we capture true user preferences on items by modeling node messages as dependencies of structural equations. Extensive evaluations on four datasets demonstrate CNGCF's ability to produce precise recommendations that interpret user preferences and uncover user behavior patterns. ## Acknowledgments This work is supported by the Australian Research Council (ARC) under Grant No. DP220103717, LE220100078, LP170100891 and DP200101374.
Fig. 4: Parameter analysis on causal graph encoder.
2310.01155
EIP-4844 Economics and Rollup Strategies
We study the economics of the Ethereum improvement proposal 4844 and its effect on rollups' data posting strategies. Rollups' cost consists of two parts: data posting and delay. In the new proposal, the data posting cost corresponds to a blob posting cost and is fixed in each block, no matter how much of the blob is utilized by the rollup. The tradeoff is clear: the rollup prefers to post a full blob, but if its transaction arrival rate is low, filling up a blob space causes too large delay cost. The first result of the paper shows that if a rollup transaction arrival rate is too low, it prefers to use the regular blockspace market for data posting, as it offers a more flexible cost structure. Second, we show that shared blob posting is not always beneficial for participating rollups and change in the aggregate blob posting cost in the equilibrium depends on the types of participating rollups. In the end, we discuss blob cost-sharing rules from an axiomatic angle.
Davide Crapis, Edward W. Felten, Akaki Mamageishvili
2023-10-02T12:41:36Z
http://arxiv.org/abs/2310.01155v1
# EIP-4844 Economics and Rollup Strategies ###### Abstract We study the economics of the Ethereum improvement proposal 4844 and its effect on rollups' data posting strategies. Rollups' cost consists of two parts: data posting and delay. In the new proposal, the data posting cost corresponds to a blob posting cost and is fixed in each block, no matter how much of the blob is utilized by the rollup. The tradeoff is clear: the rollup prefers to post a full blob, but if its transaction arrival rate is low, filling up a blob space causes too large delay cost. The first result of the paper shows that if a rollup transaction arrival rate is too low, it prefers to use the regular blockspace market for data posting, as it offers a more flexible cost structure. Second, we show that shared blob posting is not always beneficial for participating rollups and change in the aggregate blob posting cost in the equilibrium depends on the types of participating rollups. In the end, we discuss blob cost-sharing rules from an axiomatic angle. ## 1 Introduction Ethereum improvement proposal (EIP) numbered 4844, dubbed as EIP-4844, is meant to create a cheaper and more efficient calldata posting service on the Ethereum main chain, sometimes called layer one (L1). The goal is to facilitate the Ethereum ecosystem move to rollups. Today, the largest optimistic and ZK rollups offer fees that are 3-50x lower than Ethereum L1. This EIP and follow-ups will further reduce costs of transacting on rollups by providing extra space, thus creating strong incentives for users to switch to using rollups and enabling new applications that can borrow Ethereum L1 security at a much lower cost. In this project, we study the economics of the proposal. We refer to the market created by EIP-4844 as the data market and the gas market of the Ethereum mainnet as the main market. In particular, we look at the trade-offs faced by rollups that are adopting the new service: 1. When should a rollup use the data market versus the main market for sending data to L1? 2. Is there a substantial efficiency gain in aggregating data from multiple rollups and what happens to the data market fees? 3. When would rollups decide to aggregate and what is the optimal cost-sharing scheme? In what follows we set up an economic model of aggregate rollup demand that we use to study the above questions. We make simplifying assumptions that allow us to obtain a crisp characterization of optimal rollup data posting strategies. In particular, we consider a continuous-time model in a large market with many rollups. We model the cost of rollups as a sum of two parts. The first part of the cost is observable, it is a data posting cost. The second is delay cost. For some applications delay of L1 finality is not crucial. However, many applications built on top of rollups need fast L1 finality for their liveness and some even include it in their security model. The goal of the rollup is to minimize overall costs per transaction, as the per-transaction costs are what users incur when using rollups. The blob posting cost is calculated in the equilibrium state endogenously. The main market price per gas is assumed to be fixed for simplicity. The rollups decide between using either of the two technologies for transaction-relevant data posting, that is, both markets are perfect substitutes for each other 1. We show that if the demand for blob posting is high, it drives smaller rollups to use the main market data posting strategy. 
We identify conditions when posting data on the main market is better for the rollup than posting blobs. In the second part of the paper, the joint blob posting option for two (or more) rollups is studied. Depending on the type of rollups that decide to post a shared blob, the price of the blob in the equilibrium can change in both directions, up or down. We derive bounds on the increase or decrease of the blob price in relative terms. If the shared blob posting is profitable, the rollups can join in doing so. We study cost-sharing rules from the axiomatic angle, by employing the Nash bargaining solution concept from the economic theory. In particular, we show that if both rollups were using data market, in the joint blob posting large rollup has to pay less than a proportional cost of the joint blob. On the other hand, it always pays more than half of the joint blob cost and the improvement of the large rollup is always less than the improvement of the small rollup. Footnote 1: Using both markets interchangeably is technologically feasible. ### Related Literature In [5], the authors study how under the EIP-1559 dynamic fee mechanism the gas usage converges to the target usage over time. This theoretical and empirical observation is a cornerstone of our modeling of the equilibrium state. [4] proposes a dynamic posted-price mechanism--which uses not only block utilization but also observable bids from past blocks to compute a posted price for subsequent blocks, they study the stability and welfare optimality of the proposed policy in a steady state. In [3], the authors argue that different resource types should be independently priced, as converting all resources in one unit and pricing it uniformly is economically inefficient. EIP-4844 can be seen as the first step towards fixing the economic inefficiencies of pricing different types of resources together. [1] provides a definition and empirical analysis of the upcoming EIP-4844 fee market, which introduces data gas on the Ethereum blockchain and a data gas pricing mechanism modeled after the fee update mechanism of EIP-1559. [2] studies optimal dynamic pricing method for multiple resources, with uncertain future demand flow and statistical dependencies between resource demands. In [6] the authors provide a rollup data batch posting strategy, in the context of a single independent rollup, and in the absence of a dedicated market for data. They analyze trade-offs between price and time that are similar in nature to the ones faced here in the presence of a dual market and multiple rollups. Continuous-time Model In this section, we outline modeling assumptions and derive initial results on the rollup behavior in the presence of two potential markets. We make the following list of assumptions: * Delay cost of a transaction is \(aD\), where \(D\) is the delay, in time units, experienced by the transaction (a time when a transaction was posted in a batch or blob to L1, minus time when it arrived to L2 sequencer), and \(a>0\) is a positive constant. * L1 gas price is \(G\), which is treated as a constant. * The base cost of a batch- or blob-posting transaction on L1 is \(P_{0}G\). Here \(P_{0}\) indicates the size of the metadata associated with the rollup transaction containing a batch/blob. * The cost of posting a blob on L1 is \(P_{0}G+B\), where \(B\) will be set later to a market-clearing price. The latter is interpreted as the minimum price for which no more than three blobs are posted per time unit on average. 
* The cost of posting a batch of \(n\) transactions on L1 is \((P_{0}+P_{1}n)G\). The target number of blobs per Ethereum block is denoted by \(k\)2. We treat time as continuous so that a blob can be posted at any time. Conceptually, a "time unit" can be thought of as one L1 block time, but in this model, we will allow blobs to be posted "in between" L1 blocks. Suppose there are rollups with transaction arrival rates \(R_{1},R_{2},...,R_{n}\) and they are sorted in decreasing rates, \(R_{i}\geq R_{i+1}\) for any \(i\in\{1,...,n-1\}\). The goal of a rollup is to minimize cost per transaction. The latter is obtained by dividing the total cost by the total number of transactions in the blob and paid by rollup users. The justification of this approach is that even though some transactions arrive earlier than others, their arrival time can be assumed to be uniformly random. This payment rule corresponds to the average costs, and therefore, the users pay fairly. Footnote 2: Initially set to 2 and currently set to 3. ## 3 Analysis In this section, we obtain results on the tradeoff between using data and main markets. First, we show the following: **Proposition 1**.: _There is a threshold \(n^{*}\), such that for any \(i>n^{*}\), a rollup \(i\) uses the direct on L1 posting strategy._ Proof.: Let us determine the optimal strategy for a single rollup, assuming the prices on both the data market and on L1 are fixed. There are two strategies any rollup could use: (1) posting blobs or (2) posting directly on L1. The rollup finds a strategy that minimizes the cost within each of these strategies separately and then selects the strategy that has a lower cost. First, consider a blob-posting strategy. Suppose a rollup with generic rate \(R\) posts a blob every time \(t\). Then, a blob contains \(Rt\) transactions, with the total cost (posting cost plus delay cost) of \[T_{B}(t):=P_{0}G+B+\frac{aRt^{2}}{2}. \tag{1}\] The cost per transaction is: \[Tr_{B}(t):=\frac{T_{B}(t)}{Rt}=\frac{P_{0}G+B}{Rt}+\frac{at}{2}. \tag{2}\] By the first order condition (FOC), \(Tr_{B}(t)\) is minimized when \[t_{B}:=t=\sqrt{\frac{2(P_{0}G+B)}{aR}}. \tag{3}\] Plugging \(t_{B}\) obtained above in (2), gives that the cost per transaction is \[Tr_{B}(t)=\frac{P_{0}G+B}{R}\sqrt{\frac{aR}{2(P_{0}G+B)}}+\frac{1}{2}\sqrt{ \frac{2(P_{0}G+B)a}{R}}=\sqrt{\frac{2(P_{0}G+B)a}{R}}=at_{B},\] and the number of transactions per blob is \(C_{B}:=Rt_{B}=\sqrt{\frac{2(P_{0}G+B)R}{a}}.\) Next, we consider an L1-posting strategy. Suppose a rollup posts a batch every time \(t\), with \(Rt\) transactions per batch. The total cost of a batch is \[T_{E}(t):=(P_{0}+RtP_{1})G+\frac{aRt^{2}}{2}, \tag{4}\] and the cost per transaction is \[Tr_{E}(t):=\frac{P_{0}G}{Rt}+P_{1}G+\frac{at}{2}. \tag{5}\] By the FOC, this is minimized when \(t=\sqrt{2P_{0}G/(aR)}.\) Plugging the above value in (5) gives that the cost per transaction is: \[Tr_{E}=\frac{P_{0}G}{R}\sqrt{\frac{aR}{2P_{0}G}}+P_{1}G+\frac{a}{2}\sqrt{ \frac{2P_{0}G}{aR}}=\sqrt{\frac{2P_{0}Ga}{R}}+P_{1}G,\] and the number of transactions per batch is \(Rt=\sqrt{2P_{0}GR/a}.\) Now, we focus on the indifference condition between posting blobs and posting on L1. That is, we solve for the value of \(B\) that makes the rollup indifferent between the two strategies. 
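Before carrying out this algebra, a small numeric sketch can make the trade-off concrete. Assuming purely illustrative parameter values (not calibrated to real Ethereum data), the following Python snippet evaluates the two per-transaction cost formulas derived above and locates the break-even blob price numerically; the closed form is derived next.

```python
import math

def tr_blob(R, B, P0, G, a):
    """Per-transaction cost of the blob-posting strategy: sqrt(2*(P0*G + B)*a / R)."""
    return math.sqrt(2 * (P0 * G + B) * a / R)

def tr_l1(R, P0, P1, G, a):
    """Per-transaction cost of posting batches directly on L1: sqrt(2*P0*G*a / R) + P1*G."""
    return math.sqrt(2 * P0 * G * a / R) + P1 * G

# Illustrative values only: delay weight a, gas price G, metadata size P0,
# per-transaction calldata size P1, and transaction arrival rate R.
a, G, P0, P1, R = 1.0, 1.0, 1.0, 0.05, 100.0

# The L1 cost does not depend on B; bisect on the monotone blob cost to find indifference.
target = tr_l1(R, P0, P1, G, a)
lo, hi = 0.0, 1e6
for _ in range(200):
    mid = (lo + hi) / 2
    if tr_blob(R, mid, P0, G, a) < target:
        lo = mid
    else:
        hi = mid
print(f"break-even blob price B* ~ {lo:.3f}")   # ~0.83 for these illustrative values
print(f"cost/tx at B*: blob ~ {tr_blob(R, lo, P0, G, a):.4f}, L1 ~ {target:.4f}")
```

Since the break-even price grows with \(R\), a rollup with a lower arrival rate is already indifferent at a lower blob price, which is the threshold behaviour asserted in Proposition 1.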
For this, set costs per transaction in both cases equal: \[\sqrt{\frac{2(P_{0}G+B)a}{R}}=P_{1}G+\sqrt{\frac{2P_{0}Ga}{R}}.\] Multiplying both sides by \(\sqrt{R/(2a)}\) and squaring gives \(P_{0}G+B=(\sqrt{R/2a}P_{1}G+\sqrt{P_{0}G})^{2},\) or equivalently \[B+P_{0}G=\frac{RP_{1}^{2}G^{2}}{2a}+2P_{1}G\sqrt{\frac{RP_{0}G}{2a}}+P_{0}G.\] Canceling terms, we get \[B=\frac{RP_{1}^{2}G^{2}}{2a}+2P_{1}G\sqrt{\frac{RP_{0}G}{2a}}. \tag{6}\] For any finite \(G\) and \(B>0\), there are two scenarios. First, there is an index \(n^{*}\) such that in (6), the right-hand side is lower than the left-hand side, therefore, the rollup prefers to use a main market posting strategy, a contradiction. Second, if such an index does not exist, then we set the threshold equal to \(n\). This finishes the proof of the proposition. Next, we demonstrate how to calculate the equilibrium price \(B\) and the threshold in Proposition 1. For simplicity, suppose that \(P_{0}=0\). Then, the time between posting for rollup \(i\) is \(t_{i}=\sqrt{\frac{2B}{aR_{i}}}\). The algorithm proceeds in two steps: * Step 1. Initialization: To hit the target of \(k\) blobs per time unit, we require that \[k=\sum_{i=1}^{n}\frac{1}{t_{i}}=\sum_{i=0}^{n}\frac{\sqrt{aR_{i}}}{\sqrt{2B}}.\] Solving \(B\) gives: \[B=\frac{a(\sum_{i=1}^{n}\sqrt{R_{i}})^{2}}{2k^{2}}.\] (7) Let \(\sum_{i=1}^{n}\sqrt{R_{i}}=:R\) is a positive real number. Plugging this value in (6), gives the initial value on \(m\) such that for a rollup with rate \(R_{m}\), LHS of (6) is larger than RHS of (6). * Step 2. In the loop, we increase \(m\) initialized in the previous step by one, calculate new equilibrium price \(B\) with a set of rollups \(\{1,2,...,n\}\), as long as the LHS of (6) is smaller than the RHS. Once we find a value of \(k\), for which LHS is higher than the RHS, we output \(B\) and \(n^{*}=m\), as an answer. Note that the condition in Step 2 may never be satisfied. In this case, \(n^{*}\) is set to \(n\). Intuitively, the calculation of \(B\) in the initialization step assumes that all rollups use a blob posting strategy. However, it might be that some rollups under this price will not use a blob posting strategy, that is, it is an overestimation of the price. The second step fixes this potential overestimation by first excluding all small rollups and adding them one by one. **Example 1**.: _Consider an example in which rates drop exponentially, that is, suppose \(R_{i}=\frac{R_{i-1}}{2}\) for any \(i\in\{1,...,n\}\)._ _Assume that \(G\) is very large, that is, all rollups use a blob posting strategy. To hit the target of \(k\) blobs per time unit, we require that:_ \[k=\sum_{i=0}^{n}\frac{1}{t_{i}}=\sqrt{\frac{aR_{0}}{2B}}\sum_{i=0}^{n}(\sqrt{ 2})^{-i}\approx\sqrt{\frac{aR_{0}}{2B}}\frac{\sqrt{2}}{\sqrt{2}-1}=\sqrt{ \frac{aR_{0}}{B}}\frac{1}{\sqrt{2}-1}.\] _The approximation is taken by assuming a large enough value of \(n\). We get an equivalent condition \(k(\sqrt{2}-1)\approx\sqrt{aR_{0}/B}.\) Solving for \(B\) gives:_ \[B\approx\frac{aR_{0}}{k^{2}(3-2\sqrt{2})}.\] _For \(k=2\), an initial EIP-4844 target number of blobs per block, \(B\approx 1.46aR_{0},\) and the time between posting for rollups is \(1.71,2.42,3.42,...\) For \(k=3\), a current EIP-4844 target, \(B\approx 0.65aR_{0},\) and the time between posting for rollups is \(1.14,1.62,2.27,...\)_ Joining chains Suppose two rollups join forces in posting blobs. There are three different type of profiles of these rollups in the equilibrium derived above. 
In the first, both rollups use blob posting technology. In the second, one rollup uses blob posting technology, while the other uses the main market to post the data. In the third, both rollups use the main market for posting the data. In this section, we analyze what happens to the equilibrium price of the blob in these different scenarios and derive a cost-sharing scheme that satisfies certain reasonable properties. Let \(B^{N}\) denote the new equilibrium price after the two rollups join in posting blobs. **Case 1:** In this case, both rollups post blobs in the initial equilibrium state. By the blob price formula (7), the blob price in the new equilibrium state decreases. In the following, we bound how large this decrease can be. **Proposition 2**.: \(B^{N}\) _satisfies the following inequalities: \(B\geq B^{N}\geq B/2\)._ Proof.: Assume that the two rollups joining in the blob posting are indexed \(i\) and \(j\). Then, \(B\) can be rewritten as \(B=c(\sqrt{R_{i}}+\sqrt{R_{j}}+t)^{2}\) and \(B^{N}=c(\sqrt{R_{i}+R_{j}}+t)^{2}\), where \(c=\frac{1}{2k^{2}}\) and \(t=\sum_{k\neq i,j}\sqrt{R_{k}}\). \(B\geq B^{N}\) is equivalent to \(\sqrt{R_{i}}+\sqrt{R_{j}}\geq\sqrt{R_{i}+R_{j}}\), which trivially holds for any \(R_{i},R_{j}>0\). The second inequality, \(B^{N}\geq B/2\), reduces (since \(\sqrt{2}t\geq t\)) to showing \(\sqrt{2}\sqrt{R_{i}+R_{j}}\geq\sqrt{R_{i}}+\sqrt{R_{j}}\). The latter is equivalent to \((\sqrt{R_{i}}-\sqrt{R_{j}})^{2}\geq 0\), which holds trivially. From the proof above we see that the equality \(B^{N}=B/2\) holds if and only if there are only two rollups and their transaction rates are equal. To obtain this corner solution, it is also implicitly assumed that no other rollup adopts the blob posting strategy in the new state with a lower price. **Case 2:** In this case, one rollup posts blobs, and the other posts on the main market in the initial arrangement. Joint blob posting pushes the blob price up, assuming that no rollup stops using blob posting technology. In the following, we derive an upper bound on the price increase. **Proposition 3**.: \(B^{N}\) _satisfies the following inequalities: \(2B\geq B^{N}\geq B\)._ Proof.: Assume that the rollup that posts blobs is indexed \(i\), that is, its transaction rate is \(R_{i}\), and the rollup that posts calldata on the main market has a transaction rate \(R\). Then, the old blob price in the equilibrium is equal to \(B=c(\sqrt{R_{i}}+t)^{2}\), where \(c=\frac{1}{2k^{2}}\) and \(t=\sum_{k\neq i}\sqrt{R_{k}}\). The new price, on the other hand, is equal to \(B^{N}=c(\sqrt{R_{i}+R}+t)^{2}\). It is obvious that \(B^{N}\geq B\), since no rollup stops using the blob posting strategy. \(2B\geq B^{N}\) is equivalent to \(\sqrt{2}(\sqrt{R_{i}}+t)\geq\sqrt{R_{i}+R}+t\). It is sufficient to show that \(\sqrt{2}\sqrt{R_{i}}\geq\sqrt{R_{i}+R}\), which is equivalent to \(R_{i}\geq R\). The latter holds because the rollup with transaction rate \(R_{i}\) posts blobs in the equilibrium and therefore has a higher transaction rate than the rollup with rate \(R\) posting on the main market. **Case 3:** In this case, both rollups use the main market for posting the data in the initial setting. Assume they join in posting blobs and no rollup stops using blob posting technology. Then, the new blob price \(B^{N}\) in the equilibrium increases. We obtain the following upper bound on the increase. **Proposition 4**.: \(B^{N}\) _satisfies the following inequalities: \(2B\geq B^{N}\geq B\)._ Proof.: The proof is similar to the proof of proposition 3.
Two rollups with transaction rates \(R^{1}\) and \(R^{2}\) that were posting their data in the main market, can reach a level that is almost \(2R_{1}\). In fact, as long as \(R^{1}+R^{2}\geq R_{1}\), the rollup with rate \(R_{1}\) would not post blobs in the new equilibrium, a contradiction with the assumption. This gives an upper bound of \(2B\) on the new equilibrium price. ### Cost Sharing Suppose there are two chains, with transaction arrival rates \(R_{L}=R\), from now on referred to as large rollup, and \(R_{S}=Rf\), referred to as small rollup. \(0<f<1\) is a real number. Assume \(P_{0}=0\), that is, there are no metadata costs. We use the same model as above. Let \(Tr_{L}\) and \(Tr_{S}\) denote costs per transaction of large and small rollups, respectively, \(C_{L}\) and \(C_{S}\) denote the total number of transactions posted by large and small rollups separately. In the following, for the illustration of calculating these parameters above, in this and the next subsections, we assume that both rollups use a blob posting strategy. If the big chain is the only one using the blob space and it posts every \(t_{L}\) time, then optimal posting time is \(t_{L}=\sqrt{2B/(aR)}\), with a cost per transaction of \(Tr_{L}=\sqrt{2Ba/R}\) and the total number of transactions in blob \(C_{L}=\sqrt{2BR/a}\). For the small chain only, we have optimal posting time \(t_{S}=\sqrt{2B/(aRf)}\), with a cost per transaction of \(Tr_{S}=\sqrt{2Ba/(Rf)}\) and the total number of transaction in the blob is \(C_{S}=\sqrt{2BRf/a}\). If the two chains post their blobs separately, the small chain has a higher per transaction cost, by a factor of \(1/\sqrt{f}\) per transaction, since \(t_{S}/t_{R}=1/\sqrt{f}\). Because the large chain has more transactions, it pays more overall, by a factor of \(1/\sqrt{f}\). A joint blob is also posted so that it minimizes cost per transaction. This is a Pareto efficient approach, as otherwise, both rollups could agree to deviate to the optimal strategy and share added value in any way. Based on the analysis in the proof of proposition, a joint blob is posted every \(t_{J}=\sqrt{\frac{2B^{N}}{(1+f)aR}}\). The total cost per blob is: \[B^{N}+\frac{a(1+f)R}{2}\cdot\frac{2B^{N}}{(1+f)aR}=2B^{N}.\] In all three cases, the total cost per blob is equal to 2 times the blob price. A cost per transaction of \(Tr_{J}=\sqrt{2B^{N}a/(R(1+f))}\) and the total number of transaction in the blob is \(C_{J}=\sqrt{2B^{N}R(1+f)/a}\). First, note that if \(Tr_{J}>Tr_{L}\), the rollups will not join in posting blobs together, as it is not profitable for a large rollup. Therefore, an interesting case is when \(Tr_{J}<Tr_{L}\leq Tr_{S}\). In propositions 2, 3 and 4, we obtained that \(2B\geq B^{N}\geq B/2\). That is, the relation between \(Tr_{J}\) and \(Tr_{L}\) can be arbitrary. We discuss a suitable cost-sharing rule in the next section. ### The Nash Bargaining Solution In this section, we take an axiomatic approach to the cost-sharing rule between rollups that decide to post blobs together. One such approach is suggested by the Nash Bargaining solution. First, we introduce the required notation and then reduce the cost-sharing rule to solving the Nash Bargaining problem. Let \(A\) denote the set of all possible bargaining outcomes. In particular, \(D\in A\) is the outcome if no agreement can be reached. In our setting, \(A\) is interpreted as a set of cost-sharing options between two rollups, while \(D\) is interpreted as a case when rollups post their blobs separately. 
The utility (payoff) function of agent \(i\) is given by \(u_{i}:A\rightarrow\mathbb{R}.\) We consider linear utility functions, in particular. Let \(S\) denote the set of all possible utilities (payoffs): \[S=\left\{\left(s_{1},s_{2}\right)\mid s_{1}=u_{1}(a),s_{2}=u_{2}(a),a\in A \right\}.\] Let \(s:=\left(s_{1},s_{2}\right)\). Further, let \(d=\left(d_{1},d_{2}\right)=\left(u_{1}\left(D\right),u_{2}\left(D\right)\right)\) be the utility vector if no agreement could be reached (threat point). Two Requirements on \(S\) to have a characterization: 1) There exists \(s\in S\) with \(s_{i}>d_{i}\ \forall i,\) and 2) \(S\) is compact and convex. Our set satisfies these properties, as we will see later. Then, define \(S^{\prime}=\left\{s\mid s_{i}\geq d_{i}\ \ \forall i\right\}\subseteq S.\) Let \(H(S)\) denote the set of _Pareto-optimal_ outcomes. _Pareto-optimal_ (or _Pareto-efficient_) means that there is no other outcome that makes one player better off without making another player worse off. In our case, this means that rollups pay completely for the blob posting cost and do not overpay. Now, we are in the position to define a Nash Bargaining Solution. **Definition 1**.: \(A\) **bargaining solution** _is a rule that assigns a solution vector \(f(S,d)\in S\) to every bargaining problem \(B=(S,d)\)._ Let \(f_{i}(S,d)\) denote the \(i\)-component of \(f(S,d)\). That is: \(f(S,d)=\left(f_{1}\left(S,d\right),f_{2}\left(S,d\right)\right).\) We have the following 4 axioms. **Axiom 1** (**Invariance of Utility Scaling)**.: _If there are two bargaining situations \(B=(S,d)\) and \(\bar{B}=(\bar{S},\bar{d})\) with \(\bar{S}=\left\{\alpha_{1}s_{1}+\beta_{1},\alpha_{2}s_{2}+\beta_{2}:\ s\in S\right\}\) and \(\bar{d_{i}}=\alpha_{i}d_{i}+\beta_{i}\ \forall i,\) where \(\alpha_{1},\alpha_{2}>0\). Then, for the solution, the following holds: \(f_{i}(\bar{S},\bar{d})=\alpha_{i}f_{i}(S,d)+\beta_{i}\ \forall i.\)_ The axiom states that if we change the way we measure utility when we construct a bargaining problem but keep new utility scales decision-theoretically equivalent to the old ones, then the bargaining solution in utility-allocation space changes in the same way, so that it still corresponds to the same real outcome. **Axiom 2** (**Pareto Optimality)**.: _If \(f(S,d)\) is a solution to \(B=(S,d)\), then \(f(S,d)\in H(S).\)_ The axiom states that there is no other feasible allocation that is better than the solution for one player and not worse than the solution for the other player. For the next axiom, we need a definition: **Definition 2**.: _A game is called symmetric if two conditions hold: (1) \(d_{1}=d_{2}\), and (2) \((s_{1},s_{2})\in S\), then \((s_{2},s_{1})\in S\)._ **Axiom 3** (**Symmetry)**.: _If \((S,d)\) is symmetric, then the solution is also symmetric, i.e. \(f_{1}(S,d)=f_{2}(S,d).\)_ The axiom states that, if the positions of players 1 and 2 are completely symmetric in the bargaining problem, then the solution also treats them symmetrically. **Axiom 4** (**Independence of Irrelevant Alternatives)**.: _Let \((S,d)\) and \((T,d)\) be two bargaining situations with \(S\subset T\) and \(f(T,d)\in S\), then \(f(T,d)=f(S,d)\)._ The axiom states that eliminating feasible alternatives (other than the threat point) that would not have been chosen does not affect the result. 
**Theorem 1** ([7]).: _There is a unique bargaining solution, \(f^{N}\), satisfying the four axioms above and it has the following representation for every two-person bargaining problem:_ \[f^{N}(S,d)=\arg\max_{s\in H(S)}(s_{1}-d_{1})(s_{2}-d_{2})=s^{*}. \tag{8}\] The expression \((s_{1}-d_{1})(s_{2}-d_{2})\) is called the Nash Product. Suppose the large rollup is indexed by \(1\) and the small rollup is indexed by \(2\). The disagreement point in our setting is \((d_{1},d_{2})=(Tr_{L},Tr_{S})\). Assuming that posting a joint blob is profitable, that is, \(Tr_{J}<Tr_{L}\), the rollups need to decide how to share the new blob price \(B^{N}\). Suppose the large rollup pays \(B_{1}\) and the small rollup pays \(B_{2}\), with \(B_{1},B_{2}\geq 0\). Then, we can redefine their per-transaction costs, which define points \((s_{1},s_{2})\) in the payoff set. This defines a two-dimensional space. However, note that because of the Pareto efficiency, \(B_{1}+B_{2}=B\) holds, since underpaying is not an option, and overpaying is not efficient. Therefore, we are down to \(1\) dimensional space, as \(B_{1}\in[0,B^{N}]\) defines it fully. The number of large and small rollup transactions in the blob are denoted by \(C_{J,L}\) and \(C_{J,S}\), respectively. Since their rate ratio is \(\frac{1}{f}\), we have \(C_{J,L}=\frac{C_{J}}{1+f}\) and \(C_{J,S}=\frac{C_{J}f}{1+f}\), so that they sum up to \(C_{S}\). Let \(d_{J,L}\) and \(d_{J,S}\) denote the total delay costs of the large and small rollups, respectively. Then \(d_{J,L}=\frac{aRt_{J}^{2}}{2}\) and \(d_{J,S}=\frac{aRft_{J}^{2}}{2}\). Having settled all necessary parameters, we proceed to calculate \(s_{1}\) and \(s_{2}\) values for given \(B_{1}\). \(s_{1}\) is calculated as \(s_{1}=\frac{B_{1}+d_{J,L}}{C_{J,L}}\) and \(s_{2}\) is calculated as \(s_{2}=\frac{B^{N}-B_{1}+d_{J,S}}{C_{J,S}}\). Plugging in \(s_{1}\) and \(s_{2}\) in (8), and simplification by getting rid of constant denominators gives the following optimization problem: \[\arg\max_{B_{1}}(B_{1}+d_{J,L}-C_{J,L}Tr_{L})(B^{N}-B_{1}+d_{J,S}-C_{J,S}Tr_{S }). \tag{9}\] Since (10) is a negative quadratic polynomial in \(B_{1}\), we solve the optimal value by the first order condition with respect to \(B_{1}\): \[B_{1}=(B^{N}+d_{J,S}-d_{J,L}-C_{J,S}Tr_{S}+C_{J,L}Tr_{L})/2. \tag{10}\] The solution of (10) directly gives a sharing rule for a blob cost and also determines per transaction costs for both rollups. Note that it is only a function of \(f\), and therefore, the contract between the rollups can be easily automatized. The axiomatic approach of this section can be easily generalized to \(m>2\) rollups, by taking a Nash product over \(m\) rollups \((s_{1}-d_{1})(s_{2}-d_{2})\cdot...\cdot(s_{m}-d_{m})\). However, the optimization problem at hand can be much harder to solve, as it is \(m-1\) dimensional. The number of dimensions comes from \(m-1\) rollups' contributions towards the final blob cost. The last rollup contribution is determined by the contributions of the rest. First, we show a structural result that will come in handy later. The result is similar to the one in proposition 2, in that we lower bound the new equilibrium price in terms of the original equilibrium price \(B\), and a parameter \(f\). **Lemma 1**.: \(\frac{B^{N}}{B}\geq\frac{1+f}{(1+\sqrt{f})^{2}}\)_._ Proof.: Similar to the proof of proposition 2, the ratio between \(B^{N}\) and \(B\) is minimized if there are only two rollups posting blobs. 
Then, in this case, by (7) we have that \(B^{N}/B=\frac{\sqrt{R+RT}}{(R+\sqrt{R}f)^{2}}=\frac{1+f}{(1+\sqrt{f})^{2}}\), which finishes the proof. Next, we show that the Nash bargaining outcome the large rollup to pay, is always an internal value: **Proposition 5**.: \(B_{1}\in(0,B^{N})\) _for any \(f<1\)._ Proof.: Plugging all values in the formula of \(B_{1}\) gives: \[B_{1}= 0.5(B^{N}+\frac{aRf}{2}\frac{2B^{N}}{a(1+f)R}-\frac{aR}{2}\frac{2 B^{N}}{a(1+f)R}-\] \[\frac{f}{1+f}\sqrt{\frac{2B^{N}(1+f)R}{a}}\sqrt{\frac{2Ba}{Rf}}+ \frac{1}{f+1}\sqrt{\frac{2B^{N}(1+f)R}{a}}\sqrt{\frac{2Ba}{R}}).\] Simplifying gives: \[B_{1}=(B^{N}\frac{f}{1+f}+\sqrt{B^{N}B}(\sqrt{\frac{1}{1+f}}-\sqrt{\frac{f}{1+ f}})). \tag{11}\] Then, \(B_{1}<B^{N}\) is equivalent to: \[B(1+f)(1-\sqrt{f})^{2}<B^{N}.\] Note that the condition in the lemma 1 readily implies the condition above, which finishes the proof of the proposition. Note that \(B_{1}\) does not depend on \(R\) and \(a\), see (11), but only on blob prices and \(f\), as claimed earlier. To get an intuition of the parameters above, in the following, we consider an example. Let \(Tr_{J,L,B_{1}}\) and \(Tr_{J,S,B_{1}}\) denote effective per transaction costs after fixing the amount large rollup pays for the blob price, \(B_{1}\). The following holds: \[Tr_{J,L,B_{1}}=(B_{1}+d_{J,L})/C_{J,L}\text{ and }Tr_{J,S,B_{1}}=(B^{N}-B_{1}+d _{J,S})/C_{J,S}. \tag{12}\] Let the rollup \(X\in\{L,S\}\) improvement is denoted by \[I_{X,B_{1}}:=(Tr_{X}-Tr_{J,X,B_{1}})/Tr_{X}=1-Tr_{J,X,B_{1}}/Tr_{X},\] and the proportional payment of the large rollup \(B_{1}^{pr}:=\frac{B^{N}}{1+f}\). **Example 2**.: _Suppose \(R=B=a=1\) and \(f=0.25\). That is, a small rollup has 4 times less traffic than a large rollup. Then, in the case of large rollup posting blobs alone, parameters are equal \(t_{L}=Tr_{L}=C_{L}=\sqrt{2}\approx 1.41\). Parameters of the small rollup posting alone: \(t_{S}=Tr_{S}=\sqrt{8}\approx 2.82\) and \(C_{L}=\sqrt{0.5}\approx 0.71\). The joint posting parameters are: \(t_{J}=Tr_{J}=\sqrt{2\cdot 0.81/1.25}\approx 1.14\) and \(C_{J}=\sqrt{2.5\cdot 0.81}\approx 1.42.\) Large rollup includes \(C_{J,L}=\frac{1.42}{1.25}\approx 1.138\) transactions in the joint blob, small rollup includes \(C_{J,S}\approx 0.285\) transactions. Large rollup total delay is \(d_{J,L}\approx 1.14^{2}/2=0.65\) and small rollup delay is \(d_{J,S}=0.25d_{J,L}\approx 0.16.\) Finally, we plug in all parameters in the calculation of the large rollup share in the Nash bargaining solution (10):_ \[B_{1}\approx(0.81+0.16-0.65-0.285\cdot 2.82+1.138\cdot 1.41)/2\approx 0.564.\] _Then, the small rollup pays \(B_{2}\approx 0.15\). Note that they do not share the total price \(0.81\) proportionally, which would result in the large rollup paying \(B_{1}^{pr}=0.81\cdot\frac{4}{5}=0.648\). Plugging in all values, we obtain \(Tr_{J,L,B_{1}}=(0.56+0.65)/1.138\approx 1.07\) and \(Tr_{J,S,B_{1}}=(0.15+0.16)/0.28\approx 1.43.\) That is, the large rollup improvement is \(I_{L,B_{1}}\approx 24.7\%\), while the small rollup improvement is \(I_{S,B_{1}}\approx 49.4\%\). Note that the large rollup improvement of the per-transaction cost is smaller than the improvement of the small rollup._ Observations obtained in the example above are more general, which we show in the following propositions. 
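Before turning to those propositions, the arithmetic of Example 2 can be reproduced directly from equations (10) and (12). The short Python sketch below uses the example's values (\(R=B=a=1\), \(f=0.25\), and \(B^{N}\approx 0.81\) as in the example) and also verifies by a grid search that the closed-form \(B_{1}\) indeed maximizes the Nash product in (9).

```python
import math

# Example 2 parameters: large-rollup rate R, small-rollup rate R*f, delay weight a,
# stand-alone blob price B, and joint-posting equilibrium blob price B_N.
R, a, B, f, B_N = 1.0, 1.0, 1.0, 0.25, 0.81

# Disagreement point: per-transaction costs when each rollup posts its own blobs.
Tr_L = math.sqrt(2 * B * a / R)          # ~1.41
Tr_S = math.sqrt(2 * B * a / (R * f))    # ~2.83

# Joint posting: optimal period, transactions per blob, and per-rollup delay costs.
t_J = math.sqrt(2 * B_N / (a * (1 + f) * R))
C_J = math.sqrt(2 * B_N * (1 + f) * R / a)
C_JL, C_JS = C_J / (1 + f), C_J * f / (1 + f)
d_JL, d_JS = a * R * t_J ** 2 / 2, a * R * f * t_J ** 2 / 2

# Closed-form Nash bargaining share of the large rollup, equation (10).
B1 = (B_N + d_JS - d_JL - C_JS * Tr_S + C_JL * Tr_L) / 2
Tr_JL = (B1 + d_JL) / C_JL               # per-transaction costs from (12)
Tr_JS = (B_N - B1 + d_JS) / C_JS
print(f"B1 ~ {B1:.3f}, Tr_JL ~ {Tr_JL:.2f}, Tr_JS ~ {Tr_JS:.2f}")
print(f"improvements: large ~ {1 - Tr_JL / Tr_L:.1%}, small ~ {1 - Tr_JS / Tr_S:.1%}")

# Grid search over the large rollup's share, maximizing the Nash product of (9).
def nash_objective(b1):
    return (b1 + d_JL - C_JL * Tr_L) * (B_N - b1 + d_JS - C_JS * Tr_S)

best = max((i * B_N / 10000 for i in range(10001)), key=nash_objective)
print(f"grid-search maximizer of (9) ~ {best:.3f}")   # matches the closed-form B1
```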
First, we show that the Nash bargaining outcome for the large rollup is less than the proportional payment to the blob price: **Proposition 6**.: \(B_{1}\leq B_{1}^{pr}\) _for any \(f<1\)._ Proof.: \(B_{1}<B^{pr}=\frac{B^{N}}{1+f}\) from (11) is equivalent to \[\sqrt{\frac{B^{N}B}{1+f}}(1-\sqrt{f})<\frac{B^{N}(1-f)}{1+f}, \tag{13}\] which is on its own equivalent to \(B<\frac{(1+\sqrt{f})^{2}}{1+f}B^{N}\). This condition is exactly the condition in the lemma 1, which finishes the proof of the proposition. Next, we obtain a lower bound on the Nash bargaining outcome. Namely, we show the following: **Proposition 7**.: \(B_{1}\geq B^{N}/2\) _for any \(f<1\)._ Proof.: The condition \(B_{1}\geq B^{N}/2\) is equivalent to \[\sqrt{B^{N}B}\frac{1-\sqrt{f}}{\sqrt{1+f}}\geq\frac{1-f}{2(1+f)}B^{N},\] which after simplification becomes: \[\frac{B^{N}}{B}\leq 4(1+f)(1+\sqrt{f})^{2}.\] The right-hand side of the above inequality is decreasing in \(f\) and achieves its minimum value \(2\) when \(f=1\). By proposition 2, we know that \(B\geq B^{N}\), which finishes the proof. That is, the large rollup never pays less than half of the new blob price, which is fair. Last, we show that the large rollup improvement in the Nash bargaining outcome is less than the small rollup improvement. **Proposition 8**.: \(I_{L,B_{1}}\leq I_{S,B_{1}}\) _for any \(f<1\)._ Proof.: The condition is equivalent to: \[B^{N}\frac{(1+\sqrt{f})f}{1+f}+\sqrt{B^{N}B}(1+\sqrt{f})(\sqrt{\frac{1}{1+f}} -\sqrt{\frac{f}{1+f}})\geq\frac{B^{N}}{1+f}(f\sqrt{f}-1)+B^{N}\sqrt{f}. \tag{14}\] Further simplification gives: \[\frac{2}{1+f}+\sqrt{\frac{B}{B^{N}}}\frac{1-f}{\sqrt{f+1}}-B^{N}\sqrt{f}\geq 0.\] Let \(p:=\sqrt{B/B^{N}}\). We show that the function \[h(f):=\frac{2}{1+f}+p\frac{1-f}{\sqrt{1+f}}-\sqrt{f}\] is decreasing in \(f\) on the interval \([0,1]\) for any \(p>0\). Note that \(dh(f)/df<0\) for any \(f>0\). Since \(h(1)=0\), we get the proof of the proposition. The intuition is simple: a small rollup has much more room to improve than a large rollup. Note that the result holds _unconditionally_ regarding the equilibrium prices \(B\) and \(B^{N}\). Since we consider only Pareto-efficient solutions, the result, in particular, implies that a Nash bargaining solution favors the small rollup compared to the "fair" blob cost-sharing rule, which improves both rollup per-transaction costs equally. The latter favors the large rollup to a high extent: the rollups pay almost equally even if \(f=0.25\). In this section, we assumed that the two rollups engaged in the joint blob posting were the ones that initially were posting blobs. This in particular gives a lower bound on the new equilibrium price, derived in the Lemma 1, and is used in the proofs of propositions 5 and 6. If one or both rollups were using the main market, then the propositions would be automatically satisfied, as the new equilibrium price goes up. ## 5 Extensions In this section, we discuss two natural extensions of the baseline model. In the first extension, the blob size is limited. Suppose there is the maximum blob size \(U\), so that for any rollup, \(Rt\leq U\), or equivalently, the posting time \(t\) is upper bounded by \(U/R\). Intuitively, adding an upper bound on the blob size causes the blob price in the equilibrium to increase, compared to the baseline model. The reason is that with the upper bound the rollups produce blobs even faster. 
To figure out rollup optimum posting time in the case of using the data market, we again solve the optimal time of posting using the first-order condition and compare the blob size with the upper bound \(U\). If the obtained size is larger than the upper bound, then we take the size to be \(U\) and adjust the posting time accordingly. If on the other hand, the optimal size is smaller than the upper bound, the rollup keeps the same optimal strategy. Calculating the main market posting time is done exactly as in the main model, therefore, deciding on the posting strategy given equilibrium price is trivial. Calculating the equilibrium price is also easy. Shared blob posting stays the same if the aggregate demand does not cross the threshold, and cost-sharing does not need modification as well. If one of the participating rollups reaches the threshold size itself and the other one does not, then it has more bargaining power, as it saves only on the delay cost. However, if both rollups reach the threshold themselves, cost-sharing becomes more intricate. Both rollups save only on the delay cost, and therefore, large rollup has lower bargaining power over a small rollup, compared to the baseline model. In the second extension, rollups have compression technology. It is natural to assume that compression technology is monotonic, that is, the compression factor is increasing in the data size. Then, the existence of compression, in general, favors bigger rollups as well, since they generate enough transaction data to compress efficiently faster than smaller rollups do. In the case of shared blob posting, this advantage should be compensated. The Nash bargaining outcome guarantees such compensation since the per-transaction cost in the disagreement point for the small rollup will be low because of compression, as the joint blob posting does not let the small rollup use compression to full extent. ## 6 Conclusions and Future Work We introduced a simple economic model to analyze EIP-4844 and its effects on rollups to decrease L1 data costs. In the proposed model, large enough rollups use a new market for posting their data to L1, while the rest continue using the original market. Moreover, we studied sharing blob posting and cost-sharing rules. First, we outlined conditions when sharing is profitable for both rollups and then described an axiomatic approach to the blob-posting cost-sharing rule. The are many interesting future research avenues: (a) optimal strategy in an oligopolistic market with relevant strategic interaction between rollups; (b) strategic consumers and endogenous main market equilibrium price - we can extend the above model with demand growth to study questions about the equilibrium structure of demand; (c) finally, using agent-based simulation, we can numerically test the theoretical results obtained above in an environment that closely represents the actual Ethereum market and proposed data market with dynamic transaction fee adjustment. For example, we can model discrete block time and compression technology of rollups. This would allow us to validate our results and provide insights into practical rollup posting policies/services as well as L1 fee market design.
2304.03805
Correcting Model Misspecification via Generative Adversarial Networks
Machine learning models are often misspecified in the likelihood, which leads to a lack of robustness in the predictions. In this paper, we introduce a framework for correcting likelihood misspecifications in several paradigm agnostic noisy prior models and test the model's ability to remove the misspecification. The "ABC-GAN" framework introduced is a novel generative modeling paradigm, which combines Generative Adversarial Networks (GANs) and Approximate Bayesian Computation (ABC). This new paradigm assists the existing GANs by incorporating any subjective knowledge available about the modeling process via ABC, as a regularizer, resulting in a partially interpretable model that operates well under low data regimes. At the same time, unlike any Bayesian analysis, the explicit knowledge need not be perfect, since the generator in the GAN can be made arbitrarily complex. ABC-GAN eliminates the need for summary statistics and distance metrics as the discriminator implicitly learns them and enables simultaneous specification of multiple generative models. The model misspecification is simulated in our experiments by introducing noise of various biases and variances. The correction term is learnt via the ABC-GAN, with skip connections, referred to as skipGAN. The strength of the skip connection indicates the amount of correction needed or how misspecified the prior model is. Based on a simple experimental setup, we show that the ABC-GAN models not only correct the misspecification of the prior, but also perform as well as or better than the respective priors under noisier conditions. In this proposal, we show that ABC-GANs get the best of both worlds.
Pronoma Banerjee, Manasi V Gude, Rajvi J Sampat, Sharvari M Hedaoo, Soma Dhavala, Snehanshu Saha
2023-04-07T18:20:38Z
http://arxiv.org/abs/2304.03805v1
# Correcting Model Misspecification via Generative Adversarial Networks ###### Abstract Machine learning models are often misspecified in the likelihood, which leads to a lack of robustness in the predictions. In this paper, we introduce a framework for correcting likelihood misspecifications in several paradigm agnostic noisy prior models and test the model's ability to remove the misspecification. The "ABC-GAN" framework introduced is a novel generative modeling paradigm, which combines Generative Adversarial Networks (GANs) and Approximate Bayesian Computation (ABC). This new paradigm assists the existing GANs by incorporating any subjective knowledge available about the modeling process via ABC, as a regularizer, resulting in a partially interpretable model that operates well under low data regimes. At the same time, unlike any Bayesian analysis, the explicit knowledge need not be perfect, since the generator in the GAN can be made arbitrarily complex. ABC-GAN eliminates the need for summary statistics and distance metrics as the discriminator implicitly learns them, and enables simultaneous specification of multiple generative models. The model misspecification is simulated in our experiments by introducing noise of various biases and variances. The correction term is learnt via the ABC-GAN, with skip connections, referred to as skipGAN. The strength of the skip connection indicates the amount of correction needed or how misspecified the prior model is. Based on a simple experimental setup, we show that the ABC-GAN models not only correct the misspecification of the prior, but also perform as well as or better than the respective priors under noisier conditions. In this proposal, we show that ABC-GANs get the best of both worlds. Likelihood-free inference, Deep Neural Regression, Approximate Bayesian Computation, Generative Adversarial Networks ## 1 Introduction A model is a probing device used to explain a phenomenon through data. In most cases, a true model for this phenomenon exists but cannot be specified at all [16]. This setting indicates that all plausible models, though useful, can be deemed as misspecified [5]. Can we use a plausible explainable model, while correcting for its misspecification implicitly? Unlike the prescriptive generative modeling dogma, predominant in the statistical community, the implicit generative modeling view taken by the machine learning community lays emphasis on predictive ability rather than on explainability [6]. Implicit Deep Generative Models have witnessed tremendous success in domains such as Computer Vision. However, their opaqueness and lack of explainability has made the injection of subjective knowledge into them a highly specialized and experimental task. In this work, our proposal is to reconcile implicit and explicit generative models into a single framework in the misspecified setting. We do that by taking GANs and ABC as representative of the two fields respectively. The introduction of GANs in 2014 by Goodfellow et al. [10] marked a very decisive point in the field of generative deep learning. Since then, deep learning based generative models like GANs and Variational Autoencoders have been extensively worked on, with the main intention of addressing issues with likelihood estimation based methods and related strategies. The crux of these issues lies in complex or intractable computations that arise during maximum likelihood estimation or evaluation of the likelihood function. 
A GAN uses two adversarial modules - the Generator and the Discriminator, essentially playing a zero sum min-max game with each other, with the competition between them driving both modules to improve and reach a stage where the Generator is able to produce counterfeit data which is indistinguishable from the real data. Although GANs have been shown to address the issues mentioned above well by leveraging the benefits of using piece-wise linear units, there are some inherent issues with the GAN paradigm. These include the inherent difficulty in achieving convergence, stability issues during training and the necessity of large amounts of data. An active area of research in this direction is to apply GANs in different settings [14; 21] and also to improve stability [12]. Another older, but equally interesting generative paradigm is Approximate Bayesian Computation (ABC) [4][3][11][7][17]. ABC finds its roots in Bayesian inference, and aims to bypass likelihood evaluation by approximating the posterior distribution of model parameters. This method is extremely useful in cases when the likelihood estimation is computationally intensive, or even intractable. The likelihood-free aspect of this paradigm allows the data generative model to be as complex as it can get. However, there are some drawbacks, such as low acceptance rates at higher dimensions, the difference between the prior distribution from the posterior distribution, identification of lower dimensional statistics to summarize and compare the datasets and the model selection problem. **ABC and GAN complementarity:** Looking at these two paradigms, it becomes clear that both ABC and GANs try to solve the same problem - learning the data generation mechanism by capturing the distribution of the data, but they approach the problem in different ways. By studying these two paradigms, their similarities and differences become apparent. With respect to the data generation model, ABC uses a user-specified model, whereas the Generator in a GAN is non-parametric. Looking at the discriminative model for both, ABC uses an explicit, user-specified discriminator which often uses Euclidean distance or some other distance measure on a set of summary statistics to measure the difference between real and simulated datasets. For GANs, the discriminative model is specified through a function like KL divergence or JS divergence as the Discriminator's objective function. Another key difference here is that the feedback from the Discriminator in a GAN flows back to the Generator, thereby making them connected, while in ABC, these two modules are disconnected. Further, in ABC, model selection is followed by model inference, but in GANs, since the Generator and Discriminator are connected, this occurs implicitly during the learning process. We now see that ABC and GAN appear to be at two ends of the data generation spectrum, with each having its own advantages and disadvantages. ## 2 Motivation and Contributions As it is clear from the previous discussion, both GANs and ABCs are likelihood-free methods. But there are certain limitations to both of them. ABC is a Bayesian paradigm. Like in any Bayesian modeling approach, subjective knowledge about the data generating model is expressed both in terms of the likelihood (explicit or implicit) and the prior. One would want to exercise more freedom in the choice of priors, however. Majority of the model selection criteria focus on the priors, keeping the likelihood fixed. 
However, misspecification in the likelihood can lead to spurious errors and make the inference invalid. Some model selection criteria, such as the Deviance Information Criterion (DIC), do not work well in such cases. It is generally a hard problem to tackle computationally if one were to obtain marginal evidences. So how do we address this problem? In the context of ABC, the choice of the summary statistic and of the distance metric used to compare the simulated datasets with the real dataset determines the efficiency of the approximation. While it seems advantageous to rely on sampling, it leaves many of the issues raised above to experimentation and to the modeler. Model selection and sensitivity analysis have to be performed regardless. Can we get rid of making choices about summary statistics, distance metrics and model selection in the context of ABC? Further, can we deal with model misspecification in the likelihood, the prior, or both in the Bayesian context?

GANs, in particular the adversarial min-max formulation, can address these questions. However, GANs require relatively large amounts of data, owing to their non-parametric nature, to train the Generator and Discriminator networks. It is also known that training GANs can be unstable [15]. A consequence of deep networks, of which GANs are a special case, is that they are opaque from the standpoint of interpretability [1]. Further, incorporation of any available prior knowledge into GANs is limited to modifying the architectures or loss functions, or a combination of them. In part, this may be due to the long-held misconception that deep learning eliminates the need for good feature engineering. However, good feature engineering gives way to architecture design. Can we incorporate prior knowledge into GANs? Can GANs work in low-data regimes, where prior knowledge could be both available and valuable?

We argue that ABC can augment the Generator network of a GAN. The amount of correction needed can be learnt from the data itself, without making hard choices _a priori_. We show the effectiveness of our work through several ABC-GAN models. We consider cGAN [18] and TabNet [2] as baseline GANs with some architectural modifications.

1. mGAN: the GAN Generator takes as inputs the features and the simulated data from ABC.
2. skipGAN: the GAN Generator takes as inputs the features and the simulated data from ABC, and the Discriminator also takes a weighted combination of the ABC Generator and GAN Generator outputs.
3. Tab-mGAN: mGAN with TabNet as the Generator of the GAN.
4. Tab-skipGAN: skipGAN with TabNet as the Generator of the GAN.

They are described in detail later. We consider several standard, interpretable models such as Linear Models, Gradient Boosted Trees (GBT) and a combination of Deep Learning and Gradient Boosted Trees (TabNet) as ABC models under various misspecification settings. Extensive experimentation (see Sections 4 and 5) helps us answer the questions posed above and shows the novelty of our work.

## 3 Our Approach

Some notations and settings are introduced to make the exposition clear. Let \(\mathcal{D}_{\tau}=\{y_{i}^{\tau},x_{i}^{\tau}\}_{i=1}^{n}\) be the observed dataset, a set of \(n\) i.i.d. tuples \((y_{i}^{\tau},x_{i}^{\tau})\), where \(x_{i}^{\tau}\in\mathbb{R}^{p}\) is a \(p\)-dimensional column feature vector and \(y_{i}^{\tau}\in\mathbb{R}\) is the response variable. Assume that \(G_{\tau}\) is the true generative model, typically unknown, that produces \(y_{i}^{\tau}\sim G_{\tau}(x_{i}^{\tau})\).
Define the datasets \(\mathcal{D}_{\pi}\equiv\{y_{i}^{\pi},x_{i}^{\tau}\}_{i=1}^{n}\) and \(\mathcal{D}_{\gamma}\equiv\{y_{i}^{\gamma},x_{i}^{\gamma}\}_{i=1}^{n}\), that can be sampled by ABC and GAN, respectively. Here, by convention, \(y_{i}^{\gamma}\sim G_{\pi}(x_{i}^{\tau}),x_{i}^{\tau}=x_{i}^{\tau}\) for ABC and similarly \(y_{i}^{\gamma}\sim G_{\gamma}(x_{i}^{\gamma})\). Further, assume that \(d(.,.)\) is some distance or loss such as Mean Absolute Error (MAE) that measures discrepancy between two datasets. Note that \(G_{\pi}\) is typically a sampler and \(G_{\gamma}\) is a deterministic transformation. ### ABC-GAN Framework Suppose that we know the generative model \(G_{\pi}\), but it is misspecified. In order to rectify this misspecification, we append it to a standard GAN generator \(G_{\gamma}\) network, i.e., \(x_{i}^{\gamma}=[y_{i}^{\pi},x_{i}^{\tau}]\). \(G_{\gamma}\) now transforms \(G_{\pi}\) samples so as to make resulting dataset look more realistic. Now, by design, \(G_{\gamma}\) can be quite shallow. The hope, rather, intent is that, the "sampler" is already pretty good, and lot of domain knowledge is encoded in it. Therefore, not much needs to be done by the \(G_{\gamma}\), except doing a few corrections. The exact corrections that are to be done are taught by the Discriminator of the GAN \(D_{\gamma}\). Under ideal conditions, when perfect knowledge about the Sampler \(G_{\pi}\) (the pre-generator, or the generative model in the context of ABC) is known, \(G_{\gamma}\) does an identity transformation. Under these conditions, the GAN learning should not be a concern (stability-wise), as the problem is already regularized. From an architecture perspective, \(G_{\gamma}\) can have large capacity but is regularized to produce an identity transformation. Hence, the primary objectives are to investigate two key hypotheses 1. when \(G_{\pi}\) is perfect, i.e, \(G_{\pi}=G_{\tau}\), we expect \(G_{\gamma}=I(.)\), an identity map and \(d(\mathcal{D}_{\tau},\mathcal{D}_{\pi})=0\). 2. when \(G_{\pi}\) is imperfect, \(G_{\gamma}\neq I(.)\) and \(d(\mathcal{D}_{\tau},\mathcal{D}_{\pi})>0\). More than that, we expect \(d(\mathcal{D}_{\tau},\mathcal{D}_{\pi})>d(\mathcal{D}_{\tau},\mathcal{D}_{ \gamma})\). We consider two broad families of architectures to test the hypothesis. _mGAN_: We depict the functional architecture of baseline overall model _mGAN_, shown in Fig. 1. The GAN is a vanilla cGAN except that one of the inputs is the ABC Generator's output, i.e., \(x_{i}^{\gamma}=[y_{i}^{\pi},x_{i}^{\tau}]\) and \(\mathcal{D}_{\gamma}\equiv\{y_{i}^{\gamma},x_{i}^{\gamma}\}_{i=1}^{n}\) will be passed to the Discriminator \(D_{\gamma}\). _skipGAN_: Another variant that we experimented with is the _skipGAN_. We conjecture that vanilla mGAN might have information bottleneck. When the prior model is very good, both \(G_{\gamma}\) and \(D_{\gamma}\) can be very shallow. If not explicitly regularized, training mGAN could be hard. We can mitigate this problem by supplying both \(y_{i}^{\pi}\) and \(y_{i}^{\gamma}\) to \(D_{\gamma}\). Specifically, we choose weighed average so that the weights can be seen as model averaging, and can also be interpreted as amount of expressiveness borrowed by mGAN. That is, \(D_{\gamma}\) gets \(wy_{i}^{\pi}+(1-w)y_{i}^{\gamma}\). The idea of using skip connection is to try to achieve performance improvement over mGAN. At the least, it should be able to ensure that the mGAN does at least as well as the baseline \(G_{\pi}\). 
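To make the mGAN/skipGAN data flow concrete, the following is a minimal PyTorch-style sketch, not the authors' implementation: the layer widths follow the FFN Generator/Discriminator description given later in Section 4.2, while the sigmoid output, the parameterization of the learnable skip weight \(w\), and names such as `ABCGAN` and `mlp` are our own assumptions.

```python
import torch
import torch.nn as nn

def mlp(sizes, out_act=None):
    """Small helper: fully connected stack with ReLU between hidden layers."""
    layers = []
    for a, b in zip(sizes[:-1], sizes[1:]):
        layers += [nn.Linear(a, b), nn.ReLU()]
    layers = layers[:-1]                         # drop the trailing ReLU
    if out_act is not None:
        layers.append(out_act)
    return nn.Sequential(*layers)

class ABCGAN(nn.Module):
    """mGAN/skipGAN sketch: G_gamma corrects the ABC draw y_pi given features x,
    D_gamma scores (response, features) pairs, and a learnable weight w mixes
    y_pi and y_gamma before they reach the discriminator (skipGAN variant)."""
    def __init__(self, n_features, use_skip=True):
        super().__init__()
        # generator: 5 hidden layers of 50 units (FFN generator of Sec. 4.2)
        self.gen = mlp([n_features + 1] + [50] * 5 + [1])
        # discriminator: hidden layers of 25 and 50 units, sigmoid output (assumed)
        self.disc = mlp([n_features + 1, 25, 50, 1], out_act=nn.Sigmoid())
        self.use_skip = use_skip
        self.w_logit = nn.Parameter(torch.zeros(1))   # w = sigmoid(w_logit)

    def forward(self, y_pi, x):
        y_gamma = self.gen(torch.cat([y_pi, x], dim=1))
        if self.use_skip:
            w = torch.sigmoid(self.w_logit)
            y_out = w * y_pi + (1.0 - w) * y_gamma    # weighted mix seen by D_gamma
        else:                                          # plain mGAN
            y_out = y_gamma
        return y_out, self.disc(torch.cat([y_out, x], dim=1))

# toy usage
x, y_pi = torch.randn(8, 5), torch.randn(8, 1)
y_out, p_fake = ABCGAN(n_features=5)(y_pi, x)
```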
### Objective Function

Consider the following hybrid generative model:

\[p_{i}=D_{\gamma}\big(G_{\gamma}([y_{i}^{\pi},x_{i}^{\tau}])\big),\quad\text{with }y_{i}^{\pi}\sim G_{\pi}(x_{i}^{\tau}).\]

Then the likelihood can be written as \(L(y)=\prod_{i=1}^{n}p_{i}\). In fact, it is striking to see this likelihood as an empirical likelihood [20] without the normalization constraint \(\sum_{i}p_{i}=1\). But it is not obvious how to estimate \(D_{\gamma}\) and \(G_{\gamma}\), if not for the adversarial min-max optimization used in GANs. In that sense, our contribution lies in using the adversarial optimization to maximize an empirical likelihood that has absorbed a non-parametric correction term, implemented by deep neural networks, on top of the prior models.

## 4 Experimental Setup

Several experiments were conducted to test the impact of the ABC-GAN models in correcting misspecified prior models \(G_{\pi}\). The purpose of these experiments is twofold: 1) to assess the benefits of including a prior for a GAN and 2) to verify that ABC-GAN models successfully correct misspecified models. We consider three datasets (one simulated and two real), three prior generative models, two basic ABC-GAN architectures, and two GAN Generator architectures, leading to a total of 36 experiments. The misspecification of the prior generative model has bias and variance terms with three levels each. Each of the 36 experiments has 9 runs, each simulating a different amount of model misspecification, taking the total number of experiments to 324. The details are provided below.

Figure 1: A baseline mGAN model.

Figure 2: Proposed skipGAN model.

### \(G_{\pi}\): Prior Generative Models

In particular, we consider Linear Models, Gradient Boosted Trees, and Transformers for \(G_{\pi}\), and Feedforward Networks and Transformers for \(G_{\gamma}\). We also consider different ground-truth generative models (\(G_{\tau}\)). Under perfect information \(G_{\pi}=G_{\tau}\). For simulated datasets, \(G_{\tau}\) is known. Imperfect information can creep in from a mis-specified sampling distribution or prior, or both. We simulate misspecification by adding Gaussian noise to an assumed \(G_{\tau}\). To keep the design space smaller and simpler, we consider mis-specified priors, keeping the likelihood of the prior generative model fixed. We consider three families of models - Linear Models, Gradient Boosted Trees (GBTs), and Transformers - as explicit generative models.

1. Linear Models: Standard Linear Regression models are implemented in statsmodels [22], a Python module that provides classes and functions for the estimation of many different statistical models, as well as for conducting statistical tests and statistical data exploration. We use the linear ordinary least squares model as our baseline.
2. GBTs: CatBoost [8] is an algorithm for gradient boosting on decision trees. It is developed by Yandex researchers and engineers, and is used for search, recommendation systems, personal assistants, self-driving cars, weather prediction and many other tasks. It is an industry standard and an ambitious benchmark to beat. We use the _catboost_ implementation.
3. Transformers: TabNet, a Transformer-based model for tabular data, was introduced in [2]. This model inputs raw tabular data without any preprocessing and is trained using gradient descent-based optimisation. TabNet uses sequential attention to choose which features to reason from at each decision step, enabling interpretability. Feature selection is instance-wise, e.g.
it can be different for each row of the training dataset. TabNet employs a single deep learning architecture for feature selection and reasoning, this is known as soft feature selection. These make the model enable two kinds of interpretability: local interpretability that visualizes the importance of features and how they are combined for a single row, and global interpretability which quantifies the contribution of each feature to the trained model across the dataset. We use TabNet as baseline by calling the TabNetRegressor class under pytorch-tabnet module. Henceforth, all references to Stats Models, CatBoost, and TabNet, correspond to Linear Models, GBTs and Transformers, respectively, where applicable. In this other extreme case, we pass covariates (\(x_{i}^{\tau}\)) plus random noise (\(e_{i}\)) to GAN, i.e., \(x_{i}^{\gamma}=[x_{i}^{\tau},e_{i}]\) in which case, the ABC-GAN acts more like a conditional-GAN [19]. ### \(G_{\gamma}\): GAN Generators We consider two architectures: 1. _Feed Forward Networks (FFN):_ The FFN Generator consists of 5 hidden layers of 50 nodes each and ReLU activation. The Discriminator consists of 2 hidden layers of 25 and 50 nodes respectively followed by ReLU activation after every layer. 2. _Transformers:_ We consider the same TabNet Regressor used in \(G_{\pi}\) discussed earlier- the Transformer-based Generator. ### Model Misspecification The following noise model is considered for real datasets: \[y_{i}^{\pi}\sim N(y_{i}^{\tau}+\mu,\sigma^{2})\] For the Linear Model, we consider a full Bayesian model, of the following specification: \[y_{i}^{\pi}\sim N(<x_{i}^{\tau},\beta>,1)\] where \(\beta\sim N(\beta^{\tau}+\mu,\sigma^{2})\) with \(\mu\in\{0,0.01,0.1\}\) and \(\sigma^{2}\in\{0.01,0.1,1\}\) and \(y_{i}^{\tau}\) is the output of the prior model \(G^{\pi}\) and \(<,>\) is the standard dot product. ### Datasets We evaluate our models on the following Synthetic and real datasets: _1. Friedman3_[9] consists of 4 independent features \(z=[z_{1},z_{2},z_{3},z_{4}]\) as input, uniformly distributed on the intervals: \(0\leq z_{1}\leq 100\), \(40\pi\leq z_{2}\leq 560\pi\), \(0\leq z_{3}\leq 1\), \(1\leq z_{4}\leq 11\). The generative model for \(y\) is is nonlinear model \(y=\arctan((z_{1}z_{2}-1/(z_{1}z_{3}))/z_{1})\). A standard normal noise is added for every sample. The dataset has 100 samples. 2. _Boston:_ The Boston Housing Dataset [13] is derived from information collected by the U.S. Census Service concerning housing in the area of Boston MA. The dataset has 503 samples and 13 columns/features. _3. Energy efficiency_[25] contains eight attributes and two responses (or outcomes. The dataset has 768 samples. The aim is to use the eight features to predict each of the two responses. For our experiments, we have restricted only to the first response with all 8 features. ### Training The cGAN, mGAN, skipGAN and their TabNet versions are trained for 1000 epochs with BCE loss function and a batch size of 32. The dataset is split into training and validation sets (80-20) and the same validation set is used to validate the performance of all models. The learning rate used for Friedman 3 dataset is 0.001, and is 0.01 for all other datasets. All datasets are run using 1.6 GHz Dual-Core Intel Core i5 CPUs. ### Metrics We use MAE to evaluate the performance of the models. The experiments were run 10 times and the average of the MAE over the 10 runs is presented. 
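As a rough illustration of how the misspecification of Section 4.3 can be injected for the real datasets (a sketch under the stated noise model \(y_{i}^{\pi}\sim N(y_{i}^{\tau}+\mu,\sigma^{2})\); it does not cover the full-Bayesian linear-model variant, and the array names are assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

def misspecified_prior_draws(y_true, mu, sigma2):
    """Simulate y_i^pi ~ N(y_i^tau + mu, sigma^2) for one misspecification level."""
    return y_true + mu + np.sqrt(sigma2) * rng.standard_normal(y_true.shape)

# bias / variance levels of Sec. 4.3 for the real datasets
biases = [0.0, 0.01, 0.1]
variances = [0.01, 0.1, 1.0]

y_true = rng.standard_normal(100)      # stand-in for the observed responses
for mu in biases:
    for sigma2 in variances:
        y_pi = misspecified_prior_draws(y_true, mu, sigma2)
        mae = np.mean(np.abs(y_pi - y_true))
        print(f"bias={mu}, variance={sigma2}: MAE of prior draws = {mae:.3f}")
```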
## 5 Results In order to test the hypothesis that, ABC-GAN models perform no worse than the prior models, we take Boston dataset, and synthetically inject model misspecification, as described earlier, and report MAE of \(G_{\pi}\) (sampler) and \(G_{\gamma}\) (a deterministic transformation). In Fig. 3, we show the boxplots of the MAE for each of the models, for each of the prior models. As can be seen, the proposed ABC-GAN models outperform the prior models in almost all cases - different priors, different ABC-GAN models, and different levels of model misspectications. Even a simpler mGAN successfully corrects the misspecified baselines (Linear Models, Boosted trees and TabNet) and results in lower MAEs than the prior model. Next, we investigate, how these models perform at specific levels of model misspecification by prior, model architecture, and dataset. In Tables 1-9, each row corresponds to a level of model misspecification as indicated by (Variance, Bias) columns, and rows corresponding to columns - Prior Model, mGAN, Tab-mGAN, skipGAN, Tab-skipGAN - indicate the MAE of the models indicated by the column header. In the case of skip variants, the skip weights are also reported. Tables 1, 4, 7 correspond to Friedman3 dataset, 2, 5, 8 to Boston dataset and 3, 6, 8 to Energy dataset. For tables 1, 2, 3, we use Linear Models as the prior, GBT in Tables 4, 5, 6 and TabNet in Tables 7, 8, 9. By looking at all the Tables I-IX collectively, it is clear that ABC-GAN models are able to detect the extent of misspecification, as the reduction in the MAE, relative to the prior model, is more pronounced for larger misspecifications. Hence we see that as the misspecification of the pre-generator increases, the model relies more and more on the GAN generator to do the correction. Overall, we notice that our model majorly outperforms SOTA models such as C-GAN, C-GAN with TabNet generator, TabNet regressor and CatBoost. A skip connection has been added in some models, as explained earlier, to take a weighted average of the prior model and the GAN model. The weight given to the GAN in the skip connection tends to increase with increase in variance and bias, and is ideally supposed to be close to 1 for the highest variance and bias values and close to 0 for lowest variance and bias values. In most cases variance seems to be playing a greater role in the skip connection weight than the bias. This indicates that as the model misspecification increases, more weightage is given to the GAN skip node to help cofrect this misspecification. Hence, as the complexity of the prior increases (such as when we use Transformers as priors), mGAN is sufficient to correct the misspecification of the models. However, for models with lower complexity (such as linear models), skipGAN performs better in correcting the model misspecification. From tables 4 and 6, it is evident that as the misspecification reduces, the skipGAN weight reduces and drops to almost 0 (it becomes 0 for Tab-skipGAN for variance 0.001 and bias 0). This effectively proves our original claim that when \(G_{\pi}\) is almost perfect, \(G_{\gamma}\) is almost an identity transformation and \(d(D_{\tau},D_{\pi})\approx 0\). As the noise increases, the dependence on the GAN generator increases, resulting in higher weights in the skipGAN. Using TabNet network for the generator of the GAN helps in stabilising the model. mGAN, Tab-mGAN and Tab-skipGAN perform consistently well with no high MAE outlier. 
While Tab-mGAN and Tab-skipGAN may not consistently outperform their vanilla counterparts (mGAN and skipGAN), adding the TabNet Network ensures consistent results across multiple iterations. We also wanted to explore the effect of different sizes. We consider the Boston dataset again, and took a subset of the data to see if, as the dataset size increases, ABC-GANs continue to do well. As expected, the performance of the all the models improves with increase in sample size (as visible from Fig. 4 to Fig. 8). However skipGAN destabilizes for larger datasets (see tables 5, 6 and 8), thus resulting in large MAE values for a few experimental set-ups. \begin{table} \begin{tabular}{|c|c|c|c|c|c|c|c|c|} \hline Variance & Bias & Prior model & mGAN & Tab-mGAN & skipGAN & Weights skipGAN & Tab-skipGAN & \begin{tabular}{c} Weights \\ Tab-skipGAN \\ \end{tabular} \\ \hline 1 & 1 & 1.1591 & 0.1462 & **0.0902** & 0.1149 & 0.9594 & 0.0983 & 0.9882 \\ 1 & 0.1 & 0.7870 & 0.0915 & 0.0981 & 0.1154 & 0.9798 & **0.0733** & 0.9815 \\ 1 & 0.01 & 0.7771 & **0.0848** & 0.1636 & 0.1339 & 0.9432 & 0.1112 & 0.9927 \\ 1 & 0 & 0.8482 & 0.0924 & **0.0577** & 0.1334 & 0.9834 & 0.1549 & 0.9950 \\ 0.1 & 1 & 1.0073 & **0.0745** & 0.0762 & 0.0937 & 0.2482 & 0.0776 & 0.3692 \\ 0.1 & 0.1 & 0.1178 & **0.0783** & 0.1382 & 0.1077 & 0.0671 & 0.0906 & 0.1422 \\ 0.1 & 0.01 & 0.0787 & **0.0656** & 0.0750 & 0.1138 & 0.0979 & 0.0964 & 0.2580 \\ 0.1 & 0 & 0.0801 & **0.0637** & 0.0650 & 0.0830 & 0.0028 & 0.0823 & 0.0232 \\ 0.01 & 1 & 0.9994 & 0.0662 & 0.0762 & 0.1157 & 0.2280 & **0.0522** & 0.2164 \\ 0.01 & 0.1 & 0.1004 & 0.1231 & **0.0482** & 0.0489 & 0.0751 & 0.0698 & 0.0782 \\ 0.01 & 0.01 & 0.0248 & 0.0265 & 0.0436 & 1048.8400 & 0.0184 & **0.0233** & 0.0000 \\ 0.01 & 0 & **0.0249** & 0.1845 & 0.0650 & 0.0287 & 0.0333 & 0.0284 & 0.0034 \\ \hline \end{tabular} \end{table} Table 6: Results for Energy Efficiency dataset for 1st target with Gradient Boosted Trees (GBT) prior. The MAEs of cGAN, cGAN with TabNet generator and baseline GBT (Catboost) are 0.0849, 0.1564 and 0.0201 respectively. \begin{table} \begin{tabular}{|c|c|c|c|c|c|c|c|c|} \hline Variance & Bias & Prior model & mGAN & Tab-mGAN & skipGAN & Weights skipGAN & Tab-skipGAN & \begin{tabular}{c} Weights \\ Tab-skipGAN \\ \end{tabular} \\ \hline 1 & 1 & 1.1928 & 0.3492 & **0.2812** & 0.2888 & 0.9645 & 0.2834 & 0.8963 \\ 1 & 0.1 & 0.8688 & 0.2892 & 0.2765 & 0.2949 & 0.9308 & **0.2731** & 0.8692 \\ 1 & 0.01 & 0.8406 & 0.3559 & 0.2787 & 0.2493 & 0.9953 & **0.2732** & 0.8812 \\ 1 & 0 & 0.8120 & 0.2821 & **0.2447** & 0.2704 & 0.9742 & 0.2696 & 0.93485 \\ 0.1 & 1 & 1.0098 & 0.2704 & **0.2068** & 0.2662 & 0.1860 & 0.2420 & 0.2964 \\ 0.1 & 0.1 & 0.2419 & 0.3634 & **0.2189** & 0.2596 & 0.0382 & 0.2640 & 0.0917 \\ 0.1 & 0.01 & 0.2256 & 0.3437 & **0.2206** & 0.2306 & 0.0313 & 0.2271 & 0.0021 \\ 0.1 & 0 & 0.2700 & 0.4463 & **0.2646** & 0.3035 & 0.0297 & 0.2666 & 0.0209 \\ 0.01 & 1 & 1.0205 & 0.3271 & **0.2327** & 0.3547 & 0.1945 & 0.3541 & 0.2191 \\ 0.01 & 0.1 & 0.2562 & 0.3181 & 0.2551 & **0.2422** & 0.0317 & 0.2493 & 0.0599 \\ 0.01 & 0.01 & 0.2424 & 0.3075 & 0.2640 & 21.9047 & 0.1488 & **0.2166** & 0.0550 \\ 0.01 & 0 & 0.2198 & 0.2683 & 0.2287 & 0.2291 & 0.0058 & **0.2179** & 0.0361 \\ \hline \end{tabular} \end{table} Table 5: Results for Boston dataset with Gradient Boosted Trees (GBT) prior. The MAEs of cGAN, cGAN with TabNet generator and baseline GBT (Catboost) are 0.2838, 0.2729 and 0.2049 respectively. 
\begin{table} \begin{tabular}{|c|c|c|c|c|c|c|c|c|} \hline Variance & Bias & Prior model & mGAN & Tab-mGAN & skipGAN & Weights skipGAN & Tab-skipGAN & \begin{tabular}{c} Weights \\ Tab-skipGAN \\ \end{tabular} \\ \hline 1 & 1 & 1.1267 & **0.4191** & 0.5205 & 0.4517 & 0.1495 & 0.5559 & 0.5722 \\ 1 & 0.1 & 1.0068 & **0.3514** & 0.4891 & 0.4270 & 0.1977 & 0.5288 & 0.7900 \\ 1 & 0.01 & 0.8485 & **0.3857** & 0.5234 & 0.4168 & 0.1942 & 0.5164 & 0.6472 \\ 1 & 0 & 0.9358 & **0.4052** & 0.4305 & 0.4102 & 0.2730 & 0.4468 & 0.7080 \\ 0.1 & 1 & 1.0669 & 0.4593 & 0.6085 & **0.4591** & 0.1444 & 0.5137 & 0.1770 \\ 0.1 & 0.1 & 0.3809 & 0.4044 & 0.4932 & 0.4477 & 0.1356 & **0.3329** & 0.0520 \\ 0.1 & 0.01 & 0.5561 & **0.4130** & 0.5409 & 0.4294 & 0.2035 & 0.5938 & 0.0309 \\ 0.1 & 0 & 0.4094 & **0.3674** & 0.4049 & 0.3981 & 0.0000 & 0.3828 & 0.0808 \\ 0.01 & 1 & 1.0446 & 0.3951 & 0.4414 & **0.3740** & 0.3830 & 0.4562 & 0.1855 \\ 0.01 & 0.1 & 0.4847 & 0.4940 & 0.5045 & **0.4416** & 0.1651 & 0.5273 & 0.1612 \\ 0.01 & 0.01 & 0.4274 & **0.4153** & 0.5523 & 0.5022 & 0.2027 & 0.5107 & 0.1883 \\ 0.01 & 0 & 0.4536 & **0.4328** & 0.4709 & 0.4409 & 0.0000 & 0.4727 & 0.1730 \\ \hline \end{tabular} \end{table} Table 7: Results for Friedman3 dataset with Trasformer network prior. The MAEs of cGAN, cGAN with TabNet generator and baseline transformer (TabNet) model are 0.4477, 0.49724 and 0.5529 respectively. \begin{table} \begin{tabular}{|c|c|c|c|c|c|c|c|c|} \hline Variance & Bias & Prior model & mGAN & Tab-mGAN & skipGAN & Weights skipGAN & Tab-skipGAN & \begin{tabular}{c} Weights \\ Tab-skipGAN \\ \end{tabular} \\ \hline 1 & 1 & 1.2568 & 0.3098 & **0.2527** & 0.2791 & 0.9812 & 0.2560 & 0.9347 \\ 1 & 0.1 & 0.7969 & 0.2417 & 0.2544 & 0.2767 & 0.9747 & **0.2336** & 0.9272 \\ 1 & 0.01 & 0.7705 & 0.2943 & 0.2891 & 0.2719 & 0.9670 & **0.2385** & 0.9724 \\ 1 & 0 & 1.0423 & 0.2798 & **0.2754** & 0.3089 & 0.9857 & 0.2915 & 0.8586 \\ 0.1 & 1 & 1.0238 & 0.3667 & **0.1848** & 0.4119 & 0.2430 & 0.2313 & 0.3206 \\ 0.1 & 0.1 & 0.2796 & 0.2951 & 0.2685 & **0.2400** & 0.1145 & 0.2506 & 0.1688 \\ 0.1 & 0.01 & 0.2864 & 0.3200 & 0.2885 & 0.3356 & 0.1382 & **0.2665** & 0.2107 \\ 0.1 & 0 & **0.2318** & 0.3291 & 0.2455 & 288.3585 & 0.0923 & 0.2643 & 0.2122 \\ 0.01 & 1 & 1.0746 & 0.4723 & 0.3016 & 0.2897 & 0.2173 & **0.2697** & 0.2799 \\ 0.01 & 0.1 & 0.2694 & 0.2977 & **0.2459** & 527.8482 & 0.0105 & 0.3008 & 0.1342 \\ 0.01 & 0.01 & 0.2240 & 0.2725 & **0.2142** & 0.2820 & 0.0735 & 0.2957 & 0.2292 \\ 0.01 & 0 & 0.2089 & 0.2849 & **0.1980** & 2303.4068 & 0.1503 & 0.2628 & 0.3504 \\ \hline \end{tabular} \end{table} Table 8: Results for Boston dataset with Transformer network prior. The MAEs of cGAN, cGAN with TabNet generator and baseline transformer (TabNet) model are 0.2838, 0.2729 and 0.2515 respectively. 
\begin{table} \begin{tabular}{|c|c|c|c|c|c|c|c|c|} \hline Variance & Bias & Prior model & mGAN & Tab-mGAN & skipGAN & Weights skipGAN & Tab-skipGAN & \begin{tabular}{c} Weights \\ Tab-skipGAN \\ \end{tabular} \\ \hline 1 & 1 & 1.2390 & 0.0618 & 0.0903 & **0.0499** & 0.9200 & 0.1203 & 0.9702 \\ 1 & 0.1 & 0.7765 & 0.1274 & 0.1342 & **0.0597** & 0.9715 & 0.0693 & 0.9976 \\ 1 & 0.01 & 0.7958 & 0.1380 & 0.0778 & 0.1183 & 0.9353 & **0.0708** & 1.0000 \\ 1 & 0 & 0.7588 & **0.0548** & 0.1465 & 0.0604 & 0.9753 & 0.0970 & 0.9916 \\ 0.1 & 1 & 1.0056 & 0.3290 & 0.1076 & **0.0551** & 0.3679 & 0.0640 & 0.4676 \\ 0.1 & 0.1 & 0.1517 & **0.0489** & 0.0511 & 0.1028 & 0.0894 & 0.0945 & 0.4901 \\ 0.1 & 0.01 & 0.1052 & 0.0974 & 0.0894 & **0.0689** & 0.1474 & 0.0998 & 0.0420 \\ 0.1 & 0 & 0.0907 & 0.0708 & **0.0651** & 0.1349 & 0.0665 & 0.0652 & 0.1287 \\ 0.01 & 1 & 0.9865 & 0.0987 & **0.0524** & 0.3475 & 0.3747 & 0.0897 & 0.2481 \\ 0.01 & 0.1 & 0.0942 & 0.1016 & 0.0872 & **0.0789** & 0.1056 & 0.0941 & 0.0000 \\ 0.01 & 0.01 & 0.0514 & 0.1431 & 0.0698 & 0.1271 & 0.0193 & **0.0477** & 0.0379 \\ 0.01 & 0 & 0.0652 & **0.0429** & 0.0840 & 0.1034 & 0.0591 & 0.1034 & 0.0451 \\ \hline \end{tabular} \end{table} Table 9: Results for Energy Efficiency dataset for 1st target with Transformer network prior. The MAEs of cGAN, cGAN with TabNet generator and baseline transformer (TabNet) model are 0.0849, 0.1564 and 0.0543 respectively. Figure 4: cGAN and cGAN with TabNet generator models on 100 samples and 503 samples (entire dataset) of Boston dataset. Figure 3: Box plots for comparison of models in the Boston dataset. All ABC-GAN models outperform the Linear, GBT and transformer prior models. Large outliers (MAE \(\geq\) 20) for skipGAN were removed. Figure 8: Tab-skipGAN model for Linear model, GBT and Transformer priors on 100 samples and 503 samples (entire dataset) of Boston dataset. Figure 5: mGAN model for Linear model, GBT and Transformer priors on 100 samples and 503 samples (entire dataset) of Boston dataset. Figure 6: Tab-mGAN model for Linear model, GBT and Transformer priors on 100 samples and 503 samples (entire dataset) of Boston dataset. Figure 7: skipGAN model for Linear model, GBT and Transformer priors on 100 samples and 503 samples (entire dataset) of Boston dataset. mance, but it is to show that the model is capable of doing a likelihood-free inference, and is more explicit in it's way of working than most pre-existing black-box models. Our ABC-GAN models outperform prior models with the same amount of misspecification, and perform equivalent or better than these priors even in the ideal situation of perfectly specified models. How is our experimentation on regression any different than the other existing work, among the wide variety of literature that exists on regression, including non-parametric approaches such as Gaussian process regression [26]? While being useful in the ML community, these methods don't solve the problems of (1) correcting likelihood misspecification in the models or data and (2) performing equivalent or better than to the prior models under perfect condition (no noise condition). Our model caters mainly to correcting misspecification in the prior models, and performs equivalently or better than the prior models in the ideal case in several regression tasks. 
In this paper, the objective that we want to achieve is to regularise the GAN generator by prepending the complex sampler(s), which ideally would have all the domain knowledge (which would be otherwise captured by the prior on the parameters in case of the ABC, thereby biasing the training of the GAN). In this case, although there is just one complex sampler initially, we have multiple samplers - one for each candidate model, with each sampler trying to learn a different transformation. The distance between the simulated and actual data is measured using a divergence metric and ultimately only those samplers or models are chosen which lie within a certain threshold. We argued that, the proposed method can do no worse than the baselines, but also significantly outperforms the baseline priors, and can successfully correct the likelihood misspecification in them 1. Hence, in the ABC-GAN framework, the Generator is correcting for the misspecification, while the Discriminator is learning summary statistics (data representations) along with the rejection region. Our simple and elegant formulation can absorb a variety of paradigms. It will be interesting to investigate a full Bayesian setup, and draw posterior samples for the baseline. Likewise, on the adversarial optimization side, owing to incorporation of prior knowledge, stability dynamics could be studied. Our extensive experimentation involving wide variety of datasets, baseline models and tasks reaffirms our belief that, the proposed regime can be used for continuous model improvement, in an inter-operable way. Footnote 1: Code and data are available at [https://github.com/Manasi1999/ABC](https://github.com/Manasi1999/ABC) GAN Our work opens up many future directions. In our current work, we have not yet exploited obtaining posterior inference. Can we compute the posterior quantities, like in ABC? A reasonable hunch is to calculate approximate posterior quantities, under change of measure. Here, we view \(T\equiv G_{GAN}\) as fixed, deterministic, but differential transformation. Recent advances in gradient-based normalizing flows inspire us in this direction [24]. It is relatively straightforward to obtaining posterior predictive distribution - sample from the ABC-Pre Generator, and pass it through GAN Generator, and treat the samples as approximate draws with which any statistics can be computed. Another interesting question to ask is - does the discriminating function learnt by \(D_{GAN}\) approximate the Bayes Factor and/or the likelihood ratio? Previous work in this direction provide hints [23]. Likewise, it will be of interest to know whether the representations learnt by \(D_{GAN}\) showcase sufficient statistics? Earlier work on learning summary statistics via deep neural networks for ABC provide clues [27]. Under linear models or generalized linear models, we find an affirmative answer. From stability standpoint, can the specific type of regularization of \(G_{GAN}\) be tuned such that an optimum between explicit and implicit generative models is found? Pursuing the above questions will help us understand ABC-GANs better.
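As a sketch of the posterior-predictive recipe mentioned above (sample from the ABC pre-generator and pass the draws through the trained GAN generator), assuming a hypothetical `prior_sampler` callable and a trained `gan_generator` with the interface of the earlier architecture sketch; this is not code from the paper:

```python
import numpy as np
import torch

def posterior_predictive(prior_sampler, gan_generator, x, n_draws=200):
    """Approximate posterior-predictive draws: sample y_pi from the ABC
    pre-generator, then push each draw through the trained generator G_gamma."""
    x_t = torch.as_tensor(x, dtype=torch.float32)
    draws = []
    with torch.no_grad():
        for _ in range(n_draws):
            y_pi = torch.as_tensor(prior_sampler(x), dtype=torch.float32)
            draws.append(gan_generator(y_pi, x_t).numpy())
    draws = np.stack(draws)                        # (n_draws, n, 1)
    mean = draws.mean(axis=0)
    lo, hi = np.quantile(draws, [0.05, 0.95], axis=0)
    return mean, lo, hi                            # point estimate and 90% band
```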
2308.03508
Tensorized orbitals for computational chemistry
Choosing a basis set is the first step of a quantum chemistry calculation and it sets its maximum accuracy. This choice of orbitals is limited by strong technical constraints as one must be able to compute a large number of six dimensional Coulomb integrals from these orbitals. Here we use tensor network techniques to construct representations of orbitals that essentially lift these technical constraints. We show that a large class of orbitals can be put into ``tensorized'' form including the Gaussian orbitals, Slater orbitals, linear combination thereof as well as new orbitals beyond the above. Our method provides a path for building more accurate and more compact basis sets beyond what has been accessible with previous technology. As an illustration, we construct optimized tensorized orbitals and obtain a 85\% reduction of the error on the energy of the $H_2$ molecules with respect to a reference double zeta calculation (cc-pvDz) of the same size.
Nicolas Jolly, Yuriel Núñez Fernández, Xavier Waintal
2023-08-07T12:03:42Z
http://arxiv.org/abs/2308.03508v2
# Tensorized orbitals for computational chemistry ###### Abstract Choosing a basis set is the first step of a quantum chemistry calculation and it sets its maximum accuracy. This choice of orbitals is limited by strong technical constraints as one must be able to compute a large number of high dimensional integrals from these orbitals. Here we use tensor network techniques to construct representations of orbitals that essentially lift these technical constraints. We show that a large class of orbitals can be put into "tensorized" form including the Gaussian orbitals, the exact hydrogenoid atomic orbitals and orbitals for chemical bonds. Our method provides a path for building more accurate and more compact basis sets beyond what has been accessible with previous technology. The very first step of a first principles many-body calculation is to discretize the problem onto _some_ finite basis set. On the quality of this discretization depends the maximum accuracy that may be reached in the calculation and therefore its usefulness for making predictions [1; 2; 3]. The commonly accepted target of \(1.6mHa\) (around \(500K\)) for "chemical accuracy" typically requires large basis sets to approach the continuum sufficiently well. There exists an immense body of litterature devoted to the construction of optimized basis sets for a wide variety of situations. Among others, those include plane waves, wavelets [4], Wannier function [5], and many flavours of Gaussian orbitals [1; 6; 7; 8]. In the context of chemistry, these Gaussian orbitals overwhelmingly dominate the literature. Their popularity stems from two important properties. The first is (P1) their compactness, i.e. a few Gaussians centered on the different nuclei are sufficient to give a reasonably accurate approximation of the atomic orbitals. Second, and perhaps more importantly (P2), the 6-dimensional integrals appearing in the calculation of the electron-electron interaction matrix elements can be computed analytically for Gaussian orbitals. Property (P2) sounds somewhat technical. Yet, it is the crucial point that has allowed Gaussian orbitals to thrive and become ubiquitous in computational chemistry since these matrix elements are the starting points on which all further calculations are based. Despite these favourable aspects, Gaussian orbitals do have a number of important limitations: the convergence to the continuum limit is slow, core electrons are poorly described (in part because Gaussians lack the cusp that true atomic orbitals have close to the nuclei core; also the tail of the orbitals is in general not Gaussian) and so are delocalized orbitals above the ionization threshold. In this letter, we do _not_ propose another set of orbitals. Rather, we propose a novel _representation_ of orbitals, using tensor networks [9; 10; 11], that naturally solves the problem of computing the needed matrix elements (P2). Once "tensorized", the orbitals provide access to all the usual mathematical objects needed by chemistry packages to proceed with the calculation. The class of orbitals that can be tensorized is extremely large. It includes Gaussian orbitals but also the actual exact atomic orbitals of the hydrogen atom, plane waves and many other types of functions that can be combined at will. _Problem formulation._ A molecule with \(N\) electrons is described by the positions \(\vec{R}_{\alpha}\) and atomic number \(Z_{\alpha}\) of its nuclei. We consider a basis set of \(L\) orbitals \(\phi_{i}(\vec{r})\). 
Within this basis set, the many-body problem that one needs to solve has the following Hamiltonian (ignoring relativistic corrections for simplicity), \[H=\sum_{ij\sigma}H_{ij}c^{\dagger}_{i\sigma}c_{j\sigma}+\sum_{ijkl\sigma \sigma^{\prime}}V_{ijkl}c^{\dagger}_{i\sigma}c^{\dagger}_{j\sigma^{\prime}}c_ {k\sigma^{\prime}}c_{l\sigma} \tag{1}\] where \(c^{\dagger}_{i\sigma}\) (\(c_{i\sigma}\)) creates (destroys) an electron in orbital \(i\) and spin \(\sigma\). They satisfy the anti-commutation rule \(\{c^{\dagger}_{i\sigma},c_{j\sigma}\}=\delta_{\sigma\sigma^{\prime}}S_{ij}\) where \(S_{ij}\) is the overlap matrix. The problem is therefore entirely set by the three objects \(S_{ij}\), \(H_{ij}\) and \(V_{ijkl}\) whose expression in terms of the orbitals \(\phi_{i}(\vec{r})\) is, \[S_{ij} = \int d\vec{r}\;\phi_{i}(\vec{r})\phi_{j}(\vec{r}) \tag{2}\] \[H_{ij} = K_{ij}+P_{ij}\] \[K_{ij} = \int d\vec{r}\;\phi_{i}(\vec{r})\left[\frac{-\hbar^{2}}{2m} \Delta\right]\phi_{j}(\vec{r})\] \[P_{ij} = -\int d\vec{r}\;\phi_{i}(\vec{r})\left[\sum_{\alpha}\frac{Z_{ \alpha}e^{2}}{4\pi\epsilon|\vec{r}-\vec{R}_{\alpha}|}\right]\phi_{j}(\vec{r})\] (3) \[V_{ijkl} = e^{2}\int d\vec{r}_{1}d\vec{r}_{2}\;\frac{\phi_{i}(\vec{r}_{1} )\phi_{j}(\vec{r}_{2})\phi_{k}(\vec{r}_{2})\phi_{l}(\vec{r}_{1})}{4\pi\epsilon| \vec{r}_{1}-\vec{r}_{2}|} \tag{4}\] For a set of orbitals to be useful, one must be in position to compute Eqs.(2), (3) and (4) quickly and accurately. In particular the bottleneck of this calculation is given by the interaction matrix elements Eq.(4) which contains a large number \(\propto L^{4}\) of 6-dimensional integrals. _Quantics representation of an orbital._ Let's consider an orbital \(\phi(\vec{r})\) inside the hypercube \(\vec{r}\in[0,b]^{3}\). The first step of the construction of the tensorized orbitals is to discretize the segment \([0,b]\) onto an _exponentially_ dense grid of \(2^{n}\) equally spaced points. Each of these points is labeled by \(n\) bits \(x_{1}x_{2}....x_{n}\in\{0,1\}^{n}\) such that \[\frac{x}{b}=\frac{x_{1}}{2}+\frac{x_{2}}{2^{2}}+\cdots+\frac{x_{n}}{2^{n}} \tag{5}\] with similar equations for the \(y\) and \(z\) coordinates. One obtains a very large tensor \(\Phi_{\vec{r}}\) with \(3n\) indices indexing the \((2^{n})^{3}\) different values of the orbital on the grid, \[\Phi_{x_{1}y_{1}z_{1}x_{2}y_{2}z_{2}...x_{n}y_{n}z_{n}}\equiv\phi(x,y,z) \tag{6}\] The second step is to represent the tensor \(\Phi_{\vec{r}}\) as a tensor train (also known as matrix product state or MPS), \[\Phi_{x_{1}y_{1}z_{1}x_{2}y_{2}z_{2}...x_{n}y_{n}z_{n}}=\prod_{a=1}^{n}M_{3a}( x_{a})M_{3a+1}(y_{a})M_{3a+2}(z_{a}) \tag{7}\] in terms of \(\chi\times\chi\) matrices \(M_{p}\). This is the quantics repsentation [12; 13] of the orbital. Such a decomposition is always possible but the bond dimension \(\chi\) may be exponentially large \(\chi\sim 2^{3n/2}\). The magic of tensor trains is that for many mathematical objects the convergence of the tensor train with \(\chi\) is very fast and numerical accuracy can be achieved with small values of \(\chi\) (\(\chi\leq 100\) in this work). MPS plays a central role in computational many-body theory [9; 10; 11], in particular in the context of the density matrix renormalization algorithm (DMRG) [14]. The joined usage of MPS with the quantics representation is much more recent and has only recently started to be explored [15; 16; 17; 18; 19]. 
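As a small one-dimensional illustration of the quantics encoding of Eqs. (5)-(6) (a sketch with assumed names, not the authors' code; in practice TCI never builds this exponentially large tensor explicitly):

```python
import numpy as np

def bits_to_coordinate(bits, b=1.0):
    """x/b = x_1/2 + x_2/4 + ... + x_n/2^n  (Eq. 5); bits[0] is the most significant."""
    return b * sum(x / 2.0**(k + 1) for k, x in enumerate(bits))

def quantics_tensor(f, n, b=1.0):
    """Full quantics tensor of a 1D function on the 2^n grid, one binary leg per scale."""
    grid = np.array([bits_to_coordinate([(i >> (n - 1 - k)) & 1 for k in range(n)], b)
                     for i in range(2**n)])
    return f(grid).reshape((2,) * n)    # shape (2, 2, ..., 2) with n legs

# example: a 1s-like radial profile exp(-x) on [0, b) with n = 10 bits (1024 points)
phi = quantics_tensor(lambda x: np.exp(-x), n=10)
print(phi.shape)                        # (2, 2, 2, 2, 2, 2, 2, 2, 2, 2)
```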
In the rest of this letter, we will propose two independent algorithms to arrive at the tensorized representation of an orbital Eq.(7). We will further discuss how, once the tensorized representation has been obtained, one can proceed with the calculation of \(S_{ij}\), \(H_{ij}\) and \(V_{ijkl}\) thereby closing the gap to continue the calculation further with any many-body technique (e.g. Hartree-Fock, DMRG, coupled clusters...). _The Tensor Cross Interpolation (TCI) algorithm._ The central algorithm that makes the present work possible is the recently introduced TCI learning algorithm [21; 22] that allows one to construct the tensor train from a few \(\sim n\chi^{2}\) calls to the function \(\phi(\vec{r})\). We rely on the implementation and extensions of TCI described in detailed in [23] to which we also refer for an introduction to the algorithm. For the needs of this letter, it is sufficient to know that the input of TCI is a tensor (in the form of a function that can be called for any values of the indices) and its output is the tensor train Eq.(7) together with an estimate of the error of the representation (systematically controled by increasing the bond dimension \(\chi\)). As a first illustration, we construct the tensorized representation of the \(1s\) orbital of the hydrogen atom using TCI. The \(1s\) orbital is extremely simple, \(\phi(\vec{r})\propto\exp(-\sqrt{x^{2}+y^{2}+z^{2}}/a_{B})\) (\(a_{B}\): bohr radius). Yet, computing precisely the different matrix elements for such orbitals centered around different nuclei positions is highly non-trivial which has prevented the direct use of these atomic orbitals so far. The first three panels of Fig.1 show the error respectively for the kinetic energy \(K\), potential energy \(P\) and total energy \(E=K+P\) for the ground state of the hydrogen atom (the algorithms for these calculations will be detailed below). We observe that a very moderate value of \(\chi\) is needed to achieve very precise accuracy \(<0.01mHa\). On the other hand, the calculations of \(K\) and \(P\) requires a very fine grid with \(n\geq 16\). This is not a issue with the quantics representation since the computational cost is linear in \(n\). Indeed, the calculations with \(n=20\) (black curve) is only marginally more difficult than the calculation for \(n=12\) (orange) despite the fact that, formally, \(n=20\) corresponds to a grid that contains an astronomically large \(\sim 10^{18}\) number of points (\(\sim 10^{36}\) points for the 6-dimensional integrals). In the last panel of Fig.1, we show the error for \(p\), \(d\) and \(f\) orbitals. One finds that the Figure 1: Error versus bond dimension \(\chi\) for the energy of the \(1s\) orbital of the hydrogen atom (first three panels) and other orbitals (\(1s\), \(2p_{z}\), \(3d_{xz}\) and \(4f_{z(x^{2}-y^{2})}\), last panel). First three panels correspond respectively to the error on the kinetic energy \(K\), nuclei potential energy \(P\) and total energy \(E=K+P\) for different values of the grid discretization \(n=8,12,16\) and \(20\). Last panel: \(n=20\) except for the \(4f\) orbital for which \(n=22\). All energies are in Hartrees. accuracy depends only very weakly on the actual shape of the orbital. For the \(4f\) orbital, we had to increase \(n\) up to \(n=22\) due to the fact that this orbital is much more extended than \(1s\). We found that TCI algorithm works equally well for all the orbitals that we have tested; they only need to be known explicitly, i.e. 
one can compute \(\phi(\vec{r})\) for any value of \(\vec{r}\). This includes in particular any combination of Gaussians as we have checked explicitly. _Algorithms for the computation of \(S_{ij}\), \(H_{ij}\) and \(V_{ijkl}\)._ It remains for us to show that we can compute the interaction matrix elements needed for subsequent many-body calculations. The simplest one is \(S_{ij}\). Indeed, in the framework of MPS, \(S_{ij}=\sum_{\vec{r}}\Phi_{\vec{r}}^{i}\Phi_{\vec{r}}^{j}\) is simply the scalar product between two MPS. The algorithm for contracting this tensor network belongs to the standard toolbox of MPS [9] and has a mild complexity \(\sim n\chi^{3}\). Second is the kinetic energy \(K_{ij}\). We discretize the Laplacian on the grid using finite difference, i.e. \(\partial^{2}\phi/\partial x^{2}\approx[\phi(x+1/2^{n})-2\phi(x)+\phi(x-1/2^{ n})]/2^{2n}\). In the quantics representation, this second derivative can be written as a "Matrix Product Operator" (or MPO, an MPO is to matrices what MPS are to vectors), \[\frac{\partial^{2}}{\partial x^{2}}\approx-\frac{1}{2^{2n-1}}+\frac{1}{2^{2n} }\sum_{p=1}^{n}\sigma_{3p}^{+}\prod_{q>p}\sigma_{3q}^{-}+h.c. \tag{8}\] where \(\sigma^{\pm}=[\sigma^{x}\pm i\sigma^{y}]/2\) with \(\sigma^{x}\) and \(\sigma^{y}\) the usual Pauli matrices and \(\sigma_{a}^{\pm}\) acts on the bit \(a\) of the quantics representation. Collecting the terms for \(y\) and \(z\), we arrive at a MPO for the Laplacian \(\Delta_{\vec{r}_{1}\vec{r}_{2}}\) that is the sum of \(6n+1\) product terms, hence with a bond dimension at most \(6n+1\). After compressing this MPO with the TCI algorithm (a standard compression using singular value decomposition also works), one arrives at a very small bond dimension \(\chi=7\) (with machine precision) [17]. The calculation of \(K_{ij}\) is therefore put in the form \(K_{ij}=\sum_{\vec{r}_{1}^{\prime}\vec{r}_{2}}\Phi_{\vec{r}_{1}}^{i}\Delta_{ \vec{r}_{1}\vec{r}_{2}}\Phi_{\vec{r}_{2}}^{j}\) which is a MPS.MPO.MPS product. We are back to the standard algorithms of the MPO/MPS toolbox. Next comes the contribution from the nuclei potential \(P_{ij}\). The function \(\sum_{\alpha}1/|\vec{r}-\vec{R}_{\alpha}|\) is given to the TCI algorithm which produces an MPS \(P_{\vec{r}}\). Our numerical Figure 2: Benchmark of the calculation of the matrix elements for the LiH molecule in the STO-6G basis set. Upper panels: colormap of the different mathematical objects in Hartrees. The worst error for a matrix element is \(0.7mHa\) for this calculation (\(n=16\) and \(\chi=100\)). Lower panels: error versus bond dimension \(\chi\) for the different energies and different discretization parameters \(n=8,12,16\). Distance between the nuclei: \(2.8571a_{B}\). The reference calculation uses the pyscf package [20] at the Hartree-Fock level. experiments on several nuclei configurations show a fast convergence: the relative accuracy for \(P_{\vec{r}}\) is around \(10^{-3}\) for \(\chi=40\) and \(10^{-5}\) for \(\chi=80\). We arrive at \(P_{ij}=\sum_{\vec{r}}\Phi_{\vec{r}}^{i}P_{\vec{r}}\Phi_{\vec{r}}^{j}\) which is a direct extension of the MPS.MPS scalar product. Last, we need to calculate the interaction matrix elements \(V_{ijkl}\). The function \(1/|\vec{r}_{1}-\vec{r}_{2}|\) is given to the TCI algorithm which produces an MPO \(U_{\vec{r}_{1},\vec{r}_{2}}\). Although exponential, the convergence is less favourable than for the nuclei potential and \(\chi\approx 80\) is needed to reach a relative accuracy of \(10^{-3}\). 
We arrive at \(V_{ijkl}=\sum_{\vec{r}_{1}\vec{r}_{2}}\Phi_{\vec{r}_{1}}^{i}\Phi_{\vec{r}_{1}} ^{j}U_{\vec{r}_{1}\vec{r}_{2}}\Phi_{\vec{r}_{2}}^{k}\Phi_{\vec{r}_{2}}^{l}\). To evaluate these matrix elements we first form the element-wise product \(\Phi_{\vec{r}}^{i}\Phi_{\vec{r}}^{j}\) for all pairs \(ij\), then compress them using TCI into an MPS \(\Phi_{\vec{r}}^{ij}\). Then, we're back to the calculations of MPS.MPO.MPS products \(V_{ijkl}=\sum_{\vec{r}_{1}\vec{r}_{2}}\Phi_{\vec{r}_{1}}^{ij}U_{\vec{r}_{1} \vec{r}_{2}}\Phi_{\vec{r}_{2}}^{kl}\). This completes the suite of algorithms. _Validation of the whole algorithmic chain with Gaussian orbitals._ To validate the entire procedure, we make use of Gaussian orbitals for which all these contributions are known analytically. We use the package pyscf [20] for this benchmark. Figure 2 shows a calculation of the LiH molecule in the STO-6G basis set at the Hartree-Fock level (there is no need to go beyond Hartree-Fock since our sole purpose is to validate the proper calculation of the _inputs_ of the many-body problem). We find that \(n=16\) and \(\chi=100\) (with similar values of \(\chi\) for the MPS \(P_{\vec{r}}\) and MPO \(U_{\vec{r}_{1},\vec{r}_{2}}\)) are sufficient to reach chemical accuracy for all matrix elements (upper panels) as well as for the individual contribution to the energies (lower panels) and the final total energy (lower right panel). Interestingly though, a slightly coarser discretization of \((2^{12})^{3}\approx 6\). \(10^{10}\) points is not sufficient, showing that the arbirary resolution available with tensorized orbitals is really needed. Many-body MPO/MPS calculations can reach bond dimensions of several thousands, so the computational cost of the present calculation with \(\chi\leq 100\) is relatively light, of the order of one hour on a single core computer. As these algorithms may be further optimized and trivially parallelized, the usage of tensorized orbitals should not impact significantly the overall computational time of a full chemistry calculation. _Direct construction of tensorized orbitals._ So far we have used orbitals that were known explicitly and put them into a tensorized form. This is a powerful approach that can allow one to use existing orbitals, invent new ones and even combine orbitals of different sorts (e.g. Gaussians with plane waves). We end this letter with an alternative approach where the orbital is directly constructed in its tensorized form without the knowledge of its explicit form. We consider the example of the ground state of the \(H_{2}^{+}\) ion (one electron and two protons). This is an interesting case because, as we shall see, chemical bonds are not easy to represent with high accuracy with Gaussian orbitals. We seek a tensorized orbital \(\Phi_{\vec{r}}\) that minimizes the energy of the \(H_{2}^{+}\) ion. Here, we take advantage of the fact that the Shrodinger equation for the wavefunction of the electron has already been put into MPO/MPS form, i.e. we are looking for the lowest eigenenergy of the one electron Hamiltonian, or equivalently the minimum of, \[E=\min_{\Phi_{\vec{r}}}\sum_{\vec{r}_{1},\vec{r}_{2}}\Phi_{\vec{r}_{1}}[\Delta _{\vec{r}_{1},\vec{r}_{2}}+P_{\vec{r}_{1}}\delta_{\vec{r}_{1},\vec{r}_{2}}] \Phi_{\vec{r}_{2}} \tag{9}\] where \(\delta_{\vec{r}_{1},\vec{r}_{2}}\) is the kronecker symbol. Performing such a minimization is exactly what the celebrated DMRG algorithm does, although usually in a totally different context (each index is a e.g. 
a different spin while here they label the different scales of a one-particle problem). Hence we can rely on any existing implementation of DMRG to get our orbital. Here we use the quintp package [25]. The results are shown in Fig.3. We find that DMRG easily reaches a very high accuracy of \(10^{-4}m\)Ha with just a single tensorized orbital. In contrast the best Gaussian basis set reaches \(0.01m\)Ha with \(\sim 100\) orbitals. The inset shows that the electronic distribution in between the two protons is not yet fully captured by the Gaussian basis set. Fig.3 illustrates an important difference between Gaussian basis sets and tensorized orbitals: to improve the quality of the former, one needs to add more Gaussians to the orbitals which eventually becomes expansive computationally. In contrast, tensorized orbitals are very expressive (they contain tens of thousands of parameters) and can be optimized without affecting the computational cost. We anticipate that combining tensorized Figure 3: Error on the ground state energy of the \(H_{2}^{+}\) ion. Crosses: result of a pyscf [20] calculation using different Gaussian basis sets of varying number of orbitals. Circles: results with just a single tensorized orbital using a DMRG calculation. The reference energy \(E_{\rm ref}=-0.60263421\)Ha. is from an exact resolution of the molecule [24]. Inset: iso-density line \(|\phi(\vec{r})|^{2}=0.095\) for the best tensorized orbital (blue) and the best Gaussian orbital (red, pc-4). The hexagons correspond to the positions of the two protons which are situated \(2a_{B}\) away from each other. orbitals with the existing methodologies for building basis sets will lead to important progress in their accuracy. For instance, a very effective way to build basis sets is to use "natural orbitals" [26]: one performs a many-body calculation, then diagonalizes the one-body density matrix \(\rho_{ij}=\langle c_{i}^{\dagger}c_{j}\rangle\) and uses the active (i.e. filled or partially filled) eigenstates to construct better orbitals. These orbitals are enriched with new orbitals and the process is repeated until one has achieved the necessary accuracy. With Gaussians, such a procedure can only be iterated a few times, because each enrichment adds new Gaussians to the set, ramping up the computational cost. With tensorized orbitals, one can iterate this procedure until convergence at no additional cost. _Conclusions_. Tensor network techniques were invented in the context of solving many-body problems, mostly in one dimension, and for a long time were mostly confined to this application. In the last few years important technical developments have appeared and these techniques are stepping out of their original context. In particular the TCI algorithm provides a natural bridge to map problems apparently unrelated to tensor networks onto the MPO/MPS toolbox. In this article we have shown that a combination of the quantics representation, TCI and the traditional MPO/MPS toolbox provides all the necessary ingredients to represent atomic or molecular orbitals with very high - perhaps unprecedented - accuracy. This methodology provides new possibilities for constructing accurate basis sets, lifting the strong constraint of having to express them solely in terms of Gaussians. Future work will include the integration of these tensorized orbitals, ideally within a standardized format, with the existing quantum chemistry packages as well as the exploration of new ways to construct basis sets opened by this format. 
_Acknowledgment._ We thank Miles Stoudenmire, Emanuel Gull and Steve White for interesting discussions.
2308.14074
Nonrigid Object Contact Estimation With Regional Unwrapping Transformer
Acquiring contact patterns between hands and nonrigid objects is a common concern in the vision and robotics community. However, existing learning-based methods focus more on contact with rigid ones from monocular images. When adopting them for nonrigid contact, a major problem is that the existing contact representation is restricted by the geometry of the object. Consequently, contact neighborhoods are stored in an unordered manner and contact features are difficult to align with image cues. At the core of our approach lies a novel hand-object contact representation called RUPs (Region Unwrapping Profiles), which unwrap the roughly estimated hand-object surfaces as multiple high-resolution 2D regional profiles. The region grouping strategy is consistent with the hand kinematic bone division because they are the primitive initiators for a composite contact pattern. Based on this representation, our Regional Unwrapping Transformer (RUFormer) learns the correlation priors across regions from monocular inputs and predicts corresponding contact and deformed transformations. Our experiments demonstrate that the proposed framework can robustly estimate the deformed degrees and deformed transformations, which makes it suitable for both nonrigid and rigid contact.
Wei Xie, Zimeng Zhao, Shiying Li, Binghui Zuo, Yangang Wang
2023-08-27T11:37:26Z
http://arxiv.org/abs/2308.14074v2
# Nonrigid Object Contact Estimation With Regional Unwrapping Transformer

###### Abstract

Acquiring contact patterns between hands and nonrigid objects is a common concern in the vision and robotics community. However, existing learning-based methods focus more on contact with rigid objects from monocular images. When adopting them for nonrigid contact, a major problem is that the existing contact representation is restricted by the geometry of the object. Consequently, contact neighborhoods are stored in an unordered manner and contact features are difficult to align with image cues. At the core of our approach lies a novel hand-object contact representation called RUPs (Region Unwrapping Profiles), which unwraps the roughly estimated hand-object surfaces as multiple high-resolution 2D regional profiles. The region grouping strategy is consistent with the hand kinematic bone division because the bones are the primitive initiators of a composite contact pattern. Based on this representation, our Regional Unwrapping Transformer (RUFormer) learns the correlation priors across regions from monocular inputs and predicts the corresponding contact and deformed transformations. Our experiments demonstrate that the proposed framework can robustly estimate the deformed degrees and deformed transformations, which makes it suitable for both nonrigid and rigid contact.

## 1 Introduction

Perception of hand-object contact patterns is crucial to advancing human-computer interaction and robotic imitation [44]. The interactive objects in these applications, from mouse/keyboard to bottle/doll, are mostly nonrigid. Although impressive progress has been achieved towards monocular contact estimation between hands and 3D rigid objects [12, 41, 47, 35] or 2.5D cloth [37, 36, 1], it is still difficult to extend these methods to 3D nonrigid objects. One important reason is that existing methods usually project the contact area of different objects onto their own surface (point cloud or mesh), which is represented by either unordered points or unregistered points and edges. As a result, it is challenging to store contact in a feature-aligned space. To conquer this obstacle, our key idea is to **first represent the regional 3D surfaces where hand-object contact may occur as regional 2D unwrapping profiles, and then predict the nonrigid contact and deformation within/across regions according to monocular image cues through a Vision Transformer**. Considering that the mutual contact is caused by individual hand regions [17, 41, 47], our surface grouping is based on the 16 hand kinematic bones [30, 28, 41, 47] illustrated in Fig. 2(a). Each piece of the object surface shown in Fig. 2(b) is divided into a certain group when it can be directly intersected by a ray emanating from the region center associated with this group. Each subsurface is further mapped to the image plane according to the spherical unwrapping algorithm [46]. Consequently, the whole object surface is converted to 16 object _regional unwrapping profiles_ (object-RUPs). Similarly, the hand surface is converted to 16 hand-RUPs, each of which records pixel-aligned ray intersections with the object-RUP in the same group.

Figure 1: **Contact patterns estimated from monocular RGB images.** Since the deformed degrees of the contact areas are considered by our framework, both contact with nonrigid (Row1, Row2, Row3) and rigid objects (Row4) can be plausibly estimated.
In contrast to object point clouds [25, 12, 35, 6], this novel representation preserves both the hand-object surface correlation and the contact point orderliness. Numerous works [12, 35] only predicted plausible contact patterns according to data prior and ignore contact clues in the image. This may be applicable to rigid interaction. However, when the deformed degree is considered, multiple nonrigid contact patterns can be yielded from the same hand-object spatial relationship. Therefore, our framework crops the image patches of the corresponding 16 hand bones as extra visual cues to estimate nonrigid contact. Altogether, our RUFormer is tamed to take those 16 groups of hand-RUPs, object-RUPs, and visual cues as the inputs. It gradually estimates the contact and deformed features across RUPs, and finally predicts the deformed transformations of the object. To our best knowledge, this is the first framework that is applicable to reconstruct both rigid and nonrigid hand-object interaction from monocular images. In summary, our main contributions are: \(\bullet\) A learning-based framework with the ambition to estimate the contact between hand and nonrigid objects from monocular images; \(\bullet\) A hand-object interaction representation to record hand-object surfaces into multiple pixel-aligned and fine-grained 2D profiles; \(\bullet\) A unwrapping-informed transformer to predict contact and deformation on the object according to both visual cues and data prior. ## 2 Related Work **Hand-object interaction reconstruction.** Thanks to the creation of several hand-object interaction datasets [13, 5, 47, 2, 14, 26, 32] in recent years, monocular 3D hand-object interaction reconstruction has received extensive attention from researchers. Hasson _et al_. [19] proposed a two-branch network to reconstruct the hand and an unknown manipulated object. Subsequent works[7, 42] estimated hand-object pose and inferred implicit 3D shape of the object. Other works [34, 9, 17, 18, 3, 41, 47] assumed that the object template is known and reduce the object reconstruction to 6D pose estimation. They jointly regressed hand and object poses by reasoning about their interactions. However, all the existing work focuses on interactions between hands and rigid objects. Our framework attempts for the first time to reconstruct the interaction between hands and non-rigid objects from monocular images, while also being compatible with rigid ones. **Hand-object contact pattern estimation.** Inferring contact patterns is vital for 3D hand-object reconstruction. [19, 3] introduced contact losses which encourage contact surfaces and penalize penetrations between hand and object. However, these methods cannot enforce hand-object alignment at test time. Recently, Some works [12, 35, 43] used explicit contact inference and enforcement to achieve higher quality grasps. Grady _et al_. [12] estimated the contact pattern between hand and object based on PointNet [29]. Tse _et al_. [35] proposed a graph-based network to infer contact patterns. [12, 35] estimated hand-object contact patterns from sparse point clouds, which are unordered and challenging to store contact into a feature-aligned space. Yu _et al_. [43] proposed a dense representation in the form of a UV coordinate map, which only inferred the contact areas of the hand surface. All the existing works focus on contact with rigid objects and are not applicable to contact with non-rigid objects. 
**Vision transformer.** Transformer and self-attention networks have revolutionized natural language processing [38, 8, 39] and are making a deep impression on visual-related tasks, such as object detection [4, 48], image classification [10], 3D pose estimation [24, 22, 27, 23, 15] and point cloud processing [33]. We refer the reader to [16] for a detailed survey of Vision Transformers. In our task, we use attention modules to exploit the visual and hand-object spatial correlations. Figure 2: **Surface grouping strategy. (a) Hand region division based on 16 kinematic bones of an LBS hand. Each region center is marked as a gray sphere. (b) The correlated hand-object sub-surfaces for each region are aligned according to rays emanating from the center and unwrapped to the 2D regional profiles (_i.e_. hand-RUP and object-RUP). An extra object-f-RUP is generated only for grid-wise sampling.** ## 3 Method An overview of our pipeline for estimating contact patterns between hands and nonrigid objects is shown in Fig. 3. It takes the image and the corresponding object mesh template as input. Through our RUFormer, it predicts the contact areas of the hand-object surface pair, as well as the deformation of the object. Initially, the hand-object surfaces are estimated and unwrapped into multiple high-resolution 2D regional profiles (Sec. 3.2). Then, RUFormer estimates contact according to region-aligned features (Sec. 3.3), and predicts deformed transformations of sampling points (Sec. 3.4). Hand-object surface refinement and deployment details are described in Sec. 3.5. ### Preliminary **Surface representation.** We represent the hand surface based on MANO [30]. It can be regarded as a differentiable function \(M_{h}(\mathbf{\beta},\mathbf{\theta},\mathbf{\tau})\) parameterized by shape \(\mathbf{\beta}\in\mathbb{R}^{10}\), pose \(\mathbf{\theta}\in\mathbb{R}^{16\times 3}\) and global translation \(\mathbf{\tau}\in\mathbb{R}^{3}\) w.r.t. the camera coordinate system. For a left-hand case, the RGB images and hand-object surfaces are mirrored together in advance. We represent the object 6D pose as its mesh template w.r.t. the right MANO hand coordinate system. Note that in a plausible hand-object relationship the object is always in front of the hand, i.e., \(y<0\) in the vertex coordinates of the object. The deformation of a sampling point \(\mathbf{p}\) on the nonrigid object is represented as an affine transformation \(\mathbf{A}(\mathbf{p})\in SE(3)\) w.r.t. its template position. **Grouping strategy.** Our hand-object surface grouping is shown in Fig. 2. We first divide the hand region based on the 16 kinematic bones of the posed MANO. Each piece of the object surface is assigned to a certain group when it can be directly intersected by a ray emanating from the hand region center associated with this group. We further unwrap each subsurface to the image plane to obtain 16 hand-RUPs, 16 object-RUPs and 16 object-f-RUPs. Records within the same group are pixel-aligned. It should be noted that object-f-RUPs are only used for grid-wise sampling. ### RUFormer Input: Region Alignment **Surface initialization.** The hand-object pose estimation network follows [47]. We take the RGB image and object template as inputs and predict the MANO parameters and the object 6DoF pose. We integrate MANO as a differentiable network layer and use it to output the 3D hand surface. **Surface unwrapping.** We unwrap the estimated hand-object surfaces into multiple fine-grained RUPs.
RUPs define the projection to unwrap the hand-object surfaces into the image plane with the center of 16 hand sub-surfaces as the origin, respectively. We refer to [46] to map a surface point \(\mathbf{p}(x,y,z)\) in the Cartesian coordinate system to the spherical coordinate \(\mathbf{s}(\rho,\theta,\varphi)\). Specifically, the closest intersections between the hand surface and the rays emitted from \(i\)-th bone center are recorded in the \(i\)-th hand-RUPs, the closest intersections between the object surface and those same rays are recorded in the \(i\)-th object-RUPs, and the farthest intersections between the object surface Figure 3: **Overview of RUFormer. (a) The preparation process of RUFormer input data, all of which are aligned to the hand 16 regions. (b) RUFormer Encoder estimates hand-object regional contact areas from image patches, hand bone transformation and RUPs. (c) RUFormer Decoder estimates fine-grained deformation from grid-wise sampling features.** and those same rays are recorded in the \(i\)-th object-f-RUPs. All rays emitted from a center can be parameterized as \(\overrightarrow{O_{i}}\overrightarrow{R}(\vartheta,\varphi)\), where \(\vartheta\in[0,\pi],\varphi\in[0,2\pi],\rho>0\) are the spherical coordinates. Therefore, each RUP channel can be formulated as: \[\mathbf{R}\left(\frac{\vartheta}{\pi}W_{R},\frac{\varphi}{2\pi}H_{R}\right) \triangleq\operatorname*{arg\,min}_{\rho}\{\mathbf{s}(\rho)\mid\mathbf{s}\in( \overrightarrow{O_{i}}\overrightarrow{R}\cap\partial\mathbf{S})\} \tag{1}\] where \(\mathbf{R}\) denotes as RUP and \(\partial\mathbf{S}\) represents hand or object surface. The value \(\rho\) is set to zero if no interaction occurs. As a result, hand-RUPs \(\{\mathbf{R}_{i}^{H}\}_{i=1}^{16}\), object-RUPs \(\{\mathbf{R}_{i}^{O}\}_{i=1}^{16}\) and object-f-RUPs \(\{\mathbf{R}_{i}^{O}\}_{i=1}^{16}\) are all 16-channel image tensors. Furthermore, pixels with the same indices correspond to the intersections between the same ray and the hand bone surface / near object surface / far object surface. **Regional features.** Our RUFormer utilizes regional aligned features to predict contact areas, deformed degree of contact areas and deformed transformations of the object. With the above estimation, the following features are aligned according to the region: (i) The image patches \(\{\mathbf{I}_{i}\}_{i=1}^{16}\) belonging to each bone are cropped using the guidance of MANO joint image coordinates, where \(\mathbf{I}_{i}\in\mathbb{R}^{(3,n_{p},n_{p})}\). (ii) The MANO pose is further converted as bone transformations \(\{\mathbf{B}_{i}\}_{i=1}^{16}\) to measure the relative relationship across RUP groups, where \(\mathbf{B}_{i}\in SE(3)\). (iii) \(\{\mathbf{R}_{i}^{H}\}_{i=1}^{16}\), \(\{\mathbf{R}_{i}^{O}\}_{i=1}^{16}\) and \(\{\mathbf{R}_{i}^{OF}\}_{i=1}^{16}\) computed from the estimated hand-object 3D surface. ### RUFormer Encoder: Contact Estimation **Contact attentions.** Our contact area estimation process is shown in Fig. 3(b). Image patches, bone transformations, hand-RUPs and object-RUPs are embedded to the latent space through their respective feature extractors. We extract regional image features from 16 groups of image patches by the first four blocks of ResNet18 [21]. The regional unwrapping features are extracted from 16 groups of object-RUPs and hand-RUPs through the first four blocks of ResNet18. The bone transformations are sent to the MLP to encode features of the relative relationship. 
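A minimal PyTorch sketch of these per-region feature extractors is given below. The small CNNs are stand-ins for the truncated ResNet18 blocks, and the module names, the channel split (512 + 192 + 64) and the two-channel stacking of hand-RUP/object-RUP are assumptions chosen only so that the per-region features together have 768 dimensions.

```python
import torch
import torch.nn as nn

class RegionEncoders(nn.Module):
    """Stand-in per-region feature extractors (the paper uses the first ResNet18
    blocks for image patches/RUPs and an MLP for bone transformations)."""
    def __init__(self, d_img=512, d_rup=192, d_bone=64):
        super().__init__()
        def cnn(in_ch, out_dim):
            return nn.Sequential(
                nn.Conv2d(in_ch, 32, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, out_dim))
        self.img_enc = cnn(3, d_img)    # 64x64 RGB patch cropped around each bone
        self.rup_enc = cnn(2, d_rup)    # hand-RUP and object-RUP stacked as 2 channels
        self.bone_enc = nn.Sequential(  # flattened 4x4 bone transformation
            nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, d_bone))

    def forward(self, patches, hand_rup, obj_rup, bones):
        # patches: (B,16,3,64,64); hand_rup/obj_rup: (B,16,64,64); bones: (B,16,4,4)
        B = patches.shape[0]
        f_img = self.img_enc(patches.flatten(0, 1)).view(B, 16, -1)
        f_rup = self.rup_enc(torch.stack([hand_rup, obj_rup], 2).flatten(0, 1)).view(B, 16, -1)
        f_bone = self.bone_enc(bones.flatten(2)).view(B, 16, -1)
        return f_img, f_rup, f_bone     # per-region features: 512 + 192 + 64 = 768 dims

enc = RegionEncoders()
feats = enc(torch.rand(2, 16, 3, 64, 64), torch.rand(2, 16, 64, 64),
            torch.rand(2, 16, 64, 64), torch.eye(4).expand(2, 16, 4, 4))
print([f.shape[-1] for f in feats])  # [512, 192, 64]
```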
These features are later concatenated together as a regional feature embedding \(\mathbf{F}_{c}\in\mathbb{R}^{16\times 768}\). After that, \(N_{CTA}\) cascaded ViT-based [10] attention modules are used to exploit the visual and hand-object spatial correlations within/across these 16 groups. They compute contact embeddings \(\mathbf{F}_{c+}\) with the same size as \(\mathbf{F}_{c}\). **Contact representation.** We represent hand contact as 2D maps \(\{\mathbf{C}_{i}^{H}\}_{i=1}^{16}\) on hand-RUPs and object contact as 2D maps \(\{\mathbf{C}_{i}^{O}\}_{i=1}^{16}\) on object-RUPs. Each pixel on the contact map indicates the contact probability of the point recorded on the RUP with the same indices. To estimate these image-like tensors, an extra CNN decoder with a structure symmetrical to the RUP encoder is further adopted. The contact embeddings \(\mathbf{F}_{c+}\) are up-sampled back to the RUP space. **Deformed degree.** Besides hand-object contact maps, we further estimate the regional deformed degrees \(\{D_{i}\}_{i=1}^{16},D_{i}\in[0,1]\) from \(\mathbf{F}_{c+}\) by an MLP. Each value in \(\{D_{i}\}_{i=1}^{16}\) represents the deformed degree of the contact area in one of the 16 object-RUPs. **Loss terms.** During training, we supervise the hand-object contact maps and the deformed degrees of contact areas. The RUFormer encoder loss \(L_{C}\) can be expressed as follows: \[L_{C}=L_{M}+\lambda_{1}L_{D} \tag{2}\] where \(L_{D}\) is the standard binary cross-entropy loss for deformed degrees, and the weight is set to \(\lambda_{1}=1000\). The ground truth of the deformed degree for each RUP is the average deformed degree of all contact points within the region. The process of obtaining the deformation degree of the contact points is shown in Fig. 4(b). \(L_{M}\) is the MSE loss between the prediction and the ground truth of hand-object contact maps, which can be defined as: \[L_{M}=\sum_{i=1}^{16}(\|\mathbf{C}_{i}^{H}-\hat{\mathbf{C}}_{i}^{H}\|_{2}^{2}+ \|\mathbf{C}_{i}^{O}-\hat{\mathbf{C}}_{i}^{O}\|_{2}^{2}) \tag{3}\] where \(\mathbf{C}_{i}^{H}\) and \(\mathbf{C}_{i}^{O}\) are the ground truth, and \(\hat{\mathbf{C}}_{i}^{H}\) and \(\hat{\mathbf{C}}_{i}^{O}\) are the predicted contact maps. ### RUFormer Decoder: Deformation Estimation **Coarse deformation acquiring.** The focus of the previous sections is on the contact area. However, fine-grained deformation should be described from a point-wise perspective. Figure 4: **(a) Grid-wise sampling results after back-projecting on their object surfaces. Each column corresponds to an instance. (b) The illustration of the object deformation process. The deformation vector of a surface point \(P\) is defined as its position difference before and after deformation (\(PP^{\prime}\)). The maximum deformation of \(P\) is defined as the distance \(d_{PQ}\) between the point in its original state and the closest intersection point of the object projected from that point in the opposite normal direction. The deformation degree of \(P\) is \(\mathbf{v}_{P}\triangleq d_{PP^{\prime}}/d_{PQ}\).** Each pixel on the object-RUPs corresponds to a point on the object surface. With the help of the deformed degrees of contact areas, the coarse deformation of contact points can be obtained. As illustrated in Fig. 4(b), a ray is emitted from the point toward the object surface for intersection detection, and its maximum deformation is defined as the distance between the point and the closest intersection point on the object. The ray direction is set to the negative normal direction of the point.
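A sketch of this maximum-deformation and deformed-degree computation (Fig. 4(b)) is given below, assuming the object is available as a watertight mesh and using trimesh for the ray-mesh intersection; the library choice, the function name and the sphere toy example are assumptions for illustration only.

```python
import numpy as np
import trimesh

def deformation_degree(mesh, p, p_deformed, normal, eps=1e-4):
    """Deformed degree of a surface point, following the definition in Fig. 4(b).

    d_PP' : displacement of the point before/after deformation.
    d_PQ  : maximum deformation, i.e. distance from P to the closest intersection
            of the object surface along the negative normal direction.
    Returns v_P = d_PP' / d_PQ, clipped to [0, 1].
    """
    origin = p - eps * normal                       # small offset so the ray does not hit P itself
    hits, _, _ = mesh.ray.intersects_location(
        ray_origins=origin[None], ray_directions=-normal[None])
    if len(hits) == 0:
        return 0.0
    d_pq = np.min(np.linalg.norm(hits - p, axis=1))
    d_pp = np.linalg.norm(p_deformed - p)
    return float(np.clip(d_pp / max(d_pq, 1e-9), 0.0, 1.0))

# Toy usage on a sphere: dent one vertex inward by 30% of the distance to the opposite wall.
sphere = trimesh.creation.icosphere(subdivisions=3, radius=0.05)
idx = 0
p = sphere.vertices[idx]
n = sphere.vertex_normals[idx]
p_def = p - 0.3 * 2 * 0.05 * n                      # pushed inward along the negative normal
print(deformation_degree(sphere, p, p_def, n))      # roughly 0.3
```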
The coarse deformation of the contact point is the maximum deformation multiplied by the predicted deformed degree. However, the deformation obtained in this way does not consider the deformation priori of local geometries and lacks the understanding of the global deformation behavior. **Points sampled from RUPs.** Therefore, we sample points from the object and utilize the RUFormer decoder to aggregate deformation features from these sampling points, ultimately predicting the deformation transformations of the object. Existing practices utilize the farthest point sampling to acquire point candidates, or iteratively optimize them through geodetic distance. By contrast, because the surface points of the object have been divided into 32 groups (\(\{\mathbf{R}_{i}^{O}\}_{i=1}^{16}\) and \(\{\mathbf{R}_{i}^{GP}\}_{i=1}^{16}\)) based on their distance from each hand bone, we select sampling points from RUPs in an orderly manner. Specifically, we divide RUP into \(n_{g}\times n_{g}\) grids and sample one point within a grid with maximum value. The coordinates of sampled points are converted back to Cartesian coordinates. For a grid with all-zero pixels, we use the one mask embedding \(\mathbf{p}_{[M]}\in\mathbb{R}^{3}\)[20] as a replacement: \[\mathbf{p}=\begin{cases}\Pi^{-1}(\rho,\theta,\varphi)&\rho\neq 0\\ \mathbf{p}_{[M]},&\rho=0\end{cases} \tag{4}\] where \(\mathbf{p}\) and \(\rho\) are the 3D point and pixel value corresponding to pixel \((\theta,\varphi)\) in a RUP. We obtain ordered point candidates. For a contact point, its deformation feature is set as its coarse deformation. For a non-contact point, its deformation feature is set as the learnable embedding \(\mathbf{d}_{[M]}\in\mathbb{R}^{3}\). As shown in Fig. 4(a), the points sampled according to RUP grids emphasize more on contact area compared with other general sampling strategies. With the above conversion, \(\frac{H_{R}}{n_{g}}\times\frac{W_{B}}{n_{g}}\times 32\) points are sampled. **Deformation attentions.** We inherit the idea of the deformation graph [31] that represents the deformation of arbitrary points on the surface as a combined deformation of nearby nodes: \[\tilde{\mathbf{p}}=\sum_{m=1}^{k}\omega_{m}\left[\mathbf{A}_{m}\left(\mathbf{p}-\mathbf{g }_{m}\right)+\mathbf{g}_{m}\right] \tag{5}\] where \(\mathbf{p}\) is original position of the point and \(\tilde{\mathbf{p}}\) is its deformed position. \(\omega_{m}\) is the weight of node \(\mathbf{g}_{m}\) to \(\mathbf{p}\). The weight calculation is referred to [31]. Therefore, the RUFormer decoder is designed to select \(N_{q}\) nodes from \(N_{p}\) points, and predict their affine transformations \(\{\mathbf{A}_{k}\}_{k=1}^{N_{p}}\) according to input features \(\mathbf{F}_{p}\). In practice, we use the farthest point sampling to select \(N_{q}\) nodes from \(N_{p}\) points, where \(N_{p}=\frac{H_{B}}{n_{g}}\times\frac{W_{B}}{n_{g}}\times 32\). The input features \(\mathbf{F}_{p}=\{\mathbf{p}_{j}\oplus\mathbf{d}_{j}\oplus c_{j}\}_{j=1}^{N_{p}}\in \mathbb{R}^{N_{p}\times 7}\), where \(c_{j}\) indicates whether the point is selected as the node. If the point is selected as the node, it is 1, otherwise, it is 0. Our deformation transformations of the object estimation process are shown in Fig. 3(c). We first utilize an MLP to encode \(\mathbf{F}_{p}\) to the latent space and extract deformation embeddings \(\mathbf{F}_{d}\in\mathbb{R}^{N_{p}\times 256}\). 
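The sketch below illustrates the deformation-graph blending of Eq. 5 together with the assembly of the decoder input \(\mathbf{F}_{p}\) and its MLP encoding into \(\mathbf{F}_{d}\). The kNN-softmax weights stand in for the weight scheme of [31], random node selection replaces farthest point sampling, and all names and toy sizes are illustrative assumptions.

```python
import torch
import torch.nn as nn

def deform_points(points, nodes, node_rot, node_trans, k=4):
    """Deformation-graph warping (Eq. 5): each point is moved by a weighted blend
    of the affine transformations of its k nearest nodes.
    points: (P,3)  nodes: (Q,3)  node_rot: (Q,3,3)  node_trans: (Q,3)."""
    dist = torch.cdist(points, nodes)                      # (P, Q)
    w, idx = torch.topk(-dist, k, dim=1)                   # k nearest nodes per point
    w = torch.softmax(w, dim=1)                            # stand-in for the weights of [31]
    rel = points[:, None, :] - nodes[idx]                  # (P, k, 3): p - g_m
    moved = torch.einsum('pkij,pkj->pki', node_rot[idx], rel) \
            + nodes[idx] + node_trans[idx]                 # A_m(p - g_m) + g_m with translation
    return (w[..., None] * moved).sum(dim=1)               # (P, 3)

N_p, N_q = 1024, 128
points = torch.rand(N_p, 3)
coarse_def = torch.rand(N_p, 3)            # coarse deformation (or a learnable mask embedding)
is_node = torch.zeros(N_p, 1)
is_node[torch.randperm(N_p)[:N_q]] = 1.0   # random stand-in for farthest point sampling

# Identity node transformations leave every point where it is (sanity check of Eq. 5).
nodes = points[is_node[:, 0] > 0]
deformed = deform_points(points, nodes,
                         torch.eye(3).expand(N_q, 3, 3), torch.zeros(N_q, 3))
print(torch.allclose(deformed, points, atol=1e-5))

# Assemble F_p = p (+) d (+) c and encode it with an MLP into F_d of size N_p x 256.
F_p = torch.cat([points, coarse_def, is_node], dim=-1)     # (N_p, 7)
F_d = nn.Sequential(nn.Linear(7, 128), nn.ReLU(), nn.Linear(128, 256))(F_p)
print(F_d.shape)  # torch.Size([1024, 256])
```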
After that, \(N_{\mathrm{DFA}}\) cascaded attention modules are used to enhance the understanding of global deformation behavior and aggregate the deformation features. It computes embeddings \(\mathbf{F}_{d+}\) with the same size as \(\mathbf{F}_{d}\). Finally, the deformation transformations are obtained through an MLP. We train RUFormer decoder in a semi-supervised manner (transformations of points not selected as nodes are not supervised). To reduce dimensionality, each rotation is represented as an axis-angle. ### Implementation Details **Surface refinement.** Based on hand-object contact maps and deformation transformations, we refine the hand and object surfaces. We first perform object surface deformation. The vertices in the object are deformed by Eqn. 5. Afterward, hand and object pose are refined based on hand contact maps \(\{\hat{\mathbf{C}}_{i}^{H}\}_{i=1}^{16}\) and object contact maps \(\{\hat{\mathbf{C}}_{i}^{O}\}_{i=1}^{16}\). We convert back to points in the surface by querying pixels in hand-RUPs and object-RUPs, then obtain the contact information of the hand and object surface vertices through interpolation, respectively. Then we follow the method in [12] and optimize the hand-object poses to achieve the target contact. **Parameter settings.** The hand-object RUPs size is set to \(H_{R}=W_{R}=64\) and the image patch size is set to \(n_{p}=64\). The grid size for point sampling is set to \(n_{g}=4\). The depth of contact attentions modules and deformation attentions modules are set to \(N_{\mathrm{CTA}}=6,N_{\mathrm{DFA}}=5\), respectively. We use Pytorch to implement our networks and train them on a computer configured with NVIDIA GeForce RTX 3090. RUFormer encoder and RUFormer decoder are trained separately. To train RUFormer encoder, we use SGD optimizer with a learning rate 1e-4. RUFormer decoder is trained with Adam optimizer with a learning rate 1e-4. The total training epochs for RUFormer encoder and RUFormer decoder are both 100. ## 4 Experiments ### Datasets Our experiments are performed on the HMDO [40], HO3D [13], FPHB [11] and ContactPose [2] datasets. The HMDO dataset records the interaction between hands and nonrigid objects. We split it with 4:1 for training and testing. HO3D and FPHB is the dataset of hands in manipulation with rigid objects. We follow the official dataset split for HO3D and adopt the action split following the protocol given by [17] for FPHB. ContactPose is the dataset of hand-object contact paired with hand-object pose. HMDO, HO-3D, and FPHB datasets are used to test our entire pipeline. ContactPose dataset is used to evaluate our hand and object contact estimation. To reduce the ambiguity in the selection of interacted objects, we filter these datasets with the 3D distance between the hand-object not exceeding \(2\mathrm{mm}\) as the threshold. ### Metrics **Hand-object error.** We use the mean per-point position error (_MPJPE_) of 21 hand joints to evaluate the 3D reconstruction error. The mean per-vertex position error (_MPVPE_) is adopted to evaluate the object error. **Contact quality.** We adopt _max penetration_ (denoted as Max Pene. in tables) and _intersection volume_ (denoted as Inter. in tables) proposed in [19] to evaluate the hand-object geometric relationship. ### Comparisons **Monocular hand-object reconstruction.** In the task of reconstructing the hand-object from the monocular image, our method is compared with the hand-object reconstruction network from Hasson _et al_. [17]. 
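For reference, the two position-error metrics described above (MPJPE and MPVPE) amount to simple per-point averages; the short sketch below states them explicitly. The helper names and the toy millimetre-scale data are assumptions, and any root alignment of the predictions is left to the evaluation protocol.

```python
import numpy as np

def mpjpe(pred_joints, gt_joints):
    """Mean per-joint position error over the 21 hand joints (same unit as inputs, e.g. mm)."""
    return float(np.linalg.norm(pred_joints - gt_joints, axis=-1).mean())

def mpvpe(pred_verts, gt_verts):
    """Mean per-vertex position error over the object template vertices."""
    return float(np.linalg.norm(pred_verts - gt_verts, axis=-1).mean())

joints_gt = np.random.rand(21, 3) * 100                       # toy joint positions in mm
joints_pred = joints_gt + np.random.normal(scale=5.0, size=(21, 3))
print(round(mpjpe(joints_pred, joints_gt), 2))
```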
The quantitative results on HO3D [13] and FPHB [11] datasets are shown in Tab. 3. Our method achieves better performance in hand-object interaction datasets. This demonstrates that our method can achieve explicit contact patterns inference and effective hand-object contact optimization, which can help us reconstruct hand-object interaction with higher quality. We show our qualitative results in Fig. 5. Our methods can achieve more plausible reconstructions with fewer penetrations than [17]. More qualitative results of our method are shown in Fig. 7. **Hand-object contact estimation.** We take the result from our pose estimation network as the initial state and compare our RUFormer with ContactOpt [12] and \(\mathrm{S}^{2}\)Contact [35]. We retrain DeepContact network in ContactOpt [12] and GCN-Contact network in \(\mathrm{S}^{2}\)Contact [35] on the HMDO [40] dataset. As shown in Fig. 1 and Fig. 6, we show the qualitative results compared with [12, 35] under HMDO [40], ContactPose [2] and HO3D [13] datasets. We evaluate the contact patterns optimization results of 3D rigid hand-object interaction between [12, 35] and ours in Tab. 1. Our method achieved better performance than other methods. The contact optimization results of nonrigid hand-object interaction are shown in Tab. 2, and our method achieves higher quality grasping of hand and object. These demonstrate that our method can achieve better contact patterns optimization in both rigid and nonrigid interactions compared to other methods. Since RUFormer can estimate the deformed degree of the contact areas and the deformed transformations of the object, it allows our method to suit \begin{table} \begin{tabular}{c|c c c c} \hline \hline Methods & Initial State & [12] & [35] & Ours \\ \hline MPJPE\({}_{H}(\mathrm{mm})\)\(\downarrow\) & 14.67 & 14.92 & 14.81 & **14.78** \\ Max Pene.\((\mathrm{mm})\)\(\downarrow\) & 11.82 & 9.46 & 9.75 & **9.24** \\ Inter.\((\mathrm{cm}^{3})\)\(\downarrow\) & 10.69 & 7.51 & 7.58 & **7.39** \\ \hline \hline \end{tabular} \end{table} Table 1: **Evaluations for rigid interactions** under HO3D [13] dataset. Figure 5: **Comparisons on monocular reconstruction.** (a) RGB images. (b) Reconstruction results from [17]. (c) Ours. able for both contact optimization with nonrigid and rigid objects. From Tab. 1 and Tab. 2, it can be seen that although our method and [12, 35] did not improve hand pose estimation, they significantly reduced the intersection volume and max penetration, and improved the hand-object contact quality. This may be due to the optimization affected the hand region that did not interact with the object, as shown in the row 2 of Fig. 6. ### Ablation Study **Baseline.** We take our hand-object pose estimation network as our baseline. Since monocular estimation is ill-posed, there may be mutual penetration or no contact between the reconstructed hand and the object. In addition, the contact areas of the object may be deformed due to its non-rigidity and the force exerted by the subject. Since HMDO [40] is the dataset that records the 3D nonrigid interactions, most of our ablation experiments are based on this dataset. Where the results of completely using our entire pipeline are in the last row of Tab. 4. **Surface unwrapping.** We explore the effects of the size of hand-object RUPs. As shown in row 3 to row 4 of Tab. 4, we compared the impact of different sizes of RUP on hand-object interaction reconstruction. 
Considering both efficiency and reconstruction quality, it is appropriate to set the size of RUPs to \(64\times 64\). RUP of \(n\times n\) size is denoted as "RUP-n" in Tab. 4. **Contact estimation.** The impact of contact attention modules is ablated as shown in row 5 of Tab. 4. We replace contact attention modules with an MLP architecture, resulting \begin{table} \begin{tabular}{c|c c|c c} \hline \hline Datasets & \multicolumn{2}{c|}{FPHB [11]} & \multicolumn{2}{c}{HO3D [13]} \\ \hline Methods & [17] & Ours & [17] & Ours \\ \hline MPJPE\({}_{H}(\mathrm{mm})\downarrow\) & 18.23 & **17.86** & 14.74 & **14.78** \\ MPVPE\({}_{O}(\mathrm{mm})\downarrow\) & 21.45 & **21.22** & 19.42 & **19.27** \\ Max Pene.\((\mathrm{mm})\downarrow\) & 18.64 & **13.35** & 11.43 & **9.24** \\ Inter.\((\mathrm{cm}^{3})\downarrow\) & 13.57 & **8.28** & 10.26 & **7.39** \\ \hline \hline \end{tabular} \end{table} Table 3: **Comparisons for monocular reconstruction under FPHB [11] and HO3D [13] datasets.** Figure 6: **Comparisons on contact patterns optimization. (a) RGB images. (b) Initial hand-object surfaces. (c) Results from ContactOpt [12]. (d) Results from \(\mathrm{S}^{2}\)Contact [35]. (e) Ours. Row1, Row2 and Row 3 are 3D nonrigid interactions. Row4 is 3D rigid interaction.** in a decrease in the quality of hand object reconstruction. The main reason may be that the contact attention modules can better explore the visual and hand-object spatial correlation, which can help better to predict the mutual contact areas and the object deformation. We ablate the effect of introducing image patches on the results. As shown in row 6 of Tab. 4, visual cues can improve the quality of hand-object reconstruction. Because the image patches contain contact and deformation information, which can guide our RUFormer to better predict contact areas and deformed degrees on region-aligned features. We denote the contact attention modules as "Con-Att" and image patches as "Img-Pat" in Tab. 4. **Deformation estimation.** As shown in row 7 of Tab. 4, we ablate the deformation estimation block. We replace our deformation estimation block with Point-Transformer [45]. Compared with [45], our deformation estimation block can give consideration to both efficiency and accuracy. We do not need to constantly query and build the neighborhood. Our deformation estimation block can benefit from the ordered sampling points and achieve effective aggregation of deformation features. In addition, we ablate the grid size for ordered point sampling from object-RUPs and object-f-RUPs, as shown in row 8 of Tab. 4. Since the deformed transformations aggregated from fewer grid-wise sampling features can not well represent the object surface deformation and more points calculations are expensive, setting grid size to \(4\times 4\) is more appropriate. We denote Point-Transformer as "Point-Tran" and \(n\times n\) grid size as "grid-\(n\)" in Tab. 4. ## 5 Conclusion This paper proposes a learning-based framework to estimate the contact patterns between hand and nonrigid objects from monocular images. A hand-object interaction representation is proposed to record the hand-object surfaces into multiple fine-grained 2D regional unwrapping profiles. Based on this representation, the roughly estimated hand-object surfaces are first unwrapped into 2D regional profiles, then a Vision Transformer is tamed to predict contact areas and deformed transformations within/across regions according to region-aligned features. 
Finally, hand-object surfaces are refined based on contact areas and deformed transformations. **Limitations and Future Work.** Due to the influence of 2D hand joints on image patch cropping, our method relies on reliable 2D pose estimation. Our method can be extended to RGBD input and multi-view RGB input. By introducing depth and multi-view information, we can improve the quality of contact patterns and hand-object reconstruction. \begin{table} \begin{tabular}{c|c c c c} \hline \hline Method & MPFP\({}_{\text{gt}}\)(\(\text{tunn}\)) \(\downarrow\) & MPVF\({}_{\text{G}}\)(\(\text{tunn}\)) \(\downarrow\) & Max Fenc.(\(\text{tunn}\)) \(\downarrow\) & Image.(\(\text{tun}^{T}\)) \(\downarrow\) \\ \hline Baseline & 18.54 & 21.42 & 19.46 & 14.25 \\ \hline wf (RP-2) & 19.12 & 21.16 & 11.01 & 8.82 \\ w/ RUF-128 & 18.94 & 21.08 & 10.25 & 8.64 \\ \hline w/o Con-Att & 19.73 & 21.59 & 11.62 & 9.27 \\ w/o Long-Pat & 19.96 & 21.65 & 12.57 & 9.19 \\ \hline w/ Point-Tran & 19.04 & 21.09 & 10.75 & 8.49 \\ w/ grid-8 & 19.25 & 21.24 & 11.16 & 8.74 \\ \hline Ours & **18.97** & **21.04** & **10.42** & **8.56** \\ \hline \hline \end{tabular} \end{table} Table 4: **Ablation study of our method.** Our RUFormer, surface unwrapping and point sampling are evaluated. Figure 7: **More qualitative results.** (a) RGB images. (b) Initial hand-object surfaces. (c) Ours. High-quality reconstruction results certify the effectiveness of our framework.
2310.11346
Towards Generalizable Multi-Camera 3D Object Detection via Perspective Debiasing
Detecting objects in 3D space using multiple cameras, known as Multi-Camera 3D Object Detection (MC3D-Det), has gained prominence with the advent of bird's-eye view (BEV) approaches. However, these methods often struggle when faced with unfamiliar testing environments due to the lack of diverse training data encompassing various viewpoints and environments. To address this, we propose a novel method that aligns 3D detection with 2D camera plane results, ensuring consistent and accurate detections. Our framework, anchored in perspective debiasing, helps the learning of features resilient to domain shifts. In our approach, we render diverse view maps from BEV features and rectify the perspective bias of these maps, leveraging implicit foreground volumes to bridge the camera and BEV planes. This two-step process promotes the learning of perspective- and context-independent features, crucial for accurate object detection across varying viewpoints, camera parameters, and environmental conditions. Notably, our model-agnostic approach preserves the original network structure without incurring additional inference costs, facilitating seamless integration across various models and simplifying deployment. Furthermore, we also show our approach achieves satisfactory results in real data when trained only with virtual datasets, eliminating the need for real scene annotations. Experimental results on both Domain Generalization (DG) and Unsupervised Domain Adaptation (UDA) clearly demonstrate its effectiveness. The codes are available at https://github.com/EnVision-Research/Generalizable-BEV.
Hao Lu, Yunpeng Zhang, Qing Lian, Dalong Du, Yingcong Chen
2023-10-17T15:31:28Z
http://arxiv.org/abs/2310.11346v3
# Towards Generalizable Multi-Camera 3D Object Detection via Perspective Debiasing ###### Abstract Detecting objects in 3D space using multiple cameras, known as Multi-Camera 3D Object Detection (MC3D-Det), has gained prominence with the advent of bird's-eye view (BEV) approaches. However, these methods often struggle when faced with unfamiliar testing environments due to the lack of diverse training data encompassing various viewpoints and environments. To address this, we propose a novel method that aligns 3D detection with 2D camera plane results, ensuring consistent and accurate detections. Our framework, anchored in perspective debiasing, helps the learning of features resilient to domain shifts. In our approach, we render diverse view maps from BEV features and rectify the perspective bias of these maps, leveraging implicit foreground volumes to bridge the camera and BEV planes. This two-step process promotes the learning of perspective- and context-independent features, crucial for accurate object detection across varying viewpoints, camera parameters and environment conditions. Notably, our model-agnostic approach preserves the original network structure without incurring additional inference costs, facilitating seamless integration across various models and simplifying deployment. Furthermore, we also show our approach achieves satisfactory results in real data when trained only with virtual datasets, eliminating the need for real scene annotations. Experimental results on both Domain Generalization (DG) and Unsupervised Domain Adaptation (UDA) clearly demonstrate its effectiveness. The codes are available at [https://github.com/EnVision-Research/Generalizable-BEV](https://github.com/EnVision-Research/Generalizable-BEV). ## 1 Introduction Multi-Camera 3D Object Detection (MC3D-Det) refers to the task of detecting and localizing objects in 3D space using multiple cameras (Ma et al., 2022; Li et al., 2022). By combining information from different viewpoints, multi-camera 3D object detection can provide more accurate and robust object detection results, especially in scenarios where objects may be occluded or partially visible from certain viewpoints. In recent years, bird's-eye view (BEV) approaches have gained tremendous attention for the MC3D-Det task (Ma et al., 2022; Li et al., 2022; Liu et al., 2022; Wang et al., 2022). Despite their strengths in multi-camera information fusion, these methods may face severe performance degeneration when the testing environment is significantly different from the training ones. Two promising directions to alleviate the distribution shifts are domain generalization (DG) and unsupervised domain adaptation (UDA). DG methods often decouple and eliminate the domain-specific features, so as to improve the generalization performance of the unseen domain Wang et al. (2023). Regarding to UDA, recent methods alleviate the domain shifts via generating pseudo labels Li et al. (2022); Yuan et al. (2023); Yang et al. (2021) or latent feature distribution alignment Xu et al. (2023); Wang et al. (2023). However, without taining data from various viewpoints, camera parameters and environment, it is very challenging for purely visual perception to learn perspective- and environment-independent features. Our observations indicate that 2D detection in a single-view (camera plane) often have a stronger ability to generalize than multi-camera 3D object detection, as shown in Fig. 1. 
Drawing from this insight, we introduce a method that projects 3D detection results onto the 2D camera plane, ensuring consistency with 2D results. This approach effectively corrects erroneous 3D information arising from domain shifts. Furthermore, we introduce a strategy of perspective debiasing, where by rendering images from various viewpoints, we aim to enhance the model's robustness to different perspectives. This strategy aids the network in learning perspective-invariant features, which are crucial for accurate object detection across varying viewpoints and domain conditions. To achieve this, we present a novel MC3D-Det framework grounded in perspective debiasing. This framework bridges different planes, enabling the learning of perspective- and context-invariant features against domain shifts. Our approach involves two main steps: 1) rendering diverse view maps from BEV features, and 2) rectifying the perspective bias of these maps. The first step leverages implicit foreground volumes (IFV) to relate the camera and BEV planes, allowing for the rendering of view maps with varied camera parameters. The second step, in the source domain, uses random camera positions and angles to supervise the camera plane map rendered from IFV, promoting the learning of perspective- and context-independent features. Similarly, in the target domain, a pre-trained 2D detector aids in rectifying BEV features. Notably, our model-agnostic approach preserves the original network structure without incurring additional inference costs, facilitating seamless integration across various models and simplifying deployment. This not only reduces development and maintenance complexity but also ensures efficiency and resource conservation, crucial for real-time applications and long-term, large-scale deployments. To verify our method, we established the UDA benchmark on MC3D-Det and instantiated our framework on BEVDepth, achieving excellent results in both DG and UDA protocol. We also pioneer the use of training on virtual datasets, bypassing the need for real scene annotations, to enhance real-world multi-camera 3D perception tasks. In summary, the core contributions of this paper are: * We propose a generalizable MC3D-Det framework based on perspective debiasing, which can not only help the model learn the perspective- and context-invariant feature in the source domain, but also utilize 2D detector to further correct the spurious geometric features in the target domain. * We make the first attempt to study unsupervised domain adaptation on MC3D-Det and establish a benchmark. Our approach achieved the state-of-the-art results on both UDA and DG protocols. * We explore the training on virtual engine without the real scene annotations to achieve real-world MC3D-Det tasks for the first time. ## 2 Related Works ### Vision-based 3D object detection Multi-camera 3D object detection (MC3D-Det) targets to identify and localize objects in 3D space, received widespread attention (Ma et al., 2022; Li et al., 2022). Recently, most of MC3D-Det methods extract image features and project them onto the bird's-eye view (BEV) plane for better Figure 1: Domain gap challenges cause MC3D-Det to sometimes produce spurious and deteriorate depth estimations. By contrast, 2D detectors typically demonstrate more precise performance against domain gap, suggesting potential strategies to adjust 3D detector inaccuracies. integrating the spatial-temporal feature. 
Orthographic feature transform (OFT) and Lift-splat-shoot (LSS) provide the early exploration of mapping the multi-view features to BEV space (Roddick et al., 2019; Philion and Fidler, 2020). Based on LSS, BEVDet enables this paradigm to the detection task competitively (Huang et al., 2021). BEVDepth utilizes LiDAR as depth supervisory information to enhance the view projection (Li et al., 2023a). BEVformer further designs a transformer structure to automatically extract and fuse BEV features, leading to excellent performance on 3D detection (Li et al., 2022d). FB-BEV further combines LSS-based method and transformer-based method to improve model performance (Li et al., 2023b). PETR series propose 3D position-aware encoding to enable the network to learn geometry information implicitly (Liu et al., 2022; 2023). These methods have achieved satisfactory results on the in-distribution dataset but may show very poor results under cross-domain protocols. ### Cross Domain Protocols on Detection Domain generalization or unsupervised domain adaptation aims to improve model performance on the target domain without labels. Many approaches have been designed for 2D detection, such as feature distribution alignment or pseudo-label methods (Muandet et al., 2013; Li et al., 2018; Dou et al., 2019; Facil et al., 2019; Chen et al., 2018; Xu et al., 2020; He and Zhang, 2020; Zhao et al., 2020). These methods can only solve the domain shift problem caused by environmental changes like rain or low light. For the MC3D-Det task, there is only one study for domain shift, which demonstrates that an important factor for MC3D-Det is the overfitting of camera parameters (Wang et al., 2023a). Essentially, the fixed observation perspective and similar road structures in the source domain lead to spurious and deteriorated geometric features. However, without additional supervision, it is very difficult to further extract perspective- and context-independent features on the target domain. ### Virtual Engine for Automatic Driving Virtual engines can generate a large amount of labeled data, and DG or UDA can utilize these virtual data to achieve the perception of real scenes. According to the requirements, the virtual engine has better controllability and can generate various scenarios and samples: domain shift (Sun et al., 2022), vehicle-to-everything (Xu et al., 2022; Li et al., 2022b), corner case (Kim et al., 2022; Wang et al., 2023c). Through these virtual scenarios, we can easily measure the autonomous vehicle's perception, decision-making and planning abilities. So, breaking the domain gap between virtual and real datasets can further facilitate the closed-loop form of visually-oriented planning (Jia et al., 2023). To our best knowledge, there are no studies that only use virtual engine without real scenes labels for MC3D-Det. ## 3 Preliminaries ### Problem Setup Our research is centered around enhancing the generalization of MC3D-Det. To achieve this goal, we explore two widely used and practical protocols, namely, domain generalization (DG) and unsupervised domain adaptation (UDA). \(\bullet\) For DG on MC3D-Det task, our primary objective is to leverage solely the labeled data from the source domain \(D_{S}=\{X_{s}^{i},Y_{s}^{i},E_{s}^{i},E_{s}^{i}\}\) to improve the generalization of model. Here, the \(i\)-th sample contains \(N\) multi view images \(X^{i}=\{I_{1},I_{2},...,I_{N}\}\) (superscript is omitted for clearity) and the corresponding intrinsic \(K^{i}\) and extrinsic parameters \(E^{i}\) of camera. 
The labels of the source domain \(Y_{s}^{i}\) include the location, the size in each dimension, and the orientation. \(\bullet\) For UDA on the MC3D-Det task, additional unlabeled target domain data \(D_{T}=\{X_{t}^{i},K_{t}^{i},E_{t}^{i}\}\) can be utilized to further improve the generalization of the model. The only difference between DG and UDA is whether the unlabeled data of the target domain can be utilized. ### Perspective Bias To detect the object's location \(L=[x,y,z]\) in the BEV space, corresponding to the image plane \([u,v]\), most MC3D-Det methods involve two essential steps: (1) extract the image features from the \(j\)-th camera by the image encoder \(F_{img}\); (2) map these features into BEV space and fuse them to get the final location of objects by the BEV encoder \(F_{bev}\): \[L=F_{bev}(F_{img}(I_{1}),...,F_{img}(I_{N}),K,E) \tag{1}\] \[=L_{gt}+\Delta L_{img}+\Delta L_{bev},\] where \(L_{gt}\), \(\Delta L_{img}\) and \(\Delta L_{bev}\) are the ground-truth location and the biases of the image encoder (\(F_{img}\)) and the BEV encoder (\(F_{bev}\)), respectively. Both \(\Delta L_{img}\) and \(\Delta L_{bev}\) are caused by overfitting to limited viewpoints, camera parameters, and similar environments. Without additional supervision in the target domain, \(\Delta L_{img}\) and \(\Delta L_{bev}\) are difficult to mitigate. So we turn the spatial bias into the bias of a single perspective view. We express the perspective bias \([\Delta u,\Delta v]\) on the uv image plane as: \[[\Delta u,\Delta v]=\left[\frac{k_{u}(u-c_{u})+b_{u}}{d(u,v)},\frac{k_{v}(v-c_{v})+b_{v}}{d(u,v)}\right], \tag{2}\] where \(k_{u}\), \(b_{u}\), \(k_{v}\), and \(b_{v}\) are related to the domain bias of the BEV encoder \(\Delta L_{bev}\), and \(d(u,v)\) represents the final predicted depth of the model. \(c_{u}\) and \(c_{v}\) represent the coordinates of the camera's optical center on the uv image plane. The detailed proof and discussion are given in Appendix C. Eq. 2 provides us with several important inferences: (1) the presence of the final position shift can lead to perspective bias, indicating that optimizing perspective bias can help alleviate domain shift. (2) Even points on the photocentric rays of the camera may experience a shift in their position on the uv image plane. Intuitively, the domain shift changes the BEV feature position and value, which arises due to overfitting with limited viewpoints and camera parameters. To mitigate this issue, it is crucial to re-render new view images from BEV features, thereby enabling the network to learn perspective- and environment-independent features. In light of this, the paper aims to address the perspective bias associated with different rendered viewpoints to enhance the generalization ability of the model. ## 4 Method To reduce the bias stated in Eq. 2, we tailor a generalizable framework (PD-BEV) based on perspective debiasing, as shown in Fig. 2. Our framework is model-agnostic, and we demonstrate its effectiveness by optimizing BEVDepth as an example. Figure 2: The generalizable framework (PD-BEV) based on perspective debiasing. The main pipeline of BEVDepth is shown in the bottom part of the figure. With the supervision of heatmaps and virtual depth, semantic and geometric knowledge is injected into the preliminary image features in advance. Then, an implicit foreground volume (IFV) is tailored as a carrier between the camera plane and the BEV plane. The rendered heatmaps from the IFV are supervised by 3D boxes in the source domain and by the pre-trained 2D detector in the target domain.
The green flow means the supervision of the source domain and the red flow is for the target domain. The RenderNet shares the same parameters. ### Semantic Rendering We first introduce how to establish the connection between 2D image plane and BEV space. However, most MC3D-Det methods utilize the BEV plane representations without height dimension (Huang et al., 2021; Li et al., 2023;b), so we propose the implicit foreground volume for rendering new viewpoints. Specifically, we use a geometry-aware decoder \(D_{geo}\) to transform the BEV feature \(F_{bev}\in\mathbb{R}^{C\times X\times Y}\) into the intermediate feature \(F^{{}^{\prime}}_{bev}\in\mathbb{R}^{C\times 1\times X\times Y}\) and \(F_{height}\in\mathbb{R}^{1\times Z\times X\times Y}\), and this feature is lifted from BEV plane to an implicit foreground volume \(V_{ifv}\in\mathbb{R}^{C\times Z\times X\times Y}\): \[V_{ifv}=\text{sigmoid}(F_{height})\cdot F_{bev}. \tag{3}\] Eq. 3 lifts the object on the BEV plane into 3D volume with the estimated height position \(\text{sigmoid}(F_{height})\). \(\text{sigmoid}(F_{height})\) represents whether there is an object at the corresponding height. Ideally, the volumes \(V_{ifv}\) contain all the foreground objects information in the corresponding position. To render semantic features of different viewpoints, we propose the Multi-View Semantic Rendering (MVSR). Specifically, we first randomly perturb the camera's position \((x+\triangle x,y+\triangle y,z+\triangle z)\) and orientation \((\theta_{yaw}+\triangle\theta_{yaw},\theta_{pitch}+\triangle\theta_{pitch}, \theta_{roll}+\triangle\theta_{roll})\). Based on the camera's position and observation orientation, we generate the coordinate of multiple rays \(r^{w,h}_{i}=[x^{w,h},y^{w,h},z^{w,h}]\) to sample from implicit foreground volumes \(V_{ifv}\) and aggregate them into the camera plane feature \(F_{render}\): \[F(w,h)_{render}=\sum_{i=1}^{n}V_{ifv}(x^{w,h},y^{w,h},z^{w,h}), \tag{4}\] where \(r^{w,h}_{i}=[x^{w,h},y^{w,h},z^{w,h}]\) represents the ray coordinates of \(w\)-th row and \(h\)-th column camera plane in the implicit foreground volumes \(V_{ifv}\). The rendered camera plane feature \(F_{render}\) is then fed into the RenderNet \(R\), which is the combination of several 2D convolutional layers, to generate the heatmaps \(h_{render}\in\mathbb{R}^{N_{cls}\times W\times H}\) and attributes \(a_{render}\in\mathbb{R}^{N_{cls}\times W\times H}\). \(N_{cls}\) means the number of categories. The detailed structure of RenderNet is introduced in appendix B.4. The semantic heatmaps and attributes can be constrained on the source and target domains to eliminate perspective bias \([\Delta u,\Delta v]\). ### Perspective debiasing on source domain To reduce perspective bias as stated in Eq. 2, the 3D boxes of source domain can be used to monitor the heatmaps and attributes of new rendered view. In addition, we also utilize normalized depth information to help the image encoder learn better geometry information. #### 4.2.1 Perspective Semantic Supervision Based on Sec. 4.1, the heatmaps and attributes from different perspectives (the output of RenderNet) can be rendered. Here we will explain how to regularize them to eliminate perspective bias Eq. 2. 
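Before turning to the supervision, a minimal PyTorch sketch of the rendering step from Sec. 4.1 (Eqs. 3-4) may be helpful: BEV features are lifted into the implicit foreground volume and sampled along per-pixel rays with grid_sample. The axis ordering, toy shapes and the random rays below are simplifying assumptions; in MVSR the rays come from a randomly perturbed camera pose and the sampled feature map is fed to the RenderNet.

```python
import torch
import torch.nn.functional as F

def lift_to_ifv(f_bev, f_height):
    """Eq. 3: V_ifv = sigmoid(F_height) * F'_bev, lifting BEV features along height.
    f_bev: (B, C, 1, X, Y), f_height: (B, 1, Z, X, Y)  ->  (B, C, Z, X, Y)."""
    return torch.sigmoid(f_height) * f_bev

def render_rays(v_ifv, ray_pts):
    """Eq. 4-style rendering: sample V_ifv at n points along every pixel ray and sum
    them, giving a (B, C, H, W) camera-plane feature map.
    ray_pts: (B, H, W, n, 3), already normalized to [-1, 1] and ordered to match
    grid_sample's (W, H, D) indexing of the volume."""
    B, H, W, n, _ = ray_pts.shape
    grid = ray_pts.view(B, H, W * n, 1, 3).permute(0, 3, 1, 2, 4)   # (B, 1, H, W*n, 3)
    feats = F.grid_sample(v_ifv, grid, align_corners=False)         # (B, C, 1, H, W*n)
    return feats.view(B, -1, H, W, n).sum(dim=-1)

# Toy shapes: a 64x64 BEV grid with 8 height bins, rendering a 16x44 view with 32 ray samples.
B, C, Z, X, Y = 1, 80, 8, 64, 64
v_ifv = lift_to_ifv(torch.rand(B, C, 1, X, Y), torch.rand(B, 1, Z, X, Y))
rays = torch.rand(B, 16, 44, 32, 3) * 2 - 1   # placeholders for rays of a perturbed camera
feat = render_rays(v_ifv, rays)
print(feat.shape)  # torch.Size([1, 80, 16, 44]) -> fed to the RenderNet for heatmaps/attributes
```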
Specifically, we project the object's box from the ego coordinate system to the \(j\)-th 2D camera plane using the intrinsic \(K^{\prime}_{j}\) and extrinsic parameters \(E^{\prime}_{j}\) of the rendering process: \(\hat{P}_{j}=(ud,vd,d)=K^{\prime}_{j}E^{\prime}_{j}P\), where \(\hat{P}_{j}\) and \(P\) stand for the object on the 2.5D camera plane and in 3D space, and \(d\) represents the depth between the object and the view's optical center. Based on the position of the object on the image plane, the category heatmaps \(h_{gt}\in\mathbb{R}^{N_{cls}\times W\times H}\) can be generated following Yin et al. (2021). The object's dimensions (length, width and height) \(a_{gt}\in\mathbb{R}^{N_{cls}\times W\times H}\) are also projected to the uv plane. Following (Yin et al., 2021), the focal loss \(Focal()\) (Lin et al., 2017) and the L1 loss \(L1\) are used to supervise the class information and object dimensions on the source domain: \[\mathcal{L}_{render}=Focal(h_{render},h_{gt})+L1(a_{render},a_{gt}). \tag{5}\] Additionally, we also train a 2D detector on the image features using the 3D boxes by \(\mathcal{L}_{ps}\), which uses the same mapping and supervision methods as above. The only difference is that the 3D boxes are projected using the original intrinsics \(K\) and extrinsics \(E\) of the camera. The 2D detector can be further applied to correct the spurious geometry in the target domain. #### 4.2.2 Perspective Geometry Supervision Providing explicit depth information can be effective in improving the performance of multi-camera 3D object detection (Li et al., 2023a). However, the depth predicted by the network tends to overfit the intrinsic parameters. So, following (Park et al., 2021; Wang et al., 2023a), we force the DepthNet to learn a normalized virtual depth \(D_{virtual}\): \[\begin{split}\mathcal{L}_{pg}=BCE(D_{pre},D_{virtual}),\\ D_{virtual}=\frac{\sqrt{\frac{1}{f_{u}^{2}}+\frac{1}{f_{v}^{2}}}}{U}D,\end{split} \tag{6}\] where \(BCE()\) denotes the binary cross-entropy loss, and \(D_{pre}\) represents the depth predicted by the DepthNet. \(f_{u}\) and \(f_{v}\) are the \(u\) and \(v\) focal lengths of the image plane, and \(U\) is a constant. It is worth noting that the depth \(D\) here is the foreground depth information provided by the 3D boxes rather than the point cloud. By doing so, the DepthNet is more likely to focus on the depth of foreground objects. Finally, when using the actual depth information to lift semantic features onto the BEV plane, we use Eq. 6 to convert the virtual depth back to the actual depth. ### Perspective debiasing on target domain Unlike the source domain, there are no 3D labels in the target domain, so \(\mathcal{L}_{render}\) cannot be applied. Instead, the pre-trained 2D detector is utilized to correct spurious geometric BEV features on the target domain. To achieve this, we render the heatmaps \(h_{render}\) from the implicit foreground volume with the original camera parameters. A focal loss is used to constrain the consistency between the pseudo labels of the 2D detector and the rendered maps: \[\begin{split}\mathcal{L}_{con}=Focal(h_{render},h_{pseudo}),\\ h_{pseudo}=\left\{\begin{array}{ll}1,&h>\tau\\ h,&else\end{array}\right.,\end{split} \tag{7}\] where \(Focal(\cdot,\cdot)\) is the original focal loss (Lin et al., 2017). \(\mathcal{L}_{con}\) can effectively use the accurate 2D detections to correct foreground target positions in the BEV space, which is an unsupervised regularization on the target domain. To further enhance the correction ability of the 2D predictions, we sharpen the confidence of the predicted heatmaps in a pseudo-labeling manner.
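As a concrete illustration of Eq. 7, the sketch below thresholds a 2D-detector heatmap into pseudo labels and penalizes disagreement of the IFV-rendered heatmap with a standard binary focal loss (Lin et al., 2017). The helper names, the threshold value \(\tau=0.6\) and the toy tensor shapes are assumptions; in the framework the rendered heatmap comes from the RenderNet and gradients flow back into the BEV features.

```python
import torch

def focal_loss(pred, target, alpha=0.25, gamma=2.0, eps=1e-6):
    """Binary focal loss (Lin et al., 2017) applied pixel-wise to heatmaps in [0, 1]."""
    pred = pred.clamp(eps, 1 - eps)
    pos = -alpha * (1 - pred) ** gamma * target * torch.log(pred)
    neg = -(1 - alpha) * pred ** gamma * (1 - target) * torch.log(1 - pred)
    return (pos + neg).mean()

def consistency_loss(h_render, h_2d, tau=0.6):
    """Eq. 7: sharpen the 2D detector heatmap into pseudo labels (values above tau are
    pushed to 1) and constrain the IFV-rendered heatmap to agree with them."""
    h_pseudo = torch.where(h_2d > tau, torch.ones_like(h_2d), h_2d)
    return focal_loss(h_render, h_pseudo)

# Toy usage: one sample, 10 classes (e.g. the nuScenes categories), a 16 x 44 heatmap per view.
h_render = torch.rand(1, 10, 16, 44)   # rendered from the IFV with the original cameras
h_2d = torch.rand(1, 10, 16, 44)       # prediction of the frozen, pre-trained 2D detector
print(float(consistency_loss(h_render, h_2d)))
```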
### Overall Framework Although we have added some networks to aid in training, these networks are not needed in inference. In other words, our method is suitable for most MC3D-Det to learn perspective-invariant features. To test the effectiveness of our framework, BEVDepth (Li et al., 2023a) is instantiated as our main pipeline. The original detection loss \(\mathcal{L}_{det}\) of BEVDepth is used as the main 3D detection supervision on the source domain, and depth supervision of BEVDepth has been replaced by \(\mathcal{L}_{pg}\). In summary, our final loss of our work is: \[\mathcal{L}=\lambda_{s}\mathcal{L}_{det}+\lambda_{s}\mathcal{L}_{render}+ \lambda_{s}\mathcal{L}_{pg}+\lambda_{s}\mathcal{L}_{ps}+\lambda_{t}\mathcal{L }_{con}, \tag{8}\] where \(\lambda_{s}\) sets to 1 for source domain and sets to 0 for target domain, and the opposite is \(\lambda_{t}\). In other words, \(\mathcal{L}_{con}\) is not used under the DG protocol. ## 5 Experiment To verify the effectiveness, we elaborately use both DG and UDA protocol for MC3D-Det. The details of datasets, evaluation metrics and implementation refer to appendix B. ### Domain Generalization Benchmark For DG protocol, we replicate and compare the DG-BEV (Wang et al., 2023a) and the baseline BEVDepth (Li et al., 2023a). As shown in Tab. 1, our method has achieved significant improvement in the target domain. It demonstrate that IFV as a bridge can help learn perspective-invariant features against domain shifts. In addition, our approach does not sacrifice performance in the source domain and even has some improvement in most cases. It is worth mentioning that DeepAccident was collected from a Carla virtual engine, and our algorithm also achieved satisfactory generalization ability by training on DeepAccident. In addition, we have tested other MC3D-Det methods, and their generalization performance is very poor without special design as shown in Sec. 5.2. ### Unsupervised Domain Adaptation Benchmark To further validate debiasing on target domain, we also established a UDA benchmark and applied UDA methods (including Pseudo Label, Coral (Sun & Saenko, 2016), and AD (Ganin & Lempitsky, 2015)) on DG-BEV. As shown in Tab. 1, our algorithm achieved significant performance improvement. This is mainly attributed to the perspective debiasing, which fully utilizes the 2D detector with better generalization performance to correct the spurious geometric information of 3D detector. Additionally, we found that most algorithms tend to degrade performance on the source domain, while our method is relatively gentle. It is worth mentioning that we found that AD and Coral show significant improvements when transferring from a virtual dataset to a real dataset, but exhibit a decline in performance when testing on real-to-real testing. 
This is because these two algorithms are designed to address style changes, but in scenarios with small style changes, they may disrupt \begin{table} \begin{tabular}{c|c|c c c c c|c c c c} \hline \hline \multicolumn{2}{c|}{Nus \(\rightarrow\) Lyft} & \multicolumn{6}{c|}{Source Domain (nuScenes)} & \multicolumn{6}{c}{Target Domain (Lyft)} \\ \hline Method & Target-Free & mAP\(\uparrow\) & mATE\(\downarrow\) & mASE\(\downarrow\) & mOAE\(\downarrow\) & NDS\(\uparrow\) & mAP\(\uparrow\) & mATE\(\downarrow\) & mASE\(\downarrow\) & mOAE\(\downarrow\) & NDS\({}^{+}\) \\ \hline Oracle & & - & - & - & - & - & 0.598 & 0.474 & 0.152 & 0.092 & 0.679 \\ \hline BEVDepth & ✓ & 0.336 & 0.689 & 0.274 & 0.581 & 0.395 & 0.114 & 0.981 & 0.174 & 0.413 & 0.296 \\ DG-BEV & ✓ & 0.330 & 0.692 & **0.272** & 0.584 & 0.397 & 0.284 & 0.768 & 0.171 & 0.302 & 0.435 \\ **PD-BEV** & ✓ & **0.334** & **0.688** & 0.276 & **0.579** & **0.399** & **0.304** & **0.709** & **0.169** & **0.289** & **0.458** \\ \hline Pseudo Label & & 0.320 & 0.694 & 0.276 & 0.598 & 0.388 & 0.294 & 0.743 & 0.172 & 0.304 & 0.443 \\ Coral & & 0.318 & 0.696 & 0.283 & 0.592 & 0.387 & 0.281 & 0.768 & 0.174 & 0.291 & 0.435 \\ AD & & 0.312 & 0.703 & 0.288 & 0.596 & 0.381 & 0.277 & 0.771 & 0.174 & 0.288 & 0.381 \\ **PD-BEV\({}^{+}\)** & & **0.331** & **0.686** & **0.275** & **0.591** & **0.396** & **0.316** & **0.684** & **0.165** & **0.241** & **0.476** \\ \hline \multicolumn{2}{c|}{Lyft \(\rightarrow\) Nus} & \multicolumn{6}{c|}{Source Domain (Lyft)} & \multicolumn{6}{c}{Target Domain (nuScenes)} \\ \hline Method & Target-Free & mAP\(\uparrow\) & mATE\(\downarrow\) & mASE\(\downarrow\) & mOAE\(\downarrow\) & NDS\({}^{+}\) & mAP\(\uparrow\) & mATE\(\downarrow\) & mASE\(\downarrow\) & mOAE\(\downarrow\) & NDS\({}^{+}\) \\ \hline Oracle & & - & - & - & - & - & 0.516 & 0.551 & 0.163 & 0.169 & 0.611 \\ \hline BEVDepth & ✓ & **0.598** & **0.474** & 0.152 & 0.092 & **0.679** & 0.098 & 1.134 & 0.234 & 1.189 & 0.176 \\ DG-BEV & ✓ & 0.591 & 0.491 & 0.154 & 0.092 & 0.672 & 0.251 & 0.751 & 0.202 & 0.813 & 0.331 \\ **PD-BEV** & ✓ & 0.593 & 0.478 & **0.150** & **0.084** & 0.677 & **0.263** & **0.746** & **0.186** & **0.790** & **0.344** \\ \hline Pseudo Label & & 0.580 & 0.538 & 0.153 & **0.079** & 0.657 & 0.261 & 0.744 & 0.201 & 0.819 & 0.306 \\ Coral & & 0.574 & 0.511 & 0.164 & 0.105 & 0.649 & 0.244 & 0.767 & 0.212 & 0.919 & 0.302 \\ AD & & 0.568 & 0.521 & 0.161 & 0.126 & 0.649 & 0.247 & 0.761 & 0.223 & 0.902 & 0.309 \\ **PD-BEV\({}^{+}\)** & & **0.589** & **0.489** & **0.150** & 0.091 & **0.672** & **0.280** & **0.733** & **0.182** & **0.776** & **0.358** \\ \hline \multicolumn{2}{c|}{DeepAcci \(\rightarrow\) Nus} & \multicolumn{6}{c}{Source Domain (DeepAccident)} & \multicolumn{6}{c}{Target Domain (nuScenes)} \\ \hline Method & Target-Free & mAP\(\uparrow\) & mATE\(\downarrow\) & mASE\(\downarrow\) & mOAE\(\downarrow\) & NDS\({}^{+}\) & mAP\(\uparrow\) & mATE\(\downarrow\) & mASE\(\downarrow\) & mOAE\(\downarrow\) & NDS\({}^{+}\) \\ \hline Oracle & & - & - & - & - & - & 0.516 & 0.551 & 0.163 & 0.169 & 0.611 \\ \hline BEVDepth & ✓ & 0.334 & 0.517 & 0.741 & 0.274 & 0.412 & 0.087 & 1.100 & 0.246 & 1.364 & 0.169 \\ DG-BEV & ✓ & 0.331 & 0.519 & 0.757 & 0.264 & 0.408 & 0.159 & 1.075 & 0.232 & 1.153 & 0.207 \\ **PD-BEV** & ✓ & **0.345** & **0.499** & **0.735** & **0.251** & **0.425** & **0.187** & **0.931** & **0.229** & **0.967** & **0.239** \\ \hline Pseudo Label & & 0.312 & 0.522 & 0.785 & 0.271 & 0.393 & 0.151 & 1.112 & 0.238 & 1.134 & 0.202 \\ Coral & & 0.314 & 
0.544 & 0.796 & 0.274 & 0.388 & 0.164 & 1.045 & 0.242 & 1.104 & 0.208 \\ AD & & 0.312 & 0.539 & 0.787 & 0.263 & 0.391 & 0.166 & 1.013 & 0.251 & 1.073 & 0.207 \\ **PD-BEV\({}^{+}\)** & & **0.344** & **0.488** & **0.737** & **0.248** & **0.426** & **0.207** & **0.862** & **0.235** & **0.962** & **0.260** \\ \hline \multicolumn{2}{c|}{DeepAcci \(\rightarrow\) Nus} & \multicolumn{6}{c}{Source Domain (DeepAccident)} & \multicolumn{6}{c}{Target Domain (Lyft)} \\ \hline Method & Target-Free & mAP\(\uparrow\) & mATE\(\downarrow\) & mASE\(\downarrow\) & mOAE\(\downarrow\) & NDS\({}^{+}\) & mAP\(\uparrow\) & mATE\(\downarrow\) & mASE\(\downarrow\) & mOAE\(\downarrow\) & NDS\({}^{+}\) \\ \hline Oracle & & - & - & - & - & - & 0.598 & 0.474 & 0.152 & 0.092 & 0.679 \\ \hline BEVDepth & ✓ & 0.334 & 0.517 & 0.741 & 0.274 & 0.412 & 0.045 & 1.219 & 0.251 & 1.406 & 0.147 \\ DG-BEV & ✓ & 0.331 & 0.519 & 0.757 & 0.264 & 0.408 & 0.135 & 0. semantic information. As for the Pseudo Label algorithm, it can improve the model's generalization performance by increasing confidence in some relatively good target domains, but blindly increasing confidence in target domains can actually make the model worse. ### Ablation Study To further demonstrate the effectiveness of our proposed algorithm, we conducted ablation experiments on three key components: 2D Detetctor Pre-training \(\mathcal{L}_{ps}\) (DPT), source domain debiasing \(\mathcal{L}_{render}\) (SDB), and target domain debiasing \(\mathcal{L}_{con}\) (TDB). DPT and MVSR are designed for the source domain, while TDB is designed for the target domain. In other words, we report the results under the UDA protocol only when using TDB, while the results of other components are reported under the DG protocol. As presented in Tab 2, each component has yielded improvements, with SDB and TDB exhibiting relatively significant relative improvements. SDB can better capture perspective-invariant and more generalizable features, while TDB leverages the strong generalization ability of 2D to facilitate the correction of spurious geometric features of the 3D detector in the target domain. DPT makes the network learn more robust features by adding supervision to the image features in advance. These findings underscore the importance of each component in our \begin{table} \begin{tabular}{c c|c c|c c} \hline \hline & & \multicolumn{2}{c|}{Nus \(\rightarrow\) Lyft} & \multicolumn{2}{c}{DeepAcci \(\rightarrow\) Lyft} \\ \hline DPT & SDB & TDB & mAP \(\uparrow\) & NDS*\(\uparrow\) & mAP\(\uparrow\) & NDS*\(\uparrow\) \\ \hline & & & 0.279 & 0.433 & 0.132 & 0.188 \\ ✓ & & & 0.290 & 0.438 & 0.143 & 0.205 \\ & ✓ & & 0.300 & 0.453 & 0.147 & 0.209 \\ ✓ & ✓ & & 0.304 & 0.458 & 0.151 & 0.212 \\ \hline ✓ & ✓ & ✓ & **0.316** & **0.476** & **0.171** & **0.238** \\ \hline \hline \end{tabular} \end{table} Table 2: Ablation study of different modules of PD-BEV. 2D Detetctor Pre-training (DPT), source domain debiasing (SDB), and target domain debiasing (TDB). TDB only is used for UDA protocol. In other words, the bottom line is the UDA result, and the rest is the DG result. Figure 3: Visualization of final MC3D-Det results. Our approach allows for more accurate detection and greatly reduces the presence of duplicate boxes. In front (b) and back (e) view, our method predicts more accurate and fewer duplicate boxes than DG-BEV. In left-back view, our method detects the location of the object more accurately. Please zoom in while maintaining the color. 
\begin{table} \begin{tabular}{c|c c|c c} \hline \hline & \multicolumn{2}{c|}{w/o ours} & \multicolumn{2}{c}{w ours} \\ \hline Nus \(\rightarrow\) Lyft & mAP \(\uparrow\) & NDS*\(\uparrow\) & mAP\(\uparrow\) & NDS*\(\uparrow\) \\ \hline BEVDet & 0.104 & 0.275 & 0.296 & 0.446 \\ BEVFormer & 0.084 & 0.246 & 0.208 & 0.355 \\ FB-OCC & 0.113 & 0.294 & 0.301 & 0.454 \\ \hline \hline \end{tabular} \end{table} Table 3: The plug-and-play capability testing of our method. We tested more MC3D-Det algorithms under the DG and tried to add our algorithm for further improvement. algorithm and highlight the potential of our approach for addressing the challenges of domain gap in MC3D-Det. ### Further Discussion Here we try to migrate our framework to more MC3D-Det methods to prove the universality capability. We also give some visualizations to demonstrate the effectiveness of our framework. **The plug-and-play capability of the method**. Our framework is model-agnostic. Any MC3D-Det algorithm with image feature and BEV feature can be embedded with our algorithm. Our algorithm framework is applied to BEVDet Huang et al. (2021), BEVformer Li et al. (2022) and FB-BEV Li et al. (2023), as shown in Sec. 5.2. As the results show, our method can significantly improve the performance of these algorithms. This is because our algorithm can help the network learn perspective-invariant features against domain shifts. **Perspective Debiasing.** To better explain the effect of our perspective debiasing, we visualizes the heatmaps of 2D detector and the IFV in Fig. 4 (UDA protocol for Nus\(\rightarrow\)Lyft). In the target domain, the 2D detector has good generalization performance and can accurately detect the center of the object in Fig. 4 (b). However, the heatmap rendered from the IFV is very spurious and it is very difficult to find the center of the objectin in Fig. 4 (c). Fig. 4 (d) shows that rendered heatmaps of IFV can be corrected effectively with 2D detectors. **Visualization.** To better illustrate our algorithm, we visualized the final detection results of our algorithm and DG-BEV. As shown in Fig. 3, the detection results of our algorithm are more accurate, especially for the detection of distant objects. And our algorithm has fewer duplicate boxes, because the 2D detector can effectively correct the spurious geometric feature of the 3D detector and improve the confidence. We further visualized some interesting cases as shown in Fig. 5, and our algorithm can even detect some results that were not labeled in the original dataset, because the 2D detector has more generalization performance and further improves the performance of the 3D detector. Figure 4: Visualization of heatmaps on target domain: (a) ground-truth, (b) 2D detector, (c) rendered from IVF, and (d) revised by 2D detector. The green rectangles indicates that our algorithm has improved the confidence of the detector prediction. The blue rectangles represent unlabeled objects that our algorithm detects. Please zoom in while maintaining the color. Figure 5: Detected unlabeled objects. The first line is the 3D box of the ground-truth, and the second line is the detection result predicted by our algorithm. The blue box indicates that our algorithm can detect some unlabeled boxes. Please zoom in while maintaining the color. ## 6 Summary This paper proposes a framework for multi-camera 3D object detection (MC3D-Det) based on perspective debiasing to address the issue of poor generalization for unseen domains. 
We first render semantic maps of different views from the BEV features. We then use 3D boxes or pre-trained 2D detectors to correct the spurious BEV features. Our framework is model-agnostic, and we demonstrate its effectiveness by using it to improve multiple MC3D-Det methods. Our algorithm achieves significant improvements under both the DG and UDA protocols. Additionally, we explored training only on virtual annotations to tackle real-world MC3D-Det tasks.
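As a rough illustration of the rendering step summarized above, the sketch below lifts the pixels of one camera into the ego frame and bilinearly samples the BEV feature plane, producing a per-view feature map from which a small head could predict a semantic heatmap. It is a minimal sketch under assumed conventions (BEV grid covering ±50 m centred on the ego vehicle, camera-to-ego pose `cam_T`, intrinsics `cam_K`), not the authors' released code:

```python
import torch
import torch.nn.functional as F

def render_view_feature(bev_feat, cam_K, cam_T, depth_bins, out_hw, bev_range=50.0):
    """Sample BEV features along each camera ray to get a per-view feature map.

    bev_feat   : (C, Hb, Wb) BEV features covering [-bev_range, bev_range] metres.
    cam_K      : (3, 3) camera intrinsics; cam_T : (4, 4) camera-to-ego transform.
    depth_bins : (D,) candidate depths in metres; out_hw : (H, W) of the view.
    Returns a (C, H, W) tensor; a small conv head can map it to a heatmap.
    """
    H, W = out_hw
    ys, xs = torch.meshgrid(torch.arange(H), torch.arange(W), indexing="ij")
    pix = torch.stack([xs, ys, torch.ones_like(xs)], dim=-1).float()        # (H, W, 3)
    rays = pix @ torch.linalg.inv(cam_K).T                                   # camera rays
    pts = rays[None] * depth_bins.view(-1, 1, 1, 1)                          # (D, H, W, 3)
    pts = F.pad(pts, (0, 1), value=1.0) @ cam_T.T                            # to ego frame
    grid = pts[..., :2] / bev_range                                          # x,y -> [-1, 1]
    sampled = F.grid_sample(
        bev_feat[None].expand(len(depth_bins), -1, -1, -1), grid,
        align_corners=False)                                                 # (D, C, H, W)
    return sampled.mean(dim=0)                                               # collapse depth
```

In the full framework, a heatmap obtained this way would be supervised by projected 3D boxes on the source domain and aligned with a pre-trained 2D detector's predictions on the target domain.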
2308.04267
The Vulnerable Nature of Decentralized Governance in DeFi
Decentralized Finance (DeFi) platforms are often governed by Decentralized Autonomous Organizations (DAOs) which are implemented via governance protocols. Governance tokens are distributed to users of the platform, granting them voting rights in the platform's governance protocol. Many DeFi platforms have already been subject to attacks resulting in the loss of millions of dollars in user funds. In this paper we show that governance tokens are often not used as intended and may be harmful to the security of DeFi platforms. We show that (1) users often do not use governance tokens to vote, (2) that voting rates are negatively correlated to gas prices, (3) voting is very centralized. We explore vulnerabilities in the design of DeFi platform's governance protocols and analyze different governance attacks, focusing on the transferable nature of voting rights via governance tokens. Following the movement and holdings of governance tokens, we show they are often used to perform a single action and then sold off. We present evidence of DeFi platforms using other platforms' governance protocols to promote their own agenda at the expense of the host platform.
Maya Dotan, Aviv Yaish, Hsin-Chu Yin, Eytan Tsytkin, Aviv Zohar
2023-08-08T14:08:45Z
http://arxiv.org/abs/2308.04267v1
# The Vulnerable Nature of Decentralized Governance in DeFi ###### Abstract Decentralized Finance (DeFi) platforms are often governed by Decentralized Autonomous Organizations (DAOs) which are implemented via governance protocols. Governance tokens are distributed to users of the platform, granting them voting rights in the platform's governance protocol. Many DeFi platforms have already been subject to attacks resulting in the loss of millions of dollars in user funds. In this paper we show that governance tokens are often not used as intended and may be harmful to the security of DeFi platforms. We show that (1) users often do not use governance tokens to vote, (2) that voting rates are negatively correlated to gas prices, (3) voting is very centralized. We explore vulnerabilities in the design of DeFi platform's governance protocols and analyze different governance attacks, focusing on the transferable nature of voting rights via governance tokens. Following the movement and holdings of governance tokens, we show they are often used to perform a single action and then sold off. We present evidence of DeFi platforms using other platforms' governance protocols to promote their own agenda at the expense of the host platform. ## 1 Introduction Cryptocurrencies such as Bitcoin [41] and Ethereum [15, 48] facilitate monetary _transactions_ between users in a distributed and decentralized manner. Users who wish to have their transaction processed by the system can broadcast it to entities called _miners_, who in turn collect transactions in _blocks_. As transactions are ordered within each block, and as blocks contain a reference to at least one preceding block, an ordered ledger of transactions commonly called a _blockchain_ is formed. Various mechanisms such as Proof of Work (PoW) [21] and Proof of Stake (PoS) [9] are used to maintain the integrity and security of the ledger. The differences between them are outside the scope of our work, but we note that exact terminology used by each might differ, too. For example, in Ethereum, miners are also called _validators_. For brevity, we will stick to the terminology as used in Bitcoin. The security of blockchain protocols relies on blocks being quickly propagated throughout the network [20], therefore requiring a size-limit on blocks which limits the system's throughput. Thus, if the amount of pending transactions exceeds the maximal throughput, users can prioritize a transaction over competing ones by offering a _fee_ to the first miner to include it in a block [27]. The Ethereum blockchain allows transactions to contain software programs specified using a formal virtual-machine [28] execution model called the _EVM_ (Ethereum virtual machine) [48]. These programs, also known as _smart contracts_[7], can be _deployed_ to the blockchain, e.g., stored on it, thereby allowing users to interact with them by creating transactions that invoke their functions. Transaction fees on the Ethereum network are often referred to as "gas". At times of high demand, gas prices for including transactions are high, and at times of low demand they drop. ### Decentralized Finance DeFi platforms, that implement traditional financial instruments on top of a decentralized mechanism, have emerged as a leading use-case for smart contracts on the Ethereum blockchain [15, 48]. In this work, we will focus on two types of DeFi platforms: _DEX_s (Decentralized exchanges) and decentralized lending platforms. 
DEXs such as Uniswap [45] enable users to exchange or _swap_ tokens amongst each other without requiring any form of direct interaction between the exchanging parties [50]. These tokens are commonly implemented using the ERC-20 standard [39, 46], which is an inter-operable specification for fungible tokens [22, 47]. Decentralized lending platforms such as Aave [49] and Compound [33] let users take and give loans. There are two main types of loans available on DeFi platforms: (1) long-term _collateralized_ loans which are secured by up-front deposits (e.g., collateral) [51], and (2) _flashloans_ [19], which are loans given for the duration of a single transaction. The transaction atomicity offered by the EVM allows one to ensure that if a flashloan is not repaid by the conclusion of the transaction that took it, the transaction is reverted [12]. In this paper we chose to focus on the Ethereum blockchain, as it is to date the largest and most active blockchain in terms of DeFi TVL. We also focus on the following DeFi platforms: Aave V2, Uniswap, Compound, Balancer. At the time of writing this paper, these are the biggest DeFi platforms with an active governance protocol, with a combined total value locked (TVL) of over 12B USD. They are also the platforms with the highest value governance tokens on the market at the time of writing this paper.

### Governance Protocols

The management of funds in DeFi platforms is often done via DAOs. This is done algorithmically via smart contracts that are deployed on the blockchain. Smart contracts that follow the _proxy design pattern_ [36] appoint admins that can change the address of the delegate contracts. These admins need not be humans. For example, the admins for various of Compound's smart contracts are smart contracts themselves, which enables decentralized decision-making [25, 33]. Such contracts are also called _governance protocols_ [29, 30]. These contracts are publicly visible and auditable by anyone who has the technical skill to read them. There are typically some aspects of platform management that remain alterable over time, to adjust the platforms to market events, infrastructure upgrades and so on. While these platforms are typically privately owned and managed, they often delegate many aspects of the management of funds and decisions regarding crucial policy changes to their users. This is done in an attempt to increase the perceived transparency and decentralization of these platforms. This is what is typically referred to as a _governance protocol_. Governance protocols of different platforms vary. Some governance protocols involve both on and off chain steps. The execution and implementation is done on-chain. In Fig. 1 we demonstrate the steps involved in the life-cycle of a proposal in Aave's governance protocol. Off-chain activity mainly consists of introducing pending change proposals to the community and open discussion about their implications. Practically, this delegation of decision making is often done through the utilization of _governance tokens_. These tokens are dispersed to community members as a reward for active participation in the platform, through depositing or lending funds, making exchanges etc. [26]. The tokens then become tradable on the same markets that they enable controlling. Decentralized governance in the on-chain setting broadly implies one token equals one vote. Governance tokens can be earned, and can be printed by the platform.
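As a toy illustration of the "one token, one vote" principle described above (a minimal sketch, not the contract of any specific platform; thresholds and field names are assumptions for illustration):

```python
from dataclasses import dataclass, field

@dataclass
class Proposal:
    quorum: int                                   # minimum tokens that must participate
    votes: dict = field(default_factory=lambda: {"for": 0, "against": 0})

class ToyGovernance:
    """Token-weighted voting: a voter's power equals their token balance."""

    def __init__(self, balances):
        self.balances = dict(balances)            # address -> governance-token balance
        self.proposals = []

    def propose(self, proposer, proposal_threshold, quorum):
        # Many protocols require a minimum balance (or delegation) to propose.
        if self.balances.get(proposer, 0) < proposal_threshold:
            raise PermissionError("not enough tokens to create a proposal")
        self.proposals.append(Proposal(quorum))
        return len(self.proposals) - 1

    def vote(self, pid, voter, support: bool):
        weight = self.balances.get(voter, 0)      # weight read from current holdings
        self.proposals[pid].votes["for" if support else "against"] += weight

    def outcome(self, pid):
        p = self.proposals[pid]
        if p.votes["for"] + p.votes["against"] < p.quorum:
            return "quorum not reached"
        return "passed" if p.votes["for"] > p.votes["against"] else "rejected"

gov = ToyGovernance({"alice": 400_000, "bob": 50_000})
pid = gov.propose("alice", proposal_threshold=80_000, quorum=100_000)
gov.vote(pid, "alice", support=True)
print(gov.outcome(pid))  # "passed" -- a single large holder can carry a vote
```

Because voting power is read from the current balance, anyone who acquires tokens, even momentarily, acquires the corresponding voting power; this transferability is exactly the property examined in the rest of this paper.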
Voting is done via an on-chain transaction, and thus incurs a cost due to gas fees. Votes also have monetary value: people might not vote because their tokens are put to other uses, e.g. as collateral, or because they can sell their vote to make a monetary profit. Votes are often on topics which do not directly affect the voter, e.g. changing the parameters of some token which the voter does not hold, and information regarding the content and implications of the votes may be vague. In fact, many times a voting party may introduce a new policy change on which other parties can vote, and make the description intentionally misleading. It is left to other voters to individually understand the implications of the suggestion. Votes can be automated using bots and can be delegated to others (either explicitly, or by loaning out one's tokens). Note that governance tokens can be traded on the free market, just as any other token. This means that individuals can buy voting shares of the platforms even without actively participating in the platform.

**Escrowed Governance Tokens** _Vote Escrowed Tokenomics_ is a different way of issuing governance power to users, meant to increase the commitment of governance voters and proposers to the longevity and health of the platform. In the vote escrow mechanism, users lock tokens in the platform, and in return are rewarded _veTokens_, representing voting rights. Tokens are distributed according to the time a user committed to locking the funds on the platform; longer locking periods grant higher voting power. The purpose is to align users' incentives with the health of the platform. Several DeFi platforms now use veTokens in their governance protocol, including Curve.fi [18] and Balancer [8].

### Our Contributions

In this paper we show misuse of governance protocols by bad actors. In particular, changes made to the platform are not always to the benefit of the platform, and are sometimes made by users who only hold on to their voting rights for the duration of a single proposal. We find that these protocols often do not encourage active participation in the governance mechanisms, based on real-world data about voting patterns which indicate low participation rates and very homogeneous voting. This enables bad actors to gain disproportional voting power in these protocols. We map out governance attacks in which individuals use governance protocols to attack the platform they control. We show that the vote escrow governance design is not immune to this type of abuse. Finally, we identify platforms that utilize other platforms' governance mechanisms in order to promote their own interest, sometimes at the expense of the native platform.

**Paper structure** In Section 2 we analyze voting patterns in popular DeFi platforms. In Section 3 we explore the vulnerability of governance protocols to misuse: in Section 3.1 we detail some recent attacks on DeFi platforms that utilize the platform's own governance mechanism to the attacker's advantage, and in Section 3.2 we examine cross-platform governance activity. We focus both on platforms holding significant stake in other platforms, and even proposing changes to other platforms to promote their own agenda. In Section 4 we review related work, and we conclude in Section 5.

## 2 Governance Voting and Proposing Patterns

In this section, we dive deeper into voting patterns in several main governance protocols.
We analyze the holding and voting patterns of users and find evidence of voting centrality, in the sense that users typically do not vote against proposals. This can be explained by the fact that voting on proposals costs funds in the form of transaction fees. Fees vary according to network congestion. An attacker can leverage this to propose malicious attacks at times of high congestion, making the attack prevention more expensive for users.

Figure 1: Aave's governance proposal pipeline. There are both on and off chain components to the process. At the point of moving from the off-chain to the on-chain process, the proposer must give proof of passing the off-chain process in the form of an IPFS hash. Uploading to IPFS can only be done by Aave's team.

In Table 1 we see an overview of several key aspects of the four major DeFi platforms examined in this paper. Balancer is the only platform to utilize escrowed governance tokens, making the voting rights not tradeable for a period of time. We see that all four platforms have a point of centralization in the Guardian entity. This entity, controlled by a _community multisig_, has the power to cancel proposals. The multisig is composed of several users that are believed by the creators of the platform to hold the platform's best interest at heart, and to cancel malicious proposals. For instance, in Aave this multisig requires 6 out of 10 of the holders to sign an order to kill an active proposal. This is a point of centralization which can be a safeguard, but also might be subject to abuse (see Footnote 1). Additional points of centralization are found in integrating a platform's on/off chain proposal process. For instance, in Aave, once a proposal passes the off-chain stage it needs to be merged into Aave's repository in order to generate a valid IPFS hash for uploading to the blockchain, see Fig. 1. This upload can only be done by Aave's developers. Except for Balancer, all platforms have some measures of automatic code validation for pending proposals [14]. All platforms also have some form of execution delay between the time a proposal is uploaded, voted on and then implemented, to prevent flash-loan attacks, which we discuss in Section 3.1. Both of these safe-rails are meant to defend the platform from malicious attacks. However, they are not bulletproof. For instance, later in this section we detail cases of users who only purchase and hold on to a large portion of governance tokens for the duration of a single proposition of their own making. They vote to pass these proposals and then transfer on their governance tokens. In Section 3.2 we dive deeper into these use-cases and show that these proposals were harmful to the platform. They were not stopped by the safe-rails, and they passed with an overwhelming majority, one of them even unanimously.

Footnote 1: For instance, the famous Ronin bridge hack involved corrupting 5 of the 9 required shares [34]

### Voting and proposal centrality in Aave, Compound and Uniswap

Table 2 summarizes proposition and voting data from three major platforms. In all three platforms the percentage of governance tokens that participate in voting is very low, with Compound leading the chart at 11.2%. In Table 2 we also see that an overwhelming majority of proposals pass with an overwhelming majority of votes in favor. We look at the equivalent of an "h-index" of the three platforms, calculated by finding the largest percentage such that at least that percent of the proposals won at least that same percent of the votes.
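The metric just described can be computed as in the following sketch (illustrative; `support_pcts` stands for a hypothetical list giving, for each proposal, the percentage of cast votes that were in favour):

```python
def governance_h_index(support_pcts):
    """Largest h such that at least h% of the proposals were approved
    with at least h% of the votes cast in their favour."""
    n = len(support_pcts)
    for h in range(100, -1, -1):
        share_above = 100 * sum(1 for s in support_pcts if s >= h) / n
        if share_above >= h:
            return h
    return 0
```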
In Aave, for instance, the index is 86, in the sense that 86% of the proposals received at least 86% of the votes. In Table 1 we also see an indication of voting centrality: in all platforms a majority of all proposals passes with over 99% majority (out of the total tokens used to vote, not of all available tokens). Combining these results, we have that voting participation is low and the users that do vote do so in a homogeneous manner. A plausible explanation is that casting a vote in a governance protocol requires creating an on-chain transaction, which incurs transaction fees. These fees vary according to network congestion. This means that the price of voting changes with blockchain conditions that can be unrelated to the DeFi platforms and certainly to the governance protocols.

Figure 2: Number of votes cast during times of active governance proposals vs. the Ethereum gas price in Aave, Compound and Uniswap. In all three platforms the gas price is negatively correlated with the number of votes cast on governance proposals.

In Fig. 2 we see an indication that there is a negative correlation between the Ethereum gas price and the number of votes cast for active proposals, which could imply that voters tend to think twice about voting when the price of voting is high. This also introduces a new avenue for attackers to upload governance proposals at times of high transaction fees. Table 1 also summarizes additional points of centralization in the proposal process in the different platforms.

### Governance Token Holding Patterns and Movement in Aave

As a case study, we collected data on governance token holders in Aave. We looked at how these holdings change over time, and whether the token holders indeed participate in the governance process, both in proposing and voting. Results are summarised in Fig. 4b, Fig. 4a. Excluded from Fig. 4b are (1) Aave's reserve pool, the largest token holder out of the top 600 holders. Over the entire life of the protocol, the reserve tokens were only used once to vote, in proposal 106 (see Footnote 2), and were dormant for all of the rest. (2) Aave's genesis team. In Fig. 4a we see that there are several cases of users who only held on to governance tokens for the duration of the proposal they introduced, and transferred them elsewhere upon completing their proposing business. This implies that the governance protocol might not always be utilized by long term stake holders of the platform, but rather by persons of interest that buy the tokens to promote a single proposal of interest to them and then dump them when the proposal ends its life-cycle. We map these instances to several proposals that were passed and were eventually reversed by the community since they were harmful to the platform. This is detailed in Section 3.2.

Footnote 2: "Adjust Aave Governance Level 2 requirements" which changed the structure of the governance protocol itself (lowering the quorum requirements)

Data shows that tokens move frequently through DEXs and other DeFi platforms, explained by the fact that AAVE tokens earn interest in various DeFi platforms. Combined with evidence showing that voters and proposers only hold on to AAVE tokens for short periods of time when they are actively voting/proposing, this paints a picture of these tokens being a quick way for parties of interest to make vast changes to the code of the platform without necessarily having a long term interest in the health of the platform or even an interest in honestly participating in the platform for its intended use.
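One way such holding windows could be extracted from on-chain data is sketched below; the transfer-record format is an assumption for illustration, not the exact pipeline used for Fig. 4a:

```python
def holding_window(transfers, proposer, proposal_time):
    """Return (last inflow before, first outflow after) the proposal time.

    `transfers` is assumed to be a list of (timestamp, from_addr, to_addr, amount)
    tuples, e.g. parsed from the governance token's ERC-20 Transfer events.
    A proposer who acquired tokens shortly before their proposal and disposed of
    them shortly after shows a narrow window.
    """
    inflows = [t for t, _, to, _ in transfers if to == proposer and t <= proposal_time]
    outflows = [t for t, frm, _, _ in transfers if frm == proposer and t >= proposal_time]
    return (max(inflows) if inflows else None,
            min(outflows) if outflows else None)
```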
Additionally, we identify users that vote in multiple platforms. In Fig. 3 We can see the movement of governance from the three platforms - Aave, Compound and Unisw \begin{table} \begin{tabular}{c c c c c} \hline \hline **Platform** & \begin{tabular}{c} **Proposals can be** \\ **cancelled by Guardian** \\ \end{tabular} & \begin{tabular}{c} **Combines On and** \\ **Off-chain Protocols** \\ \end{tabular} & \begin{tabular}{c} **Automatic Proposal** \\ **Code Validation** \\ \end{tabular} & \begin{tabular}{c} **\# Proposals (\% Passed with** \\ **99\% majority)** \\ \end{tabular} \\ \hline Aave & \(\checkmark\) & \(\checkmark\) & \(\checkmark\) & 109 (76.1\%) \\ Compound & \(\checkmark\) & - & \(\checkmark\) & 134 (61.7\%) \\ Uniswap V3 & \(\checkmark\) & \(\checkmark\) & \(\checkmark\) & 24 (66.7\%) \\ Balancer & \(\checkmark\) & \(\checkmark\) & - & 286 (61.2\%) \\ \hline \hline \end{tabular} \end{table} Table 1: An overview of popular governance platforms, centralization points, proposal process flow and centrality in voting. \begin{table} \begin{tabular}{l c c c} \hline \hline & **Aave** & **Compound** & **Uniswap** \\ \hline Proposals & 109 & 134 & 24 \\ H-Index & 86 & 91 & 16 \\ \hline Mean & 3.67\% & 11.21\% & 6.41\% \\ Std. dev & 2.92\% & 5.64\% & 3.81\% \\ Min & 0\% & 0\% & 0.00\% \\ Median & 3.15\% & 10.62\% & 6.64\% \\ Max & 23.53\% & 25.97\% & 11.93\% \\ \hline \hline \end{tabular} \end{table} Table 2: Percentage of circulating governance tokens used for voting, by platform. Evidently, Compound users feel more incentivised to vote than Aave’s. Also, Uniswaps’ usage of governance proposals is significantly lower the the other two platforms. all three platforms. They consist of two platforms that aid in voting/delegating votes (Sybil and Excel-sior), the group "Blockchains at UCLA", DeFi Pulse Index (whom we discuss in depth in Section 3.2), and the rest are either pseudo-names or unidentified. ## 3 Vulnerability of Governance Protocols In this section, we review use-cases of governance protocols being vulnerable to misuse by governance token holders. In Section 3.1 are outright attacks, scenarios in which the user manages to either steal funds from the platform by momentarily holding governance tokens, or manipulate the reward scheme. Use-cases in Section 3.2 are more subtle, the misuse is not obvious at first, and is hence not stopped by safeguards of the protocol. In retrospect they are sometimes harmful to the host platform, and require action by the community to remedy the situation. ### Governance Attacks and Risks Any user with sufficient stake of the governance token can propose changes, in code, overriding the current smart contract implementation of the platform. Proposals are sometimes difficult to verify, especially for end users that are not proficient in reading code. In smart contracts, there are clever ways of disguising malicious code. These ways have been exploited by attackers in order to upload malicious proposals that are seemingly harmless. Governance attacks are a growing way for bad actors to steal funds from DeFi platforms. These attacks are often very well hidden and incredibly sophisticated, and are not discovered by the various safety measures implemented by platforms to prevent them. #### 3.1.1 Balancer and Humpy Balancer is a DeFi platform that uses vote-escrowed governance via veBAL tokens. veBAL tokens are distributed to users who commit to locking funds into Balancers' platform. 
The "voting strength" a user has is determined both by the quantity of the locked tokens and the duration for which the funds are locked. The voting strength decays on a weekly basis (as the funds approach being unlocked). This design means that committing to holding BAL long term gives an advantage in strength to long term holders, and is supposed to increase the stake that governance voters hold in the health of the platform over time. veBAL holders can obtain governance rights, and accrue a portion of protocol revenue. Following the introduction of veBAL to Balancer's governance, a user under the pseudonym _Humpy_ managed to get control of 35% of the entire supply of veBAL tokens. Humpy leveraged veBAL to create a CREAM/WETH pool and set the pool trading fees to 10%. Then, Humpy directed its veBAL holdings across multiple addresses and directed BAL emissions to the pool's gauge. This way, Humpy managed to direct 1.8M USD of cumulative BAL emissions to itself via the CREAM/WETH gauge. This was a great profit for Humpy, at the expense of Balancer's platform, as they only generated 17K USD in revenue from the new pool over the same period. The Balancer community noticed this occurrence and mitigated the problem by issuing proposal BIP 19 in an attempt to better align the veBAL design with Balancer's revenue. The proposal passed and Balancer introduced several new safeguards.

Figure 3: Governance token movement for Aave (Purple), Compound (Green) and Uniswap (orange). The red nodes are voters on all three platforms, grey nodes voted in two out of the three.

Following this, Humpy changed strategies, moving away from low-cap pools into the Tetu 20WETH/80BAL/TetuBAL pool, which again caused Humpy to be a fast printer of BAL tokens. This created a proposition war between Humpy and the Balancer community. The saga came to an end with a peace treaty proposal [37] between Balancer and Humpy, where both parties agreed to give Humpy control of 17.5% of all future BAL emissions. We visually present the distribution of Balancer's veBAL token holders over time in Fig. 5. We can see how Humpy's stake grows, as well as increasing amounts of tokens being transferred to Tetu, which is also associated with Humpy. In this case, an oversight in the design of the governance mechanism created an opportunity for a selfish, utility maximizing user to leverage Balancer's governance to grow its own wealth at the direct expense of the wealth and stability of the platform. This was only resolved by surrendering significant power permanently to the attacker, essentially making it a major stakeholder indefinitely. Clearly, this is an undesirable outcome from a mechanism that is supposed to manage the health of the platform.

#### 3.1.2 Beanstalk

Beanstalk is a lending platform that allowed its governance contract to change the address of the owner of the collateral of the platform. It was attacked [42] by an exploiter who transferred all of the collateral of the platform (estimated at just over 182M USD) to themselves, collapsing the platform as a result. The attack was issued by an individual who was able to take a flashloan of Beanstalk's governance tokens and in doing so became a majority holder of governance tokens, allowing them to make dictatorship decisions in the governance contract.

Figure 4: AAVE token holdings over time. We watch how these tokens are transferred between holders at the times of proposals, for proposers (left) and voters (right).

The attacker was able to upload a seemingly innocent governance
suggestion (donating funds to Ukraine) while hiding the actual functionality using Ethereum's Diamond standard [40] (which is a sophisticated version of the proxy mechanism), and was also able to use the emergencyCommit() method to transfer to himself a sum that covered his flashloan, alongside over 180M USD in profits from the attack.

#### 3.1.3 Tornado Cash

Tornado Cash [43] is a popular Ethereum based zero-knowledge mixer implemented as a smart contract. The platform is governed by a DAO via the TORN governance token. On May 20th 2023 the DAO handling operations, funds and future plans of Tornado Cash was taken over by an unidentified attacker [35]. The attacker uploaded a malicious proposal that hid a code function that granted them fake votes. The attacker's proposal imitated an earlier version - except with some malicious code that allowed for an update of logic that gave the attacker access to all governance votes.

#### 3.1.4 Compound COMP and negative interest rates on loans

In two distinct incidents, Compound's governance reward mechanism (i.e. COMP token distribution) created conditions in which the interest rates on COMP loans in Compound's own platform were negative. In the first case, COMP reward rates were applied at the same rate for both suppliers and borrowers for any single market, which created negative interest rates when borrowing certain assets from the platform. Governance proposal 62 [17] changed the distribution to fix the problem. Proposal 68 [16] was imposed to remedy another negative interest situation: Compound used to reward cCOMP3 borrowing with COMP tokens, which meant the net rate for borrowing COMP was negative. In both these examples, a user that borrows COMP tokens both makes revenue on _borrowing_ funds and increases their voting power in the platform, as the COMP token is the governance token used to vote on and propose changes to the Compound protocol. In addition to being bad for the platform's balance, the fact that this happened with the platform's governance token made the risk worse, since the negative interest rate de-facto meant that users borrowing COMP tokens disproportionately increased their voting power in the platform.

Footnote 3: this token is printed by Compound to represent positions in the COMP token

### Cross-Platform Governance activity

In this section we present evidence of platforms holding stake in other platforms' governance. This stake can either be bought by a competing platform, or sometimes be gained by simply holding users' tokens which happen to be governance tokens in other platforms. This creates dependencies between platforms, exposing them to devastating risks. We present two examples of proposals in Aave's governance protocol in which users leveraged Aave's governance mechanism to propose and vote on a single transaction to promote their own token on Aave's platform. Both proposals passed and were implemented without much objection. Both were later overturned by new governance proposals, due to extreme drops in the value of the tokens that were substantial enough to risk the health of the Aave ecosystem.

Figure 5: veBAL token holders over time. Each color represents a specific user. There are several addresses associated with Humpy. Attribution of addresses to Humpy was done by looking at voters that voted against proposal BIP 28 (Kill CREAM/WETH Gauge), a proposal which stopped Humpy's original money printing pool. Humpy is also accountable for the growing portion of the Tetu pool.
#### 3.2.1 Terra's UST in Aave Terra is a blockchain protocol and payment platform for algorithmic stable-coins. On March 8th 2022 proposal number 65 was created on Aave's governance protocol [4]. The title of the proposal is "Add Terra USD (UST) to Aave v2". The proposer of this transaction is Ethereum address \(0xff...011\). This also the biggest voter for the transaction, with 230.9K out of the total 487.7K votes in support of this proposal. The proposal passed and was executed on Aave's platform. The proposer got ownership of the AAVE tokens just before this proposal, and discarded them right after the vote. The Terra USD token collapsed on May 12th 2022 after it depegged from the USD, an event that wiped out almost $45 billion in market capitalisation within a week [38]. Adding the UST token to Aave's platform did not promote the health of the system. During the process of preparing the proposal on-chain this address received funds from address \(0xbe...e682\) which is associated with Terraform Labs, the company behind the now collapsed Terra stable-coin UST, hinting that Terraform Labs was behind this proposal. This in potentially catastrophic to Aave's platform, as loans taken out with UST as collateral will most likely never be repaid, as the collateral is now worthless. Aave only noticed this after the collapse. On May 19th 2022, the Aave DAO executed proposal 75 [5], this time for "Freezing UST and Updating stETH Parameters". This proposal also passed with 450K votes in favor of freezing the asset on Aave's platform. #### 3.2.2 DeFi Pulse Index and Meta-Governance The DeFi Pulse Index (DPI) is a DeFi platform aiming to implement a decentralized capitalization-weighted index that tracks the performance of DeFi assets on the Ethereum blockchain. DeFi Pulse index purchased different ERC-20 tokens across the Ethereum DeFi ecosystem, in the native tokens to the relevant platforms. In Aave, Compound, Balancer and Uniswap the native token is also a governance token. DPI is aware of this and has implemented a "Meta-governance" protocol. Currently, this enables DPI holders to vote on changes to the Aave, Compound, and Uniswap (DPI stated they will expand to other protocols in the future). Interestingly, the only governance proposal that DPI holder have voted on in the Aave platform was proposal 27 [1] to "begin the on-boarding process for listing DeFi Pulse Index (DPI) as collateral on the Aave ARC market". The proposal was created by Ethereum address \(0xff...32E\), an address funded by DPI in transaction \(0xff0xff.37...0xff\). This proposal whitelisted DPI on the Aave platform, making it viable as collateral to loan-takers on the Aave platform. Later, in proposal 189 [6] "Freeze DPI on V2 Ethereum" Aave decided to stop support for DPI due to "implied centralization risk of DPI compared with the direct holding of the basket of underlying assets". DeFi Pulse Index was able to gain enough AAVE governance tokens to momentarily vote their own token, DPI, into Aave's protocol. It took the Aave community over a year to realize that having DPI as collateral in Aave is harmful to the platform and the ecosystem due to increased risk of cascade in case of a drop in DPI's value, and correct it. #### 3.2.3 Platforms Holding Stake in Other Platforms In some cases, platforms choose to hold stake in other platforms in what is meant to be a benign partnership. The nature of governance tokens however, means these stake-holdings can cause harm to the health of at least one of the platforms. 
Two notable examples are:

**Binance and Uniswap** In Oct. 2022, Binance [13] shifted \(4.6M\) UNI tokens between two different wallets owned by Binance. This caused \(13M\) UNI tokens to be automatically delegated to Binance's wallet on Uniswap by accident [31]. This made Binance one of the largest delegates in Uniswap at the time. Although Binance did not vote with the UNI tokens, they could have. This is an example of how a bug in Binance's platform directly affects Uniswap's governance.

**Aave and Balancer** The Aave community has passed two governance decisions (proposal 87 [3] and proposal 115 [2]) to buy a stake in BAL tokens, as part of a strategic partnership between the platforms. They are aiming to re-lock these tokens back into Balancer in order to gain a stake in veBAL, which in addition to a monetary reward gives them voting power in Balancer's governance. This is again a point of centralization in the market which makes the platforms co-dependent, meaning a collapse in one platform can pull down the other.

## 4 Related Work

Although governance protocols underlie many prominent DeFi projects (such as Uniswap, Aave, and the others which were covered in this work), the literature dedicated to the subject is rather sparse. Kiayias _et al._[32] present a SoK of governance protocols, which defines several properties of interest such as who is granted suffrage by each protocol, and the confidentiality of the voting process. They then perform a qualitative analysis of both the governance of blockchain consensus mechanisms (e.g., the method by which improvements are suggested and adopted by cryptocurrencies such as Bitcoin and Ethereum), and the on-chain protocols used by DeFi protocols. Barbereau _et al._[10] quantify the decentralization of the governance mechanisms of Uniswap, Maker, SushiSwap, Yearn Finance, and Universal Market Access by applying various metrics to the governance token balances of the aforementioned mechanisms' users. Sun _et al._[44] attempt to quantify the level of centralization in MakerDao's governance. Fritsch _et al._[26] map governance holding power and clusters that vote together for Uniswap and Compound. Barbereau _et al._[11] analyze the initial distribution of governance tokens and centrality in voting. Fan _et al._[23] study how issuing governance tokens affects users' incentives to participate in a DeFi platform, mainly to provide liquidity.

## 5 Conclusions

This paper studies the design of popular governance mechanisms and analyzes points of centralization in protocols that are meant to be decentralized. It shows that participation rates are low, users tend to vote even less at times of high transaction fees, and voting is highly centralized. We identify specific examples of users who hold governance tokens for the duration of a single proposal which they created and voted for with a majority of the voting power. These proposals were retroactively discovered to be harmful to the platform, and reverted at a later date. From all of these, a picture emerges: governance tokens are often not fulfilling the goal for which they are meant. Specifically, users of the platform do not use them to vote for proposals that create a more stable and safe platform over time, but rather often use them to promote their own goals, which are sometimes harmful to the platform itself. These results align with ample research on referendums [24]. The delegation model might be a remedy to some of the issues discussed in this paper.
It does not, however, solve the incentive problem of users losing money by voting instead of investing their governance tokens. Exploring mechanisms that combine a delegation model with financial incentives is therefore a good direction for future work.
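A back-of-the-envelope sketch of the incentive problem mentioned above; every number here is a hypothetical assumption, chosen only to show the orders of magnitude involved:

```python
def cost_of_voting(gas_units, gas_price_gwei, eth_usd,
                   tokens, token_usd, lend_apy, lock_days):
    """Gas fee of one on-chain vote plus the yield forgone by not lending the tokens."""
    tx_fee = gas_units * gas_price_gwei * 1e-9 * eth_usd
    forgone_yield = tokens * token_usd * lend_apy * lock_days / 365
    return tx_fee, forgone_yield

# e.g. cost_of_voting(150_000, 40, 2_000, 100, 60, 0.03, 7)
# -> roughly 12 USD in gas and about 3.5 USD of forgone lending yield,
#    against no direct monetary reward for casting the vote.
```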
2307.05924
Applying SDN to Mobile Networks: A New Perspective for 6G Architecture
The upcoming Sixth Generation (6G) mobile communications system envisions supporting a variety of use cases with differing characteristics, e.g., very low to extremely high data rates, diverse latency needs, ultra massive connectivity, sustainable communications, ultra-wide coverage etc. To accommodate these diverse use cases, the 6G system architecture needs to be scalable, modular, and flexible; both in its user plane and the control plane. In this paper, we identify some limitations of the existing Fifth Generation System (5GS) architecture, especially that of its control plane. Further, we propose a novel architecture for the 6G System (6GS) employing Software Defined Networking (SDN) technology to address these limitations of the control plane. The control plane in existing 5GS supports two different categories of functionalities handling end user signalling (e.g., user registration, authentication) and control of user plane functions. We propose to move the end-user signalling functionality out of the mobile network control plane and treat it as user service, i.e., as payload or data. This proposal results in an evolved service-driven architecture for mobile networks bringing increased simplicity, modularity, scalability, flexibility and security to its control plane. The proposed architecture can also support service specific signalling support, if needed, making it better suited for diverse 6GS use cases. To demonstrate the advantages of the proposed architecture, we also compare its performance with the 5GS using a process algebra-based simulation tool.
Rashmi Yadav, Rashmi Kamran, Pranav Jha, Abhay Karandikar
2023-07-12T05:44:33Z
http://arxiv.org/abs/2307.05924v4
# Applying SDN to Mobile Networks: A New Perspective for 6G Architecture ###### Abstract The upcoming Sixth Generation (6G) mobile communications system envisions supporting a variety of use cases with differing characteristics, e.g., very low to extremely high data rates, diverse latency needs, ultra massive connectivity, sustainable communications, ultra-wide coverage etc. To accommodate these diverse use cases, the 6G system architecture needs to be scalable, modular, and flexible; both in its user plane and the control plane. In this paper, we identify some limitations of the existing Fifth Generation System (5GS) architecture, especially that of its control plane. Further, we propose a novel architecture for the 6G System (6GS) employing Software Defined Networking (SDN) technology to address these limitations of the control plane. The control plane in existing 5GS supports two different categories of functionalities - handling end user signalling (e.g., user registration, authentication) and control of user plane functions. We propose to move the "end-user signalling functionality" out of the mobile network control plane and treat it as user service, i.e., as payload or data. This proposal results in an evolved service-driven architecture for mobile networks bringing increased simplicity, modularity, scalability, flexibility and security to its control plane. The proposed architecture can also support service specific signalling support, if needed, making it better suited for diverse 6GS use cases. To demonstrate the advantages of the proposed architecture, we also compare its performance with the 5GS using a process algebra-based simulation tool. Software-defined networking, Mobile networks, Service-driven architecture. ## I Introduction The notable rise in the range of diverse use cases with differing attributes has paved the way for the continued evolution of mobile networks. The upcoming 6th Generation Mobile Communication System (6GS) is envisioned to support peak data rate (\(\geq\)200 Gbps), very high mobility (500-1000 Km/h), very low latency (0.1-1 ms), connection density in the range of \(10^{6}\)-\(10^{8}\) devices/Km\({}^{2}\), reliability of \(10^{-5}\)-\(10^{-7}\)[1]. Moreover, it is expected to witness further diversity of use cases with the emergence of newer categories of use cases. Focus Group on Technologies for Network 2030 (FG NET-2030) [2] has identified and included the following use cases in its report: Holographic-type communications, Tactile Internet for Remote Operations, Intelligent Operation Networks, Network and Computing Convergence, Digital Twin, Space-Terrestrial Integrated Network, Industrial IoT with cloudification etc. A scalable, flexible and modular network architecture is one of the essential ingredients towards tackling this immense diversity of use cases in future mobile networks. Third Generation Partnership Project (3GPP) adopted technologies such as Network Function Virtualization, Control and User Plane Separation, Network slicing for Fifth Generation System (5GS), which resulted in improved scalability and flexibility of 5GS over the previous generation mobile communications systems such as Fourth Generation System (4GS). However, there is scope for further improvement in mobile network architecture especially that of its control plane through the application of Software Defined Networking (SDN) technology. A survey of the existing research related to SDN-based enhancements in the mobile network control plane is presented next. 
The work in [3] proposes a centralised control plane for multi-Radio Access Technology (multi-RAT) Radio Access Network (RAN) to enhance the simplicity and flexibility of the network. Relocation of the control plane functionality of RAN to the Core Network (CN) to reduce the signalling cost between RAN and core has been discussed in [4]. Authors in [5] proposed a decentralized control plane architecture for the 5GS with independent control functions for different control events for flexible and scalable networks. An SDN architecture where a middle cell and a middle cell controller are introduced between the macro cell and the small cell to reduce the control overhead of the macro cell and to address the scalability problems is proposed in [6]. In [7], authors proposed a new 5GS core architecture based on the SDN concept. They introduced a centralised SDN controller for easier and more flexible management of the user plane. In [8], a hierarchical control plane is designed to lighten the load of the controller. It focuses on the vertical scalability of the control plane. In [9], a scalability metric for the SDN control plane is proposed. Besides, a comparison between different SDN architectures is analysed via mathematical methods. In addition, there is a vast amount of literature on SDN-based network architectures, albeit unrelated to mobile networks [10, 11]. To summarize, current research in the context of the application of SDN technology to mobile networks mainly focuses on the centralized or distributed architecture of the control plane for reduced control overheads or scalability purposes. However, to the best of our knowledge, there is a limited discussion/rethink on certain other aspects of network architecture, such as, what functionality should constitute the mobile network control plane within an SDN-based framework. Is the network control plane right place for "end user signalling handling" functionality? Should "Non-Access Stratum (NAS) messages" be handled by CN control plane functions such as Access and Mobility Management Function (AMF) or should this functionality be moved out of AMF? Should the user authentication function (Authentication Server Function (AUSF) in 5GS) be part of the CN control plane? These questions assume even more importance in the upcoming 6GS era, where a massive increase in the number of UEs is expected and an accompanying growth in end-user signalling has the potential to over-burden the network control plane. In one of our earlier works [12], we briefly analysed these questions. In order to bring in additional enhancements to mobile network architecture, especially to its control plane, we propose to separate end user (User Equipment (UE)) signalling handling from the control plane functions. In a significant departure from the existing cellular networks, the proposed architecture views UE signalling as payload, i.e., a form of data traversing through the cellular network, not much different from other types of data such as Video streaming or Web browsing. We analyse the proposed architecture using Performance Evaluation Process Algebra (PEPA) [13], a formal language used to model distributed systems. We also provide a comparative analysis of the proposed architecture and the existing 5GS architecture through example call flows for Protocol Data Unit (PDU) session establishment and UE handover procedures. 
We demonstrate a significant reduction in the number of control messages exchanged in the proposed architecture along with the network's scalability. The rest of the paper is organised as follows: Section II provides limitations of the existing 5GS mobile network architecture. Section III provides an overview of the proposed architecture and highlights its advantages. Section IV includes an information flow comparison of the existing and proposed architecture for PDU session establishment and handover procedures. Section V describes the system model using PEPA. Section VI covers the performance analysis. Section VII provides the conclusion and future work. ## II Limitations of existing 5GS Architecture In this section, we have captured some of the limitations of the existing 5GS architecture especially that of its control plane. Although there can be other limitations too say pertaining to radio technology, etc., those are not discussed here. ### _Tight coupling of user plane control and UE signalling in control plane_ The existing 5GS architecture supports the control and user plane separation. The 5GS control plane performs user plane control (network resource control, e.g., setting up data path through the user plane) and UE signalling handling functionalities (e.g., NAS/RRC (Radio Resource Control) message exchange with UEs). There is a tight coupling between these two categories of functionalities, i.e., between user plane control and UE signalling handling and certain CN (e.g., AMF) and RAN gNodeB-Centralized Unit-Control Plane (gNB-CU-CP) control plane functions in the existing 5GS perform both. A detailed description of control plane functionality is provided in [14]. As demonstrated here, decoupling of UE signalling handling functionality from User plane control functionality may lead to a more modular and scalable network architecture. ### _Limited alignment with SDN paradigm_ SDN is a networking paradigm which separates the control plane of a network from its user (data) plane and centralizes the network's intelligence in the control plane. Although there are differing views in industry/academia on how to define an SDN-based network architecture, we can still discern a broad agreement on the topic [5, 15, 16]. The existing 5GS architecture incorporates the concept of SDN, resulting in architectural features such as the separation of the user plane from the control plane [14]. However, closer observation shows that the 5GS architecture does not align completely with the SDN paradigm. Besides controlling the user plane, the 5GS control plane also exchanges signalling messages with UEs to provide services such as authentication and also collect service requirements, e.g., requirements for PDU connectivity service. The functionality of signalling exchange with UEs may fit better within the service plane instead of the control plane. ### _Non-uniform handling of services_ Services in the existing 5GS can be categorized into the following two types: 1. Application-based services such as Media streaming services, IP Multimedia subsystem services, Mission-critical services, Multicast/Broadcast Services (MBS) etc. 2. Other than these application-based services, the 5GS network also provides services such as initial access, registration, authentication, PDU connectivity (connectivity to data networks), and connected mode mobility support. Such services can be called built-in (or intrinsic) network services. 
The two categories of services (Application based services and built-in network services) are enabled differently in the 5GS. As Application (Service) Functions (AFs) are independent and decoupled from the core and RAN functions of mobile networks, they access the control plane functions of the mobile CN over a standardized interface to enable service delivery through the user plane. However, the delivery of built-in services is tightly integrated within the control plane of the 5GS network (RAN and CN) itself. It also leads to the usage of special paths for signalling exchange with UEs, different from the regular data paths and brings certain inconsistencies to the architecture. For example, the Performance Measurement Function (PMF), a sub-function within the User Plane Function (UPF), exchanges "Measurement Assistance Information", a type of signalling information with UEs to aid the access traffic steering, switching, and splitting (ATSSS) functionality at UPF. This signalling information is exchanged via a regular data path (i.e. user plane) between the UE and the PMF. This mechanism is different from how other signalling information such as "radio measurement reports" to support the handover procedure is exchanged. ### _Complex protocols between control plane and user plane_ The existing 5GS control plane architecture impacts the interface design (protocols) between the control and user planes. For instance, F1 Application Protocol (F1AP) is the protocol used on the interface between the RAN control plane (gNB-CU-CP) and the RAN user plane (gNB-Distributed Unit (gNB-DU) or RAN-DU). It is used to configure gNB-DU and also carries RRC (UE signalling) messages for UEs. Integrating both these types of functionalities in a single protocol results in a relatively complex communication protocol between gNB-CU-CP and gNB-DU. ## III Service driven architecture for 6GS mobile networks This section presents the proposed architecture, which addresses the architectural limitations of the existing 5GS (as discussed in Section II) and highlights a few other advantages. In the proposed work, we aim to separate the UE signalling handling from the control plane and treat them as a service to the user to enhance modularity and flexibility in the mobile network control plane. With the proposed separation, the control plane is left with only the user plane control functionality, as shown in Fig. 1. The UE signalling handling functionality is moved out of the control plane to the service/application plane. The service plane consists of various in-built and external service functions, as shown in Fig. 1, such as the PDU Session Service Function (handles PDU session establishment and management providing PDU connectivity service), Mobility Service Function (responsible for handling UE mobility), Registration Service Function (handles UE registration with the network), Authentication Service Function (manages UE authentication), Multicast/Broadcast Services and a few others. Due to the reorganisation of the architecture, it offers various architectural and performance advantages discussed next. Please note that there may be separate controllers in the CN and RAN, as shown in Fig. 3. Similarly, we have a separate resource plane (user plane) for RAN and the CN. Further, the proposed architecture's user or resource plane may remain the same as the 3GPP 5GS. ### _Advantages of the proposed 6GS architecture_ This section highlights a few advantages of the proposed work. 
Fig. 1: Control plane architecture for proposed architecture [17]

Segregation of the UE signalling handling functionality from the control plane simplifies the control plane, which **enhances the modularity** of the control plane. The reorganised architecture also **aligns well with the SDN paradigm**, as the control plane is redesigned to perform only user plane control functionality, as discussed in Section II-B. The proposed architecture also allows internal (or built-in 5GS) services to be treated the same way as external application-based services, leading to **uniform handling of various services**. Further, this proposal results in the simplification of the control messages. For instance, the number of session management-related messages is reduced due to the setup of a direct path between the UE and the service function (detailed in Section IV-B), leading to **simplified call flows**. Also, the number of hops between the RAN controller and the CN controller in the proposed architecture is smaller than between the corresponding entities in 5GS, i.e., between gNB-CU-CP and the Session Management Function (SMF), which further results in performance improvements in terms of control plane latency and resource utilisation. Transposition of the UE signalling handling functionality to functions in the service plane **simplifies the protocols** between the control plane and the user plane, such as the Next Generation Application Protocol (NGAP) between the CN control plane and the RAN, and F1AP between the RAN control plane (gNB-CU-CP) and the RAN user plane (gNB-DU). The existing 5GS uses the same type of signalling messages for all use cases. However, it is possible to have different signalling requirements for different use cases, e.g., the Internet of Things (IoT) and human users. The proposed architecture may support this requirement by employing **use case specific signalling** service functions. Our proposal can also support **flexible function deployment and chaining**, as various service functions, such as the PDU session service function, mobility service function, registration service function, and authentication service function, can be placed flexibly and chained together to serve UEs. An additional advantage towards signalling security is presented here. The 3GPP specification [18] highlights that the exposed AMF is vulnerable to replay attacks of NAS signalling messages between the UE and AMF (control plane of the CN). In a similar way, [19] presents the exposed RAN, which is susceptible to replay attacks on RRC signalling messages between the UE and RAN (gNB-CU-CP, the control plane of the RAN), as the Uu interface also carries sensitive RRC signalling. Furthermore, the European Union Agency for Cybersecurity (ENISA) [20] notes in its report that the N2 interface between the 5GS RAN and AMF is a target for attackers, since it carries sensitive signalling between the RAN and the CN. Therefore, in this context, the proposed architecture may have some advantages towards **UE signalling security** between the UE and the signalling service function. Since UE signalling is segregated from the control plane (of the RAN and CN) and terminates at a separate signalling server, it becomes possible to localize an attack originating from a UE within the signalling server without compromising the network control plane, where the architectural and logical control and management of the RAN and CN are located. This segregation allows us to improve the UE-related signalling security of future mobile networks.
## IV Information Flow Comparison

In this section, we compare the information flows of the proposed architecture and the existing 5GS architecture. We consider the PDU session establishment and mobility services as examples to differentiate the working of the existing 5GS and the proposed architectures. Fig. 2 and Fig. 3 show the entities involved in PDU session signalling for the 5GS and the proposed architecture, respectively. In 5GS, messages are exchanged between the UE and SMF for PDU session-related signalling via the RAN (requiring both gNB-DU and gNB-CU) and the AMF. However, in the proposed architecture, signalling messages are directly exchanged between the UE and the PDU Session Service Function (PSSF) via the RAN (requiring only the RAN-DU), as shown in Fig. 3. This implies that in the existing 5GS, signalling takes place through multiple hops, whereas the number of hops is reduced in the proposed architecture. Further, the control plane collects all requirements from the PSSF (which in turn are received by the PSSF from the UE, as shown in Fig. 3) via the application-control interface and establishes the PDU session. The complete message sequences for establishing PDU sessions in the existing 5GS are detailed in [17], while the simplified call flow for the proposed architecture is shown in Fig. 4 (see Footnote 1). Please note that the controllers do not require response messages from the resource (user) plane, as the controller knows the user plane resource information and handles resource decision-making. Therefore, the proposed architecture eliminates many such messages. For example, the N4 session modification request and response are exchanged between the SMF and UPF in the 5GS architecture [17], while the session modification command (message 3 in Fig. 4 and message 9 in Fig. 7) is exchanged between the CN controller and the CN user plane (UPF) in the proposed architecture; there is no need for a session modification response message from the UPF. Hence, these reductions in the messages simplify both the session establishment and the mobility procedure (to be discussed next). Please note that even though the RAN-User Plane (RAN-UP) and other network functions/messages are necessary, we have shown only the CN functions in the call flows to keep the analysis tractable; RAN functions will also be required in real systems. However, keeping the RAN functions out of the call flows is not likely to alter the conclusions drawn here. This note applies to the mobility service as well.

Footnote 1: In call flows and simulations, only those messages are considered and compared which differ between the proposed and existing architectures.

Fig. 2: Network entities, signalling and control message flow for PDU session establishment in 5GS

Fig. 3: Network entities, signalling and control message flow for PDU session establishment in the proposed architecture

Fig. 4: PDU session establishment procedure in the proposed architecture

### _Mobility as a service_

We consider mobility as another service to illustrate the difference between the existing 5GS and the proposed architecture. Fig. 5 and Fig. 6 show the network entities, signalling and control message flow of the existing 5GS and the proposed architecture, respectively. S-DU and T-DU represent the source gNB-DU and target gNB-DU, respectively. Similarly, the Source-Centralized Unit-User Plane (S-CU-UP) and Target-Centralized Unit-User Plane (T-CU-UP) represent the source gNB-CU-UP and target gNB-CU-UP, respectively.
S-CU-CP and T-CU-CP represent the source gNB-CU-CP and target gNB-CU-CP, respectively. Also, the interaction between the RAN controller and the CN controller is through the inter-controller interface, as shown in Fig. 6. Signalling takes place between the UE and MSF via S-DU before handover, while after handover it is through T-DU. Likewise, the data path between the UE and UPF is by way of S-UP before handover, while it is via T-UP after handover. Each action in the PEPA model is associated with a specific rate value, \(r\). The rate (number of actions performed per unit time) models the expected duration of the action in the PEPA component and is taken from [21, 22] and [23]. Let us now understand the details of the modelling of NF states as shown in Table I. Consider the UE as an example. The UE acquires the processor in its initial state (\(acc_{uep}\), \(r_{a}\)) and performs the processing action (\(process\), \(r_{iat}\)) before sending a request. The second state, \(Ue_{2}\), models the request (\(req_{phase}\), \(r_{r}\)) and response (\(rep_{phase}\), \(r_{r}\)) messages exchanged between the UE and PSSF for the PDU session establishment. NFs acquire processors to process a request/response. In Table I, UEP, PSSFP, CONP and UPFP are the processing entities for the UE, PSSF, CN controller (CON) and UPF, respectively. These processing entities are modelled such that each NF processor has two states. For instance, the first state of UEP, \(Uep_{1}\), is for acquiring the processor (\(acc_{uep}\)), and the second state, \(Uep_{2}\), performs the processing action (\(process\)). The other NFs and their processing entities are modelled similarly. As discussed in this section, the system model uses the following additional parameters: \(n\) denotes the number of UEs; \(N_{pssf}\), \(N_{con}\), and \(N_{upf}\) are the numbers of NF instances for the PSSF, CN controller (CON), and UPF, respectively. Similarly, \(N_{pssfp}\), \(N_{conp}\), and \(N_{upfp}\) are the numbers of PSSF processors (PSSFP), CN controller processors (CONP) and UPF processors (UPFP), respectively. Please note that each processor can handle a set of concurrent threads, \(N_{t}\). Thus, the product \(N_{nf}\cdot N_{nfp}\cdot N_{t}\) (as used in the system model equation) represents the total number of threads for a type of NF. Moreover, the product \(N_{nf}\cdot N_{nfp}\) is the total number of processors allocated to a type of NF, e.g., the UPF processors. The system equation represents the overall system model. The cooperation operator (\(\bowtie_{L}\)), as in A \(\bowtie_{L}\) B, models the interactions between NFs A and B over the actions defined in the cooperation set \(L\). It can be noted that component A \(\bowtie_{L}\) B may behave differently from component A \(\bowtie_{K}\) B if \(L\neq K\). Let us consider an example from Fig. 4, where the PSSF and the CN controller (CON) interact with each other for the session context request/response \(req_{sc}/rep_{sc}\). These actions are defined in the cooperation set \(L_{2}\), as shown in Table I. Therefore, the system equation \(Pssf_{1}[N_{pssf}\cdot N_{pssfp}\cdot N_{t}]\ \bowtie_{L_{2}}\ Con_{1}[N_{con}\cdot N_{conp}\cdot N_{t}]\) models the interaction between the PSSF and the CN controller over the cooperation set \(L_{2}\). In a similar way, the overall system equations shown in Table I and Table II represent the interactions between the various NFs in the two call flows, Fig. 4 and Fig. 7, respectively.
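To make these population sizes concrete, the following minimal Python sketch (our illustration, not part of the PEPA Eclipse model; the service rates and helper names are hypothetical placeholders) computes the total thread count \(N_{nf}\cdot N_{nfp}\cdot N_{t}\) per NF type and a simple bottleneck-style capacity bound for a given configuration.

```python
# Illustrative sketch only: how the population sizes in the PEPA system
# equation are formed, and a back-of-envelope bottleneck bound on throughput.
# The per-thread service rates below are hypothetical placeholders.

def total_threads(n_nf, n_nfp, n_t):
    """Total concurrent threads for one NF type: N_nf * N_nfp * N_t."""
    return n_nf * n_nfp * n_t

service_rate = {"PSSF": 1.0, "CON": 1.0, "UPF": 1.0}   # requests per thread per unit time

def bottleneck_capacity(config, n_t):
    """The slowest NF stage limits the whole request-processing chain."""
    caps = {nf: total_threads(n_nf, n_nfp, n_t) * service_rate[nf]
            for nf, (n_nf, n_nfp) in config.items()}
    bottleneck = min(caps, key=caps.get)
    return bottleneck, caps

# Example: one instance and one processor per NF, four threads per processor.
basic = {"PSSF": (1, 1), "CON": (1, 1), "UPF": (1, 1)}
print(bottleneck_capacity(basic, n_t=4))
```

The actual PEPA analysis solves the underlying continuous-time Markov chain rather than this simple bound; the sketch is only meant to show how the configuration tuples enter the model.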
## VI Performance Evaluation

This section presents the performance comparison between the existing 5GS and the proposed architecture, analysed using the PEPA Eclipse plug-in [24], a software tool integrated into the popular Eclipse platform. This tool supports various performance measures [22], discussed below, which help evaluate the network's performance.

1. **Session establishment rate (or the number of successful handovers in the case of mobility)**: The number of session establishments is measured for the action (say, \(rep_{phase}\)) that describes the completion of the session establishment procedure, representing the session establishment rate. Similarly, the number of successful handovers is measured for the action '\(session\)' (as performed by the UPF NF in Table II), which signifies the completion of the handover procedure.
2. **Average response time**: It measures the UE waiting time for any specific request and reflects the system's operating speed. We consider the average response time as the duration until the completion of the session establishment procedure. Similarly, we consider the mobility procedure's average response time as the duration until the completion of the handover procedure.

Fig. 7: Mobility procedure in the proposed architecture

Table II: PEPA modules and code description for the mobility model

Comparing the session establishment rate of the proposed and existing architectures under the basic and the scaled configurations, the proposed architecture achieves a higher session establishment rate than the 5GS. The saturation point for the existing 5GS, as shown in Fig. 8, is around 10,000 users, i.e., it can serve a maximum of 10,000 users, while the session establishment rate for the proposed architecture saturates at around 20,000 users. Similarly, Fig. 9 shows that the 5GS saturates at around 34,000 users. As the saturation point is reached, the network drops the incoming requests from the users. This means that, with the given number of processors/NFs, the proposed architecture can achieve a higher session establishment rate; in contrast, more processors/NFs would be required to support a larger number of session establishments. The processor utilisation for all the NFs of the existing 5GS and the proposed architecture for the basic and the scaled configurations is shown in Fig. 10 and Fig. 11, respectively. For instance, the PSSFP reaches its maximum utilisation, explaining the saturation point for the session establishment rate, although at this point CONP and UPFP are not fully utilised. These results show that the request processing chain is limited once one NF becomes a bottleneck for the rest of the chain. Scalability for the existing 5GS and the proposed architecture is evaluated based on Equation 1. It is plotted in Fig. 12 based on the results obtained for the session establishment rate, average response time and utilisation from the PEPA-based simulation and modelling. As stated earlier, we consider the two configurations \(m_{1}\) and \(m_{2}\) for estimating the scalability metric. Fig. 12 shows that the existing 5GS can serve 10,000 users for the basic configuration, while the proposed architecture can serve 20,000 users. Similarly, the existing 5GS reaches its saturation point at 34,000 users, and the proposed architecture saturates at 62,000 users for the scaled configuration.
This implies that the proposed architecture performs better and can serve more users than the existing 5GS. Moreover, the proposed architecture is more scalable with an increasing number of users for the same number of NFs/processors. Please note that a similar explanation for all the performance measures (successful handovers, processor utilization and scalability) holds in the case of the mobility service.

Fig. 8: Session establishment (number of sessions per unit time) for the proposed and the 5GS architecture having the basic configuration

Fig. 9: Session establishment (number of sessions per unit time) for the proposed and the 5GS architecture having the scaled configuration

Fig. 10: Processor utilisation of session establishment for the proposed and the 5GS architecture having the basic configuration

Fig. 11: Processor utilisation of session establishment for the proposed and the 5GS architecture having the scaled configuration

#### VI-B2 **Mobility Service**

This section presents the comparative analysis of the existing 5GS and the proposed architecture for the mobility service. Similar to the session establishment, the analysis is performed for the basic and the scaled configurations. The basic configuration for the proposed architecture is given as (\(N_{upt}\), \(N_{msf}\), \(N_{ran}\), \(N_{cn}\), \(N_{upf}\)) = (1,2,2,1,1) and for the 5GS architecture as (\(N_{sdu}\), \(N_{scu}\), \(N_{tdu}\), \(N_{tcu}\), \(N_{amf}\), \(N_{smf}\), \(N_{upf}\)) = (1,1,1,1,1,1,1). Similarly, the scaled configuration for the proposed architecture is (\(N_{upt}\), \(N_{msf}\), \(N_{ran}\), \(N_{cn}\), \(N_{upf}\)) = (3,6,6,3,3) and for the 5GS architecture is given as (\(N_{sdu}\), \(N_{scu}\), \(N_{tdu}\), \(N_{tcu}\), \(N_{amf}\), \(N_{smf}\), \(N_{upf}\)) = (3,3,3,3,3,3,3). Here \(N_{upt}\), \(N_{msf}\), \(N_{ran}\), \(N_{cn}\), \(N_{upf}\) are the numbers of Target-User Plane (T-UP), MSF, RAN controller, CN controller and UPF instances, respectively, in the system model. Similarly, \(N_{sdu}\), \(N_{scu}\), \(N_{tdu}\), \(N_{tcu}\), \(N_{amf}\), \(N_{smf}\), \(N_{upf}\) are the numbers of S-DU, S-CU, T-DU, T-CU, AMF, SMF, and UPF instances, respectively. Please note that, for brevity, we have not split S-CU into S-CU-CP and S-CU-UP, or T-CU into T-CU-CP and T-CU-UP, while modelling the mobility call flow procedure for the 5GS. Further, we provide an equal number of functions and associated processors to the 5GS and the proposed architecture for a fair comparison. After reaching the saturation point, the system starts to drop handovers. Fig. 13 and Fig. 14 show that the proposed architecture serves more successful handovers per unit time than the existing 5GS for the basic and the scaled configurations, respectively. The saturation point for the existing 5GS is 20,000 users, while for the proposed architecture the saturation is at 30,000 users for the basic configuration. Similarly, the saturation point for the existing 5GS is around 60,000 users, while for the proposed architecture the saturation is around 90,000 users for the scaled configuration. The number of successful handovers per unit of time has increased using the scaled configuration for both architectures. Fig. 15 and Fig. 16 show the processor utilisation for both the 5GS and the proposed architecture. Fig. 17 shows the scalability results in the case of the mobility service for the 5GS and the proposed architectures. It can be observed from the scalability results that the 5GS reaches its saturation point earlier than the proposed architecture and that the proposed architecture is more scalable.
We have considered PDU session establishment and mobility services as examples to analyse the performance of the proposed architecture using the PEPA-based simulation method. Based on the performance results and other benefits, it can be concluded that the proposed architecture is a promising option for future networks to handle vast and diverse traffic demands. We plan to extend this work to analyse other features/services of mobile networks, such as authentication, network slicing, development of protocols between (signalling) service functions and the control plane, and addressing security threats in the 6GS mobile network (touched upon in section III) in future. ## Acknowledgment We acknowledge the Ministry of Electronics and Information Technology (MeitY), India, for supporting the project.
2302.09718
Railway Virtual Coupling: A Survey of Emerging Control Techniques
This paper provides a systematic review of emerging control techniques used for railway Virtual Coupling (VC) studies. Train motion models are first reviewed, including model formulations and the force elements involved. Control objectives and typical design constraints are then elaborated. Next, the existing VC control techniques are surveyed and classified into five groups: consensus-based control, model prediction control, sliding mode control, machine learning-based control, and constraints-following control. Their advantages and disadvantages for VC applications are also discussed in detail. Furthermore, several future studies for achieving better controller development and implementation, respectively, are presented. The purposes of this survey are to help researchers to achieve a better systematic understanding regarding VC control, to spark more research into VC and to further speed-up the realization of this emerging technology in railway and other relevant fields such as road vehicles.
Qing Wu, Xiaohua Ge, Qing-Long Han, Yafei Liu
2023-02-20T02:07:32Z
http://arxiv.org/abs/2302.09718v1
# Railway Virtual Coupling: A Survey of Emerging Control Techniques ###### Abstract This paper provides a systematic review of emerging control techniques used for railway Virtual Coupling (VC) studies. Train motion models are first reviewed, including model formulations and the force elements involved. Control objectives and typical design constraints are then elaborated. Next, the existing VC control techniques are surveyed and classified into five groups: consensus-based control, model prediction control, sliding mode control, machine learning-based control, and constraints-following control. Their advantages and disadvantages for VC applications are also discussed in detail. Furthermore, several future studies for achieving better controller development and implementation, respectively, are presented. The purposes of this survey are to help researchers to achieve a better systematic understanding regarding VC control, to spark more research into VC and to further speed-up the realization of this emerging technology in railway and other relevant fields such as road vehicles. Virtual coupling; train motion model; gap references; consensus control; model prediction control; sliding mode control; machine learning + Footnote †: publication: IEEE Transactions on Intelligent Vehicles on 16-Jan-2023 ## I Introduction A N important objective of railway signaling is to keep trains that are or will be running on the same section of track separated at a safe distance. Many of the current railway signaling systems are using the Fixed Block Signaling (FBS) model as shown in Fig. 1(a). This model is provably reliable from today's technological perspective. However, it is obviously not efficient for railway traffic as a significant amount of track space is unoccupied between adjacent running trains. With the ever-increasing demands from passenger and freight transport, enabled by technological advances in smart sensors and wireless communications, the Moving Block Signaling (MBS) model as shown in Fig. 1(b) has been developed and implemented on some railways. The MBS model uses on-board signaling systems rather than the way-side version of the FBS model. Movement authority blocks move with the trains rather than being fixed on the track. In this way, the distance between any two adjacent running trains can be significantly decreased. The shortest distance can be the absolute braking distance (ABD) of the following train plus a certain safety margin. One can easily appreciate that traffic efficiency can be significantly improved by implementing the MBS model. Over the past several years, an exciting railway signaling model called Virtual Coupling (VC), as shown in Fig. 1(c), has attracted enormous interest from railway industry and academia. It was proposed at the end of the 1990s [1, 2]. The idea is to run trains in the MBS model with a relative braking distance (RBD) rather than the ABD. Earlier research was carried out by researchers from Technical University Braunschweig [3, 4] and University of Paderborn [5, 6]. VC studies were not very active before 2015 partially due to the limitations from Information and Communication Technologies [7, 8, 9]. With recent advances in those areas and the driving forces from the pursuit of higher capability, flexibility, and modularity, the European initiative Shift2Rail included Virtual Coupling research into its strategic master plan in 2015, which sparked a lot of research in this area [10, 11, 12, 13, 14]. 
Xun _et al._[15] provided a list of projects that were recently funded around the world. VC can significantly reduce train operation headways [16] to increase line capacity [17]. VC can also help to reduce energy costs in certain specialised cases [18]. Market potential studies [19] have indicated that VC train operations can be very attractive to customers of various rail transport sectors, including high-speed, main-line and regional, with benefits that are especially relevant for freight trains. System feasibility for VC has also been discussed and proven at various stages [20, 21, 22]. The first implementation and tests on low-speed trains were reported in 2019 [23]. The control system of connected trains can be regarded as a networked control system, which usually consists of four parts: sensors, communications, controllers, and actuators. This paper focuses on the controller part, which has also been identified as a critical step towards the successful implementation of VC operations [21, 22, 24]. Autonomous driving and control for intelligent road vehicles [25, 26, 27, 28, 29] have been well studied and can contribute to the control development for railway VC. However, railway trains have larger and more complex systems than road vehicles. Longitudinal Train Dynamics (LTD) [30] are also significantly different from longitudinal road vehicle dynamics [31] for controller design and implementation. This motivates us to initiate a systematic review regarding the emerging control techniques used for railway VC studies. The rest of the paper is arranged as follows. Section II reviews train motion models, which lay the foundation for VC controller development. Section III outlines the objectives that should be observed while developing the controllers. Section IV elaborates the emerging control techniques that have been used for VC studies. Section V discusses the advantages and disadvantages of the reviewed VC control techniques. Section VI presents some topics that can be of interest for future VC research. Section VII concludes the paper.

Fig. 1: Railway signaling models: (a) Fixed Block, (b) Moving Block, and (c) Virtual Coupling

## II Train Motion Models

Train motion models are a fundamental part of the VC controller development. Such models can be regarded as special versions of LTD models [42]. Both train motion and LTD models focus on the longitudinal motions and neglect vehicle lateral and vertical motions. A difference is that train motion models often neglect relative motions between adjacent vehicles in the same train, whilst the relative motions are often an important part of LTD studies. This section reviews train motion model formulations that are used in VC controllers as well as the force elements considered in these controllers.

### _Model Formulations_

Consider a group of \(N\left(\geq 2\right)\) automated trains whose longitudinal motions can be regulated in a distributed and cooperative manner. In this case, each train can be regarded as an 'agent' in the context of a multi-agent system.
From an algebraic graph perspective, the communication topology can be modelled by a generic digraph \(\mathcal{G}=\{\mathcal{V},\mathcal{A},\mathcal{E}\}\), where \(\mathcal{V}=\{1,2,\cdots,N\}\) is the node (or vertex) set, \(\mathcal{A}=[a_{ij}]_{N\times N}\) is the adjacency matrix with nonnegative adjacency elements (also called coupling gains) \(a_{ij}\), and \(\mathcal{E}\subseteq\mathcal{V}\times\mathcal{V}\) is the edge set of paired nodes, with \((j,i)\) denoting an edge rooted at node \(j\) and ended at node \(i\). For any \(i,j\in\mathcal{V}\), the adjacency element \(a_{ij}>0\) means that there is an information link from node \(j\) to node \(i\); and \(a_{ij}=0\) otherwise. With this setup, a typical motion model for each train \(i\in\mathcal{V}\) can be given as [32, 33]: \[\dot{x}_{i,t}=v_{i,t} \tag{1}\] \[(1+\gamma)M_{i}\dot{v}_{i,t}=F_{i,t}^{db}-F_{i,t}^{rr}-F_{i,t}^{cr}-F_{i,t}^{gr}-\omega_{i,t} \tag{2}\] \[\begin{cases}F_{i,t}^{rr}=\big(k_{1,i}+k_{2,i}v_{i,t}+k_{3,i}k_{4,i}v_{i,t}^{2}\big)M_{i}g\\ F_{i,t}^{cr}=\big(k_{5,i}/R_{c}(x_{i,t})\big)M_{i}g\\ F_{i,t}^{gr}=\sin\big(\theta(x_{i,t})\big)M_{i}g\end{cases} \tag{3}\] where \(x_{i,t}\) and \(v_{i,t}\) denote the longitudinal position and velocity of train \(i\) at the continuous time \(t\in\mathbb{R}_{\geq 0}\), respectively; \(\gamma\) represents the effect of rotational inertia of rotational components such as wheelsets and motor rotors; \(M_{i}=M_{0}+\bar{M}_{i}\) denotes the unknown mass of the train, with \(M_{0}\) being the nominal (measured) part and \(\bar{M}_{i}\) being the uncertain part of the mass; \(F_{i,t}^{db}\) is the tractive/brake (T/B) force; \(F_{i,t}^{rr}\) is the rolling resistance; \(F_{i,t}^{cr}\) is the curvature resistance; \(F_{i,t}^{gr}\) is the track gradient force; \(\omega_{i,t}\) stands for the other uncertain inputs that are not specifically modelled by the previous components; \(k_{1,i},k_{2,i},k_{3,i}\) are the basic empirical parameters for train rolling resistance; \(k_{4,i}\) is an extra resistance parameter for tunnel resistance; \(g\) is the gravity constant; \(k_{5,i}\) is the curving resistance coefficient; \(R_{c}(x_{i,t})\) is the track curve radius; and \(\theta(x_{i,t})\) is the track gradient. Note that, when the track profile can be identified beforehand and the real-time train speed information is available, one may further define a normalized control (acceleration) input \(u_{i,t}=(F_{i,t}^{db}-F_{i,t}^{rr}-F_{i,t}^{cr}-F_{i,t}^{gr})/((1+\gamma)M_{0})\) and a lumped unknown input \(w_{i,t}=-((1+\gamma)\bar{M}_{i}\dot{v}_{i,t}+\omega_{i,t})/((1+\gamma)M_{0})\). Then, the motion model above can be formulated as the following linear second-order state-space model: \[\dot{x}_{i,t}=v_{i,t},\ \dot{v}_{i,t}=u_{i,t}+w_{i,t},\ i\in\mathcal{V}\quad\Leftrightarrow\quad\dot{s}_{i,t}=As_{i,t}+B\big(u_{i,t}+w_{i,t}\big),\ i\in\mathcal{V} \tag{4}\] where \(s_{i,t}=[x_{i,t},v_{i,t}]^{T}\), \(A=[0,1;0,0]\), and \(B=[0;1]\). The primary control objective is then to design a high-level cooperative longitudinal control law \(u_{i,t}\) for each automated train \(i\in\mathcal{V}\). Once \(u_{i,t}\) is determined, the actual low-level T/B force \(F_{i,t}^{db}\) can be calculated based on the pre-saved track information and resistance characteristics. The train motion model above is a second-order model which includes information of train position and velocity only. Table I lists the train motion models used for various VC studies.
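As a concrete illustration of how the second-order model (1)-(3) can be exercised, the following minimal Python sketch integrates it with a forward-Euler scheme. All numerical values, the constant track profile, and the function names are illustrative assumptions of ours rather than parameters taken from the cited studies.

```python
import math

# Minimal forward-Euler sketch of the second-order train motion model (1)-(3).
# Every numerical value below is an illustrative placeholder.
g = 9.81
M = 400e3                       # train mass [kg]
gamma = 0.08                    # rotational-inertia factor
k1, k2, k3, k4 = 1.5e-3, 2.5e-5, 1.0e-6, 1.0   # rolling/tunnel resistance coefficients
k5 = 0.6                        # curving resistance coefficient
R_c = lambda x: 5000.0          # track curve radius [m] along the route
theta = lambda x: 0.001         # track gradient [rad] along the route

def step(x, v, F_tb, dt=0.1):
    """Advance position x and speed v by one Euler step given the T/B force F_tb."""
    F_rr = (k1 + k2 * v + k3 * k4 * v**2) * M * g   # rolling (incl. tunnel) resistance
    F_cr = (k5 / R_c(x)) * M * g                    # curving resistance
    F_gr = math.sin(theta(x)) * M * g               # gradient force
    a = (F_tb - F_rr - F_cr - F_gr) / ((1 + gamma) * M)
    return x + dt * v, v + dt * a

x, v = 0.0, 20.0
for _ in range(600):            # one minute under a constant 200 kN tractive force
    x, v = step(x, v, F_tb=200e3)
print(round(x, 1), round(v, 2))
```

In a controller study, the T/B force would be derived from the high-level control law \(u_{i,t}\); here it is held constant only to keep the sketch short.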
Most existing studies used second-order motion models. Alternatively, there are also third-order state-space train motion models that can be used to incorporate more information into the train motion models. In addition to (1) and (2), some third-order models also use the following force dynamics [34]: \[\tau_{i}\dot{F}_{i,t}^{db}=u_{i,t}-F_{i,t}^{db} \tag{5}\] where \(\tau_{i}=\tau_{0}+\tilde{\tau}_{i}\) denotes the uncertain inertial lag (in seconds) of the train motions, and \(u_{i,t}\) represents the actual T/B input, which is regarded as the desired control input. If one further denotes \(s_{i,t}=[x_{i,t},v_{i,t},F_{i,t}^{db}]^{T}\), \(A=[0,1,0;\,0,0,1/((1+\gamma)M_{0});\,0,0,-1/\tau_{0}]\), \(B=[0;0;1/\tau_{0}]\), \(E=[0;-1/((1+\gamma)M_{0});0]\), \(F=[0,0;\,-1/((1+\gamma)M_{0}),0;\,0,-1/\tau_{0}]\), \(f_{i,t}=F_{i,t}^{rr}+F_{i,t}^{cr}+F_{i,t}^{gr}\) and \(w_{i,t}=[(1+\gamma)\bar{M}_{i}\dot{v}_{i,t}+\omega_{i,t},\ \tilde{\tau}_{i}\dot{F}_{i,t}^{db}]^{T}\), the following nonlinear third-order state-space model can be derived: \[\dot{s}_{i,t}=As_{i,t}+Bu_{i,t}+Ef_{i,t}+Fw_{i,t},\ i\in\mathcal{V} \tag{6}\] where the system matrices \(A,B,E,F\) are known constants, but the nonlinear resistance \(f_{i,t}\) and the lumped uncertainty \(w_{i,t}\) are allowed to be generally unknown. Naturally, the existence of the unknown and nonlinear inputs \(f_{i,t}\) and \(w_{i,t}\) makes the above train motion model more realistic and comprehensive. However, they also pose a significant challenge to controller design. In general, dedicated control strategies, such as nonlinear train control and adaptive train control, are required to deal with the resulting analysis and synthesis challenges. For convenience of analysis and design, the force term \(F_{i,t}^{db}\) can be further replaced by acceleration, by taking the time derivative on both sides of (2), combining (5), and then applying the exact feedback linearization technique [35]. One then obtains the following linear third-order 'position-velocity-acceleration' train motion model [36]: \[\dot{x}_{i,t}=v_{i,t},\ \dot{v}_{i,t}=a_{i,t},\ \tau_{i}\dot{a}_{i,t}=u_{i,t}-a_{i,t}-\omega_{i,t} \tag{7}\] where \(u_{i,t}\) represents the desired acceleration command to be designed. Similarly, a compact third-order state-space model can be expressed as \[\dot{s}_{i,t}=As_{i,t}+B(u_{i,t}+w_{i,t}),\ i\in\mathcal{V} \tag{8}\] where \(s_{i,t}=[x_{i,t},v_{i,t},a_{i,t}]^{T}\), \(A=[0,1,0;0,0,1;0,0,-1/\tau_{0}]\), \(B=[0;0;1/\tau_{0}]\), and \(w_{i,t}=-\tilde{\tau}_{i}\dot{a}_{i,t}-\omega_{i,t}\). Comparatively, the nonlinear state-space motion model (6) and the linear state-space motion models (4) and (8) offer a trade-off between the accuracy of train motion modelling and the simplicity of train motion controller design. Usually, the nonlinear models have better accuracy for train motion modelling, but they also rely on the proper and accurate handling (e.g., estimation and/or compensation) of the unknown and nonlinear inputs. On the other hand, linear motion models generally facilitate the analysis and synthesis of the train control systems. However, they necessitate accurate _a priori_ knowledge of the track information and train system parameters. If such information is not available or not accurate, such motion models become less instructive or even inapplicable.
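To see the effect of the inertial lag in (7)-(8), the following short Python sketch (our illustration; the lag value, step size and command are assumptions, not values taken from the references) propagates the linear third-order model under a constant acceleration command.

```python
import numpy as np

# Minimal Euler sketch of the linear third-order model (7)-(8) with state
# s = [x, v, a]; the commanded acceleration u is tracked through a first-order
# lag tau0.  All numbers are illustrative placeholders.
tau0 = 0.5
A = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [0.0, 0.0, -1.0 / tau0]])
B = np.array([0.0, 0.0, 1.0 / tau0])

def step(s, u, dt=0.05):
    """One Euler step of s_dot = A s + B u."""
    return s + dt * (A @ s + B * u)

s = np.zeros(3)
for _ in range(200):        # 10 s of a constant 0.5 m/s^2 command
    s = step(s, u=0.5)
print(np.round(s, 3))       # the realised acceleration settles near 0.5
```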
### _Force Elements_

From the LTD perspective, the force elements considered include in-train forces, tractive forces, dynamic brake forces, air brake forces, rolling resistance (which can include tunnel resistance), curving resistance and gradient forces. As discussed before, because inter-vehicle motions are neglected in the VC controller development, train motion models usually do not consider in-train forces. Meanwhile, train controllers generally lump tractive forces, dynamic braking forces and air brake forces together as a single T/B force. Table I lists the force components that were considered in different VC studies.

Table I: TRAIN MOTION MODELS FOR VC STUDIES (T/B: traction and brake force; RI: rotational inertia; RR: rolling resistance; TR: tunnel resistance; CR: curve resistance; GF: grade force; UF: uncertain force; Y: considered; SMC: sliding mode control; MPC: model predictive control; ML: machine learning; CBC: consensus-based control; and CFC: constraint following control)

All the models considered T/B forces and rolling resistance. Most of the models had constraints for T/B forces, and the constraints had different forms. For example, Su _et al._[37] directly limited the maximum tractive forces, whilst Di Meo [36] and Liu _et al._[38] limited the allowable train acceleration. Two different limits could be used for the traction and brake cases, as in [39]. Studies that did not use specific maximum force constraints during controller development were also found. It is worth mentioning that the tractive force limits have at least two different regions, as shown in Fig. 2(a). The first region is limited by the maximum allowable electrical current in the traction system, whilst the second region is limited by the maximum power of the system. Similar speed-dependent maximum force limits also exist for dynamic brake forces and frictional brake forces, as discussed in [40, 41, 42]. Felez _et al._[43] used two limits for the VC controller: one for the maximum forces and the other for the maximum power (the product of speed and force). The speed-dependent limit was acknowledged in [37, 44]; how the limit was implemented was not clear from the publications.

Figure 2: Considerations for train motion models: (a) tractive force limits, and (b) influences of rotational inertia (simulated train speed during emergency brake) [41]

Several studies have considered tunnel resistance by adding an extra term to the rolling resistance formula. Most of the models have considered curving resistance and gradient forces, which are important for realistic models. For some high-speed train studies, such as [61], it was assumed that the track grades and curvatures were small and negligible. A small number of models have used an uncertain force component to indirectly model forces such as tunnel resistance, curving resistance and gradient forces. Very few models have considered the influence of rotational inertia. Due to the simplifications applied in train motion models, the rotational kinetic energy in components such as wheelsets and motor rotors cannot be simulated by modelling translations.
However, during acceleration or deceleration processes, a portion of the tractive or brake energy will be used to change the rotational motions. In train motion modelling, the influence of rotational inertia can be simulated by adding an equivalent mass to the total translational mass of the system: \[M_{\gamma}=k_{\gamma}J_{\gamma}/r^{2} \tag{9}\] where \(k_{\gamma}\) is a relation parameter between the circumferential speed of the rotational component and the translational speed of the vehicle (for a wheelset, \(k_{\gamma}=1\)); \(M_{\gamma}\) is the equivalent rotational mass; \(J_{\gamma}\) is the rotational inertia of the rotational component; and \(r\) is the radius of the rotational component. The influence of rotational inertia on the simulated train speed can be evident, as shown in Fig. 2(b). Hence, it is recommended that rotational inertia be included in train motion models.

## III Control Objectives

Control objectives need to be defined and formulated prior to the design of VC controllers. Relative train motion control is discussed first in this section as the primary objective of the VC controller development. Various constraints that were used during the development of VC controllers are also discussed.

### Relative Motion Control

This section first discusses various gap references that can be used for VC operations. Then, motion control objectives under two different operational architectures (multi-agent and predecessor-follower) are discussed.
2310.10058
A lattice model with Fibonacci degree of degeneracy
In this paper, we explore two different methods of finding the degrees of degeneracy for lattice model systems, specifically constructing one with a Fibonacci degree of degeneracy. We also calculate the number of ground states per site as the golden ratio $(\phi)$ for the system that we constructed and extend our results to systems with $k-$Step Fibonacci degrees of degeneracy. Finally, I end with a few open questions that we may examine for future works.
Athena Wang
2023-10-16T04:48:43Z
http://arxiv.org/abs/2310.10058v2
# A lattice model with Fibonacci degree of degeneracy ###### Abstract. In this paper, we explore two different methods of finding the degrees of degeneracy for lattice model systems, specifically constructing one with a Fibonacci degree of degeneracy. We also calculate the number of ground states per site as the golden ratio (\(\phi\)) for the system that we constructed, ending with a few open questions that we may examine for future works. **Keywords:** degree of degeneracy, ground states per site, Quantum lattice model, Hilbert space, local Hermitian operator, quantum computing, modular arithmetic, recursive algorithm ###### Contents * 1 Introduction * 2 Hermitians and Hamiltonians * 3 Degree of Degeneracy of \(H\) * 3.1 Modular Arithmetic and Matrices. * 3.2 Recursion and Qubit Basis States * 4 Ground States Per Site * 5 Conclusion * 6 Acknowledgements ## 1. Introduction Lattice models, as defined within mathematical physics, provide a discrete, grid-like structure for depicting physical systems. Originating from condensed matter physics to model crystalline structures, these models have since pervaded numerous areas of theoretical physics due to their unique properties. The investigations of these models offer invaluable insights into fundamental phenomena, including phase transitions, magnetization, and scaling behavior, and contribute significantly to the comprehension of quantum field theory [1]. Moreover, they provide a practical means of approximating continuum theories, effectively introducing an ultraviolet cutoff to prevent divergences and facilitating numerical computations [4]. In classical physics, lattice models are generally described by an energy function on the phase space, with the Ising model [5] being one of the prototypical examples in this context. Quantum mechanics, meanwhile, with its peculiar characteristics of superposition and entanglement, offers a distinct approach to using lattice models. To mathematically encapsulate these unusual features, we must employ more advanced mathematical structures, such as Hilbert spaces and Hermitian operators, fundamental concepts that form the baseline of quantum theory. These more advanced structures also make it possible to describe richer phenomena, with ground-state degeneracy being an interesting example. In a quantum lattice model, we have a finite-dimensional Hilbert space \(V\) representing the spin on each site and a local interaction that involves only a finite number of nearby sites. The Hilbert space of the whole system is a tensor product of all the \(V\)s labeled by different sites (represented by \(V^{\otimes n}\)), and the Hamiltonian is the sum of a number of summands that apply a local transformation. An eigenvector with the lowest eigenvalue is called a ground state--or the state using the least amount of energy--and the dimension of the corresponding eigenspace is called the number of ground states, or degree of degeneracy. In this paper, we investigate an example such that the degree of degeneracy is the Fibonacci number \(F_{n+1}\) (where \(n\) is the size of the system). In particular, the average number of ground states per site is \(\lim\sqrt[n]{F_{n+1}}=\frac{1+\sqrt{5}}{2}\). More precisely, let us consider a lattice model on a line with length \(n\), with each number \(i=1,2,\cdots,n\) labeling a _site_ on which we have a _local Hilbert space of states_\(V_{i}\cong V=\langle|0>,|1>\rangle\), or the Hilbert space labeled by the bases (or qubits) \(|0>\) and \(|1>\). 
The _space of states for the whole system_ is \(\otimes_{i}V_{i}\simeq V^{\otimes n}\). We will consider a local interaction for each pair of neighboring sites \(i,i+1\) given by a Hermitian operator \(H_{i,i+1}\) on \(V_{i}\otimes V_{i+1}\simeq V^{\otimes 2}\), which maps \(|11>\) to itself and all other standard basis vectors, \(|00>,|01>,|10>\), to \(0\). The Hamiltonian \(H\) of the whole system is an operator on \(V^{\otimes n}\) given by the sum of all local interactions \(H_{i,i+1}\). In this paper, we specifically investigate the following property of this Hamiltonian \(H\): **Theorem 1.1**.: \(H\) _is a non-negative definite Hermitian operator, and the degree of degeneracy of the system (i.e. \(\dim(\ker H)\)) with Hermitian \(H\) acting on \(n\) sites is the Fibonacci number \(F_{n+1}\)._ To start, we will split the proof of **Theorem 1.1** into two parts: the first half, which proves that \(H\) is a non-negative definite Hermitian operator and introduces the necessary structures, and the second half, which proves that the degree of degeneracy of the system is \(F_{n+1}\). ## 2. Hermitians and Hamiltonians In this section, we will focus on introducing the Hermitian and the Hamiltonian. **Definition 1**.: _A Hermitian matrix \(A\) of a Hermitian operator \(H\) has the property that \(A^{*}=A\), where "\(A^{*}\)" represents the conjugate transpose (\(\overline{A^{T}}\)) of \(A\). Therefore, for every entry \(a_{ij}\), representing the entry in row \(i\) and column \(j\) of matrix \(A\), \(a_{ij}=\overline{a_{ji}}\), making it an extension of symmetric matrices in the complex numbers._ Hermitians, with their complex number entries describing different states and electron configurations of a transformation, are an integral part of quantum theory. Just like symmetric matrices, Hermitians simplify the parameterization of a transformation of a system over time. With this definition, we can now check that \(H_{i,i+1}\), which maps \(|00>\mapsto 0\), \(|01>\mapsto 0\), \(|10>\mapsto 0\), and \(|11>\mapsto|11>\), is a Hermitian. Our Hermitian matrix \(A\) would be \[\begin{bmatrix}0&0&0&0\\ 0&0&0&0\\ 0&0&0&0\\ 0&0&0&1\end{bmatrix}\] with the four entries on the diagonal (\([0,0,0,1]\)) representing exactly how the bases in the system interact with each other when the operator is applied to adjacent sites. Since \(A^{*}=A\), operator \(H_{i,i+1}\) is clearly Hermitian. **Definition 2**.: _The Hamiltonian \(H=\sum_{i=1}^{n-1}H_{i,i+1}\), the sum of all local interactions \(H_{i,i+1}\)._ For the specific Hamiltonian \(H\) of our system, we can prove the following results. **Proposition 2.1**.: _The matrix of the Hamiltonian is diagonal._ _Remark. For this proof, we will be using a 1-1 correspondence between a basis vector with \(n\) sites and a standard matrix with \(2^{n}\) entries. For every qubit of the basis, there are two corresponding matrices. As an example,_ \[|000...00>\mapsto\begin{bmatrix}1\\ 0\\ 0\\...\\ 0\end{bmatrix}\text{ and }|000...01>\mapsto\begin{bmatrix}0\\ 1\\ 0\\...\\ 0\end{bmatrix}.\] _There are exactly \(2^{n}\) total basis vectors with \(n\) sites and \(2^{n}\) standard matrices with \(2^{n}\) entries._ Proof.: For a diagonal \(2^{n}\times 2^{n}\) matrix \(A\) with eigenvalues \(a_{1},a_{2},a_{3}...a_{2^{n}}\), \(Ae_{i}=a_{i}e_{i}\) for standard unit vectors \(e_{i}\), represented by a \(1\) in the \(ith\) row and \(0\) in every other row of a column matrix. 
Then, since the basis vector mappings of \(V\otimes V\) are defined to be multiples of the vectors themselves (with \(|00>\mapsto 0,|01>\mapsto 0,|10>\mapsto 0\), and \(|11>\mapsto\ |11>\)), \[H_{i,i+1}e_{j}=\lambda_{j}e_{j}\text{ for any }i=1,2,...n-1\text{ and }j=1,2,...2^{n}.\] In fact, for an \(e_{j}\) that corresponds to an \(n\)-site basis vector with the sites \(i\) and \(i+1\) represented by a \(1\), \(\lambda_{j}=1\) (since \(|11>\mapsto|11>\)). Otherwise, \(\lambda_{j}=0\) (since all other bases map to \(0\)). Therefore, the matrix \(H_{i,i+1}\) would be diagonal with real entries (either \(0\) or \(1\)). Since the sum of diagonal matrices will also be diagonal, matrix \(H=H_{1,2}+H_{2,3}...+H_{n-1,n}\) will also be diagonal with real entries. As diagonal matrices are symmetric, it is then apparent that the following corollaries are true. **Corollary 1**.: \(H\) _is symmetric._ **Corollary 2**.: \(H\) _is a non-negative definite system._ _Remark. "Non-negative definite systems" are often called "positive semi-definite systems" in some texts._ Proof.: From **Proposition 2.1**, we proved that every \(\lambda_{j}\), or eigenvalue of \(H_{i,i+1}\), is either \(0\) or \(1\) in our system with Hamiltonian \(H\). Since both \(0\) and \(1\) are non-negative and the sum of non-negative integers are also non-negative, all the entries on the diagonal--or all the eigenvalues of \(H\)--will be non-negative. Therefore, it will be a non-negative definite system. For any non-negative definite system, we call it a _well-behaved quantum system_. Now that we understand that Hamiltonian \(H\) is a non-negative definite system, we only need to check that \(H\) is Hermitian. **Corollary 3**.: \(H\) _is Hermitian._ Proof.: Note that if \(A\) was Hermitian and its entry \(a_{ij}\in\mathbb{R}\), \(a_{ij}=\overline{a_{ji}}=a_{ji}\). Hence if all entries of \(A\in\mathbb{R}\), \(A\) would be a symmetric matrix, which is also Hermitian. Since \(H\) is symmetric (**Corollary 1**) with real entries, system \(H\) would also be Hermitian. As a result, the first half of the theorem is proved. Hamiltonian \(H=\sum_{i=1}^{n-1}H_{i,i+1}\) where \(H_{i,i+1}\) are Hermitian operators that map \(|11>\) to itself and all other standard basis vectors to \(0\) is a non-negative definite Hermitian operator. ## 3. Degree of Degeneracy of \(H\) We will now examine the interesting results given by analyzing the degree of degeneracy of the system \(H\). In order to investigate this problem, we will present two different approaches: one using modular arithmetic and matrices and the other using recursion and qubit basis states. By displaying both, we offer a glimpse into the surprisingly diverse connections between quantum computing and other fields of math. Let us first define the tensor product, a fundamental concept in the following two approaches. **Definition 3**.: _A tensor product between two matrices is a combination of two states interacting with each other. It is represented by the symbol, \(\otimes\)._ **Definition 4**.: _A ground state is the lowest eigenspace of a system._ **Definition 5**.: _The degree of degeneracy of a system \(H\) is the number of ground states of the system. In this case, it will be the same as \(\ker(\dim H)\)._ ### Modular Arithmetic and Matrices We will first define the tensor product in terms of matrices. 
**Definition 6**.: _The tensor product displayed in matrices is also sometimes called the Kronecker product [2], which is characterized by_ \[\begin{bmatrix}a_{1,1}&a_{1,2}&...&a_{1,n}\\ a_{2,1}&a_{2,2}&...&a_{2,n}\\...&...&...&...\\ a_{m,1}&a_{m,2}&...&a_{m,n}\end{bmatrix}\otimes B=\begin{bmatrix}a_{1,1}B&a_{ 1,2}B&...&a_{1,n}B\\ a_{2,1}B&a_{2,2}B&...&a_{2,n}B\\...&...&...&...\\ a_{m,1}B&a_{m,2}B&...&a_{m,n}B\end{bmatrix}\] _where \(B\) is another matrix._ _Remark. If \(A\) is an \(m\times n\) matrix and \(B\) is a \(j\times k\) matrix, \(A\otimes B\) is a \(mj\times nk\) matrix._ Before, we defined that Hamiltonian \(H=\sum_{i=1}^{n-1}H_{i,i+1}\). Therefore, \(Hv\,=\,\sum_{i=1}^{n-1}H_{ii,i+1}v\) where vector \(v\in V^{\otimes n}\), and \(\mathrm{im}(H)=\sum_{i=1}^{n-1}\mathrm{im}(H_{i,i+1})\). Thus, let's take a look at \(H_{i,i+1}\) individually. Because \(H_{i,i+1}\) only acts on adjacent sites \(i,i+1\), \(\mathrm{im}(H_{i,i+1})\) is spanned by \[<v_{1}\otimes v_{2}\otimes v_{3}..v_{i-1}\otimes A(v_{i}\otimes v_{i+1}) \otimes v_{i+2}...\otimes v_{n}>.\] where \(A\) is the matrix of the Hermitian operator. Note that every vector (or state) \(v_{i}\) can be represented as \(\begin{bmatrix}a_{0}\\ a_{1}\end{bmatrix}\) with the \(a_{0}\) representing the probability of the electron being in basis state \(|0>\) and \(a_{1}\) representing the probability of the electron being in basis state \(|1>\). Because all \(v_{i}\) are \(2\times 1\) matrices, each \(\mathrm{im}(H_{ii+1})\) can be represented as a \(2^{n}\times 1\) column matrix, where each row corresponds with a basis element of the system. For example, \[\begin{bmatrix}a_{0}\\ a_{1}\end{bmatrix}\otimes\begin{bmatrix}b_{0}\\ b_{1}\end{bmatrix}=\begin{bmatrix}a_{0}b_{0}\\ a_{0}b_{1}\\ a_{1}b_{0}\\ a_{1}b_{1}\end{bmatrix}\] where \(a_{0}\) and \(b_{0}\) correspond with the basis element \(|0>\), \(a_{1}\) and \(b_{1}\) correspond with the basis element \(|1>\), \(a_{0}b_{0}\) corresponds with \(|00>\), \(a_{0}b_{1}\) corresponds with \(|01>\), \(a_{1}b_{0}\) corresponds with \(|10>\), and \(a_{1}b_{1}\) corresponds with \(|11>\). In system \(H\), finding the degree of degeneracy is the same as finding \(\dim(\ker H)\), as the system is non-negative definite (which means the total energy of the system cannot go below 0) and we can find a state such that the total energy of the system is 0. We can now use a simple, well-known result in linear algebra that proves that \(ker(H)=\)im\((H)^{\perp}\)[6]. **Theorem 3.1**.: _For any matrix \(A:\mathbb{R}^{n}\mapsto\mathbb{R}^{m}\), \(\mathrm{im}(A)^{\perp}=\ker(A^{T})\)._ Proof.: Suppose that \(\mathrm{im}(A)=\mathrm{span}\{v_{1},v_{2},...v_{n}\}\). Then, \(A=\begin{bmatrix}\uparrow&\uparrow&...&\uparrow\\ v_{1}&v_{2}&...&v_{n}\\ \downarrow&\downarrow&...&\downarrow\end{bmatrix}\). If vector \(x\in\mathrm{im}(A)^{\perp}\), \[\begin{cases}v_{1}\cdot x=0\\ v_{2}\cdot x=0\\...\\ v_{n}\cdot x=0\end{cases}\Leftrightarrow\begin{bmatrix}\leftarrow&v_{1}& \rightarrow\\ \leftarrow&v_{2}&\rightarrow\\...&...&...\\ \leftarrow&v_{n}&\rightarrow\end{bmatrix}\begin{bmatrix}\uparrow\\ x\\ \downarrow\end{bmatrix}=A^{T}x=0\] which means that \(x\in\ker(A^{T})\). The reverse argument can be made for the converse. Therefore, im\((A)^{\perp}\) = ker\((A^{T})\). **Corollary 4**.: _If matrix \(A\) is symmetric, im\((A)^{\perp}\) = ker\((A)\)._ Proof.: Because \(A\) is symmetric, \(A^{T}=A\). Hence, im\((A)^{\perp}\) = ker\((A)\). 
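As a quick numerical illustration of Theorem 3.1 (a minimal sketch of ours using numpy, with an arbitrary example matrix rather than one from the paper), the orthogonal complement of the column space of \(A\) coincides with \(\ker(A^{T})\):

```python
import numpy as np

# Illustration of Theorem 3.1: im(A)^perp = ker(A^T), checked numerically for
# a small arbitrary matrix (not one taken from the paper).
rng = np.random.default_rng(0)
A = rng.integers(-2, 3, size=(5, 3)).astype(float)

U, S, Vt = np.linalg.svd(A.T)          # A^T maps R^5 -> R^3
rank = int(np.sum(S > 1e-10))
kernel_basis = Vt[rank:]               # rows spanning ker(A^T) inside R^5

# Each kernel vector is orthogonal to every column of A, i.e. to im(A).
print(np.allclose(kernel_basis @ A, 0))
print(kernel_basis.shape[0] == A.shape[0] - np.linalg.matrix_rank(A))
```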
Since \(H\) is symmetric (**Corollary 1**), \(\ker(H)=\operatorname{im}(H)^{\perp}\) (**Corollary 4**). Therefore, to find the dimension of the kernel of the system, we can find the number of orthonormal basis vectors \(x\) such that \(\operatorname{im}(H)x=0\). Note that all such vectors are standard column matrices, with one row having a value of 1 and the rest having a value of 0. If \(\operatorname{im}(H)\) is spanned by \(\begin{bmatrix}r_{1}\\ r_{2}\\ \vdots\\ r_{2^{n}}\end{bmatrix}\), where every \(r_{j}\) for \(j=1,2,\ldots,2^{n}\) is non-negative, each vector \(x\in\operatorname{im}(H)^{\perp}\) is a standard basis vector with a 1 in row \(j\) for some \(j\) with \(r_{j}=0\). Without ideas regarding \(\dim(\operatorname{im}(H))\), we can start with smaller cases. Let us denote rows that are not 0 by "\(-\)". Note that \(v\) is a vector, so we will denote its values corresponding to the bases \(|0>\) and \(|1>\) by "\(-\)" too. Thus, \(v^{\otimes n}\) is a \(2^{n}\times 1\) column matrix where every entry is "\(-\)". Let \(n=2\). Then, \[\text{im}(H)=A(v\otimes v)=\begin{bmatrix}0&0&0&0\\ 0&0&0&0\\ 0&0&0&0\\ 0&0&0&1\end{bmatrix}\begin{bmatrix}-\\ -\\ -\\ -\end{bmatrix}=\begin{bmatrix}0\\ 0\\ 0\\ -\end{bmatrix}.\] So, the basis for the kernel of this matrix will be \[\begin{bmatrix}1\\ 0\\ 0\\ 0\end{bmatrix},\begin{bmatrix}0\\ 1\\ 0\\ 0\end{bmatrix},\ \text{and}\ \begin{bmatrix}0\\ 0\\ 1\\ 0\end{bmatrix}.\] This kernel has a dimension of 3, exactly the same as the number of 0s in the matrix. For \(n=3\), \[\text{im}(H_{1,2})=A(v\otimes v)\otimes v=\begin{bmatrix}0\\ 0\\ 0\\ -\end{bmatrix}\otimes\begin{bmatrix}-\\ -\end{bmatrix}=\begin{bmatrix}0\\ 0\\ 0\\ 0\\ 0\\ 0\\ -\\ -\end{bmatrix},\qquad\text{im}(H_{2,3})=v\otimes A(v\otimes v)=\begin{bmatrix}-\\ -\end{bmatrix}\otimes\begin{bmatrix}0\\ 0\\ 0\\ -\end{bmatrix}=\begin{bmatrix}0\\ 0\\ 0\\ -\\ 0\\ 0\\ 0\\ -\end{bmatrix},\] \[\text{im}(H)=\text{im}(H_{1,2})+\text{im}(H_{2,3})=\begin{bmatrix}0\\ 0\\ 0\\ -\\ 0\\ 0\\ -\\ -\end{bmatrix}.\] Thus, since there are 5 rows of 0 in the matrix, \(\dim\ker(H)=5\). Indeed, the basis for the kernel of the matrix is \[\begin{bmatrix}1\\ 0\\ 0\\ 0\\ 0\\ 0\\ 0\\ 0\end{bmatrix},\begin{bmatrix}0\\ 1\\ 0\\ 0\\ 0\\ 0\\ 0\\ 0\end{bmatrix},\begin{bmatrix}0\\ 0\\ 1\\ 0\\ 0\\ 0\\ 0\\ 0\end{bmatrix},\begin{bmatrix}0\\ 0\\ 0\\ 0\\ 1\\ 0\\ 0\\ 0\end{bmatrix},\ \text{and}\ \begin{bmatrix}0\\ 0\\ 0\\ 0\\ 0\\ 1\\ 0\\ 0\end{bmatrix}.\] So, in general, \(\operatorname{im}(H_{i,i+1})\) can be spanned by \[(v^{\otimes(i-1)})\otimes\begin{bmatrix}0\\ 0\\ 0\\ -\end{bmatrix}\text{ (or }A(v\otimes v))\otimes(v^{\otimes(n-i-1)}).\] The result is a \(2^{n}\times 1\) column matrix, constructed by repeating \(2^{i-1}\) times a \(2^{n-i+1}\times 1\) column matrix with a value of 0 in the first \(\frac{3}{4}\) of its rows and non-zero entries in the last \(\frac{1}{4}\). Since \(\operatorname{im}(H)=\sum_{i=1}^{n-1}\operatorname{im}(H_{i,i+1})\), every row with a non-zero value in the matrix spanning \(\operatorname{im}(H_{i,i+1})\) for some \(i=1,2,\ldots,n-1\) would not have a value of 0 in the final matrix spanning \(\operatorname{im}(H)\).
Thus, \(\operatorname{im}(H)\) will be spanned by a column matrix where non-negative values occur at rows \(0,-1,...-(x-1)\pmod{4x}\) for all positive integers \(x\leq\frac{2^{n}}{4}\) if the rows of the matrix spanning \(\operatorname{im}(H)\) are labeled from \(1\) through \(2^{n}\). There will always be some \(i=1,2,...n-1\) such that the repeated \(2^{n-i+1}\) matrix has a dimension of \(4x\) for a fixed \(x\leq\frac{2^{n}}{4}\). This generalization can also be viewed from another perspective: rather than using complementary counting, we can count the number of \(0\)s in the matrix spanning \(\operatorname{im}(H)\) directly. The \(0\)s in the final matrix must satisfy \(1,2,...3x\pmod{4x}\) for all positive integers \(x\leq\frac{2^{n}}{4}\). Now, we have all the tools to prove the final half of **Theorem 1.1**. **Proposition 3.1**.: _The dimension of the kernel, or the degree of degeneracy, of the system \(H\) is \(F_{n+1}\) for a given \(n\). Here, \(F_{0}=1,F_{1}=1,F_{2}=2,\) etc._ Proof.: We will begin by using induction on \(n\), the number of sites of the system \(H\). Base Case: When \(n=2\), \(x\leq\frac{2^{2}}{4}=1\). Then, \(0\) is in rows \(1,2\), or \(3\pmod{4}\). There are \(3\) zeroes in the matrix spanning \(\operatorname{im}(H)\), which means that \(\dim(\ker H)=3\). This supports our inductive hypothesis. Inductive Step: Suppose that the degree of degeneracy of \(H\) when \(n=k-1\) is \(F_{k}\) and when \(n=k\) is \(F_{k+1}\). Then, when \(n=k+1\), \(x=2^{k-1}\) adds a new modular equation to satisfy, where \(0\)s can only exist on rows that are \(1,2,...3\cdot 2^{k-1}\pmod{2^{k+1}}\). Since \(3\cdot 2^{k-2}<3\cdot 2^{k-1}\), all the rows of \(0\)s that satisfied the conditions of \(n=k\) also satisfy the conditions of \(n=k+1\). The rest of the possible rows of \(0\)s, labeled from \(2^{k}+1\) to \(3\cdot 2^{k-1}\), if put in \(\pmod{2^{k}}\), will just be \(1,2,3...2^{k-1}\). Since \(2^{k-1}<3\cdot 2^{k-2}\), the numbers already satisfied the condition required for the case when \(n=k\), and we only need to check the conditions for \(n=k-1\). Thus, we are checking for the numbered rows that satisfy \(1,2,...3x\pmod{4x}\) when \(x\leq\frac{2^{k-1}}{4}\). This is equivalent to checking the case of \(n=k-1\). Therefore, since (# of 0s in the matrix spanning \(\operatorname{im}(H)\) when \(n=k+1\)) = (# of 0s in the matrix spanning \(\operatorname{im}(H)\) when \(n=k\))+(# of 0s in the matrix spanning \(\operatorname{im}(H)\) when \(n=k-1\)), \(F_{n+1}+F_{n}=F_{n+2}\), which is \(\dim(\ker H)\), or the degree of degeneracy of system \(H\) when it has \(n+1\) sites. This supports our inductive hypothesis. Hence, our induction is complete. By using **Corollary 3** and **Proposition 3.1**, we have now proved **Theorem 1.1**. However, there is another interesting result that we can obtain by using **Proposition 3.1**. **Theorem 3.2**.: \(F_{n+1}=2^{n}-(2^{n-2}F_{0}+2^{n-3}F_{1}+...2^{0}F_{n-2})\) _when \(n\geq 2\)._ Proof.: For this proof, we will use complementary counting to count the rows of the final matrix \(H\) that the value of 0 cannot be placed in. In total, the sum of the number of rows of the final matrix that can contain 0 and the number of rows of the final matrix that cannot contain 0 will be \(2^{n}\). From **Proposition 3.1**, \(F_{n+1}\) is the number of rows of the final matrix that can contain 0. 
Therefore, we only need to find the number of rows that cannot contain 0, which is given by finding the rows that fit any one of the modular conditions \(0,-1,\ldots,-(x-1)\pmod{4x}\) for some power of two \(x\leq\frac{2^{n}}{4}\). We will again use induction.

Base Case: When \(n=2\), \(x\leq\frac{2^{2}}{4}=1\), so the only modular condition for a row to satisfy is \(0\pmod{4}\). There is only 1 row that satisfies this: row 4. Since \(1+F_{2+1}=1+3=2^{2}\), we can rearrange this to get \(F_{3}=2^{2}-1\), which establishes the base case.

Inductive Step: Suppose that
\[F_{n+1}=2^{n}-(2^{n-2}F_{0}+2^{n-3}F_{1}+\cdots+2^{0}F_{n-2})\]
holds for \(n=k\), where \(F_{k+1}\) is the number of rows that contain 0 in the matrix spanning \(\operatorname{im}(H)\), \(2^{k}\) is the total number of rows, and \(2^{k-2}F_{0}+2^{k-3}F_{1}+\cdots+2^{0}F_{k-2}\) is the number of rows that do not contain 0. Then, for \(n=k+1\), the number of rows that satisfy the modular conditions from the case \(n=k\) is doubled, since the number of rows in the column matrix is doubled. Another modular condition is also added: \(0,-1,\ldots,-(2^{k-1}-1)\pmod{2^{k+1}}\). However, some of the rows satisfy multiple modular conditions, meaning that we only want to count the new rows that add to our nonzero row count. Therefore, out of the rows \(0,-1,\ldots,-(2^{k-1}-1)\pmod{2^{k+1}}\), we subtract off all the rows that already satisfy a condition from the earlier case \(n=k\). There are exactly \(2^{k-1}\) residues in the new condition, so every condition set by the case \(n=k\) will appear at most once among \(0,-1,\ldots,-(2^{k-1}-1)\pmod{2^{k+1}}\). However, every row above \(-(2^{k-1}-1)\) in the case \(n=k\) will not be considered in the new condition for the case \(n=k+1\) (as there are \(2^{k}\) rows for the case \(n=k\)), so we must subtract only the number of rows fitting the case \(n=k-1\); this is because all rows above \(-(2^{k-1}-1)\) are found by fitting the conditions for the case \(n=k-1\). Therefore, the number of new rows that add to our nonzero row count is
\[2^{k-1}-\big(2^{k-2}F_{0}+2^{k-3}F_{1}+\cdots+2^{0}F_{k-2}-(2^{k-3}F_{0}+2^{k-4}F_{1}+\cdots+2^{0}F_{k-3})\big)\]
\[=2^{k-1}-(2^{k-3}F_{0}+2^{k-4}F_{1}+\cdots+2^{0}F_{k-3}+2^{0}F_{k-2})\]
\[=F_{k}-F_{k-2}\]
\[=F_{k-1}.\]
Then, for the case \(n=k+1\), the number of rows that are nonzero is
\[2\cdot(2^{k-2}F_{0}+2^{k-3}F_{1}+\cdots+2^{0}F_{k-2})+F_{k-1},\]
which simplifies to
\[2^{k-1}F_{0}+2^{k-2}F_{1}+\cdots+2^{0}F_{k-1}.\]
Since \(F_{k+2}\) is the number of rows that contain 0 and \(2^{k+1}\) is the total number of rows in the matrix spanning \(\operatorname{im}(H)\) for the system \(H\) with \(k+1\) sites, the inductive step is complete. Thus, the theorem holds true.

### Recursion and Qubit Basis States

We have already established that \(Hv=\sum_{i=1}^{n-1}H_{i,i+1}v\), where \(v\in V^{\otimes n}\). Since every vector \(v\) can be expressed as a linear combination of the basis vectors, we only need to consider the case where \(v\) is a basis vector in order to construct a basis of the kernel. Now, since only 2 adjacent qubits interact at a time under the operator \(H_{i,i+1}\), we have \(H_{i,i+1}v=v\) if and only if \(v=|\cdots 11\cdots\rangle\), where the two 1s are the spins of sites \(i\) and \(i+1\), respectively. Otherwise, \(H_{i,i+1}v=0\) if \(v=|\cdots 00\cdots\rangle\), \(|\cdots 01\cdots\rangle\), or \(|\cdots 10\cdots\rangle\) at sites \(i\) and \(i+1\). So, \(Hv=\alpha v\), where \(\alpha\) is the number of adjacent pairs of 1s in \(v\). Then \(\ker H=\{v:Hv=0\}\), so a basis vector \(v\) lies in the kernel exactly when \(\alpha=0\). Therefore, the kernel basis vectors are in bijection with the binary sequences of length \(n\) that contain no adjacent ones; a brute-force check of this count for small \(n\) is sketched below.
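The counting above can be verified numerically for small \(n\). The following is a minimal sketch added for illustration (not part of the original derivation); it assumes, as in the worked examples above, that the two-site term is the projector \(A=\operatorname{diag}(0,0,0,1)\) onto \(|11\rangle\).

```python
# A minimal numerical check: dim ker(H) = F_{n+1} = #(length-n strings with no adjacent 1s).
# Assumption: the two-site interaction A is the projector diag(0,0,0,1) onto |11>.
import numpy as np

def kernel_dimension(n: int) -> int:
    """Build H = sum_i H_{i,i+1} on n sites and return dim ker(H)."""
    A = np.diag([0.0, 0.0, 0.0, 1.0])   # projector onto |11> for a pair of sites
    I2 = np.eye(2)
    H = np.zeros((2**n, 2**n))
    for i in range(n - 1):               # interaction acts on sites i and i+1
        term = np.eye(1)
        for site in range(n):
            if site == i:
                term = np.kron(term, A)  # covers sites i and i+1 at once
            elif site == i + 1:
                continue                 # already absorbed into A
            else:
                term = np.kron(term, I2)
        H += term
    # H is a sum of projectors, hence positive semidefinite: the kernel dimension
    # equals the number of zero eigenvalues.
    return int(np.sum(np.isclose(np.linalg.eigvalsh(H), 0.0)))

def no_adjacent_ones(n: int) -> int:
    """Count length-n binary strings with no two adjacent 1s."""
    return sum(1 for s in range(2**n) if (s & (s >> 1)) == 0)

fib = [1, 1]                             # F_0 = 1, F_1 = 1, F_2 = 2, ...
for _ in range(10):
    fib.append(fib[-1] + fib[-2])

for n in range(2, 9):
    assert kernel_dimension(n) == no_adjacent_ones(n) == fib[n + 1]
print("dim ker(H) = F_{n+1} verified for n = 2..8")
```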
This is a typical recursion problem. For an \(n\)-digit sequence, there are two possibilities for the last digit: 0 or 1. If the last digit is 0, we are reduced to finding the number of \((n-1)\)-digit sequences that do not have adjacent ones. If the last digit is 1, the second-to-last digit must be a 0 to avoid adjacent ones, and we are reduced to finding the number of \((n-2)\)-digit sequences that do not have adjacent ones. Now, we can look at the base cases to see the recursive sequence. When \(n=2\), there are 3 sequences that do not have adjacent ones--namely, \(|00\rangle,|01\rangle\), and \(|10\rangle\). When \(n=3\), there are 5 sequences that do not have adjacent ones--namely, \(|000\rangle,|001\rangle,|010\rangle,|100\rangle\), and \(|101\rangle\). Then, when \(n=4\), the number of sequences that do not have adjacent ones is \(3+5=8\); when \(n=5\), it is \(5+8=13\). This is the Fibonacci sequence: the number of \(n\)-digit sequences that do not have adjacent ones is \(F_{n+1}\). Therefore, the dimension of the kernel of \(H\) on a Hilbert space with \(n\) sites is \(F_{n+1}\), where \(F_{0}=1\). This second approach is much more efficient and can possibly be applied in future research to other systems \(H\). However, both approaches provide some insight into quantum lattice models and their ground states.

## 4. Ground States Per Site

Let us denote the space of ground states per site by \(V_{0}\). Then all the ground states of \(H\) can be represented by \(V_{0}^{\otimes n}\), so there are \((\dim V_{0})^{n}\) total ground states. Thus, for our problem, \((\dim V_{0})^{n}=F_{n+1}\), the total number of ground states, and so \(\dim V_{0}=\sqrt[n]{F_{n+1}}\). From Binet's Formula,
\[F_{n}=\frac{(\frac{1+\sqrt{5}}{2})^{n}-(\frac{1-\sqrt{5}}{2})^{n}}{\sqrt{5}}.\]
When \(n\) is large, \(F_{n}\approx\frac{\phi^{n}}{\sqrt{5}}\), where \(\phi=\frac{1+\sqrt{5}}{2}\) (see [3] for a full proof). Then, as \(n\) tends to infinity,
\[\lim_{n\to\infty}\sqrt[n]{F_{n+1}}=\lim_{n\to\infty}\sqrt[n]{\frac{\phi^{n+1}}{\sqrt{5}}}=\lim_{n\to\infty}\frac{\phi^{\frac{n+1}{n}}}{\sqrt{5}^{\frac{1}{n}}}=\frac{\phi}{\sqrt{5}^{\,0}}=\phi.\]
In this limit, \(\dim V_{0}=\frac{1+\sqrt{5}}{2}\), which means that the number of ground states per site is, on average, somewhere between \(1\) and \(2\).

## 5. Conclusion

Lattice models, with their versatile properties and broad applicability, are an indispensable asset in mathematical physics, contributing uniquely to our understanding of diverse physical systems and their underlying mechanisms. In this paper, we see an interesting example of a quantum lattice model where, despite the total number of ground states being an integer \(F_{n+1}\), the effective number of ground states per site is an irrational number. It is natural to conjecture that for more general systems, the numbers of ground states per site--if not exactly \(1\) or \(2\)--will be positive algebraic integers all of whose Galois conjugates have strictly smaller absolute values. We will leave this for future work.

## 6. Acknowledgements

I would like to thank Kai Xu for his guidance and support throughout the entirety of this project. His insight and mentorship were invaluable throughout the process, shaping my work and research regarding this topic.
2302.07735
Targeted Attack on GPT-Neo for the SATML Language Model Data Extraction Challenge
Previous work has shown that Large Language Models are susceptible to so-called data extraction attacks. This allows an attacker to extract a sample that was contained in the training data, which has massive privacy implications. The construction of data extraction attacks is challenging, current attacks are quite inefficient, and there exists a significant gap in the extraction capabilities of untargeted attacks and memorization. Thus, targeted attacks are proposed, which identify if a given sample from the training data, is extractable from a model. In this work, we apply a targeted data extraction attack to the SATML2023 Language Model Training Data Extraction Challenge. We apply a two-step approach. In the first step, we maximise the recall of the model and are able to extract the suffix for 69% of the samples. In the second step, we use a classifier-based Membership Inference Attack on the generations. Our AutoSklearn classifier achieves a precision of 0.841. The full approach reaches a score of 0.405 recall at a 10% false positive rate, which is an improvement of 34% over the baseline of 0.301.
Ali Al-Kaswan, Maliheh Izadi, Arie van Deursen
2023-02-13T18:00:44Z
http://arxiv.org/abs/2302.07735v1
# Targeted Attack on GPT-Neo for the SATML Language Model Data Extraction Challenge ###### Abstract Previous work has shown that Large Language Models are susceptible to so-called data extraction attacks. This allows an attacker to extract a sample that was contained in the training data, which has massive privacy implications. The construction of data extraction attacks is challenging, current attacks are quite inefficient, and there exists a significant gap in the extraction capabilities of untargeted attacks and memorization. Thus, targeted attacks are proposed, which identify if a given sample from the training data, is extractable from a model. In this work, we apply a targeted data extraction attack to the SATML2023 Language Model Training Data Extraction Challenge.1 We apply a two-step approach. In the first step, we maximise the recall of the model and are able to extract the suffix for 69% of the samples. In the second step, we use a classifier-based Membership Inference Attack on the generations. Our AutoSkilearn classifier achieves a precision of 0.841. The full approach reaches a score of 0.405 recall at a 10% false positive rate, which is an improvement of 34% over the baseline of 0.301. Footnote 1: Language Models Training Data Extraction Challenge: [https://github.com/google-research/lm-extraction-benchmark](https://github.com/google-research/lm-extraction-benchmark) Data Extraction, Targeted Attacks, Language Models, GPT-Neo, Challenge ## I Introduction Language Models have recently become popular due to their ability to generate natural text and have been applied in various fields, such as Software Engineering [1, 2]. However, neural language models trained on sensitive datasets have been shown to memorize parts of their training data [3, 4, 5]. With a data extraction attack, an adversary can recover individual training examples from the model's training dataset. The ability to extract training data has massive privacy implications. Models which are trained using private datasets might be exposing their records. Models trained using publicly mined data might be violating the contextual integrity of the data of internet users [3]. Recent work has found that developing robust attacks to extract training data is challenging. In this work, we focus on targeted attacks. We use the Language Model Training Data Extraction Challenge, to develop our attack. The benchmark provides a prefix of 50 tokens from the training data, we are tasked with predicting the next 50 tokens (suffix). The targeted model is GPT-Neo with \(1.3\) billion parameters [6]. We propose a two-stage attack strategy shown in Figure 1. In the first stage, the model is prompted to predict multiple suffixes for the given prefix. In this step, we optimise for the recall of the model, i.e., for how many prefixes a correct suffix is generated. In the second step, we make use of a Membership Inference Attack to select the correct suffix from the set of candidates. In this case, precision is more important. We find that contrastive search is the best decoding strategy to generate as many correct candidate suffixes as possible. We design a successful attack based on a binary classifier (based on AutoSkilearn) to classify the candidate suffixes. With our attack, we are able to achieve a recall of 0.405 at a 10% false positive rate, which is a 34% improvement over the baseline of \(0.301\). 
## II Membership Inference Attack Security Game In this section, we define a black-box membership inference attack using a security game inspired by Carlini et al. [4]. Given a challenger \(C\) and an adversary \(\mathscr{A}\), the game is defined as follows: 1. The challenger samples a dataset \(D\subset\mathbb{D}\) and trains a model \(M_{\theta}\gets Training_{M}(D)\) on the sampled dataset 2. \(C\) samples a bit \(b\leftarrow\{0,1\}\). If \(b=0\), \(C\) selects a training point \(x\in D\), otherwise \(C\) selects a training point \(x\in((D\cup\mathbb{D})-(D\cap\mathbb{D}))\). The point is then provided to \(\mathscr{A}\). 3. \(\mathscr{A}\) is allowed query access to the model \(M_{\theta}\) and may perform any other polynomial time operations. 4. \(\mathscr{A}\) outputs his prediction bit \(\hat{b}\leftarrow\{0,1\}\) 5. If \(\hat{b}=b\), \(\mathscr{A}\) wins, otherwise \(C\) wins. In other words, the challenger randomly samples a subset \(D\) from the dataset \(\mathbb{D}\) and trains a model \(M_{\theta}\) on the subset. The adversary is then tasked with distinguishing samples that are and are not contained in the training data subset. Note that the adversary does not have access to the underlying distribution of the data, and neither does the adversary have access to the base model \(M\), which makes training shadow models impossible. These limitations on the adversary also loosen the constraints on the model \(M\), which can be trained from scratch in step (1). Other attacks [7] require a functional base model \(M\) which is further fine-tuned on \(D\). ## III Experimental Setup ### _Overview_ We show an overview of our attack in Figure 1. Generation stepWe use the GPT-Neo model to generate suffixes for a given prefix. In the first step of the attack, we aim to increase the recall of the attack. We can generate multiple predictions per prefix, which will be filtered in the next step. It might be enticing to simply increase the number of predictions per prefix to get a higher chance of finding the right suffix. Doing this would increase the attack time and, more importantly, the number of errors in the MIA step. In the relative error-sensitive evaluation setting, this would be inadvisable. MIA stepIn this step, we must infer which generated suffixes are members of the training data. In this step, we optimise for precision. For the sake of simplicity, we only select one sample per prefix. We also order the samples in descending order of confidence, such that the samples which are most probable to be correct are pushed up to the top. The metric we used to measure the performance of this step, and the total attack is the recall at a 10% false positive rate. Concretely, this means that we count the number of correct predictions in the ordered output and stop counting when we count 10% errors. ### _Dataset_ The provided dataset consists of 15K samples. Each sample consists of a prefix and a suffix, both are 50 tokens long. The prefix prepended to the suffix is a 100-token sample from the Pile [8], an 800GB text dataset used to train GPT-Neo. The authors of the benchmark selected the samples such that for a given prefix, there is only a single unique suffix contained in the Pile [8]. As suggested by the authors of the benchmark, we use the first 14K samples to train and we isolate the last 1K samples for internal testing. Once we have obtained our solution we can test it with an additional 1K-sample validation set. 
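To make the generation step of this pipeline concrete, the following is a minimal sketch, not the authors' implementation. The checkpoint name, the helper names, and the use of top-\(k\) sampling to obtain extra candidates beyond the single deterministic contrastive-search output are assumptions made here for illustration; the contrastive-search settings and the loss-based ranking follow the description in this paper.

```python
# A minimal sketch of the generation + loss-ranking step (not the authors' code).
# Assumptions: the public "EleutherAI/gpt-neo-1.3B" checkpoint, 50-token prefixes,
# and sampling for extra candidates beyond the deterministic contrastive-search output.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("EleutherAI/gpt-neo-1.3B")
model = AutoModelForCausalLM.from_pretrained("EleutherAI/gpt-neo-1.3B").eval()

@torch.no_grad()
def generate_candidates(prefix_ids: torch.Tensor, num_candidates: int = 10):
    """Return candidate 50-token suffixes for one prefix (1-D tensor of token ids)."""
    batch = prefix_ids.unsqueeze(0)
    # Contrastive search (deterministic): one high-quality candidate.
    out = model.generate(batch, max_new_tokens=50, penalty_alpha=0.6, top_k=4,
                         pad_token_id=tok.eos_token_id)
    candidates = [out[0, prefix_ids.shape[0]:]]
    # Additional, diverse candidates via top-k sampling (an assumption made here).
    sampled = model.generate(batch, max_new_tokens=50, do_sample=True, top_k=24,
                             num_return_sequences=num_candidates - 1,
                             pad_token_id=tok.eos_token_id)
    candidates += [row[prefix_ids.shape[0]:] for row in sampled]
    return candidates

@torch.no_grad()
def suffix_loss(prefix_ids: torch.Tensor, suffix_ids: torch.Tensor) -> float:
    """Model loss on the suffix tokens only (prefix positions are masked out)."""
    ids = torch.cat([prefix_ids, suffix_ids]).unsqueeze(0)
    labels = ids.clone()
    labels[:, :prefix_ids.shape[0]] = -100   # ignored by the loss
    return model(ids, labels=labels).loss.item()

def best_candidate(prefix_ids: torch.Tensor):
    """Keep the lowest-loss candidate per prefix; its loss is the main MIA feature."""
    cands = generate_candidates(prefix_ids)
    return min(cands, key=lambda s: suffix_loss(prefix_ids, s))
```

In the second step, these per-candidate losses, together with textual features of the prefix and generation, feed the membership classifier evaluated in the results.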
## IV Results ### _Generation Strategies_ Table I shows the results for the different generation strategies. We ran the GPT-Neo model with different decoding settings on 100 prefixes. We prompt the model to generate several different generations per prefix. We used the Greedy, Contrastive, and Beam decoding strategies. We first ran the different generation strategies to generate 10 generations per prefix, on the standard settings. We found that contrastive search obtains the highest recall of the tested stratagems. Further testing with different settings for penalty_alpha and top_k, shows that the standard settings have the highest recall for ten generations. Furthermore, we found that the recall of beam search decreased once we increased the beam size above 10 beams. Overall, we found beam search with a sufficiently large beam size to compete with Contrastive search to be too slow and memory intensive to use. Finally, we use GPT-Neo with the best generation strategy, namely, contrastive search \(\alpha=0.6,k=4\) with \(100\) generations per prefix and plot the rank of the correct prediction in Figure 2. The generations are ranked by the model loss on the generation. Note, that we omit the prefixes for which the model was unable to generate the correct prefix. This figure shows that if the correct prefix is available, it is usually the one with the lowest loss. The remaining challenge is to distinguish between the prefixes which have and the prefixes which do not have a correct suffix associated with them. ### _Classification MIA_ We train several classifiers on the task of distinguishing between members and non-members. We first use GPT-Neo with the best generation strategy, namely, contrastive search \(\alpha=0.6,k=4\) with 100 generations per prefix. We apply this to the entire dataset of 15K prefixes. We apply a filter and only consider the samples with the lowest loss for each prefix, we found that this improves the attack, and reduces the computational costs. We split the data and use the first 14K as training data and the last 1K as a test set. The recall of the generation step on the test set was 0.669, which is in line \begin{table} \begin{tabular}{l l l|c} \hline \hline Strategy & Settings & Generations & Recall \\ \hline Greedy & p=1,k=10 & 10 & **0.50** \\ \hline Contrastive & a=0.6,k=4 & 1 & 0.28 \\ & a=0.6,k=4 & 10 & **0.58** \\ & a=0.6,k=2 & 10 & 0.52 \\ & a=0.9,k=4 & 10 & 0.49 \\ & a=0.2,k=4 & 10 & 0.48 \\ & a=0.9,k=10 & 10 & 0.46 \\ & a=0.6,k=4 & 100 & **0.69** \\ \hline Beam & beam=50 & 3 & 0.57 \\ & beam=10 & 10 & **0.67** \\ & beam=25 & 25 & 0.53 \\ \hline \hline \end{tabular} \end{table} TABLE I: Recall per decoding strategy for 100 prefixes Fig. 1: An overview of the complete attack; From left to right, the prefixes are used by the GPT-Neo model to generate multiple suffixes per prefix, a MIA is then applied to select the presumed correct suffixes ordered by confidence. with our previous findings. After filtering this was reduced to 0.498. We use the Sklearn [9] implementation of the standard classifiers. The classifiers were trained until convergence. The AutoSklearn [10] classifier was trained for 10 minutes, 60 seconds per model, with 16 threads. For tokenization, we use the standard Sklearn TF-IDF pipeline and the Sentence-Transformers package with the 'all-mpnet-base-v2' model. We chose this model because it is the highest-performing one in the sentence embedding benchmark. 
2 Footnote 2: Sentence Embedding Benchmark: [https://www.sbert.net/docs/pretrained_models.html](https://www.sbert.net/docs/pretrained_models.html) Besides the prefix and generated suffix, we also include the number of unique generations produced by the model (count), as well as the model loss as features. We plot the permutation importance of the different features to our AutoSklearn model performance in Figure 3. We found that the loss is by far the most important feature, while the textual features do contribute to the performance, their importance is limited. Finally, the number of distinct generations has a minimal contribution to the performance. We tested Logistic Regression, Stochastic Gradient Descent with both Huber and perceptron losses, Support Vector Machines, Gaussian Naive Bayes, and Gradient Boost models. To get the final output of the attack, we simply sort the samples by the probability estimate of the model. For the models that cannot calculate a probability estimate, we apply a filter to remove the predicted non-members and we order the samples by the loss. To score the solutions, we opted to use precision as this attack values a low false positive rate. Furthermore, we also calculate the final accuracy of the attack through the precision at a 10% false positive rate. Note that, the maximum achievable score is limited by the recall of the previous step, namely a recall of 498 at a 10% false positive rate. AutoSklearn, with its automatic feature extraction pipeline, performs best. We found that further increasing the training time, does not improve the models' performance. We found that halving the training time, gave a slight decrease in performance. The actual convergence point lies somewhere between 5 and 10 minutes. Furthermore, all of the models in the constructed ensemble are Gradient Boost models. The second best is a tie between logistic Regression with TF-IDF and AutoSklearn with Sentence-Transformers. Note that while AutoSklearn takes around 10 minutes to train, Logistic Regression only takes around three seconds. We further found that the Sentence-Transformers embeddings are of a much lower dimensionality than TF-IDF (768 vs 28182). We were therefore unable to load the TF-IDF embeddings into AutoSklearn. This reduction greatly speeds up the training process, but the models perform slightly worse. This small difference can be explained by the fact that Sentence-Transformers are Deep-Learning based embedding models, which take the semantic meaning of a sentence in mind, while TF-IDF is a simple statistical method. \begin{table} \begin{tabular}{l l|l l} \hline \hline Feature Extraction & Strategy & Precision & R@10\%FPR \\ \hline Baseline & - & - & 0.301 \\ \hline AutoSklean & - & **0.841** & **0.405** \\ \hline TF-IDF & Log Reg & 0.808 & 0.397 \\ & SGD huber & 0.520 & 0.097 \\ & SGD perceptron & 0.784 & 0.302 \\ & SVM & 0.639 & 0.279 \\ & GaussianNB & 0.599 & 0.273 \\ & Gradient Boost & 0.766 & 0.365 \\ \hline S-Transformers & Log Reg & 0.780 & 0.345 \\ & SGD huber & 0.466 & 0.279 \\ & SGD perceptron & 0.602 & 0.280 \\ & SVM & 0.498 & 0.126 \\ & GaussianNB & 0.608 & 0.231 \\ & Gradient Boost & 0.776 & 0.359 \\ & AutoSklearn & 0.807 & 0.397 \\ \hline \hline \end{tabular} \end{table} Table II: Precision and overall attack score on test set Figure 3: Permutation importance of features Figure 2: Rank of correct prediction (if exists) ### _Validation Scores_ We finally run the trained AutoSklearn model on the validation set provided by the organizers. 
The final score on the validation set is a recall of **0.413** at a 10% false positive rate. Figure 4 shows the confusion matrix on the validation set, which shows that the model is quite balanced in its predictions, and does not heavily favour precision or recall while achieving high accuracy. ## V Discussion The proposed attack is relatively quick to run, as it does not require any type of fine-tuning or prompt-tuning. The slowest aspect of the attack is the generation step, to generate 100 candidate samples for 1K prefixes, we require around 1 hour on an Nvidia RTX 3080, the MIA itself runs in a few seconds. This is around the same speed as the baseline attack. With our classification-based membership inference attack, we seem to have relatively high precision. We believe that we are approaching the limit set by the generation step. Recall that after generating and filtering, we only extracted the correct suffix for 49.8% of the samples. This indicates that there is still much room for improvement in the generation step of our proposed attack. We only investigated different decoding strategies and did not alter the prefixes. Prompt engineering or prefix-tuning might increase the recall of the generation step and therefore the score of the entire attack. Another possible improvement is to introduce more features into the classifier. Instead of using a different method to create an embedding for the textual features, we can use the embedding vector produced by the GPT-Neo model, this would however turn the attack into a white-box variant. ## VI Conclusion To conclude, we proposed a novel two-phased attack strategy. In the first step, we find the best decoding strategy to maximise the recall of the attack. In the second step, we use a binary classifier to select the best suffix. Our approach was able to show an improvement of 34% over the baseline score with minimal additional runtime requirements over the provided baseline.
2308.12405
Concatenation trees: A framework for efficient universal cycle and de Bruijn sequence constructions
Classic cycle-joining techniques have found widespread application in creating universal cycles for a diverse range of combinatorial objects, such as shorthand permutations, weak orders, orientable sequences, and various subsets of $k$-ary strings, including de Bruijn sequences. In the most favorable scenarios, these algorithms operate with a space complexity of $O(n)$ and require $O(n)$ time to generate each symbol in the sequences. In contrast, concatenation-based methods have been developed for a limited selection of universal cycles. In each of these instances, the universal cycles can be generated far more efficiently, with an amortized time complexity of $O(1)$ per symbol, while still using $O(n)$ space. This paper introduces $\mathit{concatenation~trees}$, which serve as the fundamental structures needed to bridge the gap between cycle-joining constructions based on the pure cycle register and corresponding concatenation-based approaches. They immediately demystify the relationship between the classic Lyndon word concatenation construction of de Bruijn sequences and a corresponding cycle-joining based construction. To underscore their significance, concatenation trees are applied to construct universal cycles for shorthand permutations and weak orders in $O(1)$-amortized time per symbol. Moreover, we provide insights as to how similar results can be obtained for other universal cycles including cut-down de Bruijn sequences and orientable sequences.
J. Sawada, J. Sears, A. Trautrim, A. Williams
2023-08-23T19:58:46Z
http://arxiv.org/abs/2308.12405v3
# Demystifying our Grandparent's De Bruijn Sequences with Concatenation Trees ###### Abstract Some of the most interesting de Bruijn sequences can be constructed in seemingly unrelated ways. In particular, the _Granddaddy_ and _Grandmama_ can be understood by joining necklace cycles into a tree using simple parent rules, or by concatenating smaller strings (e.g., Lyndon words) in lexicographic orders. These constructions are elegant, but their equivalences seem to come out of thin air, and the community has had limited success in finding others of the same ik. We aim to demystify the connection between cycle-joining trees and concatenation schemes by introducing _concatenation trees_. These structures combine binary trees and ordered trees, and traversals yield concatenation schemes for their sequences. In this work, we focus on the four simplest cycle-joining trees using the pure cycling register (PCR): _Granddaddy_ (PCR1), _Grandmama_ (PCR2), _Granny_ (PCR3), and _Grandpa_ (PCR4). In particular, we formally prove a previously observed correspondence for PCR3 and we unravel the mystery behind PCR4. More broadly, this work lays the foundation for translating cycle-joining trees to known concatenation constructions for a variety of underlying feedback functions including the complementing cycling register (CCR), pure summing register (PSR), complementing summing register (CSR), and pure run-length register (PRR). ## 1 Introduction A _de Bruijn sequence_ (DB sequence) of order \(n\) is a circular string of length \(2^{n}\) where every binary string of length \(n\) appears exactly once as a substring. For example, \(00010111\) is a DB sequence of order \(n=3\). DB sequences and associated concepts have countless applications in computer science and beyond (e.g., [5]). DB sequences are most famously constructed by full-length linear feedback shift registers1; they generate each bit in worst-case \(O(n)\) time using \(O(n)\) space. The mathematical background was developed by Golomb [19], and Wolfram remarked "It's probably the single most-used mathematical algorithm idea in history" [33]. A downside is that a primitive polynomial over \(GF(2)\) is required for each \(n\). There is no efficient process for discovering these polynomials, so implementations refer to large precomputed tables [32]. As such, the efficient generation of DB sequences, for any \(n\), remains an enticing theoretical problem with many practical implications. Footnote 1: One caveat is that the all-zero state, \(0^{n}\), is omitted. There are other interesting ways to construct DB sequences including greedy algorithms, recursive algorithms, shift rules based on cycle joining, concatenation constructions, and Eulerian cycles in a related graph. See [1] for many such implementations, including algorithms from [4, 8, 9, 10, 11, 12, 13, 14, 16, 17, 18, 20, 21, 25, 26, 29, 31], among others, that are impacted by our work. In particular, our results relate cycle joining to concatenation algorithms; the latter which include the most efficient DB sequence constructions. * The greedy and graph algorithmic approaches require exponential space. * The shift-rule constructions generally require \(O(n)\) time per bit and use \(O(n)\) space. * The known concatenation-based constructions require \(O(1)\)-amortized time per bit and use \(O(n)\) space. Four of the simplest shift rules use the pure cycling register (PCR) whose feedback function is \(f(a_{1}a_{2}\cdots a_{n})=a_{1}\). 
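As a quick illustration (a sketch added alongside this text, not code from the paper), the PCR successor can be iterated directly; starting from any length-\(n\) string it steps through that string's rotations, which gives the cycle structure discussed next.

```python
# A small sketch (not from the paper) of the pure cycling register (PCR),
# whose successor maps a_1 a_2 ... a_n to a_2 ... a_n a_1.
def pcr_successor(word: str) -> str:
    return word[1:] + word[0]          # feedback bit f(a_1...a_n) = a_1

def pcr_cycles(n: int):
    """Partition all binary strings of length n into PCR cycles (rotation classes)."""
    seen, cycles = set(), []
    for x in range(2**n):
        start = format(x, f"0{n}b")
        if start in seen:
            continue
        cycle, w = [], start
        while w not in seen:
            seen.add(w)
            cycle.append(w)
            w = pcr_successor(w)
        cycles.append(cycle)
    return cycles

# For n = 4 this yields six cycles, whose lexicographically smallest members
# are 0000, 0001, 0011, 0101, 0111, 1111.
print(pcr_cycles(4))
```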
The PCR partitions the binary strings of length \(n\) into cycles; the length \(n\) substrings of each cycle form an equivalence class of strings under rotation. These cycles can be joined in a tree-like manner to create a DB sequence. In particular, the following parent definitions lead to the four shift rules in Section 1.3, where each cycle is represented by its lexicographically smallest length \(n\) substring. Figure 1 illustrates the resulting cycle-joining trees for \(n=6\). 1. PCR1 (Last 0): the parent is obtained by flipping the last 0 [12, 17]. 2. PCR2 (First 1): the parent is obtained by flipping the first 1 [7, 17]. 3. PCR3 (Last 1): the parent is obtained by flipping the last 1 [17, 21, 31]. 4. PCR4 (First 0): the parent is obtained by flipping the first 0 [17]. The DB sequence generated by PCR1 is the well-known Ford sequence [12], and is called the _Granddaddy_ by Knuth [22]. It is the lexicographically smallest DB sequence, and it can also be generated by a prefer-0 greedy approach attributed to Martin [23]. Furthermore, Fredricksen and Maiorana [14] demonstrate an equivalent necklace concatenation construction Figure 1: Cycle-joining trees for \(n=6\) from the four parent rules PCR1, PCR2, PCR3, PCR4. For example, the node \(001101\) represents the cycle \(001101\to 011010\to 101001\to 010011\to 100110\to 001101\) created by the PCR. This cycle is joined to a different parent cycle in each tree. In particular, the edge \(001101\)–\(001111\) in PCR1 is obtained by flipping its last 0. that can generate the sequence in \(O(1)\)-amortized time per bit. The DB sequence generated by PCR2 is called the _Grandnama_ by Dragon et al. [7]; it can also be generated by concatenating necklaces in co-lexicographic order. The DB sequence generated by PCR3, which we name _Granny_, was first discovered by Jansen [21], then generalized in [31]. It is conjectured to have a simple concatenation construction by Gabric and Sawada [16]. The DB sequence generated by PCR4, which we name _Grandpa_, was first discovered by Gabric et al. [17]. No concatenation construction for this sequence was previously known and this was one of the primary motivations for this work. While it is known that the PCR1 and PCR2 produce equivalent DB sequences to their corresponding concatenation approaches, the existing proofs offer no higher-level insights or pathways towards generalization. Meanwhile, the connection between PCR3 and a concatenation construction has only been conjectured [16]. And given its simple parent rule, it has been a mystery why the PCR4 does not seem to have a simple concatenation construction. **Main result**: We demystify our grandparents by providing a clear and simple framework to understand the correspondence between PCR-based cycle-joining constructions and the (more efficient) concatenation approaches. We provide evidence that our approach can be generalized and applied to other underlying feedback functions, unifying a vast body of previously published works on DB sequences and universal cycles. The framework introduces a new tree data structure we call a _bifurcated ordered tree_ (bot) which is formally defined in Section 2. By converting a PCR-based cycle-joining tree to a bot, we can apply a special traversal to produce a concatenation construction of the corresponding DB sequence. 
Our approach can be generalized to produce universal cycles for various subsets of binary strings including those with bounded weight [30], those with forbidden substrings like \(0^{j}\) (where strings are considered cyclically), and cut-down DB sequences [4, 9, 27]. More broadly, our aim is to further develop the mathematical background for these constructions, in the same way as Golomb did for LFSRs. The rest of the paper is outlined as follows. In Section 1.1 we discuss related work based on other feedback functions including the CCR, PSR/CSR, and the PRR. In Section 1.2 we provide some background definitions and notation. In Section 1.3, we outline the cycle-joining method for constructing DB sequences and provide details for PCR1, PCR2, PCR3, and PCR4. In Section 1.4, we provide insight leading to the definition of bifurcated ordered trees presented in Section 2. In Section 3, we formally define our notion of concatenation trees and how they are derived from a cycle-joining tree. This section also presents the notion of an RCL traversal and a new result: a simple correspondence between a PCR-based shift rule and a concatenation construction based on traversing a concatenation tree. In Section 4, we present some algorithmic details. In Section 5 we present avenues of future research and some open problems. ### Related work In this section we discuss related work based on the following feedback functions: * CCR: Complementing Cycling Register with feedback function \(f(a_{1}a_{2}\cdots a_{n})=1\oplus a_{1}=\overline{a_{1}}\), * PSR: Pure Summing Register with feedback function \(f(a_{1}a_{2}\cdots a_{n})=a_{1}\oplus a_{2}\cdots\oplus a_{n}\), * CSR: Complementing Summing Register with feedback function \(f(a_{1}a_{2}\cdots a_{n})=1\oplus a_{1}\oplus a_{2}\cdots\oplus a_{n}\), and * PRR: Pure Run-length Register with feedback function \(f(a_{1}a_{2}\cdots a_{n})=a_{1}\oplus a_{2}\oplus a_{n}\), where \(\oplus\) is addition modulo 2. The first CCR-based shift rule was given by Huang [20], and it was noted to have a very good local 0-1 balance. Three simpler shift rules are given in [17], with one similar to a generic rule similar to PCR3 [21]. There are two known CCR based concatenation constructions [15, 16]. One sequence appears to be equivalent to the CCR2 shift rule from [17], however it has never been proved. The cool-lex concatenation constructions by Ruskey, Sawada, and Williams [25] have equivalent underlying PSR/CSR-based shift rules. This correspondence was not observed until considering larger alphabets in [28], though little insight to the correspondence is provided in the proof. DB sequence constructions based the PSR/CSR are also considered by Etzion and Lempel [10, 11]. The greedy prefer-same and prefer-opposite constructions have recently been found to have a corresponding PRR-based shift rule [26]; however, they have no known concatenation construction. The lexicographic composition algorithm by Fredricksen [13], which can be thought of as a concatenation algorithm, is also conjectured to correspond to a PRR-based shift rule. Preliminary findings (see Appendix C) indicate that the framework outlined in this paper can be further applied to all of these feedback functions. ### Background definitions and notation Let \(\mathbf{B}(n)\) denote the set of binary strings of length \(n\). Let \(\alpha=a_{1}a_{2}\cdots a_{n}\in\mathbf{B}(n)\). Let \(\overline{a_{i}}\) denote the complement of a bit \(a_{i}\). 
The notation \(\alpha^{t}\) denotes \(t\) copies of \(\alpha\) concatenated together. The _aperiodic prefix_ of \(\alpha\), denoted \(\mathrm{ap}(\alpha)\), is the shortest string \(\beta\) such that \(\alpha=\beta^{t}\) for some \(t\geq 1\); the _period_ of \(\alpha\) is \(|\beta|\). For example, if \(\alpha=01010101\) then \(\mathrm{ap}(\alpha)=01\) and \(\alpha\) has period equal to 2. If the period of \(\alpha\) is \(n\), then \(\alpha\) is said to be _aperiodic_; otherwise, it is said to be _periodic_. A _necklace class_ is an equivalence class of strings under rotation. Given a representative \(\alpha\) of a necklace class, let \([\alpha]\) denote the set of all strings in \(\alpha\)'s necklace class. For example, \([0001]=\{0001,0010,0100,1000\}\) and \([0101]=\{0101,010\}\). A _necklace_ is the lexicographically smallest representative of a necklace class. Let \(\mathbf{N}(n)\) denote the set of all binary necklaces of order \(n\). The six necklaces for \(n=4\) are: \(\mathbf{N}(4)=\{0000,0001,0011,0101,0111,1111\}\). The necklaces in \(\mathbf{N}(6)\) correspond to the node representatives for the trees in Figure 1. Other representatives of necklace equivalence classes will be presented in later sections; they represent a cycle partition of \(\mathbf{B}(n)\) induced by the PCR. The cycles correspond to nodes of the trees that will be defined in later sections. Given a tree \(\mathcal{T}\) of nodes represented by \(\{\alpha_{1},\alpha_{2},\ldots,\alpha_{t}\}\), let \[\mathbf{S}_{\mathcal{T}}=\{\beta\mid\beta\in[\alpha_{i}]\text{ for }1\leq i \leq t\}.\] For example, if \(n=4\) and \(\mathcal{T}\) contains the cycles \(\{0001,0101\}\) then \(\mathbf{S}_{\mathcal{T}}=\{0001,0010,0100,1000,0101,1010\}\). Given \(\mathbf{S}\subseteq\mathbf{B}(n)\), a _universal cycle_ for \(\mathbf{S}\) is a sequence of length \(|\mathbf{S}|\) that contains each string in \(\mathbf{S}\) as a substring (exactly once). For example, a universal cycle for \(\mathbf{S}_{\mathcal{T}}\) above is \(000101\). ### Cycle joining In this section we review how two universal cycles can be joined together to obtain a larger universal cycle. By repeating the process we obtain the cycle-joining trees presented in Figure 1. The corresponding shift rules for PCR1, PCR2, PCR3, and PCR4 are presented and generalized for subtrees (subsets of cycles). If \(\alpha=0a_{2}\cdots a_{n}\) and \(\hat{\alpha}=1a_{2}\cdots a_{n}\), then \(\alpha\) and \(\hat{\alpha}\) are said to be _conjugates_ of each other, and \((\alpha,\hat{\alpha})\) is called a _conjugate pair_. The following well-known result (see for instance Lemma 3 in [29]) based on conjugate pairs is the crux of the cycle-joining approach. Let \(\mathbf{S}_{1}\) and \(\mathbf{S}_{2}\) be disjoint subsets of \(\mathbf{B}_{n}\) such that \(0a_{2}\cdots a_{n}\in\mathbf{S}_{1}\) and \(1a_{2}\cdots a_{n}\in\mathbf{S}_{2}\). If \(U_{1}\) is a universal cycle for \(\mathbf{S}_{1}\) and \(U_{2}\) is a universal cycle for \(\mathbf{S}_{2}\), each with prefix \(a_{2}\cdots a_{n}\), then \(U=U_{1}U_{2}\) is a universal cycle for \(\mathbf{S}_{1}\cup\mathbf{S}_{2}\). #### The Grandparents: Cycle joining based on the PCR Shift-rule constructions of DB sequences are based on an underlying tree of cycles, which we call a _cycle-joining tree_, joined together via conjugate pairs. Perhaps the four simplest DB sequence constructions are the Grandparents based on the cycle-joining trees illustrated in Figure 1. 
Each tree in this figure has nodes (cycles) represented by necklace representatives \(\mathbf{N}(6)\). Formally, these four cycle-joining trees are defined as follows. * \(\mathbb{T}_{1}(n)\): rooted at \(1^{n}\) and the parent of every other node \(\alpha\in\mathbf{N}(n)\) is obtained by flipping the last 0. * \(\mathbb{T}_{2}(n)\): rooted at \(0^{n}\) and the parent of every other node \(\alpha\in\mathbf{N}(n)\) is obtained by flipping the first 1. * \(\mathbb{T}_{3}(n)\): rooted at \(0^{n}\) and the parent of every other node \(\alpha\in\mathbf{N}(n)\) is obtained by flipping the last 1. * \(\mathbb{T}_{4}(n)\): rooted at \(1^{n}\) and the parent of every other node \(\alpha\in\mathbf{N}(n)\) is obtained by flipping the first 0. Note that for \(\mathbb{T}_{3}(n)\) and \(\mathbb{T}_{4}(n)\), the parent of a node \(\alpha\) is obtained by first flipping the named bit and then rotating the string to its lexicographically least rotation to obtain a necklace. Each node \(\alpha\) and its parent \(\beta\) share a conjugate pair where the highlighted bit in \(\alpha\) is the first bit of one of the conjugates. For example, the nodes \(\alpha=001011\) and \(\beta=011011\) in the PCR2 tree from Figure 1 share conjugate pair \((110110,010110)\). Repeated application of Theorem 1 result in the upcoming four shift rules \(\mathrm{pcr}_{1},\mathrm{pcr}_{2},\mathrm{pcr}_{3},\mathrm{pcr}_{4}\) stated generally for any subtree \(\mathcal{T}\) of the corresponding \(\mathbb{T}_{1}(n)\), \(\mathbb{T}_{2}(n)\), \(\mathbb{T}_{3}(n)\), \(\mathbb{T}_{4}(n)\). Previously, these shift rules were stated for the entire trees in [17], and then for subtrees that included all nodes up to a given level [18], which puts a restriction on the minimum or maximum weight (number of 1s) of any length \(n\) substring. \begin{tabular}{|l|} \hline PCR1: Last 0 (**Granddaddy, Ford**) \\ \hline \(\gamma=a_{j}a_{j+1}\cdots a_{n}0a_{2}\cdots a_{j-1}=a_{j}a_{j+1}\cdots a_{n} 01^{j-2}\). \\ \(\mathrm{pcr}_{1}(\alpha)=\left\{\begin{array}{ll}\overline{a}_{1}&\text{ if $\gamma$ is a necklace and $a_{2}\cdots a_{n}\overline{a}_{1}\in\ \mathbf{S}_{\mathcal{T}}$;}\\ a_{1}&\text{ otherwise.}\end{array}\right.\) \\ \hline \end{tabular} \begin{tabular}{|l|} \hline PCR2: First 1 (**Grandmama**) \\ \hline \(\mathrm{\text{\text{\text{\text{\text{\text{\text{\text{\text{\text{\text{ \text{\text{\text{\ \(0^{n}\) corresponds to concatenating the aperiodic prefixes of each node in the corresponding cycle-joining tree as they are visited in pre-order. This traversal visits the necklaces as they appear in colex order, which is another known concatenation construction [7]. Unfortunately, this _magic_ does not carry over to the PCR3 and PCR4 trees, no matter how we order the children. However, by considering cycle representatives other than necklaces, and considering a more general in-order like traversal, we obtain a general result. The key to finding a concatenation construction for a given shift rule is to tweak the corresponding cycle-joining tree by: (i) determining the appropriate representative of each cycle, (ii) determining an ordering of the children, and (iii) determining how the tree is traversed. Our concatenation trees will be defined formally in Section 3 after we introduce bifurcated ordered trees in Section 2. The goal of this section is to provide some intuition to how we arrived at these trees, which are illustrated in Figure 5 for \(n=6\). For PCR1 and PCR2, the concatenation trees look very similar to their cycle-joining trees. 
For PCR3, observe that the representatives are obtained by rotating the initial prefix of 0s of a necklace to the suffix; a post-order traversal produces the corresponding DB sequence in Table 1. This traversal corresponds to visiting these representatives in reverse lexicographic order that is equivalent to the construction defined in [16]. The tree for PCR4 is non-trivial and proved to be the basis for discovering our more general result. Each representative is determined based on its parent, and the tree needs to differentiate "left-children" (blue dots) from "right-children" (red dots). The DB sequence for PCR4 is obtained by a somewhat unconventional traversal that recursively visits right-children, followed by the current node, followed by the left-children. For each traversal, _visiting_ a node involves outputting the aperiodic prefix of the given representative. ## 2 Bifurcated ordered trees Our new "concatenation-tree" approach to generating DB sequences relies on tree structures that mix together ordered trees and binary trees. First we review basic tree concepts. Then we introduce our notion of a bifurcated ordered tree together with a traversal called an RCL traversal. An _ordered tree_ is a rooted tree in which the children of each node are given a total order. For example, a node in an ordered tree with three children has a first child, a second child, and a third (last) child. In contrast, a _cardinal tree_ is a rooted tree in which the children of each node occupy specific positions. In particular, a _\(k\)-ary tree_ has \(k\) positions for the children of each node. For example, each child of a node in a \(3\)-ary tree is either a left-child, a middle child, or a right-child. Our type of tree is both ordinal and cardinal. While ordered trees have one "type" of child, our trees will have two types of children. We refer to these trees as _bifurcated ordered trees_ (_bots_), with the two types of children being _left-children_ and _right-children_. To illustrate bifurcated ordered trees, Figure 2 provides all such structures (i.e., all bots) with \(n=3\) nodes. This type of "ordinal-cardinal" tree seems quite natural2, and it is very likely to have been used in previous academic investigations. Nevertheless, the authors have not been able to find an exact match in the literature. In particular, \(2\)-tuplet trees use a different notion of a root, and correspond more closely to ordered forests of bots. A computer program3 to \begin{table} \begin{tabular}{c|c} \begin{tabular}{c} **Shift rule** \\ \end{tabular} & \begin{tabular}{c} **DB sequence for \(n=6\)** \\ \end{tabular} \\ \hline \(\mathrm{per}_{1}\) & \(0\ 00001\ 000011\ 000101\ 000111\ 001011\ 001101\ 00111\ 01011\ 010111\ 011111\ 1\) \\ \(\mathrm{per}_{2}\) & \(0\ 00001\ 001\ 000101\ 01\ 001101\ 000011\ 001011\ 000111\ 001111\ 00111111\ 01\) \\ \(\mathrm{per}_{3}\) & \(1\ 111110\ 111100\ 111000\ 110\ 1101000\ 10110000\ 101110\ 101100\ 101000\ 100000\ 1000000\ 0\) \\ \(\mathrm{per}_{4}\) & \(1\ 111110\ 110\ 100\ 1001010\ 111010\ 1010010\ 100010\ 1111000\ 1110000\ 1110000\ 11000000\ 0\) \\ \end{tabular} \end{table} Table 1: DB sequences resulting from the shift rules that correspond to the cycle-joining trees from Figure 1. 
enumerate all bots demonstrates that the total number for \(n\) from \(1\) to \(12\) are: \[1,2,7,30,143,728,3876,21318,120175,690690,4032015,23841480.\] When extended for larger \(n\), the sequence corresponds to all 23 entries for sequence A006013 in the Online Encyclopedia of Integer Sequence [2]; however, no relationship to such a tree is noted. ### Right-Current-Left (RCL) traversals The distinction between left-children and right-children in a bot allows for a very natural notion of an _in-order traversal_: visit the left-children from first to last, then the current node, then the right-children from first to last. During our work with concatenation trees (see Section 3) it will be more natural to use a modified traversal, in which the right-children are visited before the left-children. Formally, we recursively define a _Right-Current-Left (RCL) traversal_ of a bifurcated ordered tree starting from the root as follows: * visit all right-children of the current node from first to last; * visit the current node; * visit all left-children of the current node from first to last. Note that the resulting RCL order is not the same as a _reverse in-order traversal_ (i.e., an in-order traversal written in reverse), since the children of each type are visited in the usual order (i.e., first to last) rather than in reverse order (i.e., last to first). An example of an RCL traversal is shown in Figure 3. #### 2.1.1 Properties of RCL traversals Our main result relies on properties exhibited between successive nodes in an RCL traversal. We start with some relationships for a node \(x\) in a bot. * A _right-descendant_ of \(x\) is a node obtained by traversing down 0 or more right-children. * A _left-descendant_ of \(x\) is a node obtained by traversing down 0 or more left-children. * The _rightmost left-descendant_ of \(x\) is the node obtained by repeatedly traversing down the last left-child as long as one exists. * The _leftmost right-descendant_ of \(x\) is the node obtained by repeatedly traversing down the first right-child as long as one exists. Note that a node is its own leftmost right-descendent if it has no right-children. Similarly, a node is its own rightmost left-descendent if it has no left-children. The following remark details the three cases for when two nodes from a bot appear consecutively in RCL order; they are illustrated in Figure 4. Figure 3: A bot with its \(n{=}12\) nodes labeled as they appear in RCL order. Figure 2: All eight bifurcated ordered trees (bots) with \(n{=}3\) nodes. Each left-child descends from a blue \(\bullet\), while each right-child descends from a red \(\bullet\). **Remark 2**.: If a bifurcated ordered tree has RCL traversal \(\ldots,x,y,\ldots\), then one of the following three cases holds: 1. \(x\) is an ancestor of \(y\): \(y\) is the leftmost right-descendant of \(x\)'s first left-child; 2. \(x\) is a descendant of \(y\): \(x\) is the rightmost left-descendent of \(y\)'s last right-child; 3. \(x\) and \(y\) are descendants of a common ancestor \(a\) (other than \(x\) and \(y\)): \(x\) is the rightmost left-descendant and \(y\) is the leftmost right-descendant of consecutive left-children or right-children of \(a\). Moreover, if the traversal sequence is cyclic (i.e., \(x\) is last in the ordering and \(y\) is first), there are three additional cases: 1. \(x\) is an ancestor of \(y\): \(x\) is the root and \(y\) is its leftmost right-descendant; 2. \(x\) is a descendant of \(y\): \(y\) is the root and \(x\) is its rightmost left-descendant; 3. 
\(x\) and \(y\) are descendants of a common ancestor \(a\) (other than \(x\) and \(y\)): \(x\) is the rightmost left-descendant of the root, and \(y\) is the leftmost right-descendant of the root. The three cases provided for cyclic sequences are stated in a way to convince the reader that all options are considered; however, they can be collapsed to the single case (f) if we allow the common ancestor \(a\) to be \(x\) or \(y\). ## 3 Concatenation trees In this section we describe how to convert a PCR-based cycle joining tree into a labeled bot we call a _concatenation tree_. The parent-child relationship of the trees are the same; however, the node labels (representatives) may change and the order of the children must be defined in addition to whether they are left-children or right-children. One of the more challenging issues deals with how to place the children of periodic representatives. Every node in our concatenation tree is labeled with a string \(\alpha=a_{1}a_{2}\cdots a_{n}\) representing a unique necklace class. Each node is assigned a _change index_\(c\), where \(1\leq c\leq n\). If \(\alpha\) is not the root, its change index indicates the unique index where \(\alpha\) differs from its parent. Let \(\operatorname{flip}(\alpha,j)\) denote \(a_{1}\cdots a_{j-1}\overline{a_{j}}a_{j+1}\cdots a_{n}\). Then \(\operatorname{flip}(\alpha,c)\) is the parent of \(\alpha\) (when \(\alpha\) is not the root). Since it is possible for a node to be joined to its parent in multiple ways when the parent is periodic, it is critical to put a limit on which indices can become a change index. If \(\alpha=(a_{1}\cdots a_{p})^{q}\) has period \(p\) with change index \(c\) where \(kp<c\leq kp+p\) for some integer \(0\leq k<q\), then we say the _acceptable range_ of \(\alpha\) is \(kp+1,\ldots,kp+p\). Note if \(\alpha\) is aperiodic, its acceptable range is \(1,2,\ldots,n\). Formally, given a PCR-based cycle-joining tree \(\mathbb{T}\) rooted at \(r\), a _concatenation tree_\(\mathcal{T}\) is a labeled bot defined as follows: Figure 4: Illustrating the six cases outlined in Remark 2 for when \(y\) follows \(x\) in an RCL traversal. The final three cases hold when the traversal sequence is considered to be cyclic (i.e., \(x\) comes last and \(y\) comes first). In these images, \(\ell_{i}\) and \(r_{i}\) refer to the \(i\)th left and right-child of their parent, respectively, and \(r_{m}\) refers to the last right-child of its parent. Dashed lines indicate leftmost right-descendants (red) and rightmost left-descendants (blue). * The root node is \(r\) and it can be assigned an arbitrary change index \(c\). * The left-children and right-children of each node \(\alpha=a_{1}a_{2}\cdots a_{n}\) with change index \(c\) are defined recursively starting from the root as follows: * The _left-children_ of \(\alpha\) are the nodes with representatives \(\mathrm{flip}(\alpha,j)\) for each \(j=1,2,\ldots,c-1\) where \(j\) is in the acceptable range and \(\mathrm{flip}(\alpha,j)\) belongs to a necklace class of some child of \(\alpha\) in \(\mathbb{T}\). * The _right-children_ of \(\alpha\) are the nodes with representatives \(\mathrm{flip}(\alpha,j)\) for each \(j=c+1\ldots,n\) where \(j\) is in the acceptable range and \(\mathrm{flip}(\alpha,j)\) belongs to a necklace class of some child of \(\alpha\) in \(\mathbb{T}\). The change index for each child is the index \(j\) where it differs from its parent \(\alpha\). 
**Special case:* * The root \(r\) is the only node where flipping the bit at its change index can produce a child. If such a child exists, then we can consider it to be either the _first_ right-child or the _last_ left-child. This flexibility is required to preserve our earlier observations regarding pre-order and post-order traversals. The concatenation trees for the four cycle-joining trees for PCR1, PCR2, PCR3, and PCR4 from Figure 1 are given in Figure 5. The small gray boxes on top of each node indicate the node's change index. The only concatenation tree with both left-children and right-children is the one corresponding to PCR4; a further example for \(n{=}8\) is provided in Appendix A. In fact, it was the discovery of this tree that lead us to our more general result and the definition of bifurcated ordered trees. The node representatives for PCR1 and PCR2 remain the necklaces; for PCR3, the representatives correspond to necklaces with their initial prefix of \(0\)s rotated to the suffix; for PCR4, we currently can only understand the representatives recursively based on a node's ancestors. Observe that the child of each of the four root nodes has the same change index as the root. For PCR1 and PCR3, the child of the root is considered to be a right-child, while for PCR2 and PCR4, the child is chosen to be a left-child. The rationale for this choice is based on the upcoming definition of an \(\mathrm{RCL}\) sequence; they match the previously known concatenation constructions for the Granddaddy[14], the Grandmamam[7], and the Granny [16] (see Table 1 for \(n=6\)). Let \(\mathrm{RCL}(\mathcal{T})\) denote the _RCL sequence_ produced by traversing the concatenation tree \(\mathcal{T}\) in RCL order, outputting the aperiodic prefix of the node representative when it is visited. For example, the RCL sequences for each tree in Figure 5 are given in Table 1. It is important how we handled the periodic nodes in our concatenation trees, since our goal is to demonstrate that \(\mathrm{RCL}(\mathcal{T})\) produces a universal cycle. For example consider three necklaces (a) 11001100110, (b) 110011001100, and (c) 100011001100 where \(n=12\). They can be joined by flipping the second last 0 in (b) and flipping the first 0 in (c); (a) is the parent of (b) and (b) is the parent of (c). A bot for this cycle-joining tree is shown on the right. It is not a concatenation tree since the change index for the bottom node is outside the acceptable range of its periodic parent. Outputting the aperiodic prefixes of the nodes when visited in RCL order produces \(1100\ 100110011001100\ 11001110\). Since the substring 110011001100 appears twice, it is not a universal cycle. We are ready to formally present our main results. Let \(\mathcal{T}\) be a concatenation tree derived from a PCR-based cycle-joining tree with corresponding shift rule \(F\) whose nodes represent all necklace classes of order \(n\). Then \(\mathrm{RCL}(\mathcal{T})\) is a DB sequence with shift rule \(F\). More generally, we can consider cycle joining an arbitrary number of necklace classes to obtain a universal cycle for certain subsets of binary strings. Let \(\mathcal{T}\) be a concatenation tree derived from a PCR-based cycle-joining tree for an underlying set \(\mathbf{S}\) with shift rule \(F_{\mathbf{S}}\). Then \(\mathrm{RCL}(\mathcal{T})\) is a universal cycle for \(\mathbf{S}\) with shift rule \(F_{\mathbf{S}}\). We prove Theorem 4 (which is also a proof for Theorem 3) in Section 3.2. 
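To make the definition of \(\mathrm{RCL}(\mathcal{T})\) concrete, here is a minimal sketch added for this write-up (not the authors' code). The helper names and the small hand-built tree, which follows the PCR1 "last 0" parent rule for \(n=4\), are our own reconstruction for illustration; the de Bruijn check at the end is performed by brute force and does not rely on that reconstruction being the exact tree from Figure 5.

```python
# A sketch of the RCL(T) construction: traverse a concatenation tree in
# Right-Current-Left order and concatenate the aperiodic prefix of each node.
from dataclasses import dataclass, field

@dataclass
class Node:
    label: str                                   # cycle representative
    left: list = field(default_factory=list)     # left-children, first to last
    right: list = field(default_factory=list)    # right-children, first to last

def ap(word: str) -> str:
    """Aperiodic prefix: the shortest beta with word = beta^t."""
    n = len(word)
    for p in range(1, n + 1):
        if n % p == 0 and word == word[:p] * (n // p):
            return word[:p]
    return word

def rcl(node: Node) -> str:
    """RCL traversal: right-children, then the current node, then left-children."""
    out = "".join(rcl(child) for child in node.right)
    out += ap(node.label)
    out += "".join(rcl(child) for child in node.left)
    return out

def is_de_bruijn(seq: str, n: int) -> bool:
    """Every length-n binary string occurs exactly once in the cyclic sequence."""
    subs = {(seq + seq[: n - 1])[i : i + n] for i in range(len(seq))}
    return len(seq) == 2**n and len(subs) == 2**n

# Hand-built PCR1-style tree for n = 4 (a chain of right-children; cf. the Granddaddy).
tree = Node("1111", right=[
    Node("0111", right=[
        Node("0011", right=[Node("0001", right=[Node("0000")])]),
        Node("0101"),
    ])
])

seq = rcl(tree)
print(seq, is_de_bruijn(seq, 4))   # 0000100110101111 True
```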
Let \(\mathcal{T}_{1}\), \(\mathcal{T}_{2}\), \(\mathcal{T}_{3}\), and \(\mathcal{T}_{4}\) denote the concatenation trees derived from the cycle-joining trees \(\mathbb{T}_{1}(n)\), \(\mathbb{T}_{2}(n)\), \(\mathbb{T}_{3}(n)\), and \(\mathbb{T}_{4}(n)\), respectively. Then the following are immediate consequences to Theorem 3. \(\mathrm{RCL}(\mathcal{T}_{1})\) is a DB sequence with shift rule \(\mathrm{pcr}_{1}\). \(\mathrm{RCL}(\mathcal{T}_{2})\) is a DB sequence with shift rule \(\mathrm{pcr}_{2}\). \(\mathrm{RCL}(\mathcal{T}_{3})\) is a DB sequence with shift rule \(\mathrm{pcr}_{3}\). \(\mathrm{RCL}(\mathcal{T}_{4})\) is a DB sequence with shift rule \(\mathrm{pcr}_{4}\). Note that the concatenation sequence \(\mathrm{RCL}(\mathcal{T}_{1})\) is identical to the one for the Granddaddy [14]; the concatenation sequence \(\mathrm{RCL}(\mathcal{T}_{2})\) is identical to the one for the Grandmama [7]; the concatenation sequence \(\mathrm{RCL}(\mathcal{T}_{3})\) is identical to the one conjectured for the Granny [16]; the concatenation sequence \(\mathrm{RCL}(\mathcal{T}_{4})\) is the first known for the Grandpa. These results _demystify_ our grandparents by providing a simple and understandable correspondence between the constructions and their corresponding shift rules. By applying the more general Theorem 4: 1. It is easy to derive concatenation constructions for universal cycles for binary strings with bounded weight that can easily be implemented to run in \(O(1)\)-amortized time per bit by adapting the algorithms detailed in Section 4. Figure 5: Concatenation trees for \(n=6\) based on the four parent rules PCR1, PCR2, PCR3, PCR4. These bifurcated ordered trees (dots) provide additional structure to the unordered cycle-joining trees from Figure 1. This structure provides the missing information for fully understanding the corresponding concatenation constructions. The gray box above each node indicates its change index. 2. By applying the results from [4], the concatenation construction for PCR3 can be adapted by _cutting_ cycles from the cycle-joining tree to efficiently construct cut-down DB sequences in \(O(1)\)-amortized time per bit using \(O(n)\) space [27]. Preliminary results indicate this framework can be used to understand other known concatenation constructions and to find new, previously unknown, concatenation constructions (see Appendix C). ### Properties The following properties are applied in our proof in the next subsection. Clearly, each non-root node \(\alpha\) differs in exactly one bit position as its parent, and that position is \(\alpha\)'s change index \(c\). Consider a concatenation tree \(\mathcal{T}\) containing a node \(\alpha=a_{1}\cdots a_{n}=\beta_{2}a_{c}\beta_{1}\) with change index \(c\) along with a node \(\gamma\) that has change index \(c_{\gamma}\). 1. If \(\gamma\) is a right-descendant of \(\alpha\), then \(\gamma\) has prefix \(\beta_{2}\) and \(c_{\gamma}\geq c\). 2. If \(\alpha\) is a left-descendent of \(\gamma\), then \(\gamma\) has prefix \(\beta_{2}\) and \(c_{\gamma}\geq c\). 3. If \(\gamma\) is a left-descendent of \(\alpha\), then \(\gamma\) has suffix \(\beta_{1}\) and \(c_{\gamma}\leq c\). 4. If \(\alpha\) is a right-descendant of \(\gamma\), then \(\gamma\) has suffix \(\beta_{1}\) and \(c_{\gamma}\leq c\). ### Proof of Theorem 4 Let \(\mathcal{T}\)be a concatenation tree and let \(\alpha_{1},\alpha_{2},\ldots,\alpha_{t}\) denote the nodes of \(\mathcal{T}\) visited in RCL order. 
Let \(\alpha_{j}\) denote an arbitrary leaf of \(\mathcal{T}\). For each \(1\leq i\leq t\), let \(c_{i}\) denote the change index of \(\alpha_{i}\). Our proof of Theorem 4 inductively constructs the desired tree by adding one leaf at a time while maintaining the universal cycle property and shift rule. In the case that \(t=1\), the result is trivial. Assume that any concatenation tree with less than \(t\) nodes satisfies our hypothesis. Let \(\mathcal{T}\) be a concatenation tree with \(t>1\) nodes and let \(\mathcal{T}^{\prime}\) denote the tree obtained by removing a leaf \(\alpha_{j}\) from \(\mathcal{T}\). Let \(U_{1}=\mathrm{ap}(\alpha_{j+1})\cdots\mathrm{ap}(\alpha_{t})\,\mathrm{ap}( \alpha_{1})\cdots\mathrm{ap}(\alpha_{j-1})\), i.e., a rotation of \(\mathrm{RCL}(\mathcal{T}^{\prime})\). As per our inductive assumption, \(U_{1}\) is a universal cycle for \(\mathbf{S}^{\prime}=\mathbf{S}-[\alpha_{j}]\) with shift rule \(F_{\mathbf{S}^{\prime}}\). Let \(U_{2}=\mathrm{ap}(\alpha_{j})\), which is a universal cycle for \([\alpha_{j}]\). Let \(\alpha_{j}=a_{1}a_{2}\cdots a_{n}=\beta_{2}a_{c_{j}}\beta_{1}\). \(\rhd\) Claim 10. \(U_{1}\) (considered cyclically) has prefix \(\beta_{2}\) and suffix \(\beta_{1}\). Since the parent of \(\alpha_{j}\) is \(\beta_{2}\overline{a_{c_{j}}}\beta_{1}\), the universal cycle \(U_{1}\) must contain the rotation \(\overline{a_{c_{j}}}\beta_{1}\beta_{2}\). Clearly, \(U_{2}\) contains \(a_{c_{j}}\beta_{1}\beta_{2}\). Thus, \(U_{1}\) and \(U_{2}\) can be joined by Theorem 1 via the conjugate pair \((\overline{a_{c_{j}}}\beta_{1}\beta_{2},a_{c_{j}}\beta_{1}\beta_{2})\) and the resulting cycle \(U\) is a universal cycle for \(\mathbf{S}\). Moreover, since \(\alpha_{j}\) is joined via the conjugate pair from the underlying cycle-joining tree, it will have shift rule \(F_{\mathbf{S}}\). It remains to prove Claim 10. #### Proof of Claim 10 We begin by considering properties of the nodes immediately before and after \(\alpha_{j}\); they are straightforward consequences of Remark 2 and Remark 9. Assume index arithmetic is taken modulo \(t\), i.e., \(\alpha_{0}=\alpha_{t}\) and \(\alpha_{t+1}=\alpha_{1}\). \(\rhd\) Claim 11. \(\alpha_{j+1}\) has prefix \(\beta_{2}\). Let \(x=\alpha_{j}\) and \(y=\alpha_{j+1}\). We consider the six cases (a-f) from Remark 2 using notation from Figure 4. Since \(\alpha_{j}\) is a leaf, cases (a) and (d) do not apply. (b) From (P2), \(r_{m}\) has prefix \(\beta_{2}\). Since the change index of \(r_{m}\) is less than or equal to \(c_{j}\) and \(r_{m}\) only differs from its parent \(y\) at its change index, \(y\) must also have the prefix \(\beta_{2}\). (c) From (P2), \(\beta_{2}\) is a prefix of \(\ell_{i}\). Since the change index of \(\ell_{i}\) is strictly less than the change index of \(\ell_{i+1}\) and the two nodes differ only at those two indices (this holds whether these nodes are consecutive left- or right-children), it follows that \(\beta_{2}\) is a prefix of \(\ell_{i+1}\) as well. Finally, since \(y\) can only differ from \(\ell_{i+1}\) in indices between the change index of \(\ell_{i+1}\) and \(c_{j+1}\), it must also have the prefix \(\beta_{2}\). (e) Follows immediately from (P2). (f) Let \(\alpha_{r}\) be the root of \(\mathcal{T}\). From (P2), \(\alpha_{r}\) has prefix \(\beta_{2}\) and \(c_{j}<c_{r}\). From (P1), \(c_{r}<c_{j+1}\) and \(y\) must also have prefix \(\beta_{2}\). \(\rhd\) Claim 12. \(\alpha_{j-1}\) has suffix \(\beta_{1}\). Proof.: Let \(x=\alpha_{j-1}\) and \(y=\alpha_{j}\). 
We consider the six cases (a-f) from Remark 2 using notation from Figure 4. Since \(\alpha_{j}\) is a leaf, cases (b) and (e) do not apply. (a) From (P4), \(\ell_{1}\) has suffix \(\beta_{1}\). Since the change index of \(\ell_{1}\) is less than or equal to \(c_{j}\) and \(\ell_{1}\) only differs from its parent \(x\) at its change index, \(x\) must also have the suffix \(\beta_{1}\). (c) From (P4), \(\beta_{1}\) is a suffix of \(\ell_{i+1}\). Since the change index of \(\ell_{i}\) is strictly less than the change index of \(\ell_{i+1}\) and these two nodes differ only at those two indices (this holds whether these nodes are consecutive left- or consecutive right-children), it follows that \(\beta_{1}\) is a suffix of \(\ell_{i}\) as well. Finally, since \(x\) can only differ from \(\ell_{i}\) in indices between \(c_{j-1}\) and the change index of \(\ell_{i}\), it must also have the suffix \(\beta_{1}\). (d) Follows immediately from (P4). (f) Let \(\alpha_{r}\) be the root of \(\mathcal{T}\). From (P4), \(\alpha_{r}\) has suffix \(\beta_{1}\) and \(c_{j-1}<c_{r}\). From (P3), \(c_{r}<c_{j}\) and \(x\) must also have suffix \(\beta_{1}\). If \(\alpha_{j-1}\) and \(\alpha_{j+1}\) are aperiodic, then by Claim 11 and Claim 12 we are done. If \(t=2\), then we are also done since \(U_{1}\) is considered cyclically. It remains to consider the cases where either \(\alpha_{j-1}\) or \(\alpha_{j+1}\) is periodic. These cases make clearer why an acceptable range is needed. **Case: \(\alpha_{j+1}\) is periodic** Suppose \(\alpha_{j+1}\) has period \(p\) and acceptable range \(kp+1,\ldots,kp+p\). To handle this case, we demonstrate the following: (i) \(c_{j}\leq kp+p\), and (ii) \(ap(\alpha_{j+1})^{k+1}\) is a prefix of \(U_{1}\). The first point implies that \(\beta_{2}\) is a prefix of \(ap(\alpha_{j+1})^{k+1}\) since \(\beta_{2}\) is a prefix of \(\alpha_{j+1}\) from Claim 11. This, in combination with the second point, implies \(\beta_{2}\) is a prefix of \(U_{1}\). Proof of (i).: Since \(\alpha_{j}\) is a leaf, we step through cases (b), (c), (e), and (f) from Remark 2 following notation from Figure 4 where \(x=\alpha_{j}\) and \(y=\alpha_{j+1}\). (b) The change index for \(r_{m}\) must be less than or equal to \(kp+p\), and because \(\alpha_{j}\) is a left descendant of \(r_{m}\), \(c_{j}\) must be less than or equal to the change index of \(r_{m}\). Thus, \(c_{j}\leq kp+p\). (c) \(c_{j}\) is less than or equal to the change index of \(\ell_{i}\), which is less than the change index of \(\ell_{i+1}\), which is less than or equal to \(c_{j+1}\). Thus, \(c_{j}<c_{j+1}\leq kp+p\). (e) \(\alpha_{j}\) is a left-descendant of \(\alpha_{j+1}\) so clearly \(c_{j}<c_{j+1}\leq kp+p\). (f) \(c_{j}\) is less than the change index of the root, which is less than or equal to \(c_{j+1}\). Thus, \(c_{j}<c_{j+1}\leq kp+p\). Proof of (ii).: We start by proving a general claim for consecutive nodes in the RCL traversal of \(\mathcal{T}\) and use that to prove a stronger claim. For convenience, consider the notation used in acceptable ranges to be independent of earlier definitions. \(\rhd\) Claim 13. If \(\alpha_{i}\) is periodic with period \(p\) and acceptable range \(kp+1,\ldots,kp+p\), then \(ap(\alpha_{i})^{k}\) is a prefix of \(\alpha_{i+1}\). Proof.: If \(\alpha_{i}\) is not an ancestor of \(\alpha_{i+1}\), the inequality \(kp<c_{i}\) and Claim 11 together imply \(ap(\alpha_{i})^{k}\) is a prefix of \(\alpha_{i+1}\). 
It remains to consider cases (a) and (d) from Remark 2 where \(x=\alpha_{i}\) is an ancestor of \(y=\alpha_{i+1}\). For case (a), \(y\) is the leftmost right-descendent of \(x\)'s first left-child \(\ell_{1}\). Since \(x\) is periodic, the change index of \(\ell_{1}\) is in \(\alpha_{i}\)'s acceptable range; it is greater than \(kp\). \(y\) is a right descendant of \(\ell_{1}\) and thus \(c_{i+1}>kp\), which means \(y\) differs from \(\ell_{1}\) only in indices greater than \(kp\). For (d) clearly \(y\) differs only in indices greater than or equal to \(c_{i}\), which means \(c_{i+1}>kp\). Thus, for each case, \(ap(\alpha_{i})^{k}\) is a prefix of \(\alpha_{i+1}\). \(\rhd\) Claim 14. If \(\alpha_{i}\) is periodic with period \(p\) and acceptable range \(kp+1,\ldots,kp+p\), then \(ap(\alpha_{i})^{k+1}\) is a prefix of \(ap(\alpha_{i})\cdots ap(\alpha_{t})ap(\alpha_{1})\cdots ap(\alpha_{i-1})\), which is a rotation of \(\mathrm{RCL}(\mathcal{T})\), considered cyclically. Proof.: Note that \(|ap(\alpha_{i})^{k+1}|\leq n\). The proof is by induction on the number of nodes \(t\). If \(t=1\), the result is trivial. Suppose the claim holds for any tree with less than \(t>1\) nodes. Let \(\mathcal{T}\) have \(t\) nodes and let \(\alpha_{i}\) be a leaf node of \(\mathcal{T}\). If there are no periodic nodes, we are done. Otherwise, we first consider \(\alpha_{i}\), then all other periodic nodes in \(\mathcal{T}\). Suppose \(\alpha_{i}\) is periodic with period \(p\) and acceptable range \(kp+1,\ldots,kp+p\). From Claim 13, \(ap(\alpha_{i})^{k}\) is a prefix of \(\alpha_{i+1}\). If \(\alpha_{i+1}\) is aperiodic, then we are done. Suppose, then, that \(\alpha_{i+1}\) is periodic with period \(p^{\prime}\) and acceptable range \(k^{\prime}p^{\prime}+1,\ldots,k^{\prime}p^{\prime}+p^{\prime}\). Let \(\mathcal{T}^{\prime}\) be the tree resulting from \(\mathcal{T}\) when \(\alpha_{i}\) is removed. It follows from (i) that \(kp<c_{i}\leq k^{\prime}p^{\prime}+p^{\prime}\), which implies \(ap(\alpha_{i})^{k}\) is a prefix of \(ap(\alpha_{i+1})^{k^{\prime}+1}\). Additionally, since \(\mathcal{T}^{\prime}\) has less than \(t\) nodes and \(\alpha_{i+1}\) is periodic, \(ap(\alpha_{i+1})^{k^{\prime}+1}\) is a prefix of \(ap(\alpha_{i+1})\cdots ap(\alpha_{t})ap(\alpha_{1})\cdots ap(\alpha_{i-1})\) by our inductive assumption. Therefore, \(ap(\alpha_{i})^{k+1}\) is a prefix of \(ap(\alpha_{i})ap(\alpha_{i+1})\cdots ap(\alpha_{t})ap(\alpha_{1})\cdots ap( \alpha_{i-1})\). Now consider \(\alpha_{i-1}\). If it is aperiodic, then by induction, the claim clearly holds for all periodic nodes in \(\mathcal{T}^{\prime}\). Thus, assume \(\alpha_{i-1}\) is periodic. By showing that \(ap(\alpha_{i-1})ap(\alpha_{i})\cdots ap(\alpha_{t})ap(\alpha_{1})\cdots ap( \alpha_{i-2})\) has the desired prefix and then repeating the same arguments, the claim will follow for every other periodic node in \(\mathcal{T}^{\prime}\). Let \(\alpha_{i-1}\) have period \(p^{\prime\prime}\) and acceptable range \(k^{\prime\prime}p^{\prime\prime}+1,\ldots,k^{\prime\prime}p^{\prime\prime}+p^{\prime\prime}\). If \(\alpha_{i}\) is aperiodic, Claim 13 implies that \(ap(\alpha_{i-1})^{k^{\prime\prime}}\) is a prefix of \(\alpha_{i}=\mathrm{ap}(\alpha_{i})\) and thus the claim holds for \(\alpha_{i-1}\). If \(\alpha_{i}\) is periodic with period \(p\) and acceptable range \(kp+1,\ldots,kp+p\), we already demonstrated that \(ap(\alpha_{i})^{k+1}\) is a prefix of \(ap(\alpha_{i})\cdots ap(\alpha_{t})ap(\alpha_{1})\cdots ap(\alpha_{i-1})\). 
From Claim 13, \(ap(\alpha_{i-1})^{k^{\prime\prime}}\) is a prefix of \(\alpha_{i}\). Note that (i) and its proof handle cases (b), (c), (e), and (f) from Remark 2 implying that \(c_{i-1}<kp+p\) for these cases. Since \(\alpha_{i-1}\) is not necessarily a leaf, we must also consider (a) and (d). In both cases, clearly \(k^{\prime\prime}p^{\prime\prime}<c_{i}\). Either way, \(k^{\prime\prime}p^{\prime\prime}<kp+p\), which means \(ap(\alpha_{i-1})^{k^{\prime\prime}}\) is a prefix of \(ap(\alpha_{i})^{k+1}\). Thus, \(ap(\alpha_{i-1})^{k^{\prime\prime}+1}\) is a prefix of \(ap(\alpha_{i-1})ap(\alpha_{i})\cdots ap(\alpha_{t})ap(\alpha_{1})\cdots ap( \alpha_{i-2})\). Applying Claim 14 to \(\alpha_{j+1}\) gives the desired result. **Case: \(\alpha_{j-1}\) is periodic**: The proof mirrors the case for \(\alpha_{j+1}\). Suppose \(\alpha_{j-1}\) has period \(p\) and acceptable range \(kp+1,\ldots,kp+p\). To handle this case, we demonstrate the following: (i) \(c_{j}>kp\), and (ii) \(ap(\alpha_{j-1})^{q-k}\) is a suffix of \(U_{1}\), where \(q=n/p\). The first point implies that \(\beta_{1}\) is a suffix of \(ap(\alpha_{j-1})^{q-k}\) since \(\beta_{1}\) is a suffix of \(\alpha_{j-1}\) from Claim 12. This, in combination with the second point, implies \(\beta_{1}\) is a suffix of \(U_{1}\). Proof of (i).: Since \(\alpha_{j}\) is a leaf, we step through cases (a), (c), (d), and (f) from Remark 2 following notation from Figure 4 where \(x=\alpha_{j-1}\) and \(y=\alpha_{j}\). (a) The change index for \(\ell_{1}\) must be greater than \(kp\), and because \(\alpha_{j}\) is a right descendant of \(\ell_{1}\), \(c_{j}\) must be greater than or equal to the change index of \(\ell_{1}\). Thus, \(c_{j}>kp\). (c) \(c_{j-1}\) is less than or equal to the change index of \(\ell_{i}\), which is less than the change index of \(\ell_{i+1}\), which is less than or equal to \(c_{j}\). Thus, \(kp<c_{j-1}<c_{j}\). (d) \(\alpha_{j}\) is a right-descendant of \(\alpha_{j-1}\) so clearly \(kp<c_{j-1}<c_{j}\). (f) \(c_{j-1}\) is less than or equal to the change index of the root, which is less than \(c_{j}\). Thus, \(kp<c_{j-1}<c_{j}\). Proof of (ii).: As with the prefix section, we start by proving a general claim for consecutive nodes in the RCL traversal of \(\mathcal{T}\) and use that to prove a stronger claim. For convenience, consider the notation used in acceptable ranges to be independent of earlier definitions. \(\rhd\) Claim 15. If \(\alpha_{i}\) is periodic with period \(p\) and acceptable range \(kp+1,\ldots,kp+p\), then \(ap(\alpha_{i})^{q-k-1}\), where \(q=n/p\), is a suffix of \(\alpha_{i-1}\). Proof.: If \(\alpha_{i}\) is not a descendant of \(\alpha_{i-1}\), the inequality \(c_{i}\leq kp+p\) and Claim 12 together imply \(ap(\alpha_{i})^{q-k-1}\) is a suffix of \(\alpha_{i-1}\). It remains to consider cases (b) and (e) from Remark 2 where \(y=\alpha_{i}\) is an ancestor of \(x=\alpha_{i-1}\). For case (b), \(x\) is the rightmost left-descendent of \(y\)'s last right-child \(r_{m}\). Since \(y\) is periodic, the change index of \(r_{m}\) is in \(\alpha_{i}\)'s acceptable range; it is less than or equal to \(kp+p\). \(x\) is a left descendant of \(r_{m}\) and thus \(c_{i-1}\leq kp+p\), which means \(x\) differs from \(r_{m}\) only in indices less than or equal to \(kp+p\). For (e) clearly \(x\) differs only in indices less than or equal to \(c_{i}\), which means \(c_{i-1}\leq kp+p\). Thus, for each case, \(ap(\alpha_{i})^{q-k-1}\) is a suffix of \(\alpha_{i-1}\). 
\(\rhd\) Claim 16. If \(\alpha_{i}\) is periodic with period \(p\) and acceptable range \(kp+1,\ldots,kp+p\), then \(ap(\alpha_{i})^{n/p-k}\) is a suffix of \(ap(\alpha_{i+1})\cdots ap(\alpha_{t})ap(\alpha_{1})\cdots ap(\alpha_{i})\), which is a rotation of \(\mathrm{RCL}(\mathcal{T})\), considered cyclically. Proof.: Let \(q=n/p\). Note that \(|ap(\alpha_{i})^{q-k}|\leq n\). The proof is by induction on \(t\). If \(t=1\), the result is trivial. Suppose the claim holds for any tree with less than \(t>1\) nodes. Let \(\mathcal{T}\) have \(t\) nodes and let \(\alpha_{i}\) be a leaf node of \(\mathcal{T}\). If there are no periodic nodes, we are done. Otherwise, we first consider \(\alpha_{i}\), then all other periodic nodes in \(\mathcal{T}\). Suppose \(\alpha_{i}\) is periodic with period \(p\) and acceptable range \(kp+1,\ldots,kp+p\). From Claim 15, \(ap(\alpha_{i})^{q-k-1}\) is a suffix of \(\alpha_{i-1}\). If \(\alpha_{i-1}\) is aperiodic, then we are done. Suppose, then, that \(\alpha_{i-1}\) is periodic with period \(p^{\prime}\) and acceptable range \(k^{\prime}p^{\prime}+1,\ldots,k^{\prime}p^{\prime}+p^{\prime}\). Let \(\mathcal{T}^{\prime}\) be the tree resulting from \(\mathcal{T}\) when \(\alpha_{i}\) is removed. It follows from (i) that \(k^{\prime}p^{\prime}<c_{i}\leq kp+p\), or \(n-kp-p<n-k^{\prime}p^{\prime}\), which implies \(ap(\alpha_{i})^{q-k-1}\) is a suffix of \(ap(\alpha_{i-1})^{q^{\prime}-k^{\prime}}\), where \(q^{\prime}=n/p^{\prime}\). Additionally, since \(\mathcal{T}^{\prime}\) has less than \(t\) nodes and \(\alpha_{i-1}\) is periodic, \(ap(\alpha_{i-1})^{q^{\prime}-k^{\prime}}\) is a suffix of \(ap(\alpha_{i+1})\cdots ap(\alpha_{t})ap(\alpha_{1})\cdots ap(\alpha_{i-1})\) by our inductive assumption. Therefore, \(ap(\alpha_{i})^{q-k}\) is a suffix of \(ap(\alpha_{i+1})\cdots ap(\alpha_{t})ap(\alpha_{1})\cdots ap(\alpha_{i-1})ap( \alpha_{i})\). Now consider \(\alpha_{i+1}\). If it is aperiodic, then by induction the claim clearly holds for all periodic nodes in \(\mathcal{T}^{\prime}\). Thus, assume \(\alpha_{i+1}\) is periodic. By showing \(ap(\alpha_{i+2})\cdots ap(\alpha_{t})ap(\alpha_{1})\cdots ap(\alpha_{i})ap( \alpha_{i+1})\) has the desired suffix and then repeating the same arguments, the claim will follow for every other periodic node in \(\mathcal{T}^{\prime}\). Let \(\alpha_{i+1}\) have period \(p^{\prime\prime}\) and acceptable range \(k^{\prime\prime}p^{\prime\prime}+1,\ldots,k^{\prime\prime}p^{\prime\prime}+p^{\prime\prime}\). If \(\alpha_{i}\) is aperiodic, Claim 15 implies that \(ap(\alpha_{i+1})^{q^{\prime\prime}-k^{\prime\prime}-1}\) is a suffix of \(\alpha_{i}=\mathrm{ap}(\alpha_{i})\) and thus the claim holds for \(\alpha_{i+1}\). If \(\alpha_{i}\) is periodic with period \(p\) and acceptable range \(kp+1,\ldots,kp+p\), we already demonstrated that \(ap(\alpha_{i})^{q-k}\) is a suffix of \(ap(\alpha_{i+1})\cdots ap(\alpha_{t})ap(\alpha_{1})\cdots ap(\alpha_{i})\). From Claim 15, \(ap(\alpha_{i+1})^{q^{\prime\prime}-k^{\prime\prime}-1}\) is a suffix of \(\alpha_{i}\). Note that (i) and its proof handle cases (a), (c), (d), and (f) from Remark 2 implying that \(c_{i+1}>kp\) for these cases. Since \(\alpha_{i+1}\) is not necessarily a leaf, we must also consider (b) and (e). In both cases, clearly \(c_{i}\leq k^{\prime\prime}p^{\prime\prime}+p^{\prime\prime}\). 
Either way, \(kp<k^{\prime\prime}p^{\prime\prime}+p^{\prime\prime}\), which means \(\mathrm{ap}(\alpha_{i+1})^{q^{\prime\prime}-k^{\prime\prime}-1}\) is a suffix of \(\mathrm{ap}(\alpha_{i})^{q-k}\). Thus, \(\mathrm{ap}(\alpha_{i+1})^{q^{\prime\prime}-k^{\prime\prime}}\) is a suffix of \(ap(\alpha_{i+2})\cdots ap(\alpha_{t})ap(\alpha_{1})\cdots ap(\alpha_{i})ap( \alpha_{i+1})\). Applying Claim 16 to \(\alpha_{j-1}\) gives the desired result. ## 4 Algorithmic details A concatenation tree can be traversed to produce a DB sequence in \(O(1)\)-amortized time per bit; but it requires exponential space to store the tree. However, if the children of a given node can be computed without knowledge of the entire tree, then we can apply Algorithm 1 to traverse a concatenation tree \(\mathcal{T}\) in a space-efficient manner. The initial call is RCL(\(\alpha\), \(c\), \(x\)) where \(\alpha=a_{1}a_{2}\cdots a_{n}\) is the root node with change index \(c\). If a child with the same change index as the root is a right-child then \(x=0\); otherwise \(x=1\). The crux of the algorithm is the function IsChild(\(\alpha,i\)) which returns true if and only if \(a_{1}\cdots a_{i-1}\overline{a_{i}}a_{i+1}\cdots a_{n}\) is a child of \(\alpha\). In practice, the function will concern itself with the period of \(\alpha\) as per the construction of concatenation trees.

```
procedure RCL(\(\alpha=a_{1}\cdots a_{n}\), \(c\), \(x\))
    for \(i\gets c+x\) to \(n\) do    \(\triangleright\) Visit right-children
        if IsChild(\(\alpha,i\)) then RCL(\(a_{1}\cdots a_{i-1}\overline{a_{i}}a_{i+1}\cdots a_{n}\), \(i\), \(x\))
    \(p\leftarrow\) period of \(\alpha\)
    Print(\(a_{1}\cdots a_{p}\))
    for \(i\gets 1\) to \(c+x-1\) do    \(\triangleright\) Visit left-children
        if IsChild(\(\alpha,i\)) then RCL(\(a_{1}\cdots a_{i-1}\overline{a_{i}}a_{i+1}\cdots a_{n}\), \(i\), \(x\))
```

**Algorithm 1** Traversing a concatenation tree \(\mathcal{T}\) in RCL order rooted at \(\alpha\) with change index \(c\)

The running time of Algorithm 1 depends on how efficiently the function IsChild(\(\alpha,i\)) can be computed for each index \(i\). Provided each call to IsChild(\(\alpha,i\)) uses at most \(O(n)\) space, the overall algorithm will also require \(O(n)\) space. Let \(\mathcal{T}\) be a concatenation tree rooted at \(\alpha\) with change index \(c\). The sequence resulting from a call to RCL(\(\alpha\), \(c\), \(x\)) is generated in \(O(1)\)-amortized time per bit if (i) at each recursive step the work required by all calls to IsChild(\(\alpha,i\)) is \(O((t+1)n)\), where \(t\) is the number of \(\alpha\)'s children, and (ii) the number of nodes in \(\mathcal{T}\) that are periodic is less than some constant times the number of nodes that are aperiodic. Proof.: The work done at each recursive step is \(O(n)\) plus the cost associated with all calls to IsChild(\(\alpha,i\)). If condition (i) is satisfied, then the work can be amortized over the \(t\) children if \(t\geq 1\), or onto the node itself if there are no children. Thus, each recursive node is the result of \(O(n)\) work. By condition (ii), the total number of bits output will be proportional to \(n\) times the number of nodes. Thus, each bit is output in \(O(1)\)-amortized time. This algorithm can be applied to construct the DB sequences for PCR1, PCR2, and PCR3 in \(O(1)\)-amortized time using \(O(n)\) space by applying relatively straightforward methods to compute the children. 
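The following Python sketch is a direct transcription of Algorithm 1. The `is_child` predicate is passed in as a parameter because it encodes the particular parent rule (one of PCR1-PCR4, or a bounded-weight variant); the function and variable names are illustrative and not taken from the paper.

```python
def period(alpha: str) -> int:
    """Smallest p such that alpha is a repetition of its length-p prefix."""
    n = len(alpha)
    for p in range(1, n + 1):
        if n % p == 0 and alpha == alpha[:p] * (n // p):
            return p
    return n

def rcl(alpha: str, c: int, x: int, is_child, out: list) -> None:
    """RCL traversal of a concatenation tree rooted at alpha with change
    index c; x = 0 if a child sharing the root's change index is treated
    as a right-child, x = 1 otherwise.  Appends the aperiodic prefix of
    each node to `out` in RCL order (indices are 1-based as in the paper)."""
    n = len(alpha)

    def flip(i: int) -> str:            # flip the bit at 1-based position i
        return alpha[:i - 1] + ('1' if alpha[i - 1] == '0' else '0') + alpha[i:]

    for i in range(c + x, n + 1):       # visit right-children
        if is_child(alpha, i):
            rcl(flip(i), i, x, is_child, out)
    out.append(alpha[:period(alpha)])   # output the aperiodic prefix
    for i in range(1, c + x):           # visit left-children
        if is_child(alpha, i):
            rcl(flip(i), i, x, is_child, out)
```

Joining the pieces with `"".join(out)` yields \(\mathrm{RCL}(\mathcal{T})\) once `is_child` implements the appropriate parent rule.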
Though efficient algorithms for these sequences already exist for PCR1 and PCR2 as noted in the Introduction, they rely on being able to efficiently list specific necklace representatives, and the results are non-trivial, especially for PCR2. An efficient construction for PCR3 was recently produced concurrently and has been applied to cut-down DB sequences; however, it also requires analysis of a listing of necklace representatives. This algorithm is the most likely approach for discovering an efficient algorithm to generate PCR4, and it may also be useful for other applications such as finding an efficient construction for long (aperiodic) orientable sequences [3, 6, 24]. ## 5 Future research In this paper we introduced the notion of concatenation trees based on a new data structure we call a bifurcated ordered tree (bot). We demonstrated how they can be used to prove the equivalence of shift rules and concatenation constructions for the four simplest DB sequences based on the PCR. Such constructions have the potential to generate the corresponding DB sequences in \(O(1)\) time per symbol. They can also be applied to efficiently construct other interesting universal cycles including those with bounded weight (number of 1s). Current research is considering how to extend these ideas to other underlying feedback functions including the CCR and PRR (see Appendix C). It is also natural to wonder if these ideas extend naturally to non-binary alphabets; this is not immediate, as cycle-joining non-binary cycles does not necessarily produce a cycle-joining tree (see [18]). Preliminary research, however, indicates that our work does generalize naturally, with interesting observations about the choice of left-child or right-child when a child has the same change index as its parent. Additionally, we pose the following open problems: **Open Problem 1**: Can the DB sequence with feedback function PCR4 be generated in \(O(1)\)-amortized time per bit using \(O(n)\) space? **Open Problem 2**: Is it possible to determine whether or not an arbitrary string is a node representative in \(\mathcal{T}_{4}\) in \(O(n)\) time? **Open Problem 3**: An _orientable sequence_ is a cyclic sequence such that for every substring \(\omega\) of length \(n\), the reversal of \(\omega\) does not exist as a substring. For example, a longest orientable sequence for \(n=6\) is \(0001010110010111\) of length 16. Can Algorithm 1 be applied to efficiently construct long orientable sequences based on the cycle-joining ideas in [6]? **Open Problem 4**: How does the idea of concatenation trees for binary strings extend to other objects like permutations, multiset permutations, weak orders, etc.?
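As a small illustration of the property in Open Problem 3 above, the sketch below checks the orientable condition for a cyclic binary string: every length-\(n\) window must be distinct and no window may equal the reversal of any window. It merely verifies the stated \(n=6\) example; it is not a construction.

```python
def is_orientable(u: str, n: int) -> bool:
    """Cyclic u is orientable for order n if its length-n windows are
    distinct and no window appears reversed (this also excludes
    palindromic windows)."""
    doubled = u + u[:n - 1]
    wins = [doubled[i:i + n] for i in range(len(u))]
    seen = set(wins)
    return len(seen) == len(wins) and all(w[::-1] not in seen for w in wins)

print(is_orientable("0001010110010111", 6))  # True for the n = 6 example above
```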
2306.04913
Momentum-space second-order pion-nucleus potential including medium effects in the $Δ(1232)$ region
In this work, we develop an updated model for pion-nucleus scattering in the framework of the distorted wave impulse approximation in momentum space. We construct the second-order pion-nucleus potential, which involves analysis of pion-nucleus elastic scattering as a solution of the Lippmann-Schwinger equation. The potential is based on the individual pion-nucleon scattering amplitudes extracted from SAID, and its second-order correction is presented in detail. We estimate optimal energy-independent parameters of the potential by a multi-energy fit of the pion-${}^{12}$C total, reaction, and differential elastic cross sections. We show the predictive power by applying it to pion elastic scattering on ${}^{16}$O, ${}^{28}$Si, and ${}^{40}$Ca.
Viacheslav Tsaran, Marc Vanderhaeghen
2023-06-08T03:26:50Z
http://arxiv.org/abs/2306.04913v2
Momentum-space second-order pion-nucleus potential including medium effects in the \(\Delta(1232)\) region ###### Abstract In this work, we develop an updated model for pion-nucleus scattering in the framework of the distorted wave impulse approximation in momentum space. We construct the second-order pion-nucleus potential, which involves analysis of pion-nucleus elastic scattering as a solution of the Lippmann-Schwinger equation. The potential is based on the individual pion-nucleon scattering amplitudes extracted from SAID, and its second-order correction is presented in detail. We estimate optimal energy-independent parameters of the potential by a multi-energy fit of the pion-\({}^{12}\)C total, reaction, and differential elastic cross sections. We show the predictive power by applying it to pion elastic scattering on \({}^{16}\)O, \({}^{28}\)Si, and \({}^{40}\)Ca. ## I Introduction The study of the pion-nucleus interaction has a long history filled with various theoretical approaches [1] and has seen a renewed interest in very recent years [2; 3; 4]. While the earlier works were concentrated on the pion-nucleus scattering and pionic atoms, modern experiments open new perspectives and challenges in applying the pion-nucleus reactions knowledge base. The pion production experiments in photon (electron)- and neutrino-nucleus scattering serve as examples of utmost importance. They are related to the extraction of neutron skin and neutrino oscillation measurements, respectively. The final-state interaction between the outgoing pion and nucleus in these two processes is non-negligible at the energies considered, and it is particularly significant in the \(\Delta(1232)\) resonance region [5; 6]. Moreover, the \(\Delta(1232)\) excitation is the dominant mechanism of single-pion production, implying the significance of studying modifications of the resonance in the nuclear medium. For the neutrino experiments, a good understanding of the pion final state interaction is paramount to interpret the measurements to the level of precision required [7; 8]. After the initial studies on pion-nucleus elastic scattering and energy levels of pionic atoms using the simple first-order potential, it became evident that higher-order effects are required for a consistent description of experimental data [9]. There are essentially two types of existing theoretical models. The first is based on multiple-scattering theory and provides terms beyond the first-order to the pion-nucleus optical potential, treating the pion-nucleon amplitudes phenomenologically. The second approach is the isobar-doorway model, which considers the \(\Delta\)-resonance as an elementary particle modified by various medium corrections. Our work is inspired by both of these approaches. The optical potential formalism effectively describes the many-body pion-nucleus scattering process by a one-particle equation for the pion interacting with a complex phenomenological potential. The Kisslinger optical potential [10], built on Watson's theoretical basis [11], was introduced more than half a century ago and has been continuously improved over the years by including various corrections [9; 12; 13; 14; 15; 16]. The Kerman-McManus-Thaler formulation of the multiple scattering theory [17], treatment of the Fermi motion, and relativistic kinematics have been taken into account. 
The addition of the phenomenological term proportional to the squared nuclear density, which covers beyond-first-order effects and real pion absorption, has resulted in a much-improved agreement between theory and pion-nucleus scattering data for a large set of nuclei. On the other hand, the properties of the \(\Delta(1232)\) isobar in the nuclear medium are essential in understanding pion-nucleus interaction and have been the subject of numerous investigations, especially in the framework of the \(\Delta\)-hole model [18; 19; 20; 21; 22; 23]. This resonance is particularly important for pion-nucleus interaction because its excitation drives the dominant \(p\)-wave spin-isospin-\(\frac{3}{2}\) (\(P_{33}\)) channel in the elementary pion-nucleon scattering. However, strong scalar and vector fields affect the \(\Delta\)-isobar propagating through the nuclear many-body system. The many-body medium effects are incorporated in the complex effective \(\Delta\) self-energy \(\Sigma_{\Delta}\), which shifts the \(\Delta\) mass and width. The treatment of pion-nuclear reactions within the framework of the \(\Delta\)-hole model is done by means of a phenomenological spreading potential, the parameters of which are fitted to the data. The aim of the present work is to develop the second-order pion-nuclear potential in momentum space. Besides the first-order part of the potential, which has a standard form [15], our second-order part involves more realistic two-body correlation functions than have been used in earlier works. In addition, we account for nuclear medium effects, which affect the resonant \(P_{33}\) pion-nucleon scattering amplitude. The pion-bound nucleon amplitude in our approach relies on the relativistic \(\Delta\)-isobar model [24] with modified \(\Delta\)-propagator. The effective \(\Delta\) self-energy is considered as a parameter in our model, which is fixed by a multi-energy fit to \(\pi^{\pm}\)-\({}^{12}\)C scattering data in the energy range 80-180 MeV lab kinetic energy. In addition to describing pion-nucleus scattering, our work aims to develop a model that can be applied directly to the processes of pion photoproduction and neutrino-induced pion production on spin-zero nuclei. The paper is organized as follows: In Sec. II, we present the main aspects of the multiple scattering formalism. Then, in Sec. III, we consider the pion-nucleon elementary amplitudes and the dominant \(P_{33}\) channel. In Sec. IV, we derive the second-order pion-nucleus potential and introduce in-medium modifications to the scattering amplitudes. Next, in Sec. V, we fit the obtained potential to the data on pion-\({}^{12}\)C scattering and apply it to the \({}^{16}\)O, \({}^{28}\)Si, and \({}^{40}\)Ca data. Finally, in Sec. VI, we provide our conclusions. ## II Multiple scattering formalism In multiple scattering theory, the overall pion-nuclear transition amplitude \(\hat{T}\) is a symmetric sum of amplitudes over all \(A\) individual nucleons \[\hat{T}(E)=\sum_{i=1}^{A}\hat{\tau}_{i}(E)+\sum_{i=1}^{A}\sum_{j\neq i}^{A}\hat{\tau}_{i}(E)\hat{G}(E)\hat{\tau}_{j}(E)\\ +\sum_{i=1}^{A}\sum_{j\neq i}^{A}\sum_{k\neq j}^{A}\hat{\tau}_{i}(E)\hat{G}(E)\hat{\tau}_{j}(E)\hat{G}(E)\hat{\tau}_{k}(E)+\ldots, \tag{1}\] where \(E\) is the reaction energy and \(\hat{G}(E)\) is the Green's function of the non-interacting pion-nuclear system. 
The pion-nucleon transition amplitude describing scattering to all orders on a single nucleon bound inside the nucleus is \[\hat{\tau}_{i}(E)=\hat{v}_{i}+\hat{v}_{i}\hat{G}(E)\hat{\tau}_{i}(E), \tag{2}\] where \(\hat{v}_{i}\) denotes the pion-single nucleon potential. Further, we are going to replace the potential \(\hat{v}_{i}\) with the corresponding free-space pion-nucleon amplitude \(\hat{t}_{i}\), which may be more easily parameterized from the experiment (see Section III): \[\hat{t}_{i}(W)=\hat{v}_{i}+\hat{v}_{i}\hat{g}(W)\hat{t}_{i}(W). \tag{3}\] The scattering series for \(\hat{t}_{i}\) with the pion-nucleon reaction energy \(W\) differs from Eq. (2) by the Green's function of the pion-free nucleon system \(\hat{g}(W)\). A determination of the transition amplitude \(\hat{T}\) from Eq. (1) is difficult due to the presence of all possible intermediate nuclear excited states in the series. Moreover, \(\hat{T}\), \(\hat{G}\) and \(\hat{\tau}_{i}\) are \((A+1)\)-particle operators, so nucleon degrees of freedom must be integrated out. Further simplification of the problem is possible by separating the equation involving only the ground state matrix elements from the one containing excited states. For this purpose, we introduce projection operators, which distinguish the ground state from the excited states of the target nucleus: \[\hat{P}_{0}=|\Psi_{0}\rangle\langle\Psi_{0}|\qquad\text{and}\qquad\hat{P}_{ \emptyset}=\sum_{\alpha^{*}\neq 0}|\Psi_{\alpha^{*}}\rangle\langle\Psi_{\alpha^{*}}|, \tag{4}\] where \(|\Psi_{0}\rangle\) and \(|\Psi_{\alpha^{*}}\rangle\) correspond to the nuclear ground state and all possible excited states, respectively. Also we assume \(\hat{P}_{\emptyset}=\hat{\mathds{1}}-\hat{P}_{0}\). Following Kerman-McManus-Thaler formulation of the multiple scattering theory, Eqs. (1-3) are equivalent to the system of integral equations [17]: \[\hat{T}(E)=\hat{U}(E)+\frac{A-1}{A}\hat{U}(E)\hat{G}(E)\hat{P}_{0} \hat{T}(E), \tag{5a}\] \[\hat{U}(E)=A\,\hat{\tau}(E)+(A-1)\hat{\tau}(E)\hat{G}(E)\hat{P}_{ \emptyset}\hat{U}(E),\] (5b) \[\hat{\tau}(E)=\hat{t}(W)+\hat{t}(W)\left[\hat{G}(E)-\hat{g}(W) \right]\hat{\tau}(E). \tag{5c}\] Here and further, we drop the sub-index of \(\hat{t}_{i}\) when there is no need to distinguish nucleons. The above scattering equation on \(\hat{T}(E)\) (\(\hat{U}(E)\)) resembles the Lippmann-Schwinger equation, with the additional factor \((A-1)/A\) and projector \(\hat{P}_{0}\) (\(\hat{P}_{\emptyset}\)), which forbids intermediate nuclear excited (ground) states, respectively. The factor \((A-1)/A\) prevents double counting of pion rescattering on the same nucleon since all possible rescatterings on a single nucleon are already included in the pion-nucleon amplitude \(\hat{\tau}\). The many-body process of pion-nucleus elastic scattering is completely determined by the nuclear ground state expectation value of \(\langle\Psi_{0}|\hat{T}|\Psi_{0}\rangle\), defined by the scattering equation \[\langle\Psi_{0}|\hat{T}(E)|\Psi_{0}\rangle=\langle\Psi_{0}|\hat{U} (E)|\Psi_{0}\rangle\\ +\frac{A-1}{A}\langle\Psi_{0}|\hat{U}(E)|\Psi_{0}\rangle\hat{G}_{ 0}(E)\langle\Psi_{0}|\hat{T}(E)|\Psi_{0}\rangle, \tag{6}\] where we have used the property of \(\hat{G}_{0}(E)\): \(\langle\Psi_{0}|\hat{G}(E)|\Psi_{\alpha}\rangle=\hat{G}_{0}(E)\delta_{0\alpha}\). Note, Eq. (6) contains only the terms diagonal in the nuclear ground state. As a result, this equation is not necessarily rapidly convergent. 
However, it can be solved numerically if the effective potential \(\langle\Psi_{0}|\hat{U}|\Psi_{0}\rangle\) is known. As follows from Eq. (5b), the scattering equation for the potential \(\langle\Psi_{0}|\hat{U}|\Psi_{0}\rangle\) contains two non-diagonal matrix elements in the second term and is expected to converge rapidly. This is a consequence of the fact that all influence of the excited states is contained in \(\hat{U}\). A detailed consideration of the effective potential is presented in Section IV. It is convenient to consider the pion-nucleus scattering in the center-of-mass (c.m.) frame of the pion-nucleus system. The reaction energy is then defined as \(E=E(k_{0})=\omega(k_{0})+E_{A}(k_{0})\), where \(\omega(k_{0})\) and \(E_{A}(k_{0})\) are the energies of the pion and nucleus defined relativistically, and \(k_{0}\) is the on-shell momentum. As was discussed above, Eq. (5a) contains only the diagonal in the nuclear ground state relativistic propagator of the pion-nucleus system \(\hat{G}_{0}(E)=\langle\Psi_{0}|\hat{G}(E)|\Psi_{0}\rangle\). In pion momentum space, it becomes \[\langle\pi(\mathbf{k}^{\prime})|\hat{G}_{0}(E)|\pi(\mathbf{k})\rangle=(2\pi)^{3}\delta(\mathbf{k}^{\prime}-\mathbf{k})G_{0}(k) \tag{7}\] where \(k=|\mathbf{k}|\) and \[G_{0}(k)=\frac{1}{E-\omega(k)-E_{A}(k)+i\,\varepsilon}. \tag{8}\] We can write Eq. (8) in the pseudo-nonrelativistic form: \[G_{0}(k)=\frac{2\mathscr{M}(k)}{k_{0}^{2}-k^{2}+i\,\varepsilon}, \tag{9}\] with an off-shell analog of the relativistic reduced mass \[\mathscr{M}(k)\equiv\frac{[E+\omega(k)+E_{A}(k)][\omega(k_{0})E_{A}(k_{0})+\omega(k)E_{A}(k)]}{2\left[E^{2}+\left(\omega(k)+E_{A}(k)\right)^{2}\right]}. \tag{10}\] Taking into account the equality \(\mathscr{M}(k_{0})=\omega(k_{0})E_{A}(k_{0})/(\omega(k_{0})+E_{A}(k_{0}))\), we introduce the elastic scattering amplitude in the momentum space, defined as: \[F(\mathbf{k}^{\prime},\mathbf{k})=-\frac{\sqrt{\mathscr{M}(k^{\prime})\mathscr{M}(k)}}{2\pi}\\ \times\langle\pi(\mathbf{k}^{\prime}),\Psi_{0}|\hat{T}(E)|\pi(\mathbf{k}),\Psi_{0}\rangle, \tag{11}\] where \(\mathbf{k}\) and \(\mathbf{k}^{\prime}\) are the pion c.m. momenta in the initial and final states, respectively. Then, in accordance with Eq. (5a), the elastic scattering amplitude is calculated by solving the integral equation \[F(\mathbf{k}^{\prime},\mathbf{k})=V(\mathbf{k}^{\prime},\mathbf{k})\\ -\frac{A-1}{A}\int\frac{\mathrm{d}\mathbf{k}^{\prime\prime}}{2\pi^{2}}\frac{V(\mathbf{k}^{\prime},\mathbf{k}^{\prime\prime})F(\mathbf{k}^{\prime\prime},\mathbf{k})}{k_{0}^{2}-k^{\prime\prime 2}+i\,\varepsilon}, \tag{12}\] where the momentum space potential of the pion-nuclear interaction is defined as: \[V(\mathbf{k}^{\prime},\mathbf{k})=-\frac{\sqrt{\mathscr{M}(k^{\prime})\mathscr{M}(k)}}{2\pi}U(\mathbf{k}^{\prime},\mathbf{k}), \tag{13}\] with \(U(\mathbf{k}^{\prime},\mathbf{k})=\langle\pi(\mathbf{k}^{\prime}),\Psi_{0}|\hat{U}(E)|\pi(\mathbf{k}),\Psi_{0}\rangle\). Note that the formulas given above and developed in the following sections are derived for the case of nuclear interaction. The modification which is needed for the inclusion of the Coulomb interaction is discussed in Appendix A. ## III Pion-nucleon elementary scattering amplitude The potential \(\hat{U}\) relies on the knowledge of the pion-nucleon scattering amplitude. To describe the scattering on a single bound nucleon, we assume that the contribution from the second term of Eq. (5c) can be neglected. 
In this way, we impose \(\hat{\tau}(E)\approx\hat{t}(W)\), which is known as the _impulse approximation_. However, the c.m. energy of the pion-nucleon subsystem \(W\) is a dynamical variable [19; 25]. An optimal approach for choosing \(W\) would be minimizing the second term of Eq. (5c), describing binding correction to \(\hat{\tau}\). There are several prescriptions with various motivations for choosing the optimal value for \(W\)[12; 15; 26]. We will follow the arguments of Gurvitz [27] and set \[W(\mathbf{k},\mathbf{p})=\sqrt{\left(\omega(k)+E_{N}(p)\right)^{2}-(\mathbf{k}+\mathbf{p})^{2 }}, \tag{14}\] where \(\mathbf{k}\) and \(\mathbf{p}\) are the pion and target nucleon momenta in the pion-nucleus c.m. frame, and \(\omega(k=|\mathbf{k}|)\) and \(E_{N}(p=|\mathbf{p}|)\) are the corresponding relativistic energies. The choice of the effective value of \(\mathbf{p}\) will be discussed in Sec. IV.1. We note that the freedom in choosing \(W\) can be absorbed in the model parameters when studying the medium effects (see Sec. IV.3). While we require the pion-nucleon transition amplitude in the pion-nucleus c.m. frame, it is more convenient to consider the pion-single nucleon interaction in the pion-nucleon c.m. system. All quantities denoted by the subscript "2cm" refer to the pion-nucleon frame in order to distinguish both systems. The pion momenta in both reference frames are related by the Lorentz transformation: \[\mathbf{k}_{\mathrm{2cm}}(\mathbf{k},\mathbf{p})=\mathbf{k}+\alpha\,(\mathbf{k}+\mathbf{p}),\] \[\alpha=\frac{1}{W(\mathbf{k},\mathbf{p})}\left(\frac{(\mathbf{k}+\mathbf{p}) \cdot\mathbf{k}}{W(\mathbf{k},\mathbf{p})+\omega(k)+E_{N}(p)}-\omega(k)\right), \tag{15}\] and an analogous relation for \(\mathbf{k}^{\prime}_{\mathrm{2cm}}\). Also, we assume that the transformation Eq. (15) is justified for virtual particles, which is the approach of relativistic potential theory [28; 29; 30]. The free pion-nucleon scattering matrix in the pion-nucleus and pion-nucleon c.m. frames are then related through \[\langle\pi(\mathbf{k}^{\prime}),N(\mathbf{p}^{\prime})|\hat{t}|\pi(\mathbf{k}),N(\mathbf{p})\rangle=\\ (2\pi)^{3}\delta(\mathbf{k}^{\prime}+\mathbf{p}^{\prime}-\mathbf{k}-\mathbf{p}) \gamma\,t_{\mathrm{2cm}}(\mathbf{k}^{\prime}_{\mathrm{2cm}},\mathbf{k}_{\mathrm{2cm}}), \tag{16}\] with the usual Moller phase space factor [31] \[\gamma=\sqrt{\frac{\omega(\mathbf{k}_{\mathrm{2cm}})\omega(\mathbf{k}^{\prime}_{\mathrm{2 cm}})}{\omega(\mathbf{k})\omega(\mathbf{k}^{\prime})}}\frac{E_{N}(\mathbf{k}_{\mathrm{2cm}})E_{N}( \mathbf{k}^{\prime}_{\mathrm{2cm}})}{E_{N}(\mathbf{p})E_{N}(\mathbf{p}^{\prime})} \tag{17}\] due to the non-covariant normalization convention used to calculate \(t_{\mathrm{2cm}}\). In Eq. (16) and further, we imply that the transition amplitude is calculated at the pion-nucleon reaction energy calculated according to Eq. (14) for the on-shell process. The notation \(t(\mathbf{k}^{\prime},\mathbf{k})\) indicates that the momentum-conserving delta function was explicitly separated. 
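As a numerical illustration of Eqs. (14) and (15), the short Python sketch below evaluates the invariant pion-nucleon energy \(W(\mathbf{k},\mathbf{p})\) and the boosted pion momentum \(\mathbf{k}_{\mathrm{2cm}}\). The mass values are rounded and serve only the example. A simple consistency check: when the pion-nucleon pair has zero total momentum in the pion-nucleus frame (\(\mathbf{p}=-\mathbf{k}\)), the boost reduces to the identity, \(\mathbf{k}_{\mathrm{2cm}}=\mathbf{k}\).

```python
import numpy as np

M_PI, M_N = 139.57, 938.92  # MeV; rounded values, for illustration only

def omega(k):       # relativistic pion energy
    return np.sqrt(np.dot(k, k) + M_PI**2)

def e_nucleon(p):   # relativistic nucleon energy
    return np.sqrt(np.dot(p, p) + M_N**2)

def invariant_w(k, p):
    """Pion-nucleon c.m. energy W(k, p), Eq. (14)."""
    total = np.add(k, p)
    return np.sqrt((omega(k) + e_nucleon(p))**2 - np.dot(total, total))

def k_2cm(k, p):
    """Pion momentum boosted to the pion-nucleon c.m. frame, Eq. (15)."""
    k, p = np.asarray(k, float), np.asarray(p, float)
    total = k + p
    w = invariant_w(k, p)
    alpha = (np.dot(total, k) / (w + omega(k) + e_nucleon(p)) - omega(k)) / w
    return k + alpha * total

k = np.array([0.0, 0.0, 150.0])                # MeV
print(invariant_w(k, -k))                      # equals omega(k) + E_N(k)
print(k_2cm(k, -k))                            # equals k: no boost when p = -k
print(k_2cm(k, np.array([50.0, 0.0, 0.0])))    # generic Fermi-motion example
```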
The pion-nucleon on-shell \(T\)-matrix is related to the elastic scattering amplitude \(f\) as \[t_{\rm 2cm}(\mathbf{k}^{\prime}_{0,\rm 2cm},\mathbf{k}_{0,\rm 2cm})=-\frac{4\pi}{2 \bar{\omega}}f(\mathbf{k}^{\prime}_{0,\rm 2cm},\mathbf{k}_{0,\rm 2cm}), \tag{18}\] where \(\bar{\omega}=\omega(k_{0,\rm 2cm})E_{N}(k_{0,\rm 2cm})/W\) is the pion-nucleon relativistic reduced mass, \(W=\omega(k_{0,\rm 2cm})+E_{N}(k_{0,\rm 2cm})\), and \(|\mathbf{k}^{\prime}_{0,\rm 2cm}|=|\mathbf{k}_{0,\rm 2cm}|=k_{0,\rm 2cm}\). We consider further in this section only the most relevant properties of the scattering amplitude for the \(\pi(\mathbf{k}_{\rm 2cm})+N(-\mathbf{k}_{\rm 2cm})\longrightarrow\pi(\mathbf{k}^{\prime}_{ \rm 2cm})+N(-\mathbf{k}^{\prime}_{\rm 2cm})\) process and refer to Ref. [1] for a more detailed review. Assuming the isospin conservation, we can explicitly represent the spin-isospin structure of the amplitude as \[\hat{t}=\hat{t}^{(0)}+\hat{t}^{(1)}\ \mathbf{\hat{t}}\cdot\mathbf{\hat{\tau}}+\left( \hat{t}^{(2)}+\hat{t}^{(3)}\ \mathbf{\hat{t}}\cdot\hat{\mathbf{\tau}}\right)\hat{\mathbf{\sigma}}\cdot\mathbf{n}, \tag{19}\] where \(\hat{\mathbf{t}}\) and \(\hat{\mathbf{\tau}}\) are the pion and nucleon isospin operators, \(\mathbf{\hat{\sigma}}\) is the nucleon Pauli spin operator and \(\mathbf{n}=\mathbf{k}\times\mathbf{k}^{\prime}/|\mathbf{k}\times\mathbf{k}^{\prime}|\) is the normal to the scattering plane. The same notation also holds for \(t_{\rm 2cm}(\mathbf{k}^{\prime}_{\rm 2cm},\mathbf{k}_{\rm 2cm})\) and \(f(\mathbf{k}^{\prime}_{\rm 2cm},\mathbf{k}_{\rm 2cm})\). The \(P_{33}\) partial wave is the only resonant one at low and intermediate energies, peaking at about the pion lab kinetic energy \(T_{\rm lab}\approx 190\,\rm MeV\). Correspondingly, within the energy range under our consideration, \(T_{\rm lab}\lesssim 300\,\rm MeV\), only the \(s\)- and \(p\)-wave contributions are dominant. As a result, the pion-nucleon scattering amplitude can be written as \[f(\mathbf{k}^{\prime}_{0,\rm 2cm}, \mathbf{k}_{0,\rm 2cm})\approx b_{0}+b_{1}\,\mathbf{\hat{t}}\cdot\mathbf{\hat{\tau}}\] \[+(c_{0}+c_{1}\ \mathbf{\hat{t}}\cdot\mathbf{\hat{\tau}})\,\mathbf{k}^{\prime}_{0, \rm 2cm}\cdot\mathbf{k}_{0,\rm 2cm}\] \[+i(s_{0}+s_{1}\ \mathbf{\hat{t}}\cdot\mathbf{\hat{\tau}})\,\hat{\mathbf{ \sigma}}\cdot[\mathbf{k}^{\prime}_{0,\rm 2cm}\times\mathbf{k}_{0,\rm 2cm}], \tag{20}\] where \(b_{0,1}\), \(c_{0,1}\) and \(s_{0,1}\) are energy-dependent complex \(s\)- and \(p\)-wave coefficients. The multipole expansion allows us to express the parameters \(b_{0,1}\), \(c_{0,1}\) and \(s_{0,1}\) through the partial wave amplitudes \(f^{l}_{2T2J}\) as: \[b_{0} =\frac{1}{3}\left[f^{0}_{1\,1}+2f^{0}_{3\,1}\right], \tag{21a}\] \[b_{1} =\frac{1}{3}\left[f^{0}_{3\,1}-f^{0}_{1\,1}\right],\] (21b) \[c_{0} =\frac{1}{3k^{2}_{0,\rm 2cm}}\left[f^{1}_{1\,1}+2f^{1}_{3\,1}+2f^{1}_{ 1\,3}+4f^{1}_{3\,3}\right],\] (21c) \[c_{1} =\frac{1}{3k^{2}_{0,\rm 2cm}}\left[f^{1}_{3\,1}-f^{1}_{1\,1}+2f^{1}_{ 3\,3}-2f^{1}_{1\,3}\right],\] (21d) \[s_{0} =\frac{1}{3k^{2}_{0,\rm 2cm}}\left[f^{1}_{1\,1}+2f^{1}_{3\,1}-f^{1}_{ 1\,3}-2f^{1}_{3\,3}\right],\] (21e) \[s_{1} =\frac{1}{3k^{2}_{0,\rm 2cm}}\left[f^{1}_{3\,1}-f^{1}_{1\,1}-f^{1}_{ 3\,3}+f^{1}_{1\,3}\right]. \tag{21f}\] Here \(l\), \(T\), and \(J\) are, respectively, the orbital angular momentum, isospin, and total angular momentum of the pion-nucleon system. 
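To make Eqs. (21a)-(21f) concrete, the following sketch assembles the \(s\)- and \(p\)-wave coefficients of Eq. (20) from a given set of partial-wave amplitudes \(f^{l}_{2T\,2J}\) (for instance, values tabulated from SAID at a fixed energy). The numerical inputs shown are placeholders, not actual phase-shift data.

```python
def sp_coefficients(f, k2cm):
    """Combine partial-wave amplitudes f[(l, 2T, 2J)] into the s- and
    p-wave coefficients of Eq. (20), following Eqs. (21a)-(21f).
    k2cm is the on-shell pion momentum in the pion-nucleon c.m. frame."""
    b0 = (f[(0, 1, 1)] + 2 * f[(0, 3, 1)]) / 3
    b1 = (f[(0, 3, 1)] - f[(0, 1, 1)]) / 3
    k2 = 3 * k2cm**2
    c0 = (f[(1, 1, 1)] + 2 * f[(1, 3, 1)] + 2 * f[(1, 1, 3)] + 4 * f[(1, 3, 3)]) / k2
    c1 = (f[(1, 3, 1)] - f[(1, 1, 1)] + 2 * f[(1, 3, 3)] - 2 * f[(1, 1, 3)]) / k2
    s0 = (f[(1, 1, 1)] + 2 * f[(1, 3, 1)] - f[(1, 1, 3)] - 2 * f[(1, 3, 3)]) / k2
    s1 = (f[(1, 3, 1)] - f[(1, 1, 1)] - f[(1, 3, 3)] + f[(1, 1, 3)]) / k2
    return b0, b1, c0, c1, s0, s1

# Placeholder complex amplitudes (illustrative only), momenta in matching units.
f_demo = {(0, 1, 1): 0.10 + 0.01j, (0, 3, 1): -0.05 + 0.00j,
          (1, 1, 1): 0.02 + 0.00j, (1, 3, 1): 0.01 + 0.00j,
          (1, 1, 3): 0.03 + 0.00j, (1, 3, 3): 0.60 + 0.40j}
print(sp_coefficients(f_demo, k2cm=1.0))
```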
The partial-wave amplitudes are related to the measured pion-nucleon phase shifts as: \[f^{l}_{2T\,2J}=\frac{1}{2ik_{0,\rm 2cm}}\left(e^{2i\delta^{l}_{2T\,2J}}-1\right). \tag{22}\] In this work, we take the complex scattering phase shifts \(\delta^{l}_{2T\,2J}\) as extracted from the state-of-the-art phase shift analysis (WI08) by the SAID collaboration [32]. As can be seen from Eq. (12), explicit knowledge of the off-energy-shell behavior of the potential \(V\) is required to solve the scattering equation. Whereas the on-shell behavior is directly defined by the partial wave amplitudes \(f^{l}_{2T\,2J}\), Eq. (22), the off-shell extrapolation needs a model specification. We assume that for the on-shell momentum \(k_{0,\rm 2cm}\) the dependence of the amplitude \(f^{l}_{2T\,2J}\) on the off-shell momenta \(\mathbf{k}_{\rm 2cm}\) and \(\mathbf{k}^{\prime}_{\rm 2cm}\) is defined by the separable form \[f^{l}_{2T\,2J}(k^{\prime}_{\rm 2cm}, k_{\rm 2cm})=f^{l}_{2T\,2J}(k_{0,\rm 2cm},k_{0,\rm 2cm})\\ \times\left(\frac{k^{\prime}_{\rm 2cm}k_{\rm 2cm}}{k^{2}_{0,\rm 2cm}} \right)^{l}\frac{v(k^{\prime}_{\rm 2cm})v(k_{\rm 2cm})}{v^{2}(k_{0,\rm 2cm})}, \tag{23}\] with the off-shell vertex factor for \(s\)- and \(p\)-waves \[v(k)=\frac{1}{\Lambda^{2}-(\omega^{2}(k_{0,\rm 2cm})-k^{2})}, \tag{24}\] where \(\Lambda=1.25\) GeV is taken. Note that including the second-order part of the potential \(\hat{U}\) (see Sec. IV) reduces the model sensitivity to the off-shell behavior of the pion-nucleon amplitude. An important feature of pion-nucleon scattering is the relative weakness of the \(s\)-wave interaction. It makes the \(p\)-wave part of the amplitude not only dominant at intermediate energies but also significant at low energies, even close to the threshold. As a result, an accurate description of the \(p\)-wave interaction is essential for the pion scattering on both free and bound nucleons. The starting point should be a model which effectively describes the basic dynamical features of the free pion-nucleon process. In our work, we adopt the _relativistic \(\Delta\)-isobar model_ by Oset, Toki and Weise [24], which successfully reproduces the \(p\)-wave pion-nucleon phase shifts at low and intermediate energies, especially the resonant \(P_{33}\) channel. The model is based on the \(K\)-matrix formalism in which we express the elastic scattering partial amplitudes as \[f^{l}_{2T\,2J}=\frac{K^{l}_{2T\,2J}}{1-ik_{0,\rm 2cm}K^{l}_{2T\,2J}}. \tag{25}\] When the \(K\)-matrix is real, the unitarity is automatically incorporated. In general, the phase shifts and, correspondingly, the \(K\)-matrix remain real only below the pion production threshold (\(\pi N\to\pi\pi N\)), which is approximately at \(170\,\rm MeV\) pion lab kinetic energy. However, even when the inelastic channel is open, the inelasticity parameters for the \(p\)-wave remain close to 1 with high accuracy. As a result, the \(p\)-wave pion-nucleon interaction can be described by the real crossing symmetric \(K\)-matrix. According to the relativistic \(\Delta\)-isobar model, the pion-nucleon \(K\)-matrix is based entirely on pion-baryon effective Lagrangian and contains direct and crossed contributions from nucleon \(N\), \(\Delta(1232)\)-isobar and Roper resonance \(N^{\star}(1440)\). 
The resulting \(K\)-matrix in the dominant \(P_{33}\) channel is given by \[K_{33}^{1}=\frac{1}{3}\frac{k_{0,2{\rm cm}}^{2}}{4\pi m_{\pi}^{2 }}\frac{m_{N}}{\sqrt{s}}\left[4f_{N}^{2}\frac{2m_{N}}{m_{N}^{2}-\bar{u}}+4f_{N^ {\star}}^{2}\frac{2m_{N^{\star}}}{m_{N^{\star}}^{2}-\bar{u}}\right.\\ \left.+f_{\Delta}^{2}\left(\frac{2m_{\Delta}}{m_{\Delta}^{2}-s}+ \frac{1}{9}\frac{2m_{\Delta}}{m_{\Delta}^{2}-\bar{u}}\right)\right], \tag{26}\] where \(s=W^{2}\), \(m_{\pi}\) is the pion mass and the approximate \(u\)-channel Mandelstam variable is \(\bar{u}=u+2{\mathbf{k}}^{\prime}_{2{\rm cm}}\cdot{\mathbf{k}}_{2{\rm cm}}=m_{N}^{2}+ m_{\pi}^{2}-2\omega E_{N}(k_{0,2{\rm cm}})\). The masses and coupling constants used are [24]: \[m_{N}=939\,{\rm MeV}, f_{N}^{2}/4\pi=0.079,\] \[m_{\Delta}=1232\,{\rm MeV}, f_{\Delta}^{2}/4\pi=0.37,\] \[m_{N^{\star}}=1450\,{\rm MeV}, f_{N^{\star}}^{2}/4\pi=0.015.\] The primary role of the Roper resonance \(N^{\star}(1440)\) in this model is providing the correct behavior in the \(P_{11}\) channel. In Fig. 1, we compare the \(P_{33}\) partial amplitude taken from the SAID phase shift analysis with the relativistic \(\Delta\)-isobar model results. The corresponding curves in the plot are almost indistinguishable, showing excellent agreement between the theoretical model and experiment. The dominant term in Eq. (26) comes from the direct (\(s\)-channel) \(\Delta\)-pole contribution. This resonant part of the \(K_{33}^{1}\) can be written as \[K_{33}^{1(\Delta)}=\frac{1}{k_{0,2{\rm cm}}}\frac{m_{\Delta}\Gamma_{\Delta}}{ m_{\Delta}^{2}-W^{2}}, \tag{27}\] where we have introduced the \(\Delta\) decay width \[\Gamma_{\Delta}=\frac{2}{3}\frac{f_{\Delta}^{2}}{4\pi}\frac{k_{0,2{\rm cm}}^{ 3}}{m_{\pi}^{2}}\frac{m_{N}}{W}. \tag{28}\] The width at resonance (\(W=m_{\Delta}\)) is \(\Gamma_{\Delta}\approx 115\,{\rm MeV}\). This separation of the \(s\)-channel \(\Delta\) term in the form of Eq. (27) will be useful in the following introducing the medium modifications. ## IV Derivation of the pion-nuclear potential We are now in the position to construct the effective pion-nucleus potential used in the scattering equation (12). We assume the potential \(\hat{U}(E)\) is approximated by the first two terms of the iterative series for Eq. (5b): \[\hat{U}(E)\approx\hat{U}^{(1)}+\hat{U}^{(2)}, \tag{29}\] where within the impulse approximation, the first-order part has the simple form \[\hat{U}^{(1)}=A\,\hat{t} \tag{30}\] and the second-order part is given by \[\hat{U}^{(2)}=A(A-1)\hat{t}\hat{G}(E)\hat{P}_{\emptyset}\hat{t}. \tag{31}\] In the following, we will express Eqs. (30) and (31) for the effective potential into more practical forms. 
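Before turning to the first-order potential, the resonant input of Sec. III can be illustrated numerically. The sketch below keeps only the \(s\)-channel \(\Delta\)-pole part of the \(K\)-matrix, Eqs. (27)-(28), unitarizes it via Eq. (25), and scans the pion lab kinetic energy; with the quoted coupling the resulting \(|f^{1}_{33}|\) peaks close to \(T_{\rm lab}\approx 190\) MeV. The lab-to-c.m. kinematic relations used below are standard and are not spelled out in the text; they are assumptions of this sketch.

```python
import numpy as np

M_PI, M_N, M_DELTA = 139.57, 939.0, 1232.0   # MeV
F_DELTA2_4PI = 0.37                          # f_Delta^2 / 4pi

def cm_momentum(w):
    """On-shell pion momentum k_{0,2cm} for total c.m. energy w."""
    return np.sqrt((w**2 - (M_N + M_PI)**2) * (w**2 - (M_N - M_PI)**2)) / (2 * w)

def f33_delta_pole(t_lab):
    """P33 amplitude from the Delta-pole K-matrix, Eqs. (25), (27), (28)."""
    w = np.sqrt(M_N**2 + M_PI**2 + 2 * M_N * (M_PI + t_lab))  # lab -> c.m.
    k = cm_momentum(w)
    gamma = (2.0 / 3.0) * F_DELTA2_4PI * k**3 / M_PI**2 * (M_N / w)   # Eq. (28)
    k_matrix = (M_DELTA * gamma) / (k * (M_DELTA**2 - w**2))          # Eq. (27)
    return k_matrix / (1.0 - 1j * k * k_matrix)                       # Eq. (25)

t_scan = np.arange(100.0, 301.0, 10.0)
peak = t_scan[np.argmax(np.abs(f33_delta_pole(t_scan)))]
print(f"|f_33| peaks near T_lab = {peak:.0f} MeV")
```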
### The first-order potential The first-order potential in momentum space can be written as: \[U^{(1)}({\mathbf{k}}^{\prime},{\mathbf{k}})=\int\frac{{\rm d}{\mathbf{p}}^{ \prime}}{(2\pi)^{3}}\frac{{\rm d}{\mathbf{p}}}{(2\pi)^{3}}\\ \times{\rm Tr}\left[\left\langle\pi({\mathbf{k}}^{\prime}),N({\mathbf{p}}^ {\prime})\right|\hat{t}\left|\pi({\mathbf{k}}),N({\mathbf{p}})\right\rangle\rho({\mathbf{p }}^{\prime};{\mathbf{p}})\right], \tag{32}\] where \({\mathbf{p}}\) and \({\mathbf{p}}^{\prime}\) are the initial and final momentum of the target nucleon under consideration, \({\rm Tr}\) represents summation over all nucleon spin and isospin projections as: \[{\rm Tr}\left[\hat{t}\,\rho({\mathbf{p}}^{\prime};{\mathbf{p}})\right] \equiv\sum_{\sigma,\sigma^{\prime}}\sum_{\tau,\tau^{\prime}}\langle\sigma^{ \prime}_{1z},\tau^{\prime}_{1z}|\,\hat{t}\,|\sigma_{1z},\tau_{1z}\rangle\\ \times\rho({\mathbf{p}}^{\prime},\sigma^{\prime},\tau^{\prime};{\mathbf{p }},\sigma,\tau), \tag{33}\] and the one-body density matrix for the target nucleus is \[\rho({\mathbf{p}}^{\prime},\sigma^{\prime},\tau^{\prime};{\mathbf{p}}, \sigma,\tau)=A\!\int\left(\prod_{i=2}^{A}{\rm d}x_{i}\right){\rm d}{\mathbf{r}}_{1} \,{\rm d}{\mathbf{r}}_{1}^{\prime}\\ \times e^{i({\mathbf{p}}^{\prime}\cdot{\mathbf{r}}_{1}^{\prime}-{\mathbf{p}} \cdot{\mathbf{r}}_{1})}\Psi_{0}^{*}(x^{\prime}_{1},x_{2},\ldots,x_{A})\Psi_{0}(x_{1},\ldots,x_{A}). \tag{34}\] Figure 1: The theoretical \(f_{33}^{1}\) amplitude obtained with the relativistic \(\Delta\)-isobar model (R\(\Delta\)M) as a function of pion lab kinetic energy compared with SAID phase shift analysis [32]. The solid black (long-dashed red) curve represents the real (imaginary) part of the amplitude taken from SAID, while the dot-dashed green (short-dashed blue) curve corresponds to the R\(\Delta\)M calculation. Here, the notation \(x_{i}=\{\mathbf{r}_{i},\sigma_{i},\tau_{i}\}\) covers nucleon spin and isospin, and \(\int\mathrm{d}x_{i}(\ldots)=\sum_{\sigma_{i}}\sum_{\tau_{i}}\int\mathrm{d}\mathbf{r} _{i}(\ldots)\). The spin and isospin variables are suppressed in what follows. As a result, the first-order potential in the impulse approximation including the recoil of the struck nucleon is given by the Fermi motion integral: \[U^{(1)}(\mathbf{k}^{\prime},\mathbf{k})=\int\frac{\mathrm{d}\mathbf{p}}{(2 \pi)^{3}}\gamma\,\operatorname{Tr}\left[\rho(\mathbf{p}-\mathbf{q}/2;\mathbf{p}+\mathbf{q}/2)\right. \\ \times\left.t_{\mathrm{2cm}}(\mathbf{k}^{\prime}_{\mathrm{2cm}}(\mathbf{k} ^{\prime},\mathbf{p}-\mathbf{q}/2),\mathbf{k}_{\mathrm{2cm}}(\mathbf{k},\mathbf{p}+\mathbf{q}/2)\right], \tag{35}\] where \(\mathbf{q}=\mathbf{k}^{\prime}-\mathbf{k}\) and \(\gamma\) is given by Eq. (17). The integration over \(\mathbf{p}\) in Eq. (35) requires non-diagonal elements of the one-body density matrix which are model dependent. Moreover, the proper treatment of the Fermi averaging should also take into account the binding effects. To simplify the problem, one can treat the nucleon Fermi motion approximately by evaluating the pion-nucleon amplitude at the effective initial and final nucleon momenta \[\mathbf{p}_{\mathrm{eff}}=\frac{\mathbf{q}}{2}-\frac{\mathbf{k}^{\prime}+\mathbf{k}}{2A}, \quad\text{and}\quad\mathbf{p}^{\prime}_{\mathrm{eff}}=-\frac{\mathbf{q}}{2}-\frac{ \mathbf{k}^{\prime}+\mathbf{k}}{2A}, \tag{36}\] respectively. This result was obtained for elastic nucleon-deuteron scattering (for \(A=2\)) in Ref. [33]. 
The terms proportional to \(A^{-1}\) arise from the correct treatment of the target recoil. In this so-called _optimized factorization approximation_ we arrive at \[U^{(1)}(\mathbf{k}^{\prime},\mathbf{k})=\\ \gamma\operatorname{Tr}\left[\rho(\mathbf{q})t_{\mathrm{2cm}}(\mathbf{k}^{\prime}_{\mathrm{2cm}}(\mathbf{k}^{\prime},\mathbf{p}^{\prime}_{\mathrm{eff}}),\mathbf{k}_{\mathrm{2cm}}(\mathbf{k},\mathbf{p}_{\mathrm{eff}})\right], \tag{37}\] with the nuclear form factor \[\rho(\mathbf{q})=A\!\int\left(\prod_{i=1}^{A-1}\mathrm{d}\mathbf{\xi}_{i}\right)e^{i\frac{A-1}{A}\mathbf{q}\cdot\mathbf{\xi}_{A-1}}|\Psi_{0}(\mathbf{\xi}_{1},\ldots,\mathbf{\xi}_{A-1})|^{2} \tag{38}\] normalized to \(\rho(0)=A\). Galilean-invariant Jacobi coordinates, \(\mathbf{\xi}_{i}\), were introduced in order to eliminate the motion of the nucleus as a whole, as the form factor characterizes the internal structure of the nucleus (see Ref. [34] for details). The factorization approximation is justified by the compensation between the binding potential of the nucleon and the Fermi-motion kinetic energy [27]. Finally, the pion-nuclear potential, Eq. (13), is expressed through the pion-nucleon scattering amplitude as: \[V^{(1)}(\mathbf{k}^{\prime},\mathbf{k})=\mathscr{W}(\mathbf{k}^{\prime},\mathbf{k})\operatorname{Tr}\left[\rho(\mathbf{q})f\left(\mathbf{k}^{\prime}_{\mathrm{2cm}},\mathbf{k}_{\mathrm{2cm}}\right)\right], \tag{39}\] with the phase space factor \[\mathscr{W}(\mathbf{k}^{\prime},\mathbf{k})=\sqrt{\frac{\mathscr{M}(k^{\prime})\mathscr{M}(k)}{\mu(\mathbf{k}^{\prime},\mathbf{p}^{\prime}_{\mathrm{eff}})\mu(\mathbf{k},\mathbf{p}_{\mathrm{eff}})}}, \tag{40}\] where \(\mu(\mathbf{k},\mathbf{p})=\omega(k)E_{N}(p)/W(\mathbf{k},\mathbf{p})\) and we imply \(\mathbf{k}_{\mathrm{2cm}}=\mathbf{k}_{\mathrm{2cm}}(\mathbf{k},\mathbf{p}_{\mathrm{eff}})\). For spin- and isospin-zero nuclei, only the spin- and isospin-independent part of the scattering amplitude, Eq. (20), contributes to the first-order potential: \[V^{(1)}(\mathbf{k}^{\prime},\mathbf{k})=\bar{\mathscr{W}}(\mathbf{k}^{\prime},\mathbf{k})\left[b_{0}+c_{0}\,\mathbf{k}^{\prime}_{\mathrm{2cm}}\cdot\mathbf{k}_{\mathrm{2cm}}\right]\rho(\mathbf{q}), \tag{41}\] where \(\bar{\mathscr{W}}(\mathbf{k}^{\prime},\mathbf{k})=\mathscr{W}(\mathbf{k}^{\prime},\mathbf{k})v(k^{\prime}_{\mathrm{2cm}})v(k_{\mathrm{2cm}})/v^{2}(k_{\mathrm{0,2cm}})\). Note that the scattering parameters \(b_{0}\) and \(c_{0}\) are derived at the pion-nucleon c.m. energy given by Eq. (14) for on-shell momenta and thus depend on the scattering angle. We extract the nuclear form factor \(\rho(\mathbf{q})\) from the corresponding nuclear charge form factor determined through elastic electron scattering (see Appendix B for details). Note, in our calculation, besides the most important \(s\)- and \(p\)-wave terms, we also include the \(d\)-wave contribution in the same manner. 
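As an illustration of how Eq. (41) is assembled in practice, the sketch below combines given \(s\)- and \(p\)-wave coefficients with a nuclear form factor. The Gaussian form factor and the numerical values of \(b_{0}\), \(c_{0}\) are placeholders (the actual calculation extracts \(\rho(\mathbf{q})\) from electron-scattering data and \(b_{0}\), \(c_{0}\) from SAID), and the kinematic factor \(\bar{\mathscr{W}}\) and the boost to \(\mathbf{k}_{\mathrm{2cm}}\) are omitted for brevity, so the sketch keeps only the structure \([b_{0}+c_{0}\,\mathbf{k}^{\prime}\cdot\mathbf{k}]\rho(\mathbf{q})\) evaluated with pion-nucleus c.m. momenta.

```python
import numpy as np

A_MASS_NUMBER = 12          # e.g. 12C; illustrative
B_RMS = 2.4                 # fm, Gaussian size parameter (placeholder)

def form_factor(q):
    """Placeholder Gaussian nuclear form factor, normalized to rho(0) = A.
    In the actual calculation rho(q) is taken from electron scattering."""
    return A_MASS_NUMBER * np.exp(-(q * B_RMS)**2 / 6.0)

def v1_schematic(k_out, k_in, b0, c0):
    """Spin/isospin-zero first-order potential, schematic form of Eq. (41):
    [b0 + c0 k'.k] rho(q).  The factor W-bar and the boost of the momenta
    to the pion-nucleon c.m. frame are omitted in this sketch."""
    k_out, k_in = np.asarray(k_out, float), np.asarray(k_in, float)
    q = np.linalg.norm(k_out - k_in)
    return (b0 + c0 * np.dot(k_out, k_in)) * form_factor(q)

# Placeholder amplitude parameters (fm and fm^3), momenta in fm^-1.
b0, c0 = -0.03 + 0.01j, 0.6 + 0.45j
k_in = np.array([0.0, 0.0, 1.0])
k_out = np.array([0.5, 0.0, 0.87])
print(v1_schematic(k_out, k_in, b0, c0))
```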
In calculating the second-order correction for the kinetic energies larger than around \(30\,\mathrm{MeV}\) considered in this work, we neglect the nuclear excitation energies in comparison with energies of the pion-nucleus system intermediate states. In this way, the excited system propagator is approximated by the ground state one, \(\hat{G}_{\alpha^{*}}\approx\hat{G}_{0}\). Correspondingly, the second-order part of the pion-nucleus potential becomes \[\langle\Psi_{0}|\hat{U}^{(2)}|\Psi_{0}\rangle=A(A-1)\langle\Psi_{0}|\hat{t}_{2}\hat{G}_{0}\hat{P}_{0}\hat{t}_{1}|\Psi_{0}\rangle. \tag{42}\] Substituting the projection operator explicitly, we arrive at \[\langle\Psi_{0}|\hat{U}^{(2)}|\Psi_{0}\rangle=A(A-1)\left[\langle\Psi_{0}|\hat{t}_{2}\hat{G}_{0}\hat{t}_{1}|\Psi_{0}\rangle\right.\\ \left.-\langle\Psi_{0}|\hat{t}_{2}|\Psi_{0}\rangle\hat{G}_{0}\langle\Psi_{0}|\hat{t}_{1}|\Psi_{0}\rangle\right]. \tag{43}\] Figure 2: Diagram representation of the second-order part of the pion-nuclear potential. According to Eq. (37) for the first-order potential, the second term of Eq. (43) in momentum space becomes: \[\langle\pi(\mathbf{k}^{\prime}),\Psi_{0}|\hat{t}_{2}|\Psi_{0}\rangle\hat{G}_{0}\langle\Psi_{0}|\hat{t}_{1}|\pi(\mathbf{k}),\Psi_{0}\rangle=\\ \frac{1}{A^{2}}\int\frac{\mathrm{d}\mathbf{k}^{\prime\prime}}{(2\pi)^{3}}\operatorname{Tr}\left[t_{2}(\mathbf{k}^{\prime},\mathbf{k}^{\prime\prime})\rho(\mathbf{k}^{\prime}-\mathbf{k}^{\prime\prime})\right]G_{0}(\mathbf{k}^{\prime\prime})\\ \times\operatorname{Tr}\left[t_{1}(\mathbf{k}^{\prime\prime},\mathbf{k})\rho(\mathbf{k}^{\prime\prime}-\mathbf{k})\right]. \tag{44}\] Similarly, the first term in Eq. (43) acquires the form \[\langle\pi(\mathbf{k}^{\prime})\Psi_{0}|\hat{t}_{2}\hat{G}_{0}\hat{t}_{1}|\pi(\mathbf{k})\Psi_{0}\rangle=\frac{1}{A(A-1)}\int\frac{\mathrm{d}\mathbf{k}^{\prime\prime}}{(2\pi)^{3}}G_{0}(\mathbf{k}^{\prime\prime})\\ \times\operatorname{Tr}\left[t_{2}(\mathbf{k}^{\prime},\mathbf{k}^{\prime\prime})t_{1}(\mathbf{k}^{\prime\prime},\mathbf{k})\rho_{2}(\mathbf{k}^{\prime}-\mathbf{k}^{\prime\prime},\mathbf{k}^{\prime\prime}-\mathbf{k})\right], \tag{45}\] where \(\rho_{2}(\mathbf{q}_{1},\mathbf{q}_{2})\) is the Fourier transform, \[\rho_{2}(\mathbf{q}_{1},\mathbf{q}_{2})=\int\mathrm{d}\mathbf{r}_{1}\,\mathrm{d}\mathbf{r}_{2}\,e^{-i(\mathbf{q}_{1}\cdot\mathbf{r}_{1}+\mathbf{q}_{2}\cdot\mathbf{r}_{2})}\rho_{2}(\mathbf{r}_{1},\mathbf{r}_{2}), \tag{46}\] of the two-body density function \[\rho_{2}(\mathbf{r}_{1},\mathbf{r}_{2})=A(A-1)\!\int\left(\prod_{i=3}^{A}\mathrm{d}\mathbf{r}_{i}\right)\\ \times\Psi_{0}^{\dagger}(\mathbf{r}_{1},\dots,\mathbf{r}_{A})\Psi_{0}(\mathbf{r}_{1},\dots,\mathbf{r}_{A}). \tag{47}\] In Eqs. (44) and (45), we imply the same convention as in Eq. (33), omitting spin and isospin variables. The nuclear two-body density \(\rho_{2}(x_{1},x_{2})\) characterizes the probability of finding one nucleon with \(\sigma_{1}\) and \(\tau_{1}\) at \(\mathbf{r}_{1}\) and another nucleon with \(\sigma_{2}\) and \(\tau_{2}\) at \(\mathbf{r}_{2}\), while all the other nucleons have arbitrary positions, spins, and isospins. We imply that \(\rho_{2}\) is normalized to \(A(A-1)\). As can be seen from Eqs. (43-45), the second-order part of the optical potential depends directly on the nucleon-nucleon correlations within the nucleus.
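The propagator \(G_{0}(\mathbf{k}^{\prime\prime})\propto 1/(k_{0}^{2}-k^{\prime\prime 2}+i\varepsilon)\) entering Eqs. (44) and (45) makes the intermediate-momentum integrals singular at the on-shell point. One standard numerical treatment is pole subtraction; the sketch below illustrates it for the radial part of such an integral, with a smooth test function standing in for the actual amplitudes (all names and values are illustrative, not taken from the paper).

```python
import numpy as np

def singular_radial_integral(g, k0, kmax=10.0, n=4001):
    """Evaluate the integral of k^2 g(k)/(k0^2 - k^2 + i0) over [0, kmax]
    by pole subtraction."""
    k = np.linspace(0.0, kmax, n)
    num = k**2 * g(k) - k0**2 * g(k0)        # vanishes at k = k0 -> smooth integrand
    with np.errstate(divide="ignore", invalid="ignore"):
        integrand = np.where(np.abs(k - k0) > 1e-12, num / (k0**2 - k**2), 0.0)
    smooth_part = np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(k))
    # Subtracted singular piece, added back analytically:
    # the principal value of dk/(k0^2 - k^2) over [0, kmax] is
    # ln((kmax+k0)/(kmax-k0))/(2 k0); the +i0 prescription adds -i*pi/(2 k0)
    # from the pole at k = k0.
    pv = np.log((kmax + k0) / (kmax - k0)) / (2.0 * k0)
    return smooth_part + k0**2 * g(k0) * (pv - 1j * np.pi / (2.0 * k0))

# smooth Gaussian test function; k0 is an arbitrary on-shell momentum in fm^-1
print(singular_radial_integral(lambda k: np.exp(-k**2), k0=1.3))
```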
We introduce two correlation functions: \[C_{\text{ex}}(\mathbf{r}_{1},\mathbf{r}_{2})=\rho(\mathbf{r}_{1})\rho(\mathbf{r}_{2})-\rho_{2}(\mathbf{r}_{1},\mathbf{r}_{2}), \tag{48a}\] \[C_{0}(\mathbf{r}_{1},\mathbf{r}_{2})=C_{\text{ex}}(\mathbf{r}_{1},\mathbf{r}_{2})-\frac{1}{A}\rho(\mathbf{r}_{1})\rho(\mathbf{r}_{2}), \tag{48b}\] which were considered in Ref. [35]1. Footnote 1: Note that different normalizations are used in this work compared to Ref. [35]. The function \(C_{\text{ex}}(\mathbf{r}_{1},\mathbf{r}_{2})\) can be referred to as the "exchange correlation function" because, as demonstrated below, it accounts for the spin and isospin exchange contributions to pion-nucleon scattering. It is expressed as the exchange sum in terms of individual nucleon wave functions, Eq. (101). Both correlation functions are employed in our calculations, as we do not neglect terms of order \(A^{-1}\) appearing in the calculation. Both correlation functions in momentum space are then obtained by the Fourier transform: \[C_{\text{ex},0}(\mathbf{q}_{1},\mathbf{q}_{2})=\int\mathrm{d}\mathbf{r}_{1}\,\mathrm{d}\mathbf{r}_{2}\,e^{-i(\mathbf{q}_{1}\cdot\mathbf{r}_{1}+\mathbf{q}_{2}\cdot\mathbf{r}_{2})}C_{\text{ex},0}(\mathbf{r}_{1},\mathbf{r}_{2}). \tag{49}\] None of the two-body correlation functions is directly measurable, and a model is required to calculate the second-order correction. A common approach is using the Fermi gas approximation to evaluate the second-order part of the potential [36; 9; 37]. In our calculation, we employ the more realistic harmonic oscillator nuclear shell model (see Appendix B). The explicit forms of \(C_{\text{ex}}(\mathbf{q}_{1},\mathbf{q}_{2})\) and \(C_{0}(\mathbf{q}_{1},\mathbf{q}_{2})\), summed over spin and isospin, are given by Eqs. (109-110). The correlation functions for \({}^{12}\)C in momentum space in the case of \(|\mathbf{q}_{1}|=|\mathbf{q}_{2}|=q\) are shown in Fig. 3. While the difference between \(C_{\text{ex}}\) and \(C_{0}\) is less noticeable in coordinate space, Fig. 3 demonstrates their different behavior in the case of small momentum transfer, which is especially important for the pion-nucleus scattering process. Figure 3: The correlation functions \(C_{\text{ex}}\) and \(C_{0}\) for \({}^{12}\)C in momentum space given by the harmonic oscillator shell model. \(C_{\text{ex}}(\mathbf{q}_{1},\mathbf{q}_{2})\) and \(C_{0}(\mathbf{q}_{1},\mathbf{q}_{2})\) are plotted for \(|\mathbf{q}_{1}|=|\mathbf{q}_{2}|=q\) as a function of \(q\) and the relative angle \(\theta\) between \(\mathbf{q}_{1}\) and \(\mathbf{q}_{2}\). The red dashed curves correspond to \(C_{\text{ex}}\), while the blue solid curves correspond to \(C_{0}\). Figure 4: Diagram representation of the second-order isospin exchange for negative pion scattering. Even for a nucleus with zero spin and isospin, the trace operator in Eq. (45) yields a non-trivial result containing spin- and isospin-dependent parts of the scattering amplitude, Eq. (19). A direct calculation for spin-isospin-zero nuclei yields the following spin and isospin sums entering Eq.
(45): \[\sum_{s,s^{\prime},\tau,\tau^{\prime}=-1/2}^{1/2}\chi_{1}^{\dagger}(s)\chi_{2}^{\dagger}(s^{\prime})\eta_{1}^{\dagger}(\tau)\eta_{2}^{\dagger}(\tau^{\prime})\left[\hat{t}_{2}^{(0)}+\hat{t}_{2}^{(1)}\ \hat{\mathbf{t}}\cdot\hat{\mathbf{\tau}}_{2}+\left(\hat{t}_{2}^{(2)}+\hat{t}_{2}^{(3)}\ \hat{\mathbf{t}}\cdot\hat{\mathbf{\tau}}_{2}\right)\hat{\mathbf{\sigma}}_{2}\cdot\mathbf{n}_{2}\right]\\ \times\left[\hat{t}_{1}^{(0)}+\hat{t}_{1}^{(1)}\ \hat{\mathbf{t}}\cdot\hat{\mathbf{\tau}}_{1}+\left(\hat{t}_{1}^{(2)}+\hat{t}_{1}^{(3)}\ \hat{\mathbf{t}}\cdot\hat{\mathbf{\tau}}_{1}\right)\hat{\mathbf{\sigma}}_{1}\cdot\mathbf{n}_{1}\right]\eta_{1}(\tau^{\prime})\eta_{2}(\tau)\chi_{1}(s^{\prime})\chi_{2}(s)\\ =4\left[\hat{t}_{2}^{(0)}\hat{t}_{1}^{(0)}+2\,\hat{t}_{2}^{(1)}\hat{t}_{1}^{(1)}+\left(\hat{t}_{2}^{(2)}\hat{t}_{1}^{(2)}+2\,\hat{t}_{2}^{(3)}\hat{t}_{1}^{(3)}\right)\mathbf{n}_{1}\cdot\mathbf{n}_{2}\right], \tag{50}\] where \(\chi(s)\) (\(\eta(\tau)\)) is the nucleon spinor (isospinor), \(\mathbf{n}_{1}=\mathbf{k}\times\mathbf{k}^{\prime\prime}/|\mathbf{k}\times\mathbf{k}^{\prime\prime}|\) and \(\mathbf{n}_{2}=\mathbf{k}^{\prime\prime}\times\mathbf{k}^{\prime}/|\mathbf{k}^{\prime\prime}\times\mathbf{k}^{\prime}|\). The first term on the right-hand side of Eq. (50) consists of the spin-isospin averaged part \(\hat{t}^{(0)}\) of the scattering amplitudes. The remaining terms involve the spin- and isospin-dependent parts and describe intermediate spin and isospin exchange. In Fig. 4, we show a diagrammatic representation of the isospin exchange for negative pion scattering. In the following, we include the global factor 4, which arises due to spin-isospin summation, in the correlation functions. Finally, combining the above results, we express the second-order part of the potential for spin- and isospin-zero nuclei in terms of the correlation functions: \[U^{(2)}(\mathbf{k}^{\prime},\mathbf{k})=-\int\frac{\mathrm{d}\mathbf{k}^{\prime\prime}}{(2\pi)^{3}}G_{0}(\mathbf{k}^{\prime\prime})\left[t^{(0)}(\mathbf{k}^{\prime},\mathbf{k}^{\prime\prime})t^{(0)}(\mathbf{k}^{\prime\prime},\mathbf{k})C_{0}(\mathbf{k}^{\prime}-\mathbf{k}^{\prime\prime},\mathbf{k}^{\prime\prime}-\mathbf{k})\right.\\ \left.+\left(2t^{(1)}(\mathbf{k}^{\prime},\mathbf{k}^{\prime\prime})t^{(1)}(\mathbf{k}^{\prime\prime},\mathbf{k})+\left(t^{(2)}(\mathbf{k}^{\prime},\mathbf{k}^{\prime\prime})t^{(2)}(\mathbf{k}^{\prime\prime},\mathbf{k})+2t^{(3)}(\mathbf{k}^{\prime},\mathbf{k}^{\prime\prime})t^{(3)}(\mathbf{k}^{\prime\prime},\mathbf{k})\right)\,\mathbf{n}_{1}\cdot\mathbf{n}_{2}\right)C_{\rm ex}(\mathbf{k}^{\prime}-\mathbf{k}^{\prime\prime},\mathbf{k}^{\prime\prime}-\mathbf{k})\right] \tag{51}\] or equivalently \[V^{(2)}(\mathbf{k}^{\prime},\mathbf{k})=\int\frac{\mathrm{d}\mathbf{k}^{\prime\prime}}{2\pi^{2}}\frac{\mathscr{W}(\mathbf{k}^{\prime},\mathbf{k}^{\prime\prime})\mathscr{W}(\mathbf{k}^{\prime\prime},\mathbf{k})}{k_{0}^{2}-{k^{\prime\prime}}^{2}+i\varepsilon}\left[f^{(0)}(\mathbf{k}^{\prime},\mathbf{k}^{\prime\prime})f^{(0)}(\mathbf{k}^{\prime\prime},\mathbf{k})C_{0}(\mathbf{k}^{\prime}-\mathbf{k}^{\prime\prime},\mathbf{k}^{\prime\prime}-\mathbf{k})\right.\\ \left.+\left(2f^{(1)}(\mathbf{k}^{\prime},\mathbf{k}^{\prime\prime})f^{(1)}(\mathbf{k}^{\prime\prime},\mathbf{k})+\left(f^{(2)}(\mathbf{k}^{\prime},\mathbf{k}^{\prime\prime})f^{(2)}(\mathbf{k}^{\prime\prime},\mathbf{k})+2f^{(3)}(\mathbf{k}^{\prime},\mathbf{k}^{\prime\prime})f^{(3)}(\mathbf{k}^{\prime\prime},\mathbf{k})\right)\,\mathbf{n}_{1}\cdot
\mathbf{n}_{2}\right)C_{\rm ex}(\mathbf{k}^{\prime}-\mathbf{k}^{\prime\prime},\mathbf{k}^{ \prime\prime}-\mathbf{k})\right]. \tag{52}\] The first term in Eq. (52) describing spin-isospin averaged individual nucleon scattering on two nucleons is similar to Eq. (6.5) of Foldy and Walecka [38]. The term proportional to \(\hat{f}_{1}^{(2)}\hat{f}_{2}^{(2)}\ (\hat{f}_{1}^{(1)}\hat{f}_{2}^{(1)})\) corresponds to spin (isospin) exchange between the intermediate pion and two nucleons, keeping the scattered nucleus in the ground state (see Fig. 4 for an example). Similarly, the term \(\hat{f}_{1}^{(3)}\hat{f}_{2}^{(3)}\) describes the simultaneous exchange of both spin and isospin. At the initial step of our calculation, the Pauli principle was included in the pion-nucleus potential through the antisymmetric nature of the nucleon wave functions. However, for the first-order potential, Eq. (37), this property was lost after the integration over nucleon momenta within the factorization approximation [39; 40]. The obtained structure of the second-order correction, Eq. (52), explicitly involves two types of the two-nucleon correlation function and arises primarily from the Pauli principle. This can, e.g., be understood by considering the process with zero momentum transfer to each of the nucleons involved in the second-order scattering. As shown in Fig. 3, for this situation, \(C_{0}(0,0)=0\) and \(C_{\rm ex}(0,0)=A\). As a result, the first term in Eq. (52), describing the process which does not change the nucleon quantum numbers, makes zero contribution to the integral. In contrast, the second term is proportional to the non-zero \(C_{\rm ex}\) correlation function. It corresponds to the situation when, after the pion scattering on the first nucleon, this nucleon changes its spin and/or isospin, acquiring quantum numbers already occupied by another nucleon. In this way, the second-order part of the potential, Eq. (52), introduces the Pauli corrections to the model. Fig. 5 demonstrates the first- and second-order parts of the pion-nucleus potential for on-shell forward scattering on \({}^{12}\)C. As Pauli blocking limits the phase space available to the struck nucleon, the second-order correction to the potential leads to a reduction of the imaginary part of the potential. Around \(T_{\rm lab}=160\) MeV, the struck nucleon on-shell momentum becomes close to Fermi momentum, \(p_{F}\approx 1.36\,\mathrm{fm}^{-1}\), and the imaginary part of Eq. (52) changes sign. In Appendix C, we further discuss the second-order correction, Eq. (52). ### Medium modifications An essential part of the pion-nucleus total cross section for all energies up to 300 MeV comes from pion absorption [41]. In the nuclear medium, the pion can be absorbed by one or more nucleons, which indicates that intermediate states without a pion should also contribute to the pion-nucleus effective potential. This mechanism is usually referred to as "true absorption" to distinguish it from the flux loss due to scattering through many open inelastic channels. However, even zero-energy pion absorption on a single nucleon results in a momentum \(\sqrt{2m_{N}m_{\pi}}\approx 2.6\,\text{fm}^{-1}\) to be carried off by the nucleon. This value is very large for a nucleon within a nucleus, which means the single-nucleon absorption is significantly suppressed [42]. As a result, the true absorption originates from many-body mechanisms. 
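For orientation, the momentum quoted above for single-nucleon absorption follows from simple kinematics; inserting standard values \(m_{N}\approx 939\,\mathrm{MeV}\), \(m_{\pi}\approx 139.6\,\mathrm{MeV}\) and \(\hbar c\approx 197.3\,\mathrm{MeV\,fm}\) (numbers quoted here only for this estimate), \[\sqrt{2m_{N}m_{\pi}}\approx\sqrt{2\times 939\times 139.6}\,\mathrm{MeV}\approx 512\,\mathrm{MeV}\approx\frac{512}{197.3}\,\mathrm{fm}^{-1}\approx 2.6\,\mathrm{fm}^{-1}.\]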
Early models of pion absorption hypothesized dominance of two-nucleon pion absorption [43], where the pion is scattered on one nucleon and then absorbed by another. Following this assumption, the pioneering work of Ref. [9] introduced additional phenomenological terms proportional to the square of the nuclear density in the pion-nucleus potential to allow for true absorption. However, it was shown both experimentally [44] and theoretically [45] that the absorption process is more complicated and the three-nucleon mechanism yields a significant fraction of the total absorption cross section in the resonance region and above. As a result of the above, the pion-nucleon interaction is significantly modified in the presence of surrounding nucleons. In general, this means that the medium-modified scattering coefficients \(b_{0,1}\), \(c_{0,1}\) and \(s_{0,1}\) are not only functions of the reaction energy but also acquire a dependence on nuclear density \(\rho(r)\). Even if the exact form of this dependence were known, its inclusion in the momentum space approach would not be trivial. To solve this difficulty, we need to use the fact that the pion interacts mainly with a limited part of the nucleus due to strong absorption, which results in the existence of an effective nuclear density. Ref. [46] studied the correlations between the \(\rho(r)\) and \(\rho^{2}(r)\) terms of the pion-nucleus optical potential from the threshold to \(T_{\text{lab}}=50\,\text{MeV}\). It was proven that an effective density \(\rho_{e}\) could be defined such that the substitution \(\rho^{2}(r)\longleftrightarrow\rho_{e}\rho(r)\) would result in approximately the same binding energies and scattering amplitudes for various nuclei in the range from \({}^{12}\)C to \({}^{208}\)Pb. Even though we only fit in the range 80-180 MeV pion kinetic energy, we still wish to check our model predictions at lower energies. For this reason, the in-medium influence on the \(s\)-wave scattering should be considered. Moreover, due to \(s\)-\(p\)-wave interference in the second-order part of the pion-nucleus potential constructed in Sec. IV.2, both isoscalar \(b_{0}\) and isovector \(b_{1}\) parts of the \(s\)-wave pion-nucleon scattering amplitude are substantial even at high energies (see Appendix C for details). In the following, we subsequently describe modifications of both \(s\)- and \(p\)-wave pion-nucleon scattering. The primary effect of the Pauli exclusion principle, which reduces the phase space accessible to the struck nucleon, is incorporated by explicitly calculating the second-order correction to the pion-nuclear potential as described in Sec. IV.2. #### iv.3.1 \(P_{33}\) modification In our approach, we assume that for the \(p\)-wave interaction, only the resonant \(P_{33}\) channel is changed in the nuclear medium, keeping all other small partial-wave amplitudes at their free values taken from SAID. The interaction of the \(\Delta\)-isobar with the surrounding nucleons significantly modifies the \(f_{33}^{1}\) partial amplitude. A comparably long lifetime of \(\Delta\) on the nuclear scale and its mean free path within a nucleus of around 1 fm suggest that the \(\Delta\) is a nuclear quasiparticle that may still be treated effectively as a separate baryonic species without considering the intrinsic quark dynamics. The open in Figure 5: The on-shell forward pion-nucleus potential for \({}^{12}\)C as a function of pion lab kinetic energy for parameters given by fit 1 in Table 2. 
The upper and lower panels are for real and imaginary parts, respectively. The solid red curves represent the first-order part, \(V^{(1)}(\mathbf{k}_{0},\mathbf{k}_{0})\) given by Eq. (41), with the on-shell momentum \(\mathbf{k}_{0}\) corresponding to \(T_{\text{lab}}\). The dashed green curves correspond to the second-order part, \(V^{(2)}(\mathbf{k}_{0},\mathbf{k}_{0})\) given by Eq. (52), and the dash-dotted orange are the sum of these two contributions. elastic channels involving many-body interactions, e.g., the two-body absorption (\(\pi NN\to\Delta N\to NN\)) and three-body absorption [45], considerably affect the \(\Delta\)-resonance decay width inside nuclear matter. As a result, we consider the in-medium interactions effectively by a renormalization of the intermediate \(\Delta\) propagator by the complex self-energy \(\Sigma_{\Delta}\) function: \[K_{33}^{1(\Delta)}(\Sigma_{\Delta})=\frac{1}{k_{0,2\mathrm{cm}}}\frac{m_{ \Delta}}{m_{\Delta}+W}\frac{\Gamma_{\Delta}}{m_{\Delta}+\Sigma_{\Delta}-W}. \tag{53}\] In this approach, the dressed resonance leads to a complex \(K_{33}^{1}\) matrix element in which the effective many-body \(p\)-wave absorption is automatically included in the model. The \(\Delta\) self-energy \(\Sigma_{\Delta}\) in a finite nucleus is, in general, non-local [21]. However, we are looking for a simple phenomenological parametrization of \(\Sigma_{\Delta}\), which would still provide a reasonable description of the data. Since the real part of \(\Sigma_{\Delta}\) has a weak energy dependence [47], it is often approximated to be constant. In contrast, the imaginary part of \(\Sigma_{\Delta}\) is regularly considered as a function of the pion energy [48]. However, we have found that including the second-order part of the pion-nucleus potential, Eq. (52), allows us to neglect the energy dependence of \(\mathrm{Im}\,\Sigma_{\Delta}\). As a result, we treat \(\mathrm{Re}\,\Sigma_{\Delta}\) and \(\mathrm{Im}\,\Sigma_{\Delta}\) as two energy-independent \(p\)-wave model parameters determined by fitting the experimental data for pion-carbon scattering in Sec. V.2. The pion absorption process by a nucleus, unlike scattering, can occur even at pion energies below its mass. While the \(\Delta\) width, Eq. (28), starts at \(\omega=m_{\pi}\), the imaginary part of the \(\Delta\) self-energy inside nuclear matter starts at \(\omega=0\)[49]. As a result, we expect the constant \(\mathrm{Im}\,\Sigma_{\Delta}\) assumption to be applicable not only in the \(\Delta\)-resonance region but also at low energies. #### v.2.2 Isoscalar \(s\)-wave modification The \(s\)- and \(p\)-wave true absorption within the optical potential formalism [9] is typically characterized by two complex parameters denoted as \(B_{0}\) and \(C_{0}\), respectively. It is assumed to be based on a two-nucleon mechanism. As was pointed out above, we effectively take into account various inelastic in-medium \(p\)-wave channels by introducing the \(\Delta\) self-energy. Thereby, we expect \(\Sigma_{\Delta}\) to incorporate absorption corrections associated with \(C_{0}\). Further, we limit our consideration of analyses based on the optical model of Ref. [9] to only the \(s\)-wave part of the potential: \[U^{(s)}(r)\propto b_{0}\rho(r)+B_{0}\rho^{2}(r), \tag{54}\] where phase space factors were omitted for simplicity, and the first term here corresponds to the Fourier-transformed first term in Eq. (41). Due to the correlation between \(b_{0}\) and \(B_{0}\) pointed out in Ref. 
[46], the two terms can be lumped together, resulting in an effective modification of the isoscalar parameter \(b_{0}\): \[U^{(s)}(r)\propto(b_{0}+\Delta b_{0})\rho(r), \tag{55}\] where \(\Delta b_{0}=B_{0}\rho_{e}\). In our model, we assume the following in-medium modification of the isoscalar scattering parameter: \[b_{0}^{\mathrm{bound}}(T_{\mathrm{lab}})=b_{0}^{\mathrm{free}}(T_{\mathrm{lab}})+\Delta b_{0}(T_{\mathrm{lab}}), \tag{56}\] where \(b_{0}^{\mathrm{free}}(T_{\mathrm{lab}})\) is given by Eq. (21a), and the complex parameter \(\Delta b_{0}\) effectively takes into account not only true absorption but also all possible in-medium modifications. Comparing Eqs. (55) and (56), we see that pionic atom analyses with the \(s\)-wave part of the potential given by Eq. (54) can provide us with information about the threshold value of \(\Delta b_{0}\) (see more detailed discussion in Appendix C). Using the value \(B_{0}=0.189\;\mathrm{fm}^{4}\) from Ref. [50], we arrive at the following result for the imaginary part of the in-medium isoscalar correction: \[\mathrm{Im}\,\Delta b_{0}(0)=\frac{1+m_{\pi}/2m_{N}}{1+m_{\pi}/m_{N}}\rho_{e}\,\mathrm{Im}\,B_{0}(0)=0.017\;\mathrm{fm}, \tag{57}\] where we restore the phase space factor and use the \(s\)-wave effective density \(\rho_{e}=0.6\rho_{0}\approx 0.1\,\mathrm{fm}^{-3}\) deduced from the overlapping of pion and nucleus densities for pionic atoms [51]. The resulting imaginary part of \(\Delta b_{0}\) is assumed to be \[\mathrm{Im}\,\Delta b_{0}(T_{\mathrm{lab}})=\mathrm{Im}\,\Delta b_{0}(0)+\alpha_{b_{0}}k_{0,2\mathrm{cm}}(T_{\mathrm{lab}}), \tag{58}\] where \(\alpha_{b_{0}}\) is the effective \(s\)-wave isoscalar slope parameter, determined by the fitting procedure, and \(k_{0,2\mathrm{cm}}(T_{\mathrm{lab}})\) is the on-shell pion-nucleon c.m. momentum corresponding to \(T_{\mathrm{lab}}\). Performing fits with various parameterizations of the real part of \(\Delta b_{0}\), we conclude that while \(\mathrm{Im}\,\Delta b_{0}\) is an important parameter of our model, the resulting \(\mathrm{Re}\,\Delta b_{0}\) is always close to zero and can be neglected. For this reason, we assume \(\Delta b_{0}\) is purely imaginary, given by Eqs. (58) and (57).

#### v.2.3 Isovector \(s\)-wave modification

Our approach includes the in-medium modification of the \(s\)-wave amplitude \(b_{1}\), as it was successfully applied for the \(s\)-wave pionic atom [52; 53] and low-energy pion-nucleus [52; 54] potentials. To lowest order in the chiral expansion, the parameter \(b_{1}\) for the scattering of a pion on a free nucleon in the threshold region is given by the Tomozawa-Weinberg expression [55]: \[b_{1}^{\mathrm{TW}}=-\frac{1}{8\pi f_{\pi}^{2}}\frac{m_{\pi}m_{N}}{m_{\pi}+m_{N}}\approx-0.11\,\mathrm{fm}, \tag{59}\] where \(f_{\pi}=92.2\,\)MeV is the free-space pion decay constant [56]. The value for \(b_{1}\) obtained in this way is very close to the empirical one not only at low energies but also in the resonance region. According to the suggestion by Weise [57; 58], the medium dependence of the pion decay constant \(f_{\pi}\), which is related to the quark condensate, is in the simplest approximation given by a linear function of the nuclear density \[{f_{\pi}^{*}}^{2}(\rho)=f_{\pi}^{2}-\frac{\sigma}{m_{\pi}^{2}}\rho, \tag{60}\] where \(\sigma\) is the pion-nucleon sigma term [59]. As a result, the in-medium threshold parameter \(b_{1}\) is obtained as \[b_{1}^{\rm bound}=\frac{b_{1}^{\rm free}}{1-\sigma\rho/m_{\pi}^{2}f_{\pi}^{2}}.
\tag{61}\] This simple model successfully described both pionic atoms [60; 51; 61] and low-energy pion-nucleus scattering [62]. For energies above the threshold, \(b_{1}\) is not constant but a slowly varying function of energy, which is, however, still close to its threshold value even in the \(\Delta\)-resonance region. In our analysis, we assume the following weak energy dependence of \(b_{1}\): \[b_{1}^{\rm bound}(T_{\rm lab})=b_{1}^{\rm free}(T_{\rm lab})+\Delta b_{1}, \tag{62}\] where \(b_{1}^{\rm free}(T_{\rm lab})\) is given by Eq. (21b) and the energy-independent in-medium correction is taken from the pionic atom \[\Delta b_{1}=b_{1}^{\rm free}(0)\frac{\sigma\rho_{e}/m_{\pi}^{2}f_{\pi}^{2}}{ 1-\sigma\rho_{e}/m_{\pi}^{2}f_{\pi}^{2}}=-0.044\,{\rm fm}, \tag{63}\] where following Ref. [63]\(\sigma=57\,\)MeV is taken and \(b_{1}^{\rm free}(0)\approx-0.122\,\)fm [64]. The resulting value of \(b_{1}\) at the effective density \(\rho_{e}\) is in quantitative agreement with microscopic [65] and chiral calculations [66], and the recent deeply bound pionic atoms analysis [50]. The effect of double scattering to higher order was shown to be a minor correction [67]. ## V Results and discussion In this section, we apply the model developed in Sec. IV to fit \(\pi^{\pm}\)-\({}^{12}\)C scattering data. As a result of the fit, we determine our model's three energy-independent real parameters: the real and imaginary parts of the effective \(\Delta\)-resonance self-energy, \({\rm Re}\,\Sigma_{\Delta}\) and \({\rm Im}\,\Sigma_{\Delta}\) entering Eq. (53), and the slope of the imaginary \(s\)-wave isoscalar amplitude, \(\alpha_{b_{0}}\) in Eq. (58). Subsequently, the same fixed parameters are used to compare our predictions for the pion scattering on \({}^{16}\)O, \({}^{28}\)Si and \({}^{40}\)Ca with available experimental data. ### Observables The Coulomb interaction significantly influences the charged pion scattering process. The differential elastic cross section is given by: \[\frac{d\sigma}{d\Omega}(\theta)=\left|F_{C,p}(\theta)+F_{NC}(\theta)\right|^{2}, \tag{64}\] where we have separated the Coulomb distorted strong-interaction amplitude, \(F_{NC}\), from the singular point-charge Coulomb amplitude \[F_{C,p}(\theta)=-\frac{\eta_{c}}{2k_{0}\sin^{2}(\theta/2)}\exp\{2i[\sigma_{0} -\eta_{c}\log\sin(\theta/2)]\}, \tag{65}\] with the Lorentz-invariant Sommerfeld parameter \(\eta_{c}=\alpha ZZ_{\pi}\omega_{\rm lab}/k_{\rm lab}\), where \(Z(Z_{\pi})\) is the nucleus (pion) charge. The Coulomb phase shifts \(\sigma_{l}\) are defined as \[e^{2i\sigma_{l}}=\frac{\Gamma(1+l+i\eta_{c})}{\Gamma(1+l-i\eta_{c})}, \tag{66}\] with the Euler's gamma function \(\Gamma\). The Coulomb-nuclear interference term is split in partial waves as: \[F_{NC}(\theta)=\sum_{l}(2l+1)e^{2i\sigma_{l}}F_{l}\,P_{l}(\cos\theta), \tag{67}\] where \[F_{l}=\frac{1}{2}\int\mathrm{d}\cos\theta\,F(\mathbf{k}^{\prime},\mathbf{k})P_{l}( \cos\theta) \tag{68}\] and \(\cos\theta=\mathbf{k}^{\prime}\cdot\mathbf{k}/(k^{\prime}k)\). The full partial-wave amplitudes \(F_{l}\) depend not purely on the hadronic interaction, Eq. (12), but also on the short-range part of the Coulomb potential due to the nuclear charge distribution and long-range Coulomb effects. To account for this nuclear-Coulomb interference, we apply the matching method of Vincent and Phatak [68] and an effective Coulomb modification of the reaction energy, see Appendix A for details. 
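As an illustration of Eqs. (65) and (66), the Coulomb phases and the point-charge amplitude can be evaluated directly from the complex log-gamma function; the sketch below is a minimal implementation, and the kinematical inputs are placeholders rather than values used in the analysis.

```python
import numpy as np
from scipy.special import loggamma

def coulomb_phase(l, eta):
    """sigma_l from exp(2i sigma_l) = Gamma(1+l+i eta)/Gamma(1+l-i eta), Eq. (66)."""
    return np.imag(loggamma(1.0 + l + 1j * eta))

def point_coulomb_amplitude(theta, k0, eta):
    """Point-charge Coulomb amplitude F_C,p(theta) of Eq. (65)."""
    sigma0 = coulomb_phase(0, eta)
    s = np.sin(theta / 2.0)
    return -eta / (2.0 * k0 * s**2) * np.exp(2j * (sigma0 - eta * np.log(s)))

# placeholder kinematics: negative pion (eta_c < 0), k0 in fm^-1
theta = np.deg2rad(30.0)
print(point_coulomb_amplitude(theta, k0=1.1, eta=-0.05))
```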
Besides differential cross sections, experimental measurements also provide Coulomb-subtracted angle-integrated elastic and total cross sections. The direct calculation provides the angle-integrated elastic cross section in the form \[\sigma^{\rm El}=4\pi\sum_{l}(2l+1)\left|F_{l}\right|^{2}. \tag{69}\] Due to the optical theorem, the total cross section can be derived as \[\sigma^{\rm Tot}=\frac{4\pi}{k_{0}}\sum_{l}(2l+1)\,{\rm Im}[F_{l}]. \tag{70}\] The pion-nuclear potential is a non-hermitian operator giving rise to the reaction channel with the corresponding cross section, which can be calculated as \[\sigma^{\rm R}=\sigma^{\rm Tot}-\sigma^{\rm El}. \tag{71}\] The reaction cross section \(\sigma^{\rm R}\) includes quasielastic scattering, charge exchange, and true pion absorption. The total cross section is significant for our analysis since it has a different sensitivity to the imaginary part of the potential as compared with the differential elastic cross section.

### Fit to \({}^{12}\)C data

Various groups intensively studied pion scattering on carbon from the 1970s through the 1990s. Table 1 summarizes the \(\pi^{\pm}\)-\({}^{12}\)C scattering data used in our analysis. The dataset includes measurements of the total, angle-integrated elastic, reaction, and differential elastic cross sections done at different facilities: Schweizerisches Institut für Nuklearforschung (SIN), Canada's particle accelerator centre (TRIUMF), Los Alamos Meson Physics Facility (LAMPF), Rutherford Appleton Laboratory (RAL), and the European Organization for Nuclear Research (CERN). As our aim is the extraction of the effective \(\Delta\) resonance self-energy, in the fitting procedure, we only use the data having strong sensitivity to the \(\Delta\) properties. We choose to fit the data in the energy range of 80-180 MeV pion lab kinetic energy, corresponding to the region up to the \(\Delta\)-resonance excitation energy on a nucleon. Furthermore, our treatment of the Coulomb interaction (the Coulomb energy shift described in Appendix A) relies on the small momentum transfer approximation. Thereby, we limit the fitting of the differential cross section data to momentum transfers \(q\leq 1.5\,\mathrm{fm}^{-1}\). Since \(\sigma^{\mathrm{Tot}}\), \(\sigma^{\mathrm{R}}\), and \(\sigma^{\mathrm{El}}\) are related through Eq. (71), we include in the fit only \(\sigma^{\mathrm{Tot}}\) and \(\sigma^{\mathrm{El}}\) if all three observables are provided. The best fit is found by minimizing the \(\chi^{2}\) defined as \[\chi^{2} =\sum_{i}\sum_{j}^{n_{i}}\left[\frac{1}{n_{i}}\left(\frac{d\sigma_{j}^{\mathrm{Data}_{i}}-N_{i}^{-1}d\sigma_{j}}{\Delta d\sigma_{j}^{\mathrm{Data}_{i}}}\right)^{2}\right.\\ \left.+\left(\frac{N_{i}-1}{\Delta N_{i}}\right)^{2}\right]+\sum_{i}\sum_{j}^{n_{i}}\left(\frac{\sigma_{j}^{\mathrm{Data}_{i}}-\sigma_{j}}{\Delta\sigma_{j}^{\mathrm{Data}_{i}}}\right)^{2}, \tag{72}\] where the first (second) term represents a sum over differential (angle-integrated) cross section data sets and \(n_{i}\) is the number of data points in dataset \(i\). Every differential cross section dataset \(d\sigma^{\mathrm{Data}_{i}}\) consists of correlated measurements taken at individual energies and is treated as a single uncorrelated point of the fit.
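The structure of Eq. (72) can be summarized in a short sketch: each differential data set enters with the \(1/n_{i}\) weight, so that it effectively counts as one point, and carries a penalty for its floating normalization \(N_{i}\) (read here as one penalty per data set). The arrays below are synthetic stand-ins, not the data of Table 1.

```python
import numpy as np

def chi2(diff_sets, int_points):
    """chi^2 in the spirit of Eq. (72).

    diff_sets : list of dicts with keys
        'data', 'err'  - measured differential cross sections and errors,
        'model'        - model values at the same angles,
        'N', 'dN'      - normalization parameter and its quoted uncertainty.
    int_points: list of (data, err, model) tuples for integrated cross sections.
    """
    total = 0.0
    for d in diff_sets:
        n_i = len(d["data"])
        resid = (d["data"] - d["model"] / d["N"]) / d["err"]
        total += np.sum(resid**2) / n_i + ((d["N"] - 1.0) / d["dN"])**2
    for data, err, model in int_points:
        total += ((data - model) / err)**2
    return total

# synthetic example: one two-point differential set and one total cross section
ds = [{"data": np.array([12.0, 8.5]), "err": np.array([0.6, 0.5]),
       "model": np.array([11.4, 8.9]), "N": 0.98, "dN": 0.05}]
print(chi2(ds, [(210.0, 6.0, 204.0)]))
```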
Since \(\Delta d\sigma_{j}^{\mathrm{Data}}\) contains only the sum of the statistical and the measured background errors, the normalization parameters \(N_{i}\) are included to account for a fully correlated component between the data points of each differential cross section dataset (instrumental error). The normalization parameters are allowed to vary, keeping the number of degrees of freedom (ndf) the same. In our formalism, only three energy-independent fitting parameters are entering Eq. (72): the \(\Delta\) self-energy parameters \(\mathrm{Re}\,\Sigma_{\Delta}\) and \(\mathrm{Im}\,\Sigma_{\Delta}\) in Eq. (53), and the \(s\)-wave isoscalar slope parameter \(\alpha_{b_{0}}\) in Eq. (58). We also tested the possibility of improving our fit by adding model parameters that modify the energy dependence of \(\Sigma_{\Delta}\) and \(b_{0}\). We found that the resulting \(\chi^{2}\)/ndf value can be improved only slightly in this way. However, the strong correlation between the parameters results in large uncertainties, making it impossible to determine the fitted parameters precisely. Moreover, introducing additional parameters does not improve our model predictions beyond the fitting range and for other nuclei. As was mentioned in Sec. IV.1, we also include the \(d\)-wave contribution to the first-order potential besides the traditional \(s\)- and \(p\)-wave terms. This small component does not change the overall energy and momentum behavior of observables in a significant way. However, including of the \(d\)-wave amplitude improves the resulting minimal \(\chi^{2}\) of the fit by about 10%. Note that the observables and fitting parameters are sensitive to the value of the effective bound nucleon mass, which in our calculation is taken as the average of the proton and neutron masses, \(m_{N}=938.92\,\mathrm{MeV}\). Tables 2-4 summarize the fitting results. Two fits were performed: fit 1 with fixed normalization parameters and fit 2 with \(N_{i}\) also being fitted. The obtained \begin{table} \begin{tabular}{c c c c c c} \hline \hline fit & \(\mathrm{Re}\,\Sigma_{\Delta}\) [MeV] & \(\mathrm{Im}\,\Sigma_{\Delta}\) [MeV] & \(\alpha_{b_{0}}\) [fm\({}^{2}\)] & \(\chi^{2}\) & \(\chi^{2}\)/ndf \\ \hline 1 & \(12.9\pm 1.3\) & \(-33.2\pm 0.8\) & \(0.039\pm 0.006\) & 53.4 & 1.67 \\ 2 & \(12.8\pm 1.4\) & \(-33.3\pm 0.9\) & \(0.040\pm 0.006\) & 47.9 & 1.50 \\ \hline \end{tabular} \end{table} Table 2: Potential parameters from fits to \(\pi^{\pm}\)-\({}^{12}\)C scattering data. For both fits \(\mathrm{ndf}=32\). model and normalization parameters are collected in Tables 2 and 3, respectively. The covariance matrix for fit 1 is given in Table 4. As can be seen from Table 2, letting \(N_{i}\) free improves the resulting \(\chi^{2}\) by about 10%, keeping the fitted parameters almost unchanged. The obtained normalization factors in Table 3 are well within the provided experimental normalization uncertainties. The consistency of the results strengthens the reliability of derived results and the robustness of the method. In the following calculations, we will use the parameter set corresponding to fit 1. In Fig. 6, we show the fitted data compared with the obtained theoretical curves corresponding to fit 1. The resulting agreement is especially good for integrated and differential elastic cross sections for \(\theta\leq 60^{\circ}\). 
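For reference, the parameter uncertainties of fit 1 in Table 2 and the correlation matrix of Table 4 can be combined into a covariance matrix in the standard way; the short sketch below only transcribes those published numbers.

```python
import numpy as np

# fit 1 uncertainties: (Re Sigma_Delta [MeV], Im Sigma_Delta [MeV], alpha_b0 [fm^2])
errors = np.array([1.3, 0.8, 0.006])

# correlation matrix of fit 1 (same parameter ordering as above), Table 4
corr = np.array([[1.00, 0.53, 0.22],
                 [0.53, 1.00, -0.40],
                 [0.22, -0.40, 1.00]])

# covariance: cov_ij = corr_ij * sigma_i * sigma_j
cov = corr * np.outer(errors, errors)
print(cov)
```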
Despite the fact that the data for \(q>1.5\,\mathrm{fm}^{-1}\) were not fitted, our model demonstrates a fairly good description of the data even for large angles, except for the dataset at \(100\,\mathrm{MeV}\) Figure 6: Fit to \(\pi^{\pm}\)-\({}^{12}\)C scattering data using the full second-order potential. The top left panel demonstrates the total (red curves and circles), integrated reaction (blue curves and squares), and elastic (green curves and triangles) cross sections. Solid curves and closed markers stand for \(\pi^{+}\); dashed and open markers for \(\pi^{-}\). The vertical dashed lines on the top left panel indicate the fitted energy range. Differential cross sections in the 80-180 MeV range are shown on other panels. The blue circles on the differential cross section plots correspond to the \(q<1.5\,\mathrm{fm}^{-1}\) range, which was fitted; the black circles were not included in \(\chi^{2}\). The dashed vertical lines on \(d\sigma/d\Omega\) plots indicate the zero position of the form factor. Table 1 lists the external data presented in the plots. which seems to be an outlier. The obtained differential cross section at \(100\,\mathrm{MeV}\) significantly undershoots the data for \(\theta\gtrsim 120^{\circ}\). The same discrepancy was also reported in the \(\Delta\)-hole model analysis of Ref. [75] and the phenomenological momentum-space potential approach of Ref. [16] with the \(\rho^{2}(r)\)-dependent second-order term. As seen from the top left panel of Fig. 6, the integrated cross sections are well described outside the fitting range denoted by the vertical dashed lines. The predicted differential cross sections based on fit 1 outside the fitting range are plotted in Fig. 7. The data measured at 65 and \(200\,\mathrm{MeV}\) are well reproduced. Some deviations are seen at 30 and \(50\,\mathrm{MeV}\), which can be fixed by a more precise treatment of the \(s\)-wave medium modifications. In order to better see the improvement originating from the inclusion of the second-order part of the pion-nuclear potential, we perform the same fit with only the first-order potential. The fit results for the integrated and differential elastic cross section are shown in Fig. 8. After minimization, we obtain \(\chi^{2}/\mathrm{ndf}\approx 10\), demonstrating a poor description of the data, which is clearly seen from the plots. ### Comparison with \({}^{16}\)O data Having the model parameters of the pion-nucleus potential fixed from the \(\pi^{\pm}\)-\({}^{12}\)C data fitting, we can further test the predictive power of our model for another \(p\)-shell \begin{table} \begin{tabular}{c|c c c} \hline \hline & \(\mathrm{Re}\,\Sigma_{\Delta}\) & \(\mathrm{Im}\,\Sigma_{\Delta}\) & \(\alpha_{b_{0}}\) \\ \hline \(\mathrm{Re}\,\Sigma_{\Delta}\) & 1 & 0.53 & 0.22 \\ \(\mathrm{Im}\,\Sigma_{\Delta}\) & 0.53 & 1 & \(-0.4\) \\ \(\alpha_{b_{0}}\) & 0.22 & \(-0.4\) & 1 \\ \hline \end{tabular} \end{table} Table 4: The correlation matrix for the fit 1. Figure 7: Comparison of the theoretical prediction based on fit 1 with the \(\pi^{\pm}\)-\({}^{12}\)C scattering data at kinetic energies outside the fitting range. The meaning of the curves is the same as in Fig. 6. Table 1 lists the external data presented in the plots. nucleus. We compare our theoretical predictions based on fit 1 with the data on \(\pi^{\pm\text{-}16}\)O scattering. Table 5 summarizes the experimental data used for the comparison. 
Since \({}^{12}\)C and \({}^{16}\)O are both spin-isospin-zero closed \(p\)-subshell nuclei, in our calculation, we replace only the nuclear form factors and apply the correlation functions given by Eqs. (13). The pion-nucleon scattering amplitudes are kept the same. The resulting plots with our predictions are presented in Fig. 9, demonstrating a rather good agreement between the model and experimental data. The small deviations between theoretical curves and the differential cross section data are similar to those present on the plots for \({}^{12}\)C. The theoretical curves follow the data even for large angles, except for 114 MeV, where the minimum is shifted by about 5\({}^{\circ}\). The small angle \(\pi^{\pm}\)-\({}^{16}\)O scattering data at 155, 185, and 213 MeV from the Space Radiation Effects Laboratory (SREL) [82] are well reproduced. The comparison supports our expectation of the model's universality and demonstrates its predictive power.

### Comparison with \({}^{28}\)Si and \({}^{40}\)Ca data

The model described above accounts well for both the \({}^{12}\)C and \({}^{16}\)O data using the same set of parameters. In general, it can be applied to any spin- and isospin-zero nucleus if the nuclear form factor and correlation functions \(C_{0}\) and \(C_{\text{ex}}\) are known. However, the calculation of the second-order part of the potential, Eq. (52), becomes involved even for \(p\)-shell nuclei. Moreover, the harmonic oscillator shell model used to calculate the correlation functions for \({}^{12}\)C and \({}^{16}\)O is much less suitable for describing heavier nuclei like \({}^{40}\)Ca, requiring more realistic nucleon wave functions. However, considering that the influence of the second-order correction decreases for heavier nuclei, we can still try applying the harmonic oscillator model to derive \(C_{0}\) and \(C_{\text{ex}}\) for closed \(d\)-subshell nuclei, as given in Eqs. (14) and (15). In Figs. 10 and 11, we demonstrate our prediction for the \(\pi^{\pm}\)-\({}^{28}\)Si and \(\pi^{\pm}\)-\({}^{40}\)Ca differential cross sections, respectively. Figure 8: Fit to \(\pi^{\pm}\)-\({}^{12}\)C scattering data with the first-order potential \(V^{(1)}\) only. The meaning of the curves is the same as in Fig. 6. Table 1 lists the external data presented in the plots.
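A minimal sketch of the closed-subshell harmonic-oscillator form factors quoted in Appendix B is given below; the oscillator parameter \(a\) is left as an input (the fitted values are those of Table 7, and the value used in the example is a placeholder), and the function name is ours.

```python
import numpy as np

def rho_ho(q, A, a):
    """Closed-subshell harmonic-oscillator form factors quoted in Appendix B.

    q : momentum transfer (fm^-1), a : oscillator parameter (fm),
    A : 12 or 16 (closed p subshell), 28 (Si) or 40 (Ca).
    Normalized so that rho_ho(0, A, a) = A.
    """
    x = (a * q) ** 2
    cm = np.exp(-0.25 * (A - 1) / A * x)     # centre-of-mass corrected Gaussian
    if A in (12, 16):
        poly = A - (A - 4) / 6.0 * x
    elif A == 28:
        poly = 28 - 6.0 * x + x**2 / 5.0
    elif A == 40:
        poly = 40 - 10.0 * x + x**2 / 2.0
    else:
        raise ValueError("only closed p- or d-subshell nuclei are implemented")
    return poly * cm

# the oscillator parameter a = 1.6 fm is a placeholder, not the Table 7 value
print(rho_ho(np.array([0.0, 0.5, 1.0]), A=12, a=1.6))
```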
The theoretical model is compared with the experimental differential cross section data listed in Table 6. Given that no additional adjustments were made, the agreement between our prediction and the data is surprisingly good, especially at larger energies. The observed small discrepancy at low energies can be explained by a more decisive influence in heavier nuclei of the \(s\)-wave part of the potential and stronger Coulomb-nuclear interference.

## VI Conclusion and Outlook

In the present work, we have constructed the second-order pion-nuclear potential in momentum space. The potential is based on the individual pion-nucleon scattering amplitudes extracted from SAID. The second-order correction to the potential depends on two types of correlation functions and, as a result, is consistent with the Pauli principle. The many-body medium effects are incorporated in the complex effective \(\Delta\) self-energy and the modifications to the \(s\)-wave scattering parameters. In our approach, only three fitting parameters are introduced: the real and imaginary parts of the \(\Delta\) self-energy and the \(s\)-wave isoscalar slope parameter. The free parameters were determined by fitting the \(\pi^{\pm}\)-\({}^{12}\)C scattering data in the energy range of 80-180 MeV pion lab kinetic energy, which show a strong sensitivity to the \(\Delta\)-resonance properties. The developed second-order potential was found to yield a successful description of the total, angle-integrated elastic, reaction, and differential elastic cross section data, assuming that the model parameters are energy-independent. Furthermore, the model yields a good description of the \(\pi^{\pm}\)-\({}^{12}\)C data not only in the fitting range but also outside of it. To check its predictive power, we have applied the second-order potential to heavier nuclei, using the three parameters which have been fixed by fitting the \({}^{12}\)C data. The model predictions for \({}^{16}\)O, \({}^{28}\)Si, and \({}^{40}\)Ca nuclei were found to agree nicely with the experimental data, supporting the model's universality and predictive power. \begin{table} \begin{tabular}{l l l l} \hline \hline Ref.
& facility & \(T_{\rm lab}\) [MeV] & nucleus \\ \hline [84] & TRIUMF & \(\pi^{\pm}\) 50 & \({}^{28}\)Si \\ [85] & SIN & \(\pi^{\pm}\) 130, 180, 226 & \\ \hline [71] & LAMPF & \(\pi^{-}\) 50 & \\ [70] & LAMPF & \(\pi^{+}\) 50 & \\ [86] & LAMPF & \(\pi^{\pm}\) 65 & \({}^{40}\)Ca \\ [87] & LAMPF & \(\pi^{\pm}\) 80 & \\ [88] & SIN & \(\pi^{\pm}\) 130, 180, 230 & \\ \hline \end{tabular} \end{table} Table 6: Summary of the \(\pi^{\pm}\)-\({}^{28}\)Si and \(\pi^{\pm}\)-\({}^{40}\)Ca differential cross section data Figure 10: Comparison of the theoretical calculation based on fit 1 with the data for \(\pi^{\pm}\)-\({}^{28}\)Si scattering. The meaning of the curves is the same as in Fig. 6. Table 6 lists the external data presented in the plots. In future work, we plan to provide a more detailed analysis for scattering on heavy nuclei and for the case of nuclei with nonzero isospin. As a next step, the presented model can also be applied to analyzing electron- or neutrino-induced pion production processes on nuclei. ###### Acknowledgements. This work was supported by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation), in part through the Collaborative Research Center [The Low-Energy Frontier of the Standard Model, Projektnummer 204404729 - SFB 1044], and in part through the Cluster of Excellence [Precision Physics, Fundamental Interactions, and Structure of Matter] (PRISMA\({}^{+}\) EXC 2118/1) within the German Excellence Strategy (Project ID 39083149). ## Appendix A Scattering by nuclear and Coulomb potentials The charged pion that approaches the nucleus, \(\pi^{-}\) (\(\pi^{+}\)), is accelerated (decelerated) due to the influence of the long-range Coulomb field of the nucleus. This effect Figure 11: Comparison of the theoretical calculation based on fit 1 with the data for \(\pi^{\pm}\)-\({}^{40}\)Ca scattering. The meaning of the curves is the same as in Fig. 6. Table 6 lists the external data presented in the plots. occurs before the pion reaches the range of the strong interaction described by the pion-nucleus potential \(\hat{U}(E)\). At intermediate energies, the pion-nucleon scattering has a strong energy dependence due to the resonant \(P_{33}\) channel and is sensitive to this Coulomb energy shift. As a result, the potential \(\hat{U}(E)\) in the scattering equations must be replaced with the nuclear-Coulomb potential \(\hat{U}_{NC}(E)\), which can be approximated as \[\hat{U}_{NC}(E)=\hat{U}(E-\langle\hat{U}_{C}\rangle)+\hat{U}_{C}. \tag{10}\] In Eq. (10), besides adding the Coulomb potential, \(\hat{U}_{C}\), we shift the reaction energy by the value of the Coulomb potential at the root-mean-squared (rms) radius of the nucleon distribution [89; 90]. The shift in the energy argument of the nuclear potential describes the intermediate Coulomb rescattering and, in general, is given at the operator level, but assuming the commutativity of \(U_{C}\) with the Green's function and neglecting the nucleus excitation by the Coulomb potential, we arrive at Eq. (10) (see Ref. [91] for details). We explicitly separate the momentum transfer dependent nuclear structure characteristics, namely the form factor and correlation functions, from the angle- and energy-dependent single-nucleon scattering amplitudes in the pion-nucleus potential. When dealing with the pion-nuclear potential in coordinate space, applying the Coulomb energy shift in Eq. (10) is straightforward and consists in shifting only the argument of the scattering coefficients in Eqs. (21). 
However, the situation is more complicated for the potential in momentum space due to its dependence on the off-shell momentum. To address this, we assume that the entire pion-nucleon on-shell transition and scattering amplitudes are calculated as described in Sec. III but with the shifted on-shell momentum \(k_{0}(T_{\rm lab}-\langle U_{C}\rangle)\) in the pion-nucleus c.m. frame, where \[k_{0}^{2}(T_{\rm lab})=\frac{m_{A}^{2}T_{\rm lab}(2m_{\pi}+T_{\rm lab})}{(m_{ A}+m_{\pi})^{2}+2m_{A}T_{\rm lab}}. \tag{11}\] For a smooth off-shell extrapolation, we further assume that the Coulomb-affected off-shell momenta involved in calculating the off-shell vertex factor, Eq. (24), are replaced by \[k^{2}\longrightarrow k^{2}+k_{0}^{2}(T_{\rm lab}-\langle U_{C}\rangle)-k_{0}^ {2}(T_{\rm lab}). \tag{12}\] A direct solution of the scattering equation (12) involving the long-range Coulomb interaction is difficult due to \(1/q^{2}\) singularity in the momentum space representation of the Coulomb potential. To address this issue, we apply Vincent and Phatak's method [68] to treat the Coulomb-nuclear interaction in momentum space. It is assumed that in coordinate space, the nuclear part of the potential vanishes beyond the cut-off radius \(R_{\rm cut}\). As a result, at \(r\geq R_{\rm cut}\), only the point-charge Coulomb potential exists, and the radial part of the coordinate space wave function can be expressed as \[u_{l}(r)\propto\mathscr{F}_{l}(\eta_{c},k_{0}r)+k_{0}F_{l}\,\mathscr{H}_{l}( \eta_{c},k_{0}r), \tag{13}\] with \(\mathscr{H}_{l}\equiv\mathscr{H}_{l}^{+}=\mathscr{G}_{l}+i\mathscr{F}_{l}\), where \(\mathscr{F}_{l}\) and \(\mathscr{G}_{l}\) are the regular and irregular Coulomb functions [92]. The amplitude \(F_{l}\) in Eq. (13) represents the correct Coulomb-modified nuclear partial-wave scattering amplitude that describes the observed cross sections and enters Eqs. (67-70). The asymptotic Coulomb wave function, Eq. (13), is smoothly matched with the cut-off solution at the \(r=R_{\rm cut}\), which yields: \[F_{l}=\frac{1}{k_{0}}\frac{\mathscr{F}_{l}^{\prime}(\eta_{c},\rho)-\xi_{l} \mathscr{F}_{l}(\eta_{c},\rho)}{\xi_{l}\mathscr{H}(\eta_{c},\rho)-\mathscr{H }_{l}^{\prime}(\eta_{c},\rho)} \tag{14}\] where \(\rho=k_{0}R_{\rm cut}\) and \[\xi_{l}=\frac{\mathscr{F}_{l}^{\prime}(0,\rho)+k_{0}F_{l}^{\rm cut}\mathscr{ H}_{l}^{\prime}(0,\rho)}{\mathscr{F}_{l}(0,\rho)+k_{0}F_{l}^{\rm cut}\mathscr{H}_{l} (0,\rho)}. \tag{15}\] The partial amplitude \(F_{l}^{\rm cut}\) is the solution of the pion-nucleus scattering equation with the short-range potential, which is the sum of the Coulomb potential cut at \(R_{\rm cut}\) and the strong pion-nuclear potential described in Sec. IV. We derive \(F_{l}^{\rm cut}\) from Eq. (12) using the momentum space representation of the cut Coulomb potential given by \[V_{C}^{\rm cut}(q)=-2\bar{\omega}\frac{\alpha Z_{\pi}}{q^{2}}\left[\rho_{\rm ch }(q)\rho_{\rm ch}^{\pi}(q)-Z\cos(qR_{\rm cut})\right], \tag{16}\] where \(\rho_{\rm ch}(q)\) and \(\rho_{\rm ch}^{\pi}(q)\) are the charge form factors of the nucleus and pion. We use the value \(R_{\rm cut}=8\,\)fm. The original Kerman-McManus-Thaler (KMT) multiple scattering formalism does not explicitly address the Coulomb interaction. As a result, the KMT scattering Equations (6) and (12) in the pure Coulomb scattering limit, \(\hat{U}\to\hat{U}_{C}\), fail to provide the correct Coulomb scattering amplitude due to factor \((A-1)/A\). The treatment of the Coulomb interaction in the KMT formalism was examined in detail in Ref. 
[93]. To recover the Coulomb scattering amplitude effectively, we follow the "KMT No. 3 prescription" of Ref. [93] (Eqs. (48-50)) and replace the pure Coulomb KMT \(T\)-matrix with the analogous quantity in the Watson approach. Despite being a minor correction, this approach improves the calculated cross sections by a few percent. ## Appendix B Nuclear form factor and correlation functions The determination of the nuclear charge density, \(\rho_{\rm ch}(r)\), provides information on the nucleon distribution within nuclei. In this work, we use the Fourier-Bessel (FB) series expansion to provide an accurate, model-independent description of the charge distribution [94]. The charge density, \(\rho_{\rm ch}(r)\), is assumed to be zero beyond a certain cutoff radius \(R_{c}\). Within the interval \(r\leq R_{c}\), we can then expand \(\rho_{\rm ch}(r)\) into the FB series: \[\rho_{\rm ch}(r)=\theta(R_{c}-r)\sum_{n=1}^{n_{\rm max}}a_{n}j_{0}\left(q_{n}r \right), \tag{17}\] where \(q_{n}=n\pi/R_{c}\) are the zeros of the 0-order Bessel function \(j_{0}(x)=\sin x/x\), and the coefficients of the series are determined by fitting experimental data on electron scattering. The number of expansion coefficients is determined by the maximal experimentally measured momentum \(q_{\rm max}\) as: \(n_{\rm max}=q_{\rm max}R_{c}/\pi\). For spin-zero nuclei, the charge distribution, \(\rho_{\rm ch}(r)\), and the charge form factor, \(\rho_{\rm ch}(q)\), are related by the Fourier transform, which for spherically symmetric nuclei is given by \[\rho_{\rm ch}(q)=4\pi\int r^{2}\,{\rm d}rj_{0}(qr)\rho_{\rm ch}(r). \tag{101}\] Correspondingly, the FB expansion, Eq. (100), in the momentum space becomes \[\rho_{\rm ch}(q)=4\pi\frac{\sin(qR_{c})}{q}\sum_{n=1}^{n_{\rm max}}a_{n}\frac{( -1)^{n}}{q^{2}-q_{n}^{2}}. \tag{102}\] The nuclear charge density does not correspond to the proton density in the nucleus because of the finite size of the proton. Moreover, neutron also possesses a charge distribution with a negative mean square radius. The nuclear charge distribution, \(\rho_{\rm ch}(r)\), can be found as the convolution of the distribution \(\rho(r)\) of the nucleons in the nucleus with the nucleon charge density. As a result, the form factor for isospin-zero nuclei is given as \[\rho(q)=\frac{2\rho_{\rm ch}(q)}{\rho_{\rm ch}^{(p)}(q)+\rho_{\rm ch}^{(n)}(q)}, \tag{103}\] where \(\rho_{\rm ch}^{(p)}(q)\) and \(\rho_{\rm ch}^{(n)}(q)\) are the proton and neutron charge form factors, respectively. We utilize the nucleon charge form factors obtained from the global fits of electron scattering data presented in Ref. [95]. While the FB expansion is a reliable approach for the first-order potential, Eq. (41), the second-order correction, Eq. (52), requires a model for deriving the two-body density and correlation functions, Eqs. (48). Assuming the \(A\)-body Slater determinant form of the total nuclear wave function: \[\Psi_{0}^{\rm SD}(x_{1},\ldots,x_{A})=\frac{1}{\sqrt{A!}}\det\{\phi_{\alpha_{i }}(x_{j})\}, \tag{104}\] with \(i,j=1,\ldots,A\) and the multi-index \(\alpha\equiv\{n,l,j,m,m_{j}\}\), we can express the exchange correlation function, Eq. (48a), in terms of the shell model single-particle nucleon wave functions \(\phi_{\alpha_{i}}(x_{j})\): \[C_{\rm ex}(x_{1},x_{2})=\sum_{i,j=1}^{A}\phi_{\alpha_{i}}^{\dagger}(x_{1}) \phi_{\alpha_{j}}^{\dagger}(x_{2})\phi_{\alpha_{i}}(x_{2})\phi_{\alpha_{j}}( x_{1}). 
\tag{105}\] The corresponding nuclear density within the shell model is given by \[\rho(r)=\sum_{\sigma\tau}\sum_{i=1}^{A}\phi_{\alpha_{i}}^{\dagger}(x)\phi_{ \alpha_{i}}(x). \tag{106}\] In this work, we use the harmonic oscillator (HO) nuclear shell model [96] to obtain approximate single-particle wave functions of nucleons, \(\phi_{nlm}(x)\). A direct calculation followed by the Fourier transform provides the following HO nuclear form factor for closed \(p\)-subshell nuclei (\({}^{12}\)C and \({}^{16}\)O): \[\rho(q)=\left[A-\frac{A-4}{6}a^{2}q^{2}\right]e^{-\frac{1}{4}\frac{A-1}{A}a^{2 }q^{2}}, \tag{107}\] where \(a\) is the HO parameter. As in Eq. (38), factor \((A-1)/A\) in the exponential takes into account the center-of-mass motion correction. Performing a similar calculation with the additional closed \(d\)-subshell, we arrive at the HO form factor for \({}^{28}\)Si: \[\rho(q)=\left[28-6a^{2}q^{2}+\frac{1}{5}a^{4}q^{4}\right]e^{-\frac{1}{4}\frac{ A-1}{A}a^{2}q^{2}}, \tag{108}\] and \({}^{40}\)Ca: \[\rho(q)=\left[40-10a^{2}q^{2}+\frac{1}{2}a^{4}q^{4}\right]e^{-\frac{1}{4}\frac {A-1}{A}a^{2}q^{2}}. \tag{109}\] The HO form factors, Eqs. (107) and (109), enable us to determine the corresponding HO model parameters. The extracted values of \(a\) used in our calculation of the correlation functions are listed in Table 7. The FB coefficients are taken from Refs. [98] (\({}^{12}\)C) and [99] (\({}^{16}\)O, \({}^{28}\)Si and \({}^{40}\)Ca). In each case, \(R_{c}=8\,\)fm is used. In Table 7, we also compare the rms charge radius for HO and FB analyses with experimental values from Ref. [97]. To obtain the two-body correlation functions \(C_{0}\) and \(C_{\rm ex}\) in momentum space within the HO shell model, we generalize the derivation presented in Refs. [35; 100] to \(\mathbf{q}\neq\mathbf{q}^{\prime}\) case. Starting from Eq. (105), followed by spin-isospin summation and the Fourier transform, Eq. 
(49), we arrive at the two-body correlation functions \[C_{\rm ex}(\mathbf{q}_{1},\mathbf{q}_{2})=\sum_{\sigma_{1,2}}\sum_{\tau_{ 1,2}}C_{\rm ex}(\mathbf{q}_{1},\sigma_{1},\tau_{1},\mathbf{q}_{2},\sigma_{2},\tau_{2}), \tag{110a}\] \[C_{0}(\mathbf{q}_{1},\mathbf{q}_{2})=C_{\rm ex}(\mathbf{q}_{1},\mathbf{q}_{2})- \frac{1}{A}\rho(q_{1})\rho(q_{2}), \tag{110b}\] which yields the following forms, for \({}^{12}\)C: \[C_{\rm ex}(\mathbf{q}_{1},\mathbf{q}_{2}) =\left(12-\frac{4}{3}a^{2}(q_{1}^{2}+q_{2}^{2})-4\sqrt{\frac{2}{3}} a^{2}\mathbf{q}_{1}\cdot\mathbf{q}_{2}+\frac{2}{3}a^{4}(\mathbf{q}_{1}\cdot\mathbf{q}_{2})^{2} \right)\exp\left[-\frac{1}{4}\frac{A-1}{A}a^{2}\left(q_{1}^{2}+q_{2}^{2}\right) \right], \tag{12a}\] \[C_{0}(\mathbf{q}_{1},\mathbf{q}_{2}) =\left(-4\sqrt{\frac{2}{3}}a^{2}\mathbf{q}_{1}\cdot\mathbf{q}_{2}+\frac{2} {3}a^{4}(\mathbf{q}_{1}\cdot\mathbf{q}_{2})^{2}-\frac{4}{27}a^{4}q_{1}^{2}q_{2}^{2} \right)\exp\left[-\frac{1}{4}\frac{A-1}{A}a^{2}\left(q_{1}^{2}+q_{2}^{2}\right) \right], \tag{12b}\] for \({}^{16}\)O: \[C_{\rm ex}(\mathbf{q}_{1},\mathbf{q}_{2}) =\left(16-2a^{2}(\mathbf{q}_{1}+\mathbf{q}_{2})^{2}+a^{4}(\mathbf{q}_{1}\cdot \mathbf{q}_{2})^{2}\right)\exp\left[-\frac{1}{4}\frac{A-1}{A}a^{2}\left(q_{1}^{2}+ q_{2}^{2}\right)\right], \tag{13a}\] \[C_{0}(\mathbf{q}_{1},\mathbf{q}_{2}) =\left(-4a^{2}\mathbf{q}_{1}\cdot\mathbf{q}_{2}+a^{4}(\mathbf{q}_{1}\cdot\mathbf{ q}_{2})^{2}-\frac{1}{4}a^{4}q_{1}^{2}q_{2}^{2}\right)\exp\left[-\frac{1}{4} \frac{A-1}{A}a^{2}\left(q_{1}^{2}+q_{2}^{2}\right)\right], \tag{13b}\] for \({}^{28}\)Si: \[C_{\rm ex}(\mathbf{q}_{1},\mathbf{q}_{2}) =\left(28-2a^{2}\left(3(\mathbf{q}_{1}+\mathbf{q}_{2})^{2}+4\left(\sqrt{ 5/3}-1\right)\mathbf{q}_{1}\cdot\mathbf{q}_{2}\right)+\frac{1}{240}a^{8}q_{1}^{4}q_{2 }^{4}\left(1-3x^{2}\right)^{2}\right.\] \[\quad+\left.\frac{1}{15}a^{4}\left(3\left(q_{1}^{4}+q_{2}^{4} \right)+4\sqrt{15}\mathbf{q}_{1}\cdot\mathbf{q}_{2}\left(q_{1}^{2}+q_{2}^{2}\right)+q_ {1}^{2}q_{2}^{2}\left(13-\sqrt{15}+3(12+\sqrt{15})x^{2}\right)\right)\right.\] \[\quad-\left.\frac{1}{30}a^{6}q_{1}^{2}q_{2}^{2}\left(\sqrt{15} \mathbf{q}_{1}\cdot\mathbf{q}_{2}\left(3x^{2}-1\right)+\left(q_{1}^{2}+q_{2}^{2} \right)(3x^{2}+1)\right)\right)\exp\left[-\frac{1}{4}\frac{A-1}{A}a^{2}\left(q _{1}^{2}+q_{2}^{2}\right)\right], \tag{14a}\] \[C_{0}(\mathbf{q}_{1},\mathbf{q}_{2}) =a^{2}q_{1}q_{2}\left(-\frac{4}{3}\left(3+2\sqrt{15}\right)x+ \frac{1}{105}a^{2}\left(28\sqrt{15}\left(q_{1}^{2}+q_{2}^{2}\right)x-\left(44 +7\sqrt{15}-21\left(12+\sqrt{15}\right)x^{2}\right)q_{1}q_{2}\right)\right.\] \[\qquad\qquad\qquad\left.-\frac{1}{210}a^{4}q_{1}q_{2}\left(7\sqrt {15}\mathbf{q}_{1}\cdot\mathbf{q}_{2}\left(3x^{2}-1\right)+\left(q_{1}^{2}+q_{2}^{2} \right)\left(21x^{2}-2\right)\right)\right.\] \[\qquad\qquad\qquad\left.+\frac{1}{8400}a^{6}q_{1}^{3}q_{2}^{3} \left(23-210x^{2}+315x^{4}\right)\right)\exp\left[-\frac{1}{4}\frac{A-1}{A}a^{ 2}\left(q_{1}^{2}+q_{2}^{2}\right)\right], \tag{14b}\] with \(x=\mathbf{q}_{1}\cdot\mathbf{q}_{2}/(q_{1}q_{2})\), and for \({}^{40}\)Ca: \[C_{\rm ex}(\mathbf{q}_{1},\mathbf{q}_{2}) =\left(40-10a^{2}(\mathbf{q}_{1}+\mathbf{q}_{2})^{2}+\frac{1}{2}a^{4}\left( (\mathbf{q}_{1}+\mathbf{q}_{2})^{4}+10(\mathbf{q}_{1}\cdot\mathbf{q}_{2})^{2}\right)-\frac{1} {2}a^{6}(\mathbf{q}_{1}\cdot\mathbf{q}_{2})^{2}\left(q_{1}^{2}+q_{2}^{2}+\mathbf{q}_{1} \cdot\mathbf{q}_{2}\right)\right.\] \[\qquad\left.+\frac{1}{16}a^{8}(\mathbf{q}_{1}\cdot\mathbf{q}_{2})^{4} \right)\exp\left[-\frac{1}{4}\frac{A-1}{A}a^{2}\left(q_{1}^{2}+q_{2}^{2} \right)\right], 
\tag{15a}\] \[C_{0}(\mathbf{q}_{1},\mathbf{q}_{2}) =\left(-20a^{2}\mathbf{q}_{1}\cdot\mathbf{q}_{2}+\frac{1}{2}a^{4}\left(4( \mathbf{q}_{1}+\mathbf{q}_{2})^{2}\mathbf{q}_{1}\cdot\mathbf{q}_{2}+6(\mathbf{q}_{1}\cdot\mathbf{q}_{2}) ^{2}-3q_{1}^{2}q_{2}^{2}\right)+\frac{1}{160}a^{8}\left(10(\mathbf{q}_{1}\cdot\mathbf{ q}_{2})^{4}-q_{1}^{4}q_{2}^{4}\right)\right.\] \[\qquad\left.-\frac{1}{8}a^{6}\left(4(q_{1}^{2}+q_{2}^{2}+\mathbf{q}_{1 }\cdot\mathbf{q}_{2})(\mathbf{q}_{1}\cdot\mathbf{q}_{2})^{2}-(q_{1}^{2}+q_{2}^{2})q_{1}^{2}q _{2}^{2}\right)\right)\exp\left[-\frac{1}{4}\frac{A-1}{A}a^{2}\left(q_{1}^{2}+q_ {2}^{2}\right)\right]. \tag{15b}\] By accounting for the difference in normalization conventions, we find that the obtained correlation functions at \(\mathbf{q}_{2}=-\mathbf{q}_{1}\) are consistent with the results reported in Ref. [100]2. Footnote 2: We compare \(\mathbf{q}_{2}=-\mathbf{q}_{1}\) instead of \(\mathbf{q}_{2}=\mathbf{q}_{1}\) due to using different Fourier transform definitions with Ref. [100]. ## Appendix C The second-order part of the potential Using the explicit form of the pion-nucleon scattering amplitude, Eq. (20), the second-order part of the optical potential, Eq. (52), can be written as a sum of four terms: \[V^{(2)}(\mathbf{k}^{\prime},\mathbf{k})=V_{ss}+V_{sp}+V_{pp}+V_{pp}^{(s)}, \tag{16}\] where \[V_{ss} =\int\frac{\mathrm{d}\mathbf{k}^{\prime\prime}}{2\pi^{2}}\tilde{\mathscr{ W}}(\mathbf{k}^{\prime},\mathbf{k}^{\prime\prime})\tilde{\mathscr{W}}(\mathbf{k}^{\prime\prime}, \mathbf{k})\frac{1}{k_{0}^{2}-{k^{\prime\prime}}^{2}+i\varepsilon}\left[b_{0}^{2}C_ {0}(\mathbf{k}^{\prime}-\mathbf{k}^{\prime\prime},\mathbf{k}^{\prime\prime}-\mathbf{k})+2b_{1} ^{2}C_{\mathrm{ex}}(\mathbf{k}^{\prime}-\mathbf{k}^{\prime\prime},\mathbf{k}^{\prime \prime}-\mathbf{k})\right], \tag{12a}\] \[V_{sp} =\int\frac{\mathrm{d}\mathbf{k}^{\prime\prime}}{2\pi^{2}}\tilde{ \mathscr{W}}(\mathbf{k}^{\prime},\mathbf{k}^{\prime\prime})\tilde{\mathscr{W}}(\mathbf{k} ^{\prime\prime},\mathbf{k})\frac{k_{\mathrm{cm}}^{\prime}\cdot\mathbf{k}_{\mathrm{ cm}}^{\prime\prime}+\mathbf{k}_{\mathrm{cm}}\cdot\mathbf{k}_{\mathrm{cm}}^{\prime \prime}}{k_{0}^{2}-{k^{\prime\prime}}^{2}+i\varepsilon}\left[b_{0}c_{0}C_{0}( \mathbf{k}^{\prime}-\mathbf{k}^{\prime\prime},\mathbf{k}^{\prime\prime}-\mathbf{k})+2b_{1}c_ {1}C_{\mathrm{ex}}(\mathbf{k}^{\prime}-\mathbf{k}^{\prime\prime},\mathbf{k}^{\prime \prime}-\mathbf{k})\right],\] (12b) \[V_{pp} =\int\frac{\mathrm{d}\mathbf{k}^{\prime\prime}}{2\pi^{2}}\tilde{ \mathscr{W}}(\mathbf{k}^{\prime},\mathbf{k}^{\prime\prime})\tilde{\mathscr{W}}(\mathbf{k} ^{\prime\prime},\mathbf{k})\frac{(\mathbf{k}_{\mathrm{cm}}^{\prime}\cdot\mathbf{k}_{ \mathrm{cm}}^{\prime\prime})(\mathbf{k}_{\mathrm{cm}}^{\prime\prime}\cdot\mathbf{k}_{ \mathrm{cm}})}{k_{0}^{2}-{k^{\prime\prime}}^{2}+i\varepsilon}\left[c_{0}^{2}C_ {0}(\mathbf{k}^{\prime}-\mathbf{k}^{\prime\prime},\mathbf{k}^{\prime\prime}-\mathbf{k})+2c_{1 }^{2}C_{\mathrm{ex}}(\mathbf{k}^{\prime}-\mathbf{k}^{\prime\prime},\mathbf{k}^{\prime \prime}-\mathbf{k})\right],\] (12c) \[V_{pp}^{(s)} =-\int\frac{\mathrm{d}\mathbf{k}^{\prime\prime}}{2\pi^{2}}\tilde{ \mathscr{W}}(\mathbf{k}^{\prime},\mathbf{k}^{\prime\prime})\tilde{\mathscr{W}}(\mathbf{k} ^{\prime\prime},\mathbf{k})\frac{\left[\mathbf{k}_{\mathrm{cm}}\times\mathbf{k}_{\mathrm{ cm}}^{\prime\prime}\right]\cdot\left[\mathbf{k}_{\mathrm{cm}}^{\prime\prime}\times\mathbf{k}_{ \mathrm{cm}}^{\prime}\right]}{k_{0}^{2}-{k^{\prime\prime}}^{2}+i\varepsilon} 
\left[s_{0}^{2}+2s_{1}^{2}\right]C_{\mathrm{ex}}(\mathbf{k}^{\prime}-\mathbf{k}^{ \prime\prime},\mathbf{k}^{\prime\prime}-\mathbf{k}). \tag{12d}\] Each second-order contribution described in Eqs. (12) represents the interference between the \(s\)- and \(p\)-wave parts of the pion-nucleon amplitude. Fig. 12 demonstrates the second-order components for on-shell forward scattering on \({}^{12}\)C. Generally, the scattering parameters \(b_{0,1}\), \(c_{0,1}\) and \(s_{0,1}\) in Eqs. (12) depend modestly on the angle between the corresponding momenta. For the purpose of evaluating the second-order correction, we assume these parameters to be angle-independent and fixed at the forward scattering angle. The peculiarity of our approach is the presence of two correlation functions in the second order. However, in the \(s\)-\(s\)-wave interference term \(V_{ss}\), Eq (12a) the first term with the \(C_{0}\) correlation function is negligible due to the smallness of \(b_{0}\) compared to the real part of \(b_{1}\). This enables us to compare our approach with the \(s\)-wave potential originally derived in Ref. [9]. With the second-order correction, the \(s\)-wave coordinate space potential given by Eq. (54) acquires the form \[U^{(s)}(r)\propto\left(b_{0}-\left(b_{0}^{2}+2b_{1}^{2}\right)\left\langle\frac {1}{r}\right\rangle\right)\rho(r)+B_{0}\rho^{2}(r), \tag{13}\] where \(\left\langle 1/r\right\rangle\) is the so-called "inverse nucleon correlation length", which within the Fermi gas model for zero pion kinetic energy becomes \(\left\langle 1/r\right\rangle=3p_{F}/(2\pi)\approx 0.65\,\mathrm{fm}^{-1}\). Performing the integration in Eq. (12a) in the limit \(k_{0}\to 0\), we obtain \(V_{ss}(0,0)=2b_{1}^{2}(C_{\mathrm{ex}})\), with \(\left\langle C_{\mathrm{ex}}\right\rangle/A\) acquiring the values \(0.61\,\mathrm{fm}^{-1}\) and \(0.56\,\mathrm{fm}^{-1}\) for \({}^{12}\)C and \({}^{40}\)Ca, respectively. The approximate agreement between \(V_{ss}\) and \(U^{(s)}(r)\) at the threshold allows us to directly apply the results of the pionic atom analyses in Sections IV.3.2 and IV.3.3. The \(p\)-\(p\)-wave interference term \(V_{pp}\), Eq. (12c), corresponds to the second-order term, \[U_{pp}(\mathbf{r})\propto-\frac{1}{3}\frac{A-1}{A}\left(4\pi c_{0}\right)^{2}\mathbf{ \nabla}\rho^{2}(r)\mathbf{\nabla}, \tag{14}\] in the coordinate space \(p\)-wave potential describing the Lorentz-Lorenz-Ericson-Ericson effect [9]: \[U^{(p)}(\mathbf{r})\propto\mathbf{\nabla}\frac{c_{0}\rho(r)}{1+\frac{4\pi}{3}\frac{A-1 }{A}c_{0}\rho(r)}\mathbf{\nabla}. \tag{15}\] The kinematic factors are omitted in Eqs. (14) and (15) Figure 12: The components of the on-shell forward pion-nucleus potential, Eqs. (12), for \({}^{12}\)C as a function of pion lab kinetic energy for parameters given by fit 1 in Table 2. The left and right panels are for real and imaginary parts, respectively. The solid red, dashed green and dash-dotted blue curves correspond to the second-order \(s\)-\(s\)-, \(s\)-\(p\)- and \(p\)-\(p\)-wave interference of the spin-independent pion scattering, Eqs. (12a)-(12c). The short-dashed curves represent the contribution from the spin-dependent part of the pion-nucleon amplitude, Eq. (12d). for simplicity. While our model does not account for effects beyond second-order, unlike Eq. (100), we expect \(V_{pp}\) to be much more realistic. The reason for this is that Eq. (101) is obtained from Eq. 
(100) in the limit of zero pion kinetic energy by setting \(C_{\rm ex}(\mathbf{q}_{1},\mathbf{q}_{2})=0\) and \(C_{0}(\mathbf{q}_{1},\mathbf{q}_{2})=\rho(q_{1})\rho(q_{2})\), which may be a crude approximation in the resonance energy region. The term \(V_{sp}\), Eq. (12b), characterizes the \(s\)-\(p\)-wave interference. It is nonzero in our approach because we perform the computation within the nuclear shell model without resorting to the Fermi gas model. As seen from Fig. 12, this term is not negligible and is important at both high and low energies. As in the case of \(V_{ss}\), the term proportional to \(C_{0}\) gives a much smaller contribution because of the ratio of \(b_{0}\) to \(b_{1}\). The term \(V_{pp}\) describes the \(p\)-\(p\)-wave interference, accounting for processes with (term proportional to \(C_{\rm ex}\)) and without (term proportional to \(C_{0}\)) isospin exchange. Similarly, the term \(V_{pp}^{(s)}\), Eq. (12d), characterizes the spin exchange. This term has an energy dependence similar to that of \(V_{pp}\) (see Fig. 12), because both \(c_{0,1}\) and \(s_{0,1}\) are proportional to the \(P_{33}\) partial amplitude. However, \(V_{pp}\) and \(V_{pp}^{(s)}\) have different angle-dependent structures.
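As a quick numerical cross-check of the HO shell-model expressions above (not part of the original analysis code), the following sketch evaluates the \({}^{12}\)C form factor of Eq. (107) and the correlation functions of Eqs. (12a)-(12b), and verifies the relation \(C_{0}=C_{\rm ex}-\rho\rho/A\) of Eq. (110b); the oscillator length \(a\) is an arbitrary placeholder, not the fitted value of Table 7.

```python
import numpy as np

A = 12          # 12C
a = 1.6         # HO length in fm (placeholder, not the value extracted in Table 7)

def rho(q):
    """HO form factor for closed p-subshell nuclei, Eq. (107)."""
    return (A - (A - 4) / 6 * a**2 * q**2) * np.exp(-0.25 * (A - 1) / A * a**2 * q**2)

def C_ex(q1, q2):
    """Exchange two-body correlation function for 12C, Eq. (12a); q1, q2 are 3-vectors."""
    q1q2 = np.dot(q1, q2)
    q1s, q2s = np.dot(q1, q1), np.dot(q2, q2)
    pre = (12 - 4/3 * a**2 * (q1s + q2s) - 4 * np.sqrt(2/3) * a**2 * q1q2
           + 2/3 * a**4 * q1q2**2)
    return pre * np.exp(-0.25 * (A - 1) / A * a**2 * (q1s + q2s))

def C_0(q1, q2):
    """Direct correlation function for 12C, Eq. (12b)."""
    q1q2 = np.dot(q1, q2)
    q1s, q2s = np.dot(q1, q1), np.dot(q2, q2)
    pre = (-4 * np.sqrt(2/3) * a**2 * q1q2 + 2/3 * a**4 * q1q2**2
           - 4/27 * a**4 * q1s * q2s)
    return pre * np.exp(-0.25 * (A - 1) / A * a**2 * (q1s + q2s))

# Normalisation checks: rho(0) = A, C_ex(0,0) = A, C_0(0,0) = 0
print(rho(0.0), C_ex(np.zeros(3), np.zeros(3)), C_0(np.zeros(3), np.zeros(3)))

# Consistency with Eq. (110b): C_0(q1,q2) = C_ex(q1,q2) - rho(q1) rho(q2) / A
rng = np.random.default_rng(0)
for _ in range(5):
    q1, q2 = rng.normal(size=3), rng.normal(size=3)
    lhs = C_0(q1, q2)
    rhs = C_ex(q1, q2) - rho(np.linalg.norm(q1)) * rho(np.linalg.norm(q2)) / A
    assert np.isclose(lhs, rhs)
print("Eq. (110b) is satisfied by the 12C expressions")
```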
2301.11407
SN1987A neutrino burst: limits on flavor conversion
In this paper, we revisit the SN1987A neutrino data to see its constraints on flavor conversion. We are motivated by the fact that most works that analyze this data consider a specific conversion mechanism, such as the MSW (Mikheyev-Smirnov-Wolfenstein) effect, although flavor conversion is still an open question in supernovae due to the presence of neutrino-neutrino interactions. In our analysis, instead of considering a specific conversion mechanism, we let the electron antineutrino survival probability $P_{\overline{e}\overline{e}}$ be a free parameter. We fit the data from Kamiokande-II, Baksan, and IMB detected spectrum with two classes of models: time-integrated and time-dependent. For the time-integrated model, it is not possible to put limits above $1\sigma$ (68% confidence level) on the survival probability. The same happens for the time-dependent model when cooling is the only mechanism of antineutrino emission. However, for models considering an accretion phase, $P_{\overline{e}\overline{e}}\sim0$ is strongly rejected, showing a preference for the existence of an accretion component in the detected antineutrino flux, and a preference for normal mass ordering when only the MSW is present.
Pedro Dedin Neto, Marcos V. dos Santos, Pedro Cunha de Holanda, Ernesto Kemp
2023-01-26T20:39:02Z
http://arxiv.org/abs/2301.11407v3
# SN1987A neutrino burst: limits on flavor conversion ###### Abstract In this paper, we revisit the SN1987A neutrino data to see its constraints on flavor conversion. We are motivated by the fact that most works that analyze this data consider a specific conversion mechanism, such as the MSW (Mikheyev-Smirnov-Wolfenstein) effect, although flavor conversion is still an open question in supernovae due to the presence of neutrino-neutrino interactions. In our analysis, instead of considering a specific conversion mechanism, we let the electron antineutrino survival probability \(P_{\overline{e}e}\) be a free parameter. We fit the data from Kamiokande-II, Baksan, and IMB detected spectrum with two classes of models: time-integrated and time-dependent. For the time-integrated model, it is not possible to put limits above \(1\sigma\) (68% confidence level) on the survival probability. The same happens for the time-dependent model when cooling is the only mechanism of antineutrino emission. However, for models considering an accretion phase, \(P_{\overline{e}e}\sim 0\) is strongly rejected, showing a preference for the existence of an accretion component in the detected antineutrino flux, and a preference for normal mass ordering when only the MSW is present. ## 1 Introduction The detection of antineutrinos coming from the SN1987A supernova, the first and only detection of supernova neutrinos up to this date, was a big event for particle and astrophysics. The events were observed by the underground neutrino experiments Kamiokande-II (KII) [1; 2], IMB [3; 4] and Baksan [5]. Since then, many works were produced to analyze and understand this data [6; 7; 8; 9; 10; 11], which gave us information to put bound in supernova models and neutrino properties. However, some conditions used in previous works do not fit well in the picture that we have today. In this context, this paper is intended to be complementary to [6; 7]. One of the main questions regarding supernova neutrinos today is the flavor conversion mechanism. It is expected for the supernova neutrinos to suffer MSW conversion [12; 13; 14] and a substantial number of works were done considering this as the only conversion mechanism in action, including the ones that analyze the SN1987A data [6; 7]. However, today it is expected that neutrino-neutrino interactions (forward scattering) become relevant in a supernova environment leading the neutrinos to a non-linear collective evolution [15]. Due to the complications that emerge from this type of evolution, there is not a conclusive picture of neutrino conversion in the supernova environment. Nevertheless, given the equal amount of non-electron antineutrinos \(\overline{\nu}_{x}=(\overline{\nu}_{\mu},\overline{\nu}_{\tau})\) emitted from the supernova, it is possible to write the flavor conversion in terms of only the electron antineutrino survival probability \(P_{\overline{e}e}\). Therefore, we treat this probability as a free parameter to see how SN1987A data can constrain it. Something similar was done by F. Vissani in [16]. However, it seems that the influence of the survival probability is analyzed only for the MSW normal hierarchy scenario (\(P_{\overline{e}e}=0.64\)) against the no oscillation one (\(P_{\overline{e}e}=0\)). Here we take a more complete analysis for \(P_{\overline{e}e}\), allowing it to range from 0 to 1. 
In section 2 we describe our model for the detected event rate in each detector (KII,IMB, Baksan) based on two different neutrino emission models, the flavor conversion mechanism, and the detection properties. In section 3 we describe our statistical analysis of the SN1987A data. In section 4 we show our results and discuss them, and finally, in section 5 we present our conclusions. ## 2 Model for the neutrino signal In this section, we describe the model for the expected neutrino event rate in each of the detectors, which is used to fit the SN1987A data. First, we describe the two neutrino emission models considered in this paper: a time-dependent and a time-integrated. In sequence, we describe the flavor conversion in the flux, which depends only on \(P_{\overline{\nu}\overline{\nu}}\), and, in the end, we discuss the detection features of this analysis. Given that the most relevant cross-section for the considered detectors is the IBD, we will restrict our model to the antineutrino sector \((\bar{\nu}_{e},\bar{\nu}_{\mu},\bar{\nu}_{\tau})\) ### **Neutrino Emission** Based on previous SN1987A neutrino data analysis [6; 7; 8; 9; 10], we use two distinct models for the neutrino emission: time-integrated and time-dependent ones. _Time-dependent_ Given that the neutrino emission evolves in time, a time-dependent model should be at least considered in data analysis. This approach can be found in the famous paper of Lamb and Loredo [6] and some other works [7]. In this approach, the antineutrino emission can be divided into two phases: the accretion and cooling phases. Here we will follow the path of [6; 7] and model each phase by its most relevant mechanism of emission. In this case, the accretion phase can be modeled as a positron thermal flux with temperature \(T_{a}\) incident in a neutron target, that composes the mass in accretion in the proto-neutron star. Therefore, as in [6; 7], we consider that only electron antineutrinos are emitted in this phase and the flux is given by: \[\phi^{0}_{\alpha_{\mathrm{\bar{\nu}}}}(E_{\nu},t)=\frac{8\pi c}{(hc)^{3}} \left[N_{n}(t)\sigma_{e^{+}n}(E_{\nu})g_{e^{+}}(E_{e+},T_{a})\right], \tag{1}\] with \[N(t)=\frac{Y_{n}}{m_{n}}\times M_{a}\times\frac{j_{k}(t)}{1+t/0. 5s},\] \[g_{e^{+}}(E_{e+},T_{a})=\frac{E_{e+}^{2}}{1+exp\left[E_{e+}/T_{a} \right]}, \tag{2}\] where \(N_{n}(t)\) is the number of neutrons as a function of the time, \(\sigma_{e^{+}n}(E_{\nu})\) the positron-neutron cross-section, and \(g_{e^{+}}(E_{e+},T_{a})\) the thermal distribution of positrons with energy \(E_{e+}\) in a temperature \(T_{a}\). The number of neutrons is given by the initial accreting mass \(M_{a}\) with a fraction of neutrons \(Y_{n}\), and its time behavior is given by the factor \(j_{k}(t)=exp\left[-\left(t/\tau_{a}\right)^{k}\right]\), with \(\tau_{a}\) being the characteristic time of the accretion phase and the parameter \(k=2\) following the parametrization in [7]1. The denominator \(1+t/0.5s\), as in [6; 7], is used to mimic the behavior from supernova simulations, where we have a constant flux within the first \(0.5\,s\) followed by a fast decrease. Footnote 1: In [6] it is used \(k=10\), however, as discussed in [7]\(k=2\) adjust better to supernova simulations. 
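A minimal sketch of the time and energy dependence of the accretion-phase flux in Eqs. (1)-(2) is given below. The positron-neutron cross-section and the neutron fraction \(Y_{n}\) are not specified in the text above, so simple placeholders are used, the overall constant \(8\pi c/(hc)^{3}\) is dropped, and the shift \(E_{e^{+}}\approx E_{\nu}-1.293\,\)MeV quoted later for IBD is assumed for the positron energy.

```python
import numpy as np

m_n = 939.565        # neutron mass [MeV]

def j_k(t, tau_a, k=2):
    """Accretion time profile j_k(t) = exp[-(t/tau_a)^k], Eq. (2) with k = 2."""
    return np.exp(-(t / tau_a) ** k)

def N_n(t, M_a, tau_a, Y_n=0.6):
    """Number of target neutrons, Eq. (2); Y_n is an assumed placeholder value
    and M_a must be given in the same units as m_n."""
    return Y_n / m_n * M_a * j_k(t, tau_a) / (1.0 + t / 0.5)

def g_positron(E_e, T_a):
    """Thermal positron distribution g_{e+}(E_{e+}, T_a), Eq. (2)."""
    return E_e**2 / (1.0 + np.exp(E_e / T_a))

def sigma_en(E_nu):
    # Placeholder for the positron-neutron cross-section sigma_{e+n}(E_nu);
    # its explicit form is not given above, so a simple ~E^2 stand-in is used
    # only to make the sketch runnable.
    return E_nu**2

def phi_accretion(E_nu, t, T_a=2.0, M_a=1.0, tau_a=0.6):
    """Unnormalised accretion-phase electron-antineutrino flux, Eq. (1)."""
    E_e = np.clip(E_nu - 1.293, 0.0, None)   # assumed positron energy
    return N_n(t, M_a, tau_a) * sigma_en(E_nu) * g_positron(E_e, T_a)

E = np.linspace(2.0, 60.0, 200)              # antineutrino energies [MeV]
for t in (0.1, 0.5, 1.0):                    # roughly constant up to ~0.5 s, then dropping
    print(t, phi_accretion(E, t).sum())
```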
The cooling phase, which is dominated by neutrinos emitted by the cooling neutron star, is modeled by a thermal distribution of fermions with temperature \(T_{c}(t)\), with characteristic time \(\tau_{c}\), emitted from a sphere with fixed radius \(R_{c}\) and is given by \[\phi^{0}_{c}(E,t)=\frac{\pi c}{(hc)^{3}}4\pi R_{c}^{2}\frac{E^{2}}{1+\exp[E/T _{c}(t)]}, \tag{3}\] with the cooling temperature being a function of time \[T_{c}(t)=T_{c}\exp\left[-t/\left(4\tau_{c}\right)\right]. \tag{4}\] To combine the fluxes of both phases of emission, we follow [7] where the cooling phase starts after the accretion one. As argued in the cited work, if the accretion and cooling phases were contemporaneous the first seconds would be composed of two different spectra, given the different temperatures of each of these phases. As numerical simulations of supernovae do not show this feature, we assume that the different emission phases are separated in time. We do this using the following parameterization: \[\phi^{0}_{\nu}(t)=\phi^{0}_{a}(t)+(1-j_{k}(t))\phi^{0}_{c}(t-\tau_{a}), \tag{5}\] where we have to remind that the accretion flux is considered only for the electron antineutrinos. _Time-integrated_ In this model, we consider that the time-integrated flux can be described by the following pinched spectrum [17]: \[\phi^{0}_{\beta}(E) = \frac{L_{\beta}}{E_{0\beta}}\frac{1}{(\alpha_{\beta}+1)^{-( \alpha_{\beta}+1)}\Gamma(\alpha_{\beta}+1)E_{0\beta}} \tag{6}\] \[\times\left(\frac{E}{E_{0}}\right)^{\alpha_{\beta}}e^{-(\alpha_{ \beta}+1)E/E_{0\beta}},\] where, for a specific neutrino flavor \(\beta\), \(L_{\beta}\) is the total energy (time-integrated luminosity), \(E_{0\beta}\) the mean energy, and \(\alpha_{\beta}\) the pinching parameter. We are mainly motivated to use this model due to a collection of works which only use the energy information from the SN1987A [8; 9; 10]. Although the time data could bring new information, it is interesting to check if the energy alone can say something about the flavor conversion. ### **Flavor Conversion** From emission until detection, the neutrino may suffer flavor conversion. It is still an open question for supernova neutrinos which is the complete mechanism of flavor conversion, given the complications that arise with neutrino-neutrino interactions. However, due to unitarity and the equal initial flux of non-electron antineutrinos \(\phi^{0}_{\nu_{e}}=\phi^{0}_{\nu_{e}}=\phi^{0}_{\nu_{x}}\), the equations for flavor conversion can be simplified so that it will only depend on the electron antineutrino survival probability \(P_{\overline{e}\overline{e}}\) and initial fluxes [18], such that \[\phi_{e}=\phi^{0}_{\nu_{e}}-(1-P_{\overline{e}\overline{e}})(\phi^{0}_{\nu_{e}}- \phi^{0}_{\nu_{x}}), \tag{7a}\] \[2\phi_{\nu_{x}}=2\phi^{0}_{\nu_{x}}+(1-P_{\overline{e}\overline{e}})(\phi^{0}_{ \overline{\nu}_{e}}-\phi^{0}_{\overline{\nu}_{x}}). \tag{7b}\] Therefore, we can explore the survival probability \(P_{\overline{e}\overline{e}}\) as a free parameter representing the flavor conversion occurring during the neutrino propagation. In this paper, we want to see how strong the SN1987A data can constrain \(P_{\overline{e}\overline{e}}\) in the fitted models, given that the flavor conversion mechanism is still an open question in a supernova environment. Although this probability may be time and/or energy-dependent, we will consider it independent of these variables, given that we do not want to use a specific model. 
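The pinched spectrum of Eq. (6) and the survival-probability mixing of Eq. (7a), rewritten in the equivalent form \(\phi_{\overline{\nu}_{e}}=P_{\overline{e}\overline{e}}\,\phi^{0}_{\overline{\nu}_{e}}+(1-P_{\overline{e}\overline{e}})\,\phi^{0}_{\overline{\nu}_{x}}\), can be sketched as follows; the normalizations and mean energies in the example are illustrative, not fitted values.

```python
import numpy as np
from scipy.special import gamma as Gamma

def pinched_spectrum(E, L_tot, E0, alpha=2.3):
    """Time-integrated pinched spectrum phi^0_beta(E), Eq. (6).
    L_tot: total emitted energy, E0: mean energy, alpha: pinching parameter."""
    norm = (alpha + 1.0) ** (alpha + 1.0) / (Gamma(alpha + 1.0) * E0)
    return L_tot / E0 * norm * (E / E0) ** alpha * np.exp(-(alpha + 1.0) * E / E0)

def detected_nue_bar(E, Pee, phi0_nue_bar, phi0_nux_bar):
    """Detected electron-antineutrino flux, equivalent rearrangement of Eq. (7a)."""
    return Pee * phi0_nue_bar(E) + (1.0 - Pee) * phi0_nux_bar(E)

E = np.linspace(1.0, 60.0, 300)
dE = E[1] - E[0]
phi_e = lambda E: pinched_spectrum(E, 1.0, 12.0)   # arbitrary normalization, E0 = 12 MeV
phi_x = lambda E: pinched_spectrum(E, 1.0, 15.0)   # hotter non-electron species, E0 = 15 MeV
for Pee in (0.0, 0.32, 0.68, 1.0):
    print(Pee, np.sum(detected_nue_bar(E, Pee, phi_e, phi_x)) * dE)
```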
We will also consider the MSW-only conversion scenario in order to compare it to our free \(P_{\overline{e}\overline{e}}\) model. In this scenario, the electron antineutrino is created as a \(\nu_{1}\) for normal mass hierarchy (NH) and \(\bar{\nu}_{3}\) for inverted mass hierarchy (IH). Therefore, the survival probability for each mass ordering can be written as follows \[P_{\overline{e}\overline{e}}^{\rm NH}=U_{e1}^{2}, \tag{8a}\] \[P_{\overline{e}\overline{e}}^{\rm IH}=(1-P_{f}(E))U_{e3}^{2}+P_{f}(E)U_{e1}^{ 2},\] (8b) \[P_{f}(E)=exp\left[-\frac{U_{e3}^{2}}{3.5\times 10^{-5}}\left(\frac{20{\rm MeV }}{E}\right)^{2/3}\right], \tag{9}\] where we have considered an adiabatic evolution, except on the high-density resonance of the IH, where the flip probability from \(\bar{\nu}_{3}\) to \(\bar{\nu}_{1}\) is parameterized by \(P_{f}(E)\) which depends on the energy. This picture is similar to the MSW effect considered in previous works [7; 8; 9; 10]. The energy dependency in the MSW effect may appear when considering possible non-adiabaticity in the high-density resonance layer [7; 14]. However, for the usual parameterization (equation (9)), the conversion probability in the resonance is negligible. Then, the constant and energy-independent \(P_{\overline{e}\overline{e}}\) is a good representation of what has been done in SN1987A analyses until now. Although this energy dependence of \(P_{\overline{e}\overline{e}}\) is negligible in the standard MSW effect, other possible effects associated with collective effects, such as spectral split among different neutrino flavors lead to a strong energy dependency, changing drastically this scenario [15]. However, given the unknowns associated with such collective effects nowadays, we limit our analysis to consider a \(P_{\overline{e}\overline{e}}\) that is uniform in energy, leaving the spectral split analysis for a future work. ### Detection In the case of the SN1987A, we have data from three detectors: Kamiokande-II, IMB, and Baksan. In all of them, the dominant channel for electron antineutrino detection is the Inverse Beta-decay (IBD), which is the only one that we will consider. Therefore, the event rate \(R^{\rm IBD}_{\nu_{e}}\) as a function of the positron measured energy \(E_{e^{+}}\), the angle between the incoming neutrino and the scattered positron \(\theta\) and time (for the time-dependent model) can be calculated as follows \[R^{\rm IBD}_{\nu_{e}}(E_{e^{+}},t,\cos\theta) = N_{p}\times\phi_{\nu_{e}}(E_{\nu},t) \tag{10}\] \[\times\frac{d\sigma^{\rm IBD}_{\bar{\nu}_{e}}}{d\cos\theta}(E_{ \nu})\times\eta^{d}(E_{e^{+}}),\] where \(N_{p}\) is the number of free protons, \(\phi_{\nu_{e}}(E_{\nu},t)\) the electron antineutrino flux at the detector, \(d\sigma^{\rm IBD}_{\bar{\nu}_{e}}(E_{\nu})/d\cos\theta\) the differential cross-section for IBD, and \(\eta^{d}(E_{e^{+}})\) the detector efficiency. For the IBD, the incoming neutrino energy \(E_{\nu}\) is related to the created positron energy by \(E_{e^{+}}\approx E_{\nu}-1.293MeV\), due to the mass difference between the initial proton and the final neutron. The energy threshold for the IBD is \(E_{\nu}^{th}=1.806MeV\)[19]. ### Efficiency Instead of assuming the procedure followed in [7], where authors performed a Monte Carlo simulation to calculate an average efficiency, we decided to adopt the functions from [5], that simply fit the efficiency points reported from the three collaborations. These functions are shown in Figure 12. 
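A small sketch of the MSW probabilities of Eqs. (8)-(9) is shown below; the mixing-matrix elements \(U_{e1}^{2}\) and \(U_{e3}^{2}\) are not quoted in the text, so representative values are assumed here.

```python
import numpy as np

# |U_e1|^2 and |U_e3|^2: assumed representative values (not taken from the text)
Ue1_sq = 0.68
Ue3_sq = 0.022

def P_flip(E, Ue3_sq=Ue3_sq):
    """Flip probability at the high-density resonance, Eq. (9); E in MeV."""
    return np.exp(-Ue3_sq / 3.5e-5 * (20.0 / E) ** (2.0 / 3.0))

def Pee_NH():
    """Adiabatic MSW survival probability, normal ordering, Eq. (8a)."""
    return Ue1_sq

def Pee_IH(E):
    """MSW survival probability, inverted ordering, Eq. (8b)."""
    Pf = P_flip(E)
    return (1.0 - Pf) * Ue3_sq + Pf * Ue1_sq

E = np.array([5.0, 20.0, 50.0])
print("NH:", Pee_NH())
print("IH:", Pee_IH(E))   # essentially Ue3^2, since P_flip is negligible for these parameters
```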
### Cross-section The exclusive interaction considered in the analysis was the inverse beta decay, given the high cross-section compared to other possible channels of KII, IMB, and Baksan. We adopted the differential cross section (in the scattering angle) calculated by Vogel and Beacom in [20]. ### Off-set time Another thing that we have to be careful of is to not confuse the time of the first detected neutrino \(t_{1}\) with the time \(t_{0}=t=0\) which indicates the time that the first neutrino arrives at the detector, even if it was not detected. Not considering this may force that the first detected neutrino is originated from the initial accretion phase, which may not be the case. As we will discuss later, for the MSW conversion in the inverted mass hierarchy scenario (IH), the initial \(\Psi_{e}\) flux contributes only to 2% of the detected flux, which makes it probable that the first detected neutrino came from the cooling phase and then \(t_{1}\neq t_{0}\). To get around this problem, it is usual to introduce an offset time \(t_{\rm off}^{d}=t_{1}-t_{0}\) between the first detected neutrino and the time of arrival of the first neutrino, which may be different for each detector given that they do not have an equal absolute time. ### Background Modeling In a realistic approach, we have to consider that detected events may come from background sources. The background rate is considered to be constant over the time of exposure, and also uniform over space, i.e., it depends only on the positron energy of the event \(B=B(E_{i})=d^{2}N_{B}/dtdE\). The independence regarding the spatial position is an approximation, given that there is more background at the wall of the detector, due to the surrounding material. The background can be measured and it is published by the collaborations. In our case, we use the background rate from [21] for Kamiokande-II and [6] for Baksan. The background is irrelevant for the IMB detector. In the case of the Time-Integrated analysis, we have to integrate the background rate in time to get the event rate per energy \(B=B(E_{i})=dN_{B}/dE\). The integration has to be done on the time of exposure to the supernova signal, i.e., the data-taking duration (\(\sim 30s\)). ## 3 Statistical Analysis For the statistical analysis, we use the method of maximum unbinned likelihood, due to the low number of events. Our expression for the likelihood is the same as in [7] \[\mathcal{L} = e^{-f_{d}\int R(t)dt}\prod_{t=1}^{N}e^{R(t_{i})\tau_{d}} \tag{11}\] \[\times\left[\frac{B_{i}}{2}+\int R(t_{i},E_{e,i},\cos\theta_{i}) \mathcal{L}_{i}(E_{e})dE_{e}\right].\] Here we made implicitly the dependency of \(\mathcal{L}\) in the parameters of our models. In this equation, \(i\) is the index of each event, \(R(t,E,\cos\theta)\) is the expected event rate from equation (10), \(R(t)\) the event rate integrated in the angle and energy, and \(B\) the background rate2 discussed in section 2.7. The integration in the positron energy \(E_{e}\) is made considering a Gaussian distribution \(\mathcal{L}_{i}(E_{e})\) around the measured value \(E_{e,i}\) with standard deviation given by the measurement uncertainty. As in [7], we consider that the time and angle uncertainties are irrelevant. We also consider the dead time \(\tau_{d}\) for each detector (\(d=K,B,I\)), where \(f_{d}\) is the live-time fraction [7]. 
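The structure of the unbinned likelihood in Eq. (11) can be sketched schematically as follows; angles are dropped for brevity, the grids are assumed uniform, and the rate and background functions are user-supplied placeholders built from Eq. (10).

```python
import numpy as np

def gauss(E, mu, sigma):
    return np.exp(-0.5 * ((E - mu) / sigma) ** 2) / (np.sqrt(2 * np.pi) * sigma)

def neg_log_like(events, rate_tE, rate_t, bkg_E, f_live, tau_dead, t_grid, E_grid):
    """Schematic -log L for one detector, following the structure of Eq. (11).

    events  : list of (t_i, E_i, sigma_i) detected events (angles ignored here)
    rate_tE : callable R(t, E_e), signal rate per unit time and positron energy,
              i.e. Eq. (10) already integrated over cos(theta); vectorised in E
    rate_t  : callable R(t), rate_tE integrated over energy; vectorised in t
    bkg_E   : callable B(E_e), background rate per unit time and energy
    """
    dt = t_grid[1] - t_grid[0]
    dE = E_grid[1] - E_grid[0]
    # extended-likelihood normalisation term: f_d * int R(t) dt
    nll = f_live * np.sum(rate_t(t_grid)) * dt
    for (t_i, E_i, sig_i) in events:
        # energy smearing: int R(t_i, E) L_i(E) dE with a Gaussian resolution L_i
        smeared = np.sum(rate_tE(t_i, E_grid) * gauss(E_grid, E_i, sig_i)) * dE
        # dead-time factor exp(R(t_i) tau_d) and background term B_i / 2
        nll -= rate_t(t_i) * tau_dead + np.log(bkg_E(E_i) / 2.0 + smeared)
    return nll
```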
In the case of the time-integrated model, we only have to consider a time integration in the event rate for the signal \(R(t_{i},E_{e,i},\cos\theta_{i})\) and for the background \(B(E_{i})\). Footnote 2: The factor of \(1/2\) in the background rate term comes from its angular dependency in \(\cos\theta\), which we consider to be uniform. To find the set of parameters that best adjusts our model to the data, we only have to maximize the likelihood \(\mathcal{L}\) or, equivalently, minimize \(-2\log(\mathcal{L})\). The latter is useful because it transforms products into sums and has a straightforward connection to confidence intervals. Given a set of parameters \(\vec{\theta}\) with best-fit value \(\vec{\hat{\theta}}\), we can define the likelihood ratio as follows: \[\lambda(\vec{\theta})\equiv\mathcal{L}(\vec{\theta})/\mathcal{L}(\vec{\hat{\theta}}) \tag{12}\] so that \(-2\log\lambda(\vec{\theta})\) follows a \(\chi^{2}\) distribution in the asymptotic limit of large samples \(N\rightarrow\infty\), with \(m\) degrees of freedom representing the number of parameters not constrained to be at their best-fit values. With this procedure, we can estimate the best-fit values for the parameters and their confidence intervals, given a confidence level. However, we have to note that our data set is not a large sample, so our confidence levels are approximate. In any case, in this paper we consider this an acceptable approximation, given that the allowed regions for the astrophysical parameters are comparable to those of previous works [6] that use other approaches to set the confidence levels, as we discuss in Appendix A. ## 4 Results and Discussion ### Time-dependent model For the time-dependent model, following the references [6; 7], we consider two possible cases, one with just cooling emission and the other with an initial accretion phase. For the cooling component, we have four astrophysical parameters: the initial cooling temperature \(T_{c}\), the time constant of the phase \(\tau_{c}\), the radius of the neutrinosphere \(R_{c}\), and the ratio between the initial temperatures of the non-electronic and electronic antineutrinos \(\tau=T_{\overline{\nu}_{x}}/T_{\overline{\nu}_{e}}\). Previous works [7] fix this temperature ratio based on supernova simulations. Here, we check the impact of changing this ratio, given that it has strong implications for how similar the initial spectra are, which reflects how well we can identify flavor conversion in the detected spectrum. Nevertheless, we limit ourselves to the range of temperature ratios expected from supernova simulations [17]. When considering the accretion phase, we introduce three new astrophysical parameters: the initial accretion temperature \(T_{a}\), the time constant of the phase \(\tau_{a}\), and the accretion mass \(M_{a}\). In addition to the astrophysical parameters, there is the offset time for each detector and the survival probability, resulting in a total of 8 parameters for the cooling model and 11 for the cooling plus accretion model. To analyze how the SN1987A data can put limits on \(P_{\overline{e}\overline{e}}\), we can do a marginal analysis, as described in section 3. Figures 1 and 2 show the marginal plot of \(P_{\overline{e}\overline{e}}\) for the model with only a cooling component and for the one with cooling and accretion, respectively. For the model with just cooling, we can see that it is not possible to put limits on \(P_{\overline{e}\overline{e}}\) above the \(1\sigma\) level for the \(\tau\) values considered.
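The marginal curves of \(P_{\overline{e}\overline{e}}\) are obtained by profiling the likelihood over the remaining parameters; a schematic version of such a scan, using the asymptotic \(\chi^{2}_{1}\) thresholds implied by Eq. (12), could look as follows (the `neg_log_like` callable is assumed to be built as sketched above).

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import chi2

def profile_scan(neg_log_like, pee_grid, x0, bounds):
    """Profile -2 log lambda(Pee) over the nuisance (astrophysical + offset)
    parameters; neg_log_like(pee, nuisance) is the model's -log L."""
    prof = []
    for pee in pee_grid:
        res = minimize(lambda nu: neg_log_like(pee, nu), x0,
                       bounds=bounds, method="L-BFGS-B")
        prof.append(res.fun)
    prof = np.asarray(prof)
    return 2.0 * (prof - prof.min())          # -2 log lambda(Pee)

# 1, 2 and 3 sigma for one parameter of interest: -2 log lambda = 1, 4, 9
thresholds = chi2.ppf([0.6827, 0.9545, 0.9973], df=1)
```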
This probably happens because both initial fluxes \(\phi^{0}_{\overline{\nu}_{e}}\) and \(\phi^{0}_{\overline{\nu}_{x}}\) come from the same mechanism, resulting in almost indistinguishable spectra, even allowing the temperatures to be different. When we consider the accretion phase, we have a different scenario, where \(P_{\overline{e}\overline{e}}\sim 0\) is strongly rejected, as we can see in Figure 2. This stronger constraint on \(P_{\overline{e}\overline{e}}\) arises because in the accretion mechanism only electron antineutrinos are emitted, making their initial flux \(\phi^{0}_{\overline{\nu}_{e}}\) more distinguishable from the non-electronic one \(\phi^{0}_{\overline{\nu}_{x}}\), which in turn facilitates the identification of flavor conversion. Given that, the excluded region \(P_{\overline{e}\overline{e}}\sim 0\) corresponds to the case where the detected flux is composed only of the initial \(\phi^{0}_{\overline{\nu}_{x}}\), i.e., a flux with no accretion component. This shows that the detected electron antineutrinos are better described by a flux with an accretion component coming from \(\phi^{0}_{\overline{\nu}_{e}}\), as already found by [6]. However, [6] does not consider the role of flavor conversion, while here we can see that the existence of an accretion component has strong implications for the conversion mechanism. If we consider only the MSW effect with adiabatic propagation, this implies that the normal hierarchy scenario is favored over the inverted one. Comparing them with the best fit of free \(P_{\overline{e}\overline{e}}\), the normal hierarchy scenario is not significantly rejected, while the inverted one is rejected at \(\sim 3\sigma\) significance. We have also tested the implications of considering the cooling and accretion components as contemporaneous. As argued by [7], there is no evidence of a composed spectrum in supernova simulations, so the two mechanisms with different mean energies should occur at different times. However, from supernova physics, we may expect that the PNS starts to cool down by neutrino emission soon after its formation, simultaneously with the accretion mechanism [22]. Therefore, we decided to test the implications of that hypothesis in our analysis. As we can see in Figure 3, there is no significant modification of the \(P_{\overline{e}\overline{e}}\) limits. The only modification appears in the best fit of \(t^{\rm IMB}_{\rm off}\), which can be seen in Appendix A. ### **Time-integrated model** For the time-integrated model, we considered a Fermi-Dirac emission (\(\alpha_{\overline{\nu}_{e}}=\alpha_{\overline{\nu}_{x}}=2.3\)), a choice that does not have a big impact on the fit for \(2.3<\alpha<4\). We also consider a hierarchy for the mean energies, \(\overline{E}_{\overline{\nu}_{x}}>\overline{E}_{\overline{\nu}_{e}}\), which is physically motivated given that non-electron neutrinos interact less (lack of \(\tau\) and \(\mu\) leptons in the environment) and thus escape from deeper regions of the supernova with higher temperatures. Figure 2: Same as Fig. 1 with two components: accretion and cooling. In this case, the two phases are considered to be separated in time. The horizontal dashed lines correspond to 1, 2 and 3\(\sigma\) C.L. Figure 3: Same as Fig. 1 with two components: accretion and cooling. In this case, the two phases are considered to be contemporaneous. The horizontal dashed lines correspond to 1, 2 and 3\(\sigma\) C.L.
The best-fit values for the astrophysical parameters are shown in Table 3 considering the 3 different conversion scenarios. As we can see, there is a preference for a detected spectrum \(\phi_{\tau_{e}}\) to be composed mostly by the initial non-electron neutrino spectrum \(\phi_{\overline{\nu}_{x}}^{0}\), given that there is basically no constraint for the total energy \(\varepsilon_{\overline{\nu}_{x}}\), the same behavior was also found in [10]. Even in the MSW mechanism with inverted mass hierarchy, where the composition of \(\phi_{\overline{\nu}_{x}}^{0}\) in the final flux is small (\(P_{\overline{\nu}_{x}}\approx 67.8\%\)), the flavor conversion is compensated by a higher total energy \(\varepsilon_{\overline{\nu}_{x}}\). This preference is a combination of the imposed energy hierarchy \(\overline{E}_{\overline{\nu}_{x}}>\overline{E}_{\overline{\nu}_{x}}\) and the low detection efficiency for lower energies, where the low energy events can be as well described as coming from the background. However, we did not investigate this preference deeply4. As we are interested in the flavor conversion parameter \(P_{\overline{\nu}_{\overline{e}}}\), we leave the A to compare our marginal and contour plots with previous analyses to show the consistency of our method, at least regarding the astrophysical parameters. Footnote 4: We only tested a scenario with relaxed bound conditions for the parameters. However, we obtained nonsensical values for the electron antineutrino total energy, such as \(\varepsilon_{\overline{\nu}_{x}}\sim 10^{55}\)ergs for the inverted mass hierarchy. For the flavor conversion analysis, we again fix the initial temperature ratio (more precisely the mean energy ratio \(\tau=\overline{E}_{\overline{\nu}_{x}}/\overline{E}_{\overline{\nu}_{x}}=T_{ \overline{\nu}_{x}}/T_{\overline{\nu}_{x}}\)) and let the other parameters run freely over the allowed range (Table 3). Figure 5 shows the marginal plot of \(P_{\overline{\nu}_{\overline{e}}}\) minimizing over the other model parameters. Again, there is no constraint on the survival probability above \(68\%\) of confidence, even for spectra with higher mean energy differences such as \(\tau=1.4\). ### Problems with fitting the data with some models In our numerical implementation, we found some difficulties in working with the two-component model (accretion + cooling). The main one is the existence of different local minima, which make the minimizer algorithm give different best fits depending on the initial conditions. To get around this problem, we used two methods to find the global minimum. In the first method we fit this model multiple times (\(\approx 1000\)) fluctuating the initial conditions of parameters uniformly in the ranges shown in Table 2, and taking the minimum value of \(-2\log\mathcal{L}\) as the initial condition to find the global best-fit. The second method was based on using different minimizers (MINOS, scipy, simplex)5 to see if this dependency on the initial conditions was algorithm dependent. In the end, we found that all the different minimizers obtained the same best fit given initial conditions around it, and in agreement with the first method. Given the concordance between the two methods and algorithms, we have confidence that the best fit obtained is the most probable one inside the allowed parameter space. Footnote 5: All of them implemented in the iminuit library [23]. 
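A minimal illustration of the multi-start strategy described above (here with plain `scipy.optimize` instead of the iminuit minimizers actually used) is:

```python
import numpy as np
from scipy.optimize import minimize

def multistart_fit(neg_log_like, bounds, n_starts=1000, seed=0):
    """Draw initial points uniformly inside the allowed parameter ranges,
    minimise from each one, and keep the overall best fit (guards against
    the local minima discussed above)."""
    rng = np.random.default_rng(seed)
    lo = np.array([b[0] for b in bounds])
    hi = np.array([b[1] for b in bounds])
    best = None
    for _ in range(n_starts):
        x0 = rng.uniform(lo, hi)
        res = minimize(neg_log_like, x0, bounds=bounds, method="L-BFGS-B")
        if best is None or res.fun < best.fun:
            best = res
    return best
```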
## 5 Conclusion In this paper, we have explored the role of flavor conversion in the SN1987A neutrino data, and how the data can impose limits on the flavor conversion mechanism. We found that the time-integrated model, which uses only the energy information, could not put any limit on the electron antineutrino survival probability \(P_{\overline{e}\overline{e}}\). The same happens for the time-dependent models that consider antineutrino emission only from the cooling mechanism. However, with the existence of an accretion emission of electron antineutrinos, strong limits are imposed on low values of \(P_{\overline{e}\overline{e}}\). This is impressive given the low statistics of the SN1987A neutrino data. Figure 4: \(P_{\overline{e}\overline{e}}\) likelihood ratio comparing the scenario with (solid) and without (dashed) the assumption of \(T_{e}<0.6T_{c}\). The vertical lines correspond to the MSW-LMA solution for an adiabatic neutrino propagation. The horizontal grey lines correspond to \(1,2\) and \(3\sigma\) C.L. Figure 5: \(P_{\overline{e}\overline{e}}\) likelihood ratio for the SN1987A data considering the time-integrated model.
2302.08406
Entity Aware Modelling: A Survey
Personalized prediction of responses for individual entities caused by external drivers is vital across many disciplines. Recent machine learning (ML) advances have led to new state-of-the-art response prediction models. Models built at a population level often lead to sub-optimal performance in many personalized prediction settings due to heterogeneity in data across entities (tasks). In personalized prediction, the goal is to incorporate inherent characteristics of different entities to improve prediction performance. In this survey, we focus on the recent developments in the ML community for such entity-aware modeling approaches. ML algorithms often modulate the network using these entity characteristics when they are readily available. However, these entity characteristics are not readily available in many real-world scenarios, and different ML methods have been proposed to infer these characteristics from the data. In this survey, we have organized the current literature on entity-aware modeling based on the availability of these characteristics as well as the amount of training data. We highlight how recent innovations in other disciplines, such as uncertainty quantification, fairness, and knowledge-guided machine learning, can improve entity-aware modeling.
Rahul Ghosh, Haoyu Yang, Ankush Khandelwal, Erhu He, Arvind Renganathan, Somya Sharma, Xiaowei Jia, Vipin Kumar
2023-02-16T16:33:33Z
http://arxiv.org/abs/2302.08406v1
# Entity Aware Modelling: A Survey ###### Abstract Personalized prediction of responses for individual entities caused by external drivers is vital across many disciplines. Recent machine learning (ML) advances have led to new state-of-the-art response prediction models. Models built at a population level often lead to sub-optimal performance in many personalized prediction settings due to heterogeneity in data across entities (tasks). In personalized prediction, the goal is to incorporate inherent characteristics of different entities to improve prediction performance. In this survey, we focus on the recent developments in the ML community for such entity-aware modeling approaches. ML algorithms often modulate the network using these entity characteristics when they are readily available. However, these entity characteristics are not readily available in many real-world scenarios, and different ML methods have been proposed to infer these characteristics from the data. In this survey, we have organized the current literature on entity-aware modeling based on the availability of these characteristics as well as the amount of training data. We highlight how recent innovations in other disciplines, such as uncertainty quantification, fairness, and knowledge-guided machine learning, can improve entity-aware modeling. ## 1 Introduction Personalized prediction is an essential task in many real-world applications, including recommendation systems [4], medical interventions [13], and environmental sciences [11], which require robust personalized prediction models for sets of entities given limited training data for individual entities (or tasks). For example, the entities can represent a set of hydrological basins, and the objective is to model the streamflow response of several such basins for understanding hydrology cycles, water supply management, flood mapping, and reservoir operations. Similarly, in the healthcare domain, monitoring disease progression among patients (entity) or groups driven by external drivers, demographic or genetic information, and received treatments is essential for understanding the disease dynamics and downstream prediction task [1]. Other examples include personalized item prediction based on user behavior in e-Commerce systems [4] or the forecasting of traffic patterns in different cities or countries [15]. An entity can also be a physical system such as a drone or a spring-mass system where we want to model the trajectory of these systems. The major challenge in building personalized prediction models is the lack of training for individual entities. Hence, learning individual models can be sub-optimal, as shown by numerous studies in environmental [14] and healthcare [16, 17]. On the other hand, a trivial merging of data from all entities to learn a single model will also fail to perform well. This is because the entity's response to the external drivers is governed by inherent properties specific to each entity. For example, for the same amount of precipitation (external driver), two river basins (entity) can have very different streamflow (response) values depending on their land-cover type and soil properties (entity characteristic) [18]. Similarly, in a clinical setting, a recorded treatment of medication (driver) for diabetes for a patient (entity) can have remarkably different effects (response) depending on the frequency of self-exercise (entity characteristic) [13]. 
More examples of heterogeneity in entity characteristics include people (entity) having different heart rates (response) for the same physical activity (driver) depending on the physical fitness of each person (entity characteristic). Hence, ML methods must consider these entity characteristics to model the driver-response relationship effectively. We term this strategy of utilizing these entity characteristics to modulate the prediction model as entity-aware modeling (EAM). Figure 1 shows the diagrammatic representation of this EAM strategy. Various methods have been proposed across multiple disciplines to incorporate entity characteristics (implicitly or explicitly). The main issue with the current literature is that while the ideas developed across these disciplines apply to Figure 1: _Forward model (\(p\)) which uses external drivers (\(\mathbf{x^{t}}\)) and entity characteristics (\(\mathbf{z}\)) to predict response (\(\mathbf{y^{t}}\))_ EAM, they have yet to be recognized or organized as such. This paper aims to provide an organized view of the literature around EAM, where the methods are borrowed from a wide range of applications and related ML tasks. The critical goal of EAM is to build a global model that effectively leverages data from different entities by incorporating entity characteristics to reduce the impact of data scarcity for each entity. These techniques have been applied in several naturally occurring scenarios, as shown in Figure 2. When entity-specific characteristics are explicitly available, they are often used directly in ML models for modulation [12, 13, 14, 15, 16]. One advantage of having explicit characteristics (or learned embeddings) for each entity in the training set is that the learned model can be used for out-of-sample entities. However, these characteristics are often unknown or difficult to measure directly in many applications. Thus there is an additional need to build models that do not entirely depend on explicitly available characteristics. Such models either implicitly capture the entity/task characteristics as part of their parameter set [10] or infer entity/task embeddings from the data and use them to modulate the global network for personalization under heterogeneity [23, 14, 15]. Similarly, multi-task learning (MTL) framework [13, 14, 15] can be used but at the expense of increased model complexity because MTL generally requires separate parameters for each task. Due to the multi-faceted nature of EAM applications, innovations in other disciplines, such as the identifiability of entity characteristics and incorporating domain knowledge about entities, have a direct impact on improving the performance and usability of ML in these applications. Precisely, method advancements that can correctly identify the latent causal variables from the data [20, 14] can better characterize the entities. Similarly, several studies have shown that incorporating auxiliary information, either in the form of domain knowledge about the hierarchical structure [21] or additional observations about the state of entities, provides us with a way to monitor the evolving processes and characteristics of the entities. Furthermore, incorporating advances in the uncertainty quantification [1, 15] and fairness [20] is not only pivotal for the usability of EAM in operational decision-making, but they can also lead to improved EAM. To summarize, this survey aims to organize the diverse ML research threads proposed over the years that can be leveraged to tackle this EAM task. 
Furthermore, we enumerate the gaps and opportunities for advancing research in each direction. We organize the paper as follows. Section 2 first formulates the problem encountered in predicting the response for a diverse set of entities and discusses the different scenarios in this problem. Section 3 discusses the overarching themes between methods and applications. Lastly, Section 4 analyzes the additional topics that arise in this direction of EAM and lists open questions for future research. ## 2 Entity-aware Prediction Scenarios This survey focuses on the setting where there are a set of \(N\) entities/tasks. There exists a variability in the amount of training data available for entities - abundant, sparse or none. There may be a subset of well-observed entities, such that for each entity \(i\) in this set, we have access to a training dataset \(\mathcal{D}_{i}=\{(x_{i}^{1},y_{i}^{1}),(x_{i}^{2},y_{i}^{2}),\ldots,(x_{i}^{T ^{train}},y_{i}^{T^{train}})\}\), with (drivers, response) pairs. From the remaining entities, there may be a subset of less-observed entities where, for each entity \(j\) in the remaining set, we have access to a few-shot dataset \(\mathcal{D}_{j}=\{(x_{j}^{1},y_{j}^{2}),\ldots,(x_{j}^{T_{Few}},y_{j}^{T_{Few}})\}\). The rest of the entities are completely unobserved. The objective is to learn the mapping function from input variables \(x_{i}^{t}\) to target variables \(y_{i}^{t}\). In conventional supervised machine learning, we train a predictive model \(\hat{y}_{i}^{t}=p_{\theta_{i}}(x_{i}^{t})\), parameterized by \(\theta_{i}\), by finding the parameters that minimize the empirical risk on the training data: \[\theta_{i}^{*}=\arg\min_{\theta_{i}}\mathcal{L}(\mathcal{D}_{i};\theta_{i}) \tag{1}\] Given sufficient training data for each entity, we can train individual ML models that capture these inherent biases in each entity within the learned parameter set \(\theta_{i}^{*}\). However, the data from all the entities are combined due to the lack of training data to learn a robust model for each entity. Learning a global model by the trivial merging of data from different entities can lead to sub-optimal results due to the heterogeneity across different sites. Because these entities are differentiated by their inherent characteristics \(\mathbf{z}_{i}\), the functions are of the form \(\hat{y}_{i}^{t}=p_{\theta}(x_{i}^{t},\mathbf{z}_{i})\), where \(\theta\) denotes the function class shared by the target systems and \(\mathbf{z}_{i}\) denotes entity-specific inherent characteristics as shown in Figure 1. Figure 2 summarizes the scenarios in which the ML models are used in real-world applications. The entity characteristics \(\mathbf{z}_{i}\) can be explicitly available in some scenarios. In many scenarios, measurement of entity characteristics may be partially available for some of the characteristics, noisy or uncertain, or completely unavailable. In this situation, the entity characteristics must either implicitly be part of the models or be recovered from the data as latent variables. Further, the trained ML models can be evaluated in two further settings: (1) In-Sample test: the training and testing data are from the same entities but different from each other, and (2) Out-of-Sample test: the training and testing data are from different entities and different periods. In the out-of-sample testing scenario, we further have the few-shot and zero-shot setting depending on whether we have access to few-shot datasets for the testing entities. 
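As a concrete (hypothetical) illustration of the forward model \(\hat{y}_{i}^{t}=p_{\theta}(x_{i}^{t},\mathbf{z}_{i})\), the sketch below conditions a small network on the entity characteristics either by simple concatenation or by a FiLM-style feature-wise affine modulation; all layer sizes are arbitrary, and the same code applies whether \(\mathbf{z}_{i}\) is measured or is a learned embedding.

```python
import torch
import torch.nn as nn

class ConcatModel(nn.Module):
    """Simplest entity-aware forward model: y_t = p_theta([x_t ; z])."""
    def __init__(self, x_dim, z_dim, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(x_dim + z_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, x, z):                 # x: [B, x_dim], z: [B, z_dim]
        return self.net(torch.cat([x, z], dim=-1))

class FiLMModel(nn.Module):
    """Feature-wise affine modulation of the driver features by z (FiLM-style)."""
    def __init__(self, x_dim, z_dim, hidden=64):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(x_dim, hidden), nn.ReLU())
        self.film = nn.Linear(z_dim, 2 * hidden)   # produces (gamma, beta)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x, z):
        h = self.encoder(x)
        gamma, beta = self.film(z).chunk(2, dim=-1)
        return self.head(torch.relu(gamma * h + beta))

x = torch.randn(8, 5)    # drivers for a batch of (entity, time) samples
z = torch.randn(8, 3)    # entity characteristics or learned embeddings
print(ConcatModel(5, 3)(x, z).shape, FiLMModel(5, 3)(x, z).shape)
```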
Next, we describe the most relevant literature for these scenarios. Figure 2: Problem Setting Methods ### Known \(z\) In the scenario where entity-specific characteristics are known, the most standard approach is to modulate the network using them [14, 15]. Several studies have used these source characteristics by concatenating them along with the drivers and then passing them into the ML models [11]. Other approaches have used entity-specific characteristics/features to modulate the architecture or transform the input features. In the context of store placement prediction in multiple cities, [14] use the city-specific parameters in an attention network to modulate and adapt the base feature extractor. Further, [17] perform feature-wise affine transformation based modulation using entity characteristics based conditioning. Similarly, [22] explicitly model the interaction of the input drivers and entity characteristics through a deep and cross network. One advantage of having explicit characteristics for each entity is that the learned model can be used for out-of-sample entities. Thus we do not further divide this scenario based on whether the ML models are being applied in the in-sample or out-of-sample setting, which is trivial. However, all the methods discussed in the later settings can be easily adapted for this scenario. A challenge commonly faced in this scenario is handling the model's bias towards certain types of entities caused due to the fact that the training set may be imbalanced in the types and occurrences of entities, as discussed in Section 4.4. ### Unknown \(z\) & In-sample When the characteristics are unavailable, EAM methods that learn to leverage entity relationship implicitly is required. **Multi-task Learning** (MTL) is the most common approach used for several personalized prediction applications in this setup, such as mood prediction [13], appearance prediction [15], and human mobility prediction [20]. The different tasks/entities (used interchangeably here) share a common network in multi-task learning followed by task-specific weights to achieve personalization. However, there are two main challenges with MTL for personalized prediction. First, the number of entity-specific weights increases rapidly with the number of entities. Furthermore, the shared network should have sufficient capacity to handle a large set of entities [20]. Hierarchical Dirichlet processes have been used to combine similar tasks at the expense of the increased model and computational complexity. Recently, [14] proposed a deep MTL framework that aims to learn entity similarity from the data to reduce the impact of limited training data while training entity-specific parameters. Another approach in this scenario is to train global models by assigning one-hot/random vector to each entity [12]. Here the entity-specific parameters do not depend on the number of entity but on the dimensionality of random characteristics, thus reducing the high model complexity of the MTL framework. More complex methods, such as meta-learning (discussed later), can also be applied in this scenario. ### Unknown \(z\), out-of-sample & few-shot When few samples of observation are available for the out-of-sample entities, few-shot learning [20] methods can be used. MTL methods are not the right approach as a model trained using traditional learning schemes is not easily adaptable to a different set of entities [16]. 
The solution is to use the few-shot data to either adapt the models to the new entities through gradient-based optimization or infer the entity characteristics and use them to modulate the prediction model. **Meta Learning:** Recently meta-learning has gained much attention in few-shot learning applications by leveraging the shared structure between existing training tasks, leading to better generalization and adaptation [13]. In particular, Model Agnostic Meta Learning (MAML) [16] aims to learn a global meta model that can be easily adapted to create personalized models for each entity. This is commonly done by formulating the training scheme as a bi-level optimization problem: \[\theta^{*} =\arg\min_{\theta}\sum_{i\in\mathcal{P}(i)}\mathcal{L}(\mathcal{D }_{i}^{val};\theta_{i}^{*})\] (2) s.t. \[\theta_{i}^{*} =\arg\min_{\theta}\mathcal{L}(\mathcal{D}_{i}^{train};\theta)\] During meta-training, the individual models \(\theta_{i}\) are fine-tuned for each entities using their meta-training samples \(\mathcal{D}_{i}^{train}\). These individual models are used to calculate the loss on meta-test samples \(\mathcal{D}_{i}^{val}\), which serves as the training error for the meta model \(\theta^{*}\). This meta-model can adapt to each entity using one or a small number of gradient steps to find the task-adapted parameter of the prediction model. Meta-learning has gained much attention in recent years for several EAM tasks. [16] obtain a high-performance personalized model using meta learning and few-shot entity data. [15] bridge the modeling of infrequent patients (entites) and rare diseases (tasks) by designing a meta learning approach based on hierarchical patient subtyping mechanism. [1] show the benefit of meta-learning over individual models in forecasting a diverse set of air pollution. [13] developed a MAML framework for multiple clinical risks prediction in healthcare application. Other applications include using the prior consumption data from multiple source cities to predict optimal store placement in a new city [14]. Adapting the whole parameter set may put extensive burden on the optimization procedure, possibly biasing the solution of the inner-level optimization. Recently, variations of MAML have been proposed that adapt only the high-level layers instead of the whole meta-model [2, 15]. This strategy can be adapted for the EAM modeling. The key idea is to freeze the prediction model in the inner loop and assume only entity characteristics as a trainable vector. This strategy has been used in engineering [14], finance [14], and vision domains [26]. Recently, [13] used an invertible neural network to infer lake attributes using a few observations. Similarly, [12] proposed to jointly identify and predict systems using similar bi-level op timization of MAML: \[\theta^{*} =\arg\min_{\theta}\sum_{i\in\mathcal{P}(i)}\mathcal{L}(\mathcal{D}_{ i}^{val};\theta,z_{i}^{*})\] (3) s.t. \[z_{i}^{*} =\arg\min_{z}\mathcal{L}(\mathcal{D}_{i}^{train};\theta,z)\] Here, the total parameters are separated into prediction model \(\theta\) shared by the target entities and entity-specific characteristics \(z\). Additionally, many MAML-based methods assume that all train and test entities are drawn from the same distribution. Thus, a single meta-initialization could be challenging to adapt due to the data distribution in different entities being different and multimodal [23]. Furthermore, the training process is computationally expensive and sensitive to hyperparameter choices [1]. 
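A first-order sketch of the variant in Eq. (3), where the shared model is frozen in the inner loop and only the entity-specific vector \(z\) is adapted on the few-shot data, is given below; the model is assumed to take \((x,z)\) as in the earlier sketch, targets are assumed to have shape `[B, 1]`, and squared error is used as the loss. Adapting only \(z\) keeps the number of per-entity parameters small compared to adapting the full network.

```python
import torch
import torch.nn as nn

def adapt_entity_embedding(model, z_init, support, n_steps=20, lr=1e-2):
    """Inner loop of Eq. (3): the shared prediction model is frozen and only
    the entity-specific vector z is fitted on the few-shot (support) data.
    First-order version: the outer loss is not differentiated through z."""
    x_s, y_s = support
    z = z_init.clone().requires_grad_(True)
    for _ in range(n_steps):
        loss = nn.functional.mse_loss(model(x_s, z.expand(len(x_s), -1)), y_s)
        (grad_z,) = torch.autograd.grad(loss, z)   # gradient w.r.t. z only
        z = (z - lr * grad_z).detach().requires_grad_(True)
    return z.detach()

def outer_step(model, optimizer, tasks, z_dim):
    """Outer loop: adapt z per entity on its support set, then update the
    shared parameters theta on the query losses."""
    optimizer.zero_grad()
    total = 0.0
    for (x_s, y_s), (x_q, y_q) in tasks:           # support / query split per entity
        z = adapt_entity_embedding(model, torch.zeros(1, z_dim), (x_s, y_s))
        total = total + nn.functional.mse_loss(model(x_q, z.expand(len(x_q), -1)), y_q)
    total.backward()
    optimizer.step()
    return float(total)
```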
**Conditional Meta Learning:** If the entity distribution is multi-modal with disjoint and far-apart modes (e.g., patients/groups from different countries), a set of separate meta-learners could better master the full distribution. Several strategies have been proposed to learn meta-learners that acquire mode-specific prior parameters by formulating the bi-level optimization problem as:

\[\theta^{*}=\arg\min_{\theta}\sum_{i\in\mathcal{P}}\mathcal{L}(\mathcal{D}_{i}^{val};\theta_{i}^{*})\quad\text{s.t.}\quad\theta_{i}^{*}=\arg\min_{\theta}\mathcal{L}(\mathcal{D}_{i}^{train};\theta)\quad\text{and}\quad\theta=\mathcal{T}(\mathbf{z})\tag{4}\]

Here, the meta-model is conditioned on additional side information \(\mathcal{T}(\mathbf{z})\) that contains descriptive features associated with the entity/task. Several works advocate this conditional perspective, under names such as heterogeneous meta-learning [3], conditional meta-learning [17], or multi-modal meta-learning [23]. Associating each entity with one of the meta-initializations requires additional entity characteristics, which are often unavailable or can be ambiguous when the modes are not disjoint. Under this setting, the most common strategy is to learn another network that converts training data from seen entities into entity-specific embeddings that modulate the shared prediction network [23]. The prediction and embedding networks can be trained either jointly or alternately. [20] learn an embedding metric space that characterizes disease (entity) relationships for disease prediction and show promising results for solving the data scarcity problem in healthcare decision support. [10] learn a mixture of hierarchical Bayesian models by incorporating entity-specific parameters as latent variables. This allows the meta-learner to perform entity-specific parameter selection instead of consolidating inductive biases into a single meta-model. Further, [20] showed that, using a conditioned MAML (CMAML) based approach, traffic data from data-rich cities improves prediction in cities with only a short period of data. Similarly, [3] proposed a graph-based conditional meta-learning approach for predicting water quality and quantity variables in a diverse set of basins. One open challenge for CMAML is quantifying how much entity diversity is needed to merit these methods. Further, most CMAML methods utilize a metric that measures the similarity between entities; thus, we need to investigate novel metrics that better capture this similarity.

**Neural Process:** The neural process (NP) family has been used for EAM in a variety of fields, including robotics, computer vision, and natural language processing [14]. It started with Conditional Neural Processes (CNPs) [1], which combine the benefits of deep neural networks and Bayesian methods, such as Gaussian Processes (GPs), to exploit prior knowledge and quickly infer the shape of a new function. The defining characteristic of the NP framework is that it conditions the prediction function on the observations via an inferred entity embedding. The resulting model can be boiled down to three core components:

\[h_{c}=q_{\phi}(x_{c},y_{c})\ \text{(encoder)},\qquad z=h_{1}\oplus\cdots\oplus h_{n}\ \text{(aggregator)},\qquad y_{t}=p_{\theta}(x_{t},z)\ \text{(conditional decoder)}\tag{5}\]

Here, the encoder produces a representation from each context (input, output) pair; these representations are aggregated to form an embedding \(z\). The conditional decoder then outputs the target predictions from the target inputs and this embedding.
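A minimal sketch of these three components, assuming PyTorch; the architecture sizes, mean aggregation, and Gaussian output head are illustrative assumptions rather than the design of any particular cited system.

```python
import torch
import torch.nn as nn

class ConditionalNeuralProcess(nn.Module):
    """Encoder -> aggregator -> conditional decoder, mirroring Eq. (5)."""
    def __init__(self, x_dim=1, y_dim=1, r_dim=64):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(x_dim + y_dim, r_dim), nn.ReLU(), nn.Linear(r_dim, r_dim))
        self.decoder = nn.Sequential(
            nn.Linear(x_dim + r_dim, r_dim), nn.ReLU(), nn.Linear(r_dim, 2 * y_dim))

    def forward(self, x_ctx, y_ctx, x_tgt):
        # Encode each context (input, output) pair, then aggregate by the mean.
        h = self.encoder(torch.cat([x_ctx, y_ctx], dim=-1))          # (n_ctx, r_dim)
        z = h.mean(dim=0, keepdim=True).expand(x_tgt.shape[0], -1)   # entity embedding
        # Decode a predictive mean and scale for every target input.
        mu, raw_sigma = self.decoder(torch.cat([x_tgt, z], dim=-1)).chunk(2, dim=-1)
        return mu, 0.1 + 0.9 * nn.functional.softplus(raw_sigma)

cnp = ConditionalNeuralProcess()
mu, sigma = cnp(torch.randn(10, 1), torch.randn(10, 1), torch.randn(25, 1))
# Training would minimize the negative log-likelihood of the target y under N(mu, sigma).
```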
Further advancements have been proposed, such as the introduction of latent variables [1] instead of deterministic embeddings, using bootstrapping for multiple latent variables [10], or introducing attention-based versions [15]. The NP framework has found applications in a wide range of domains, given its flexibility, modeling capacity, and computational efficiency. NPs have been used in designing recommender systems [13] for personalized item prediction for each user (entity). In neuroscience, NPs have been used to predict the responses of neurons (entities) in the visual cortex to natural stimuli [16] and for neural spike sorting [1]. [22] propose a Multi-fidelity Hierarchical Neural Process (MF-HNP) that can leverage cheap data from low-fidelity simulators for epidemiology tasks across individuals from multiple age groups and for climate modeling at diverse sites. [17] propose an extension of the NP framework for multi-task classification settings that can quickly adapt to a new task without costly retraining. [18] use a self-supervised contrastive loss to infer the entity embeddings for personalized streamflow prediction. Despite the success of the NP framework, there exist open challenges and limitations that need to be addressed. Choosing the aggregator function is still an open research direction [23]. Further, in many scientific applications there are multiple processes within an entity, which can lead to multiple contexts, as discussed in Section 4.1. Thus, a potential direction is to impose a manifold structure on the latent distribution, or a hierarchy among the latent distributions obtained from the diverse contexts of the same entity.

### Unknown \(z\), out-of-sample & zero-shot

In many scenarios, a good model is expected for out-of-sample entities even though collecting high-quality data for all possible entities (e.g., abnormalities/diseases in healthcare) is challenging. While meta-learning is the common approach for few-shot learning scenarios, it cannot be used when no data are available for out-of-sample entities (the zero-shot setting). Since there is no data/knowledge about the entity characteristics, the characteristics can only be inferred from the training entities' drivers and responses. There could be multiple choices of latent characteristics that yield the same data distribution; however, only one (or a small subset) of them yields a robust model. Additionally, the global model could be biased toward the in-sample entities.

**Disentangled Representation Learning:** Disentangled representation learning is proposed to partition the hidden representation \(h\) into independent factors of variation, which are aligned with the data generative factors [16]. For example, in an image classification task, a disentangled representation might encode the shape and color of an object separately. Building on generative models such as variational autoencoders (VAEs) and generative adversarial networks (GANs), disentanglement is encouraged by new regularizations and training techniques [17, 1]. Better disentanglement could be achieved if there is more information about the latent generative factors, like hierarchical priors or a group of entities sharing a common factor [18, 1].
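As a concrete instance of such regularization, the following is a minimal \(\beta\)-VAE-style objective in PyTorch, in which the KL term toward a factorized prior is up-weighted by \(\beta\) to encourage disentangled latents; the architecture, dimensions, and data are illustrative placeholders, not the setup of any cited work.

```python
import torch
import torch.nn as nn

class BetaVAE(nn.Module):
    """VAE whose KL term is up-weighted by beta to encourage disentangled latents."""
    def __init__(self, x_dim=32, z_dim=8, beta=4.0):
        super().__init__()
        self.beta = beta
        self.enc = nn.Sequential(nn.Linear(x_dim, 64), nn.ReLU(), nn.Linear(64, 2 * z_dim))
        self.dec = nn.Sequential(nn.Linear(z_dim, 64), nn.ReLU(), nn.Linear(64, x_dim))

    def loss(self, x):
        mu, log_var = self.enc(x).chunk(2, dim=-1)
        z = mu + torch.exp(0.5 * log_var) * torch.randn_like(mu)     # reparameterization
        recon = nn.functional.mse_loss(self.dec(z), x, reduction="sum") / x.shape[0]
        # KL( q(z|x) || N(0, I) ), averaged over the batch.
        kl = -0.5 * (1 + log_var - mu.pow(2) - log_var.exp()).sum(dim=-1).mean()
        return recon + self.beta * kl

model = BetaVAE()
loss = model.loss(torch.randn(16, 32))    # a training step would backpropagate this loss
```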
To build an entity-aware model using disentangled representations, one option is to separate the entity-dependent representations from the representations that are shared by all entities, i.e., \(h_{i}=[h_{shared},z_{i}]\). Recent progress in disentangled representation learning provides opportunities for this approach. For example, using identification labels, [1] introduced the identity shuffle GAN (IS-GAN) to disentangle identity-related (e.g., clothing) and unrelated features (e.g., human pose) from person images. [20] introduced a disentangled sequential auto-encoder, whose latent representation is learned to separate time-independent features (e.g., static entity characteristics) from time-dependent features (e.g., states of the entity).

**State Space Model:** The state space model (SSM) is designed for sequential data; it assumes that the observational data are generated from latent state variables through an _emission model_, while the transitions between latent states are described by a _transition model_. Given observations \([x^{t},y^{t}]\) and latent states \(h^{t}\), the vanilla state space model [1] can be formulated as

\[h^{t}=g(h^{t-1})+\varepsilon_{z}\ \text{(transition model)},\qquad[x^{t},y^{t}]=f(h^{t})+\varepsilon_{x}\ \text{(emission model)}\tag{6}\]

According to the equation above, the hidden Markov model (HMM) can be seen as a special state space model in which the latent state is discrete and the transition depends only on the previous latent state. Recently, researchers have added neural structures to the conventional state space model for better approximation and to learn nonlinear latent states. Deep SSMs are usually solved by a variational learning algorithm, which includes an inference network to approximate the intractable posterior of the latent states and a generative model to approximate the transition and emission models. [15] proposed an inference algorithm to learn continuous latent states of deep Markov models (DMMs), where the emission distributions are modeled by deep neural networks and the transition distributions are estimated by an RNN-based inference network. [1] applied an attention mechanism to the latent states to capture the dependence between the current and all past states, which generalizes the transition model in Eq. 6; the authors also proposed an inference algorithm for a discrete latent state. [19] further combined the state space model with transformer architectures, using attention instead of RNNs to model the latent state dynamics. Given its flexibility and interpretability, the state space model is widely used for time series modeling and forecasting in domains like computer vision [14] and healthcare [20]. Recent progress in state space models holds great promise for EAM. The overall idea is to use prior knowledge about entities to learn better latent state representations and dynamics. For example, [17] introduced a model that allows the transition of latent states to depend on the characteristics of the entity (e.g., genetics, demographics). The learned latent representations thus implicitly capture entity-related knowledge (e.g., clinical phenotypes, pharmacodynamics) from the observations. In the computer vision domain, [14] used a Kalman variational auto-encoder (KVAE) as the inference network. The KVAE is designed to separate an object's representation from the latent state describing its dynamics in an unsupervised manner, which overlaps with disentangled representation learning.
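To make Eq. (6) concrete, the sketch below rolls out a deep (nonlinear) state space model in PyTorch, with small neural networks standing in for the transition function \(g\) and the emission function \(f\); it is a simplified illustration under assumed dimensions, not an implementation of the cited DMM or KVAE models, and the variational inference network used for training is omitted.

```python
import torch
import torch.nn as nn

class DeepStateSpaceModel(nn.Module):
    """Nonlinear transition/emission model as in Eq. (6), rolled out over time."""
    def __init__(self, h_dim=16, obs_dim=4):
        super().__init__()
        self.transition = nn.Sequential(nn.Linear(h_dim, 32), nn.Tanh(), nn.Linear(32, h_dim))
        self.emission = nn.Sequential(nn.Linear(h_dim, 32), nn.Tanh(), nn.Linear(32, obs_dim))

    def rollout(self, h0, steps, noise_std=0.1):
        h, outputs = h0, []
        for _ in range(steps):
            h = self.transition(h) + noise_std * torch.randn_like(h)   # h^t = g(h^{t-1}) + eps_z
            y = self.emission(h)
            outputs.append(y + noise_std * torch.randn_like(y))        # [x^t, y^t] = f(h^t) + eps_x
        return torch.stack(outputs, dim=1)                             # (batch, steps, obs_dim)

ssm = DeepStateSpaceModel()
simulated = ssm.rollout(torch.zeros(8, 16), steps=20)
# A full deep SSM would pair this generative model with an inference network
# and train both by maximizing a variational lower bound on the data likelihood.
```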
**Causal Representation Learning:** This group of methods focuses on the discovery of latent causal variables and on robust prediction in downstream tasks [10]. Most disentangled representation learning methods are insufficient for learning causal representations since they try to disentangle independent factors from observations; causal factors, however, are usually dependent on each other, forming an underlying causal structure. To fill this gap, a line of recent work focuses on recovering the causal representation from the disentangled factors. [21] proposed to add a causal layer in a VAE-based model to transform independent factors into a causal representation. [1] introduced a weakly supervised disentanglement method for the case where the dependency among hidden generative factors is only caused by confounders (common parents). [22] used a trainable structural causal model, instead of an independent prior, as the prior distribution to enforce causal disentanglement. In causal representation learning, the _Causality Assumption_ [1] states that the environment \(e\) does not change the relationship between the covariates \(X\) and the target variables \(Y\). The environment \(e\in\mathcal{E}\) is a special case of an entity, also referred to as an experimental setting, sub-population, or perturbation. For example, basins from different locations and different patient populations can be seen as different environments. Leveraging such invariance across entities could yield a robust model. Given the assumption that the environment \(e\) only changes the distribution of the covariates \(X\), recent progress shows empirically and theoretically that causal representations enable out-of-distribution generalization [10]. Entity-aware models built on causal representations can resist distributional shifts induced by interventions and selection bias. However, this approach may fail when entities diverge in ways other than distributional shifts. Further, domain knowledge on the latent causal variables/mechanisms can be critical for causal identification [22]. Successful adoption of causal representation learning methods requires addressing the challenge of the identifiability of causal variables [14]. A detailed discussion of the identifiability problem can be found in Section 4.2.
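The invariance idea can be illustrated with an IRM-style penalty across environments (entities), sketched below in PyTorch; it follows the widely used IRMv1 gradient-penalty formulation in spirit, but the model, synthetic data, and penalty weight are illustrative assumptions, not the method of any specific cited paper.

```python
import torch
import torch.nn as nn

def irm_penalty(logits, y):
    """IRMv1-style penalty: squared gradient of the per-environment risk
    with respect to a dummy classifier scale fixed at 1.0."""
    scale = torch.ones(1, requires_grad=True)
    loss = nn.functional.binary_cross_entropy_with_logits(logits * scale, y)
    (grad,) = torch.autograd.grad(loss, scale, create_graph=True)
    return (grad ** 2).sum()

model = nn.Sequential(nn.Linear(10, 16), nn.ReLU(), nn.Linear(16, 1))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
penalty_weight = 10.0

# Assumed data: one (X, y) pair per environment/entity.
environments = [(torch.randn(64, 10), torch.randint(0, 2, (64, 1)).float())
                for _ in range(3)]

risk, penalty = 0.0, 0.0
for X, y in environments:
    logits = model(X)
    risk = risk + nn.functional.binary_cross_entropy_with_logits(logits, y)
    penalty = penalty + irm_penalty(logits, y)
opt.zero_grad()
(risk + penalty_weight * penalty).backward()   # one training step on the invariant objective
opt.step()
```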
## 4 Further research topics

### Incorporating additional entity-level information

Several applications exist where auxiliary information about entities can be accessed. This supplementary information is available in primarily two forms: a) process understanding of the entities and b) additional independent observations of entity states. ML models, being data-driven, are not impacted by our limited understanding of the underlying processes. However, ML models can only learn (however complex) patterns present in the data used for training and thus fail on unseen data outside the range seen in training. Most real-world systems consist of multiple physical processes interacting in a hierarchical order. Moreover, these processes are often highly nonlinear and exhibit complex behavior encompassing multiple inputs and outputs. There is an opportunity to advance the EAM framework further by leveraging prior physical knowledge of this hierarchical structure, which provides a principled way to share parts of the entity characteristics across diverse processes through joint optimization. Apart from advances in physics-guided machine learning that utilize physical equations, boundary conditions, and other inductive biases, entity-specific physical descriptors and physical processes can also be incorporated into the modeling framework to enable generalization in unseen scenarios [23]. Another opportunity unique to many environmental problems is the availability of ancillary information about the system beyond the standard input and output variables. For example, streamflow in a river catchment is modeled as a function of weather drivers, but auxiliary information such as soil moisture data from in-situ sensors or earth-observing satellites [1] can provide valuable information related to underlying processes such as evapotranspiration and base flow. New EAM methods are required that can readily incorporate such diverse sources of data and have the potential to represent complex physical relationships between multiple bio-geo-physical processes.

### Identifiability of Characteristics/Equifinality

When characteristics are unknown in EAM, a central problem is how to correctly identify those factors. Although methods like NP, SSM, and disentangled/causal representation learning show potential to learn entity-related representations, there is no guarantee that the learned latent representation corresponds to the real characteristics (latent causal factors) [14]. The intuition is that, given the observational variables, there could be infinitely many generative models yielding the same observations, and those algorithms cannot discriminate the true causal model from other equivalent generative models. Recent progress has shown that it is impossible to recover latent causal variables without inductive biases on both models and datasets [14, 15]. This problem is known as the identifiability of causal models. Existing works established identifiability results based on independent component analysis (ICA) [20]. The identifiability and uniqueness of linear ICA models have been well studied [1]. For nonlinear ICA models, researchers argue that the latent causal variables are unidentifiable without temporal structure [13]. Recent advances focus on extending the identifiability of linear ICA to nonlinear ICA, using the nonstationary structure of time series or auxiliary variables [13]. However, this line of work does not assume a causal relationship or generative process between latent variables and observed variables, which limits its use. How to correctly identify latent causal variables and structure is still an open problem. Current attempts make strong assumptions on the measurement model or noise type, or require nonstationary time series [25, 26], and those methods are only tested on synthetic datasets or simple scenarios. Thus, there is an opportunity to identify latent causal variables in complex systems, especially in scenarios where good domain knowledge is available, which is more informative than auxiliary variables.
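As a small, self-contained illustration of the linear-ICA setting referenced above, the following scikit-learn sketch unmixes two synthetic sources; linear ICA recovers them only up to permutation and scaling, which is exactly the kind of identifiability guarantee that is still largely missing in the more general nonlinear and causal settings. The sources and mixing matrix are illustrative assumptions.

```python
import numpy as np
from sklearn.decomposition import FastICA

t = np.linspace(0, 8, 2000)
# Two independent, non-Gaussian sources and an unknown linear mixing matrix.
sources = np.c_[np.sin(3 * t), np.sign(np.sin(5 * t))]
mixing = np.array([[1.0, 0.5],
                   [0.4, 2.0]])
observed = sources @ mixing.T

recovered = FastICA(n_components=2, random_state=0).fit_transform(observed)
# 'recovered' matches 'sources' only up to permutation and scaling --
# the classical identifiability guarantee of linear ICA.
```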
### Uncertainty Quantification

Uncertainty estimation in EAM enables the quantification of uncertainty stemming from the model structure and the input/output data and improves our understanding of different scientific processes and inherent entity characteristics. Uncertainty estimates can be used to establish the usability of an entity-aware model for operational decision-making in real-world applications [11, 12]. Finally, uncertainty quantification (UQ) methods also allow domain scientists to encode prior knowledge as model structure [10] for robust generalization. Uncertainty can be introduced in EAM from several sources. First, EAM methods may be simplifications or approximations of real-world physical systems, leading to model-structure uncertainty. Second, imperfections, measurement errors, interpolation, or noise in entity characteristics can also lead to uncertainty in the known characteristics. Finally, more recently, there has also been a focus on estimating distributional uncertainty, which arises because of differences in the data distribution between the training and test sets. Existing UQ methods include Bayesian methods that compute a posterior predictive distribution and provide uncertainty estimates. Dropout-based methods like Monte Carlo Dropout [1] keep dropout active at test time to perform approximate Bayesian inference when making predictions. Weight perturbation schemes [17] have also been adopted for uncertainty quantification. Using variational inference makes learning in these Bayesian networks more feasible [1]. Other approaches, such as mixture density networks [1], have been used for multi-modal data, where each of the modalities can be captured by one of the mixing components. More comparisons of uncertainty estimation methods can be found in [14]. Recent studies have also attempted to decompose the different sources of uncertainty [22]. Principles of evidential theory have further been used to learn other sources of uncertainty [20]. Several of these UQ methods can be used to improve the EAM methods discussed in Section 3. First, several variational Gaussian process methods [1] use inducing points to estimate the posterior function from few-shot data. Thus, the uncertainty due to different approximation mechanisms and different subsampled datasets can be estimated and used to study the differences in the generalization capabilities of these methods. Second, the decomposition of uncertainty estimates can be pivotal in decision-making: understanding whether the current EAM can help adapt the model to specific use cases, or whether we need to build better models and use different datasets for our analysis. Third, most UQ methods develop Bayesian frameworks that use Gaussian distributions as function priors. A direction that would be useful for practical applications is looking at other prior distributions for model parameter sampling. For instance, where a target or outcome variable can take extreme values (e.g., extreme temperature modeling), approximating the prediction function using a Gumbel or t-distribution prior can improve accuracy. Finally, existing EAM methods assume that all entities are independent. However, in many scenarios, an entity can also be a mixture of base entities, such as a community of people or a category of micro-organisms. While several multi-modal EAM methods exist, formulating the prediction function as a mixture of components also allows for multi-modal modeling.
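A minimal sketch of the Monte Carlo Dropout approach mentioned above, assuming PyTorch: dropout is kept active at test time and several stochastic forward passes are averaged to obtain a predictive mean together with a simple spread-based uncertainty estimate. The network and number of samples are illustrative choices.

```python
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(8, 64), nn.ReLU(), nn.Dropout(p=0.2),
    nn.Linear(64, 64), nn.ReLU(), nn.Dropout(p=0.2),
    nn.Linear(64, 1),
)

def mc_dropout_predict(model, x, n_samples=100):
    """Approximate the predictive mean/std by sampling dropout masks at test time."""
    model.train()                    # keeps the dropout layers stochastic
    with torch.no_grad():
        preds = torch.stack([model(x) for _ in range(n_samples)], dim=0)
    return preds.mean(dim=0), preds.std(dim=0)

mean, std = mc_dropout_predict(model, torch.randn(32, 8))
# 'std' can be read as a per-sample (epistemic) uncertainty estimate.
```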
### Fairness

In EAM, the imbalance in training data collected from multiple entities can naturally introduce bias for some entities or groups. Such entity-related bias can adversely affect both individuals' opportunities and equity across the whole population. Another source of unfairness is bias in the measurement error of the input features; for example, the phenomenon of datasets having higher error profiles in emerging economies occurs in several applications [15, 16]. Fairness over multiple entities is commonly formulated in three different ways. First, individual fairness follows the philosophy that similar entities should yield similar predictions with respect to a particular task, regardless of sensitive attributes (e.g., gender, income, and race). The second type of fairness (e.g., equal opportunity [1] and statistical parity [13]) aims to ensure that the model output distribution is fair across entities. Third, fairness can also be measured in terms of performance disparity across different entities, especially to identify biased predictions for entities in disadvantaged groups or low-resource environments. For these settings, fairness can also be defined over groups of entities formed by certain attributes. For example, fair streamflow prediction across groups of river streams, grouped according to local annual income and business type, can reduce the chance of flood risks being underestimated for low-income areas. Amongst existing fairness-enforcing methods, the most common strategy is to include additional fairness-related losses during the training process [13]. Another major direction is to learn group-invariant features [1], in which discriminators are introduced to penalize learned features that carry discriminative information about certain sensitive attributes (e.g., gender). Sensitive-category de-correlation also employs the adversarial learning regime, but it tries to mitigate the polarization of predictions [23, 1]. To alleviate the competition between predictive accuracy and fairness, a bi-level model refinement has been proposed to disentangle the prediction and fairness objectives [11]; another benefit of this method is that it allows non-differentiable fairness measures. On the other hand, new data collection and filtering methods are being developed to reduce bias in downstream learning tasks [15]. These methods have been applied to tasks related to face detection [13], text analysis [23], land cover mapping [11], etc. Existing fairness-enforcing methods in EAM face several challenges. First, although many definitions of fairness have been proposed in the existing literature, fairness needs to be carefully formulated depending on the nature of the target problem. Second, fairness metrics are fragile or sensitive to the grouping of entities, i.e., conclusions on "fair" or "unfair" can be easily altered by simple changes in the grouping of entities. Third, in real-world EAM problems, the deployment environments may differ from the training environment. As a result, a fairness-enforced model learned from training samples may fail to preserve fairness in the target testing scenarios.

## 5 Conclusion

In this survey, we proposed a structured review of entity-aware modeling (EAM) research. As shown by this paper, many different research efforts have the potential to advance EAM. We organized the existing research based on the availability of entity characteristics and training samples. We hope that this structure will help in providing an organized view of this rapidly evolving field of research. This survey will also be valuable for domain scientists interested in exploring the use of ML to enhance EAM in their respective applications. Furthermore, we presented additional research directions that will improve the performance and usability of EAM in operational decision-making.
2308.02616
Designing for Passengers' Information Needs on Fellow Travelers: A Comparison of Day and Night Rides in Shared Automated Vehicles
Shared automated mobility-on-demand promises efficient, sustainable, and flexible transportation. Nevertheless, security concerns, resilience, and their mutual influence - especially at night - will likely be the most critical barriers to public adoption since passengers have to share rides with strangers without a human driver on board. As related work points out that information about fellow travelers might mitigate passengers' concerns, we designed two user interface variants to investigate the role of this information in an exploratory within-subjects user study (N = 24). Participants experienced four automated day and night rides with varying personal information about co-passengers in a simulated environment. The results of the mixed-method study indicate that having information about other passengers (e.g., photo, gender, and name) positively affects user experience at night. In contrast, it is less necessary during the day. Considering participants' simultaneously raised privacy demands poses a substantial challenge for resilient system design.
Lukas A. Flohr, Martina Schuß, Dieter P. Wallach, Antonio Krüger, Andreas Riener
2023-08-04T14:39:46Z
http://arxiv.org/abs/2308.02616v1
# Designing for Passengers' Information Needs on Fellow Travelers: A Comparison of Day and Night Rides in Shared Automated Vehicles ###### Abstract Shared automated mobility-on-demand promises efficient, sustainable, and flexible transportation. Nevertheless, security concerns, resilience, and their mutual influence - especially at night - will likely be the most critical barriers to public adoption since passengers have to share rides with strangers without a human driver on board. As related work points out that information about fellow travelers might mitigate passengers' concerns, we designed two user interface variants to investigate the role of this information in an exploratory within-subjects user study (\(N=24\)). Participants experienced four automated day and night rides with varying personal information about co-passengers in a simulated environment. The results of the mixed-method study indicate that having information about other passengers (e.g., photo, gender, and name) positively affects user experience at night. In contrast, it is less necessary during the day. Considering participants' simultaneously raised privacy concerns, balancing security and privacy demands poses a substantial challenge for resilient system design. keywords: Automated mobility-on-demand; Automated vehicles; Ride-sharing; Security; Information needs; Context-based prototyping; Immersive video-based driving simulation. ## 1 Introduction The rapid progress of automated driving technologies promises to revolutionize public transportation (PT) by creating automated mobility-on-demand (AMoD, [1]) systems. In AMoD, passengers are transported by driverless vehicles - i.e., cars with SAE level 4 (high driving automation) or level 5 (full driving automation) capabilities [2]. Those automated vehicles (AVs) will be guided by intelligent traffic management systems. They enable efficient route planning and smart ride-sharing, which will decrease the number of vehicles on the streets [3], turning traffic jams into a remembrance of the past [4].
## 2 Related Work Recent studies examined people's willingness to switch from private to shared mobility modes in the future of mobility [11], with demographics such as gender [20] and age [21] as predictors for the adoption of shared AVs (SAVs), and young men being the group with the highest openness towards this technology. Some of the most critical barriers to accepting SAVs are related to security concerns because rides will have to be shared with strangers [22; 13; 11; 23; 24; 10]. Clayton et al. [25] examined the willingness to share an AV with other passengers and found uncertainty about sharing and a strong preference for privately owned vehicles. An online survey by Pakusch et al. [26] underlines the reluctance to switch from private rides to shared ones, predicting that private AVs will dominate the future of automated driving. Mapping these results to the context of SAVs, Lavieri and Bhat [27] found that people were willing to pay extra fees for trips in SAVs when only the vehicle, but not the trip, is shared with others. Their study indicated that privacy and security concerns would prevent participants from opting for sharing rides with strangers -- affecting commuting trips to a lesser extent than rides for pleasure purposes [27]. A qualitative user study by Schuss et al. [11; 13] emphasizes security concerns as an important issue for automated ride sharing, especially for women -- and particularly during the night. These results were confirmed by Piao et al. [23; 20]. Passenger security will be challenging for SAMoD, with the time of day playing an essential role since passengers - particularly women [20] - are more concerned about trips during the night [23]. In this context, the absence of a human driver is a key factor [20; 11; 28; 29]. Lavieri and Bhat [27] found that not having a driver on board an AV seems to be particularly problematic for Millennials, as they see a driver as a kind of "guardian". Consequently, as confirmed by Biermann et al. [30], there seems to be an increased need for security. As a result, future passengers might tend to accept the use of monitoring systems for preventing crime and vandalism and for handling health emergencies [30]. In their online survey, Sarriera et al. [22] found that major deterrents for adopting SAMoD are potentially unpleasant co-passengers, uncertainty regarding the length of a trip, and a preference for privacy during a ride. The authors also discovered biased opinions toward passengers of different social statuses and races, leading passengers to prefer having more information about their co-passengers [22]. An essential factor for adopting SAVs is passengers' willingness to share space and time with strangers [21], which is even more important for leisure trips than for business trips [27]. People are still hesitant toward automated systems and toward giving up control. [17] establish a triad of trust, control, and safety needs to ensure positive user experiences during AV rides. They propose clear communication and transparent system feedback to support these needs and suggest using visual and auditory feedback about the next stop of automated shuttles [17]. As mentioned above, the use of monitoring technologies and information sharing has the potential to increase the perception of security.
At the same time, however, it needs to be taken into account that travel information can be a private matter [31]. Konig et al. [12] evaluated whether information about potential co-passengers influences the acceptability of SAMoD systems and measured how different levels of information affected participants' compensation demands. Detailed information about co-passengers proved to be beneficial [12]. Interestingly, they also found that information about men as fellow travelers resulted in higher refusal rates than information about women travelers [12]. In accordance with this observation, women seem to prefer being matched with other women to increase feelings of security [13; 29]. Indeed, women-only vehicles already have been discussed in the context of public transportation [32], ride-hailing [33], and in the context of SAVs [24]. The mentioned studies show that potential users have security issues with SAVs and underline the importance for the HCI community to come up with adequate solutions. ### Perceived Security, Resilience, and Their Mutual Influence Shared driverless travel poses new challenges for resilient system design. Generally, resilience can be referred to as the "process and outcome of successfully adapting to difficult or challenging [...] experiences" [34]. In terms of SAMoD, overall system resilience also depends on the psychological resilience of (prospective) users. Particularly passengers' perceived security seems to impact resilience in various ways. Firstly, it influences individuals' trust in the system and their confidence in the system's ability to protect them from potential harm or threats [35]. This trust acts as a foundation for their psychological resilience, as individuals are more likely to be adaptive and resilient when they feel secure and protected [35]. In addition, perceived security influences individuals' willingness to report security incidents or vulnerabilities [36]. A culture that encourages open communication and reporting fosters resilience by allowing for timely identification and response to security issues [37] - which then again also increases overall system resilience. When individuals perceive that their contributions are valued and acted upon, it enhances their motivation to actively participate in maintaining and improving the system's resilience [38]. Overall, perceived security plays a vital role in shaping resilience by influencing individuals' mindsets, behaviors, and willingness to engage in proactive actions within a system [39]. In turn, we argue that from a human factors and ergonomics (HF/E) perspective, resilience plays a significant role in influencing perceived security within a (SAMoD) system. When individuals and organizations exhibit resilience, it creates a sense of confidence and trust in the system's ability to withstand and recover from adverse events or security breaches. Taking into consideration the crucial role of perceived security in SAMoD systems (Section 2.1), this perception of security is vital as it will affect passengers' interaction with and trust towards the system and influence their willingness to use it. Consequently, factors affecting the resilience of both individual users and the overall system should be considered from early design phases. Regarding SAMoD, this particularly involves considering them in the design of suitable user interfaces. 
### Interface Design and Evaluation for Shared Automated Mobility SAMoD UIs can range, e.g., from passenger information displays in vehicles and planning and booking applications on mobile devices to terminals at mobility hubs. They will provide the sole basis for the communication of passengers and the intelligent systems as no human operators (e.g., drivers) will be involved anymore. In terms of interaction modalities, already familiar technologies like touchscreens, information displays and control buttons seem to be preferred by potential SAMoD users [30]. The preference for established modalities, such as visual and auditory, and interior locations, such as the front area, is reflected by the systematic literature review on the in-vehicle design space by Jansen et al. [40]. They provide a comprehensive overview of input and output modalities and information locations and highlight the relevance of multi-modal in-vehicle interactions. Since most currently available (S)AVs are still limited, prototyping and simulation methods are used to test and evaluate future (S)AMoD UIs. To ensure a meaningful transfer of study results to the development of SAMoD systems, it is essential to integrate the highly dynamic element of the context of use [41; 42]. In ubiquitous systems like SAMoD, this involves not only the consideration of auditory and visual factors but also the inclusion of surrounding elements like other people that might be present, as well as their relation to the respective users [41]. There are several methods to prototype the physical and social context of human-AV interactions [43], including lab-based prototyping with mock-ups, virtual reality, and simulators [44; 45; 46; 42; 47], wizard-of-oz vehicles (WoOz) [48; 49; 50], and experimental AVs [51; 10; 52; 29]. [43] provide a detailed overview of suitable methods and discuss the value of context-based interface prototyping for the AV domain. In general, each method offers advantages and disadvantages that have to be weighed by the experimenters depending on the focus of the study. While the use of experimental AVs for evaluating SAMoD UIs intuitively seems to be the first choice, it needs to be considered that current setups are still quite limited, e.g., to specific test scenarios and low speed limits [52].If a study purpose can be achieved under these limitations, actual AVs might be suitable. However, to investigate human-AV interactions beyond those limitations, e.g., automated rides in complex urban environments, WoOz and simulators offer promising alternative approaches. In WoOz setups used to simulate AVs, a hidden driver de facto controls the vehicle, while study participants are told that the vehicle is driving fully automated [53]. WoOz studies allow for conducting realistic ride studies in complex environments. Still, the method is limited in terms of control and comparisons between rides because each ride varies due to contextual factors (e.g., traffic density, time of day, or the behavior of other road users). In contrast, simulators offer controllable and reproducible test environments [54; 55]. As [46] point out, virtual-reality-based simulators are often quite sophisticated constructs that can be applied to investigate driver-vehicle interactions where the simulation needs to adjust to the participants'/drivers' steering input. Since users of SAMoD systems are passive passengers, it is not necessary to enable study participants to control the simulation. 
Simulations can thus also be realized using "immersive video" [41], which offers a straightforward and time-efficient approach to prototyping SAMoD systems, e.g., [45; 46]. The latter is quite fitting to investigate human-AV interactions in specific situations (e.g., at a particular time of day) in a controllable manner but still with an adequate representation of the dynamic environment. ## 3 Material and Method To investigate our research question, we created a UI prototype representing a SAMoD in-vehicle passenger information display. Variants of the UI were evaluated in an exploratory within-subjects user study with a diverse sample of participants (\(N=24\); gender-balanced, wide range of ages) using a video-based automated vehicle simulator [46]. With our study, we aim to have a closer look into the information needs of passengers and counteract the limitations of an online survey by simulating rides in an SAV during different times of the day. At the same time, we do not necessarily say this would be the best solution. Quite the contrary, we acknowledge that 1) serious privacy side effects could arise, and 2) stereotypes could be further manifested. Still, we wanted to let participants experience receiving this information on their fellow travelers and discuss with them how it would influence their perceived security. Our motivation was to evaluate whether such a controversial concept would convey security after all and, if so, under what circumstances people would need and want to use it. We identified leisure trips as a typical case for the use of SAMoD during both day and night times (more details in section 3.3). In each simulated ride, an in-vehicle UI prototype (section 3.2) provided participants with information about the ride and fellow passengers. In the respective CIs, we varied the type and amount of information participants received when co-passengers boarded and left the vehicle. Information was either provided _with_ personal data on co-passengers (name, age, target destination, profile picture), or _without_. We used a within-subjects design, and each participant experienced four rides: two night and two day trips, one with and one without personal information about co-passengers. The order in which each participant experienced the variants was randomized and counter-balanced. Since we wanted to investigate shared rides, we identified two options to include the "sharing" aspect in the study: 1) simulating passengers boarding/leaving the vehicle with sounds and visual information displayed on the UI, and 2) using real persons ('actors') that complement the setup. Regarding the latter, Flohr et al. [46] investigated the effect of supplementing SAV simulator studies with actors mimicking co-passengers. While they found some support for the approach, it does not seem to increase participants' immersion in the simulation. Instead, it seems to increase the occurrence of motion sickness symptoms in simulator studies [46]. Therefore, considering the potential adverse effect on participants' well-being during the simulator study and the problematic pandemic situation at the time of the study conduct, we decided to simulate co-passengers getting on and off the SAV only virtually. While this supports, on the one hand, our intended focus on the information display, this can, on the other hand, also be considered a limitation of the study, which we further discuss in section 5.4. 
The study was conducted in accordance with the ethical guidelines stated in the Declaration of Helsinki [56]. Participants took part voluntarily, were obliged to provide their written informed consent, and had the opportunity to abort the study at any time without stating reasons. Figure 1: Study participants experienced two day and two night rides in the immersive video-based automated vehicle simulator. ### Setup Since contextual factors play a crucial role in passengers' travel experiences and information needs, we intended to establish a realistic but still controllable test environment for the user study. Therefore, we adapted the immersive video-based simulation setup used by Flohr et al. [46; 42] and combined it with a tent-based vehicle mock-up (e.g., used by Schuss et al. [11]) to provide even more realism. The resulting setup (Fig. 1, 3) consisted of three LCD screens that played back videos representing a passengers' view out of the front, left, and right windows of a shared AV. Similar to [46], we used audio and video footage of day and night rides through an urban environment to create simulations for two night and two day rides. The footage was captured using three action cameras mounted in the center of a BMW i3's windshield, as well as on the front side windows. In addition, we enhanced audio footage with additional sounds (e.g., opening and closing noises of sliding doors). Along with a 2x2 seating group, the footage was played back on three NEC Full HD 55.1-inch TV screens situated in a tent-based vehicle mock-up. The tent separated the simulation from the surrounding lab environment to support participants' immersion by entering a closed space when boarding the simulated SAV (Fig. 3). The UI prototype of the passenger information display was displayed visually on an additional 24.1-inch screen (Fig. 1). Audio sounds and voice prompts were provided by a Logitech 2.1 sound system. ### Design Process and Prototypes The tested UI prototypes were designed iteratively following findings from related user studies and a comprehensive literature review. We used video-based prototyping to create high-fidelity visual and auditory UI representations that matched the video-based simulation. The visual information display featured a split-view of 1) a schedule showing upcoming stops, estimated arrival times, and information on co-passengers getting on/off the vehicle, and 2) a map illustrating the current location of the AV and the planned route (Fig. 2), which follows proposals of previous work (e.g., [24; 46]). We created two general prototype variants to investigate the research question (Fig. 2). While the first variant ("without") does not show personal information about co-passengers, the second variant ("with") features such information by displaying name, age, target destination, and profile picture of co-passengers. For each test ride, participants experienced either a prototype with or without information on co-passengers, i.e., the variant stayed consistent within the rides. Previous research suggests that combining these data reduces overall compensation demands for sharing a ride with a stranger [12]. We did not include a rating of fellow passengers, as rating systems hold discriminating characteristics [13; 57]. We used AI-generated pictures with neutral facial expressions [58] as photos of the entering fellow passengers. We included fellow passengers' age as we hypothesized that this information might influence participants' perceptions. 
For this, we defined two age groups: young (between 20 and 30) and older (between 50 and 60). The age of fellow passengers was balanced so that each participant experienced one ride with a younger man/woman and an older man/woman, as we expected that age could have an effect on passengers' perceived security. The provided contextual information (map, street names, etc.) matched the real-world environment where the simulation footage was recorded and was animated (using Adobe After Effects CC 2021) according to the simulated vehicle's movements (e.g., the position of the AV on the map). Figure 2: Apart from time of day ("day" and "night"), study conditions varied in the amount of provided information on co-passengers: 1) without information, 2) with information. In the two rides with information, co-passengers' age varied between "young" and "older". For permutation purposes, we created eight video prototypes of the UI to have one variant without and one with information on co-passengers for all four simulated rides. Signal sounds and voice prompts complemented the visual UI (e.g., without: _"Next stop: [stop name]. One passenger gets on. One passenger gets off."_; with: _"Next stop: [stop name]. [Name of passenger] gets on. [Name of passenger] gets off."_). Voice prompts were created using text-to-speech conversion by Microsoft Azure. ### Scenarios We intentionally included participants covering a wide age range in the study, including young people who are not yet working and older adults who no longer work. To cover a broad spectrum of participants' real lives, we chose leisure trips as scenarios for the four rides in the study. Since people are reported to be more likely to reject sharing rides with unknown fellow passengers for leisure trips compared to commute trips [27], we wanted to explore whether information about other passengers would mitigate this effect. All participants engaged in four trips: two during the day and two at night. We used storytelling to create authentic scenarios for each trip to enhance immersion. The day trips went from a bakery to a park to meet friends and back. The night trips were also round trips; they started near the passenger's home and had a restaurant, where friends were supposed to meet, as their destination. To help participants get better acquainted with the scenario, they received a paper ticket before each ride with their name, destination, and departure and arrival times. After reading the scenario to them and handing over the ticket, our participants entered the shuttle bus, chose one of the seats in the front row, and one of the investigators started the video simulation. During each trip, one man and one woman entered the vehicle virtually as co-passengers (i.e., their presence was only conveyed by the information displayed in the UI prototype). We did not randomize the boarding order, i.e., the woman always entered first, to avoid losing statistical power due to too many conditions. However, participants always rode with only one person at a time, since we hypothesized that it would affect participants' perceived security whether they would be sharing rides with a single man/woman or multiple persons simultaneously. The first (virtual) co-passenger entered at the first stop and got off at the second stop, where the second co-passenger entered the vehicle. At the third stop, participants reached their target destination.
### Procedure and Measurements We used a mixed-method approach [59] and triangulated quantitative data collected during and between rides with observations and qualitative interview data. Each study session can be divided into three parts: briefing and pre-questionnaire, test rides and measures, and post-session interview. Each session took between 60 to 90 minutes in total. #### 3.4.1 Briefing and Pre-Questionnaire After receiving a briefing comprising general information about the study goal and the procedure, participants signed a declaration of consent. Then, they filled out a demographic pre-questionnaire. We also included the short version of the Big Five inventory [60; 61] to get insights into a participant's personality. Prior research showed that psychological factors and attitudes most likely influence people's adoption of AVs [62; 63; 64]. As the level of a person's anxiety influences the perceived security [65], we also included the state-trait anxiety inventory (STAI) [66] in our pre-questionnaire. Since current research is not conclusive on whether having experienced any sort of crime has an influence on perceived security [14], we left this aspect out. #### 3.4.2 Test Rides and Measures During each of the four rides, participants filled out Russell's Affect Grid [67] in an adapted emoji-based version inspired by [68] using pen and paper. The Affect Grid is one of the most widespread models for emotion measurement and consists of two dimensions to measure: pleasure (displeasure - pleasure) and arousal (low energy - high energy) [69]. Each time information about an upcoming stop and entering or leaving passenger was displayed during the ride, participants were instructed to set a cross to express their current emotional state in the grid. After each ride, participants got off the simulated automated vehicle and summarized their subjective emotional constitution throughout the journey by drawing an emotion curve on a template also used by [48; 42]. Subsequently, the experimenter accompanied them to a workplace where they filled out a digital questionnaire. Starting with the short version of the User Experience Questionnaire (UEQ-s; 8 bipolar items; 7-point scale; [70]) as well as the Usefulness [71] and Attractiveness [72; 73] dimensions of the UEQ+ (4 bipolar items for each dimension; 7-point scale; [71]) participants assessed their experiences of the ride and respective HMI concept. Since we expected the type and amount of provided information to have an effect on passengers' trust, participants also assessed the Trust in Automation scale of Korber (2 items; five-point Likert-type scale; [74]). Furthermore, we investigated users' acceptance with the Intention to Use (2 items; 5-point Likert-type scale), and Perceived Usefulness (3 items; five-point Likert-type scale) dimensions of Chen's adaption of the technology acceptance model [75]. Subsequently, we included Dekker's Security Concerns scale (1 item; 5-point Likert-type scale; [76]) and the Perceived Risks scale (1 item; 5-point Likert-type scale; [77]) as risk also has an influence on the perceived security [65]. After the last ride, each participant additionally filled out the Igroup Presence Questionnaire (IPQ, 14 items; 7-point Likert-type scale; [78; 79]) to assess the quality and immersion of the simulated environment. #### 3.4.3 Post-Session Interview Finally, we conducted a semi-structured post-session interview with each participant. 
We asked open-ended questions about the rides in general and the co-passenger information that was provided by the UI. Participants were asked which version of the UI they liked best and why. Participants were also prompted about potential feelings regarding security in the respective conditions, and we inquired whether some information was missing from their point of view. With the consent of participants, audio captures of all post-session interviews were recorded for an in-depth post hoc analysis. ### Participants In total, 24 participants (12 women, 12 men, 0 diverse, 0 n/a; from 18 to 81 years, \(M(SD)=40.5(21.3)\), \(Median=30\)) took part in the study. All participants were recruited through university mailing lists and word of mouth and attended the study voluntarily. For participation, all of them received financial compensation (approx. 25 US dollars). Their national background was [blinded for review], [blinded for review], [blinded for review], [blinded for review], and [blinded for review]. Figure 3: Study procedure (top) and sequence of the four simulated SAMoD rides (bottom). We used the STAI inventory to measure participants' interindividual tendency to evaluate situations as threatening or to react with increased feelings of anxiety. According to the reference values of the trait anxiety scale (items 21-40; [66]) our participants are at the expected medium level of responding with anxiety. The women in our study had a mean value of \(M=36.5\) (\(SD=7.0\); \(Mdn=38.0\); expected value according to references = 37.0) and the men a mean of \(M=35.8\) (\(SD=4.4\); \(Mdn=36.0\); expected value according to references = 34.5). Participants fall into the average age group between 36 to 65 years and have a high educational level. They correspond approximately to the reference values of the Big5-short (see [80]) for extraversion (\(M(SD)=3.25(1.29)\); reference: \(M(SD)=3.62(.91)\)), agreeableness (\(M(SD)=3.56(0.9)\); reference: \(M(SD)=3.43(.79)\)), conscientiousness (\(M(SD)=3.93(1.03)\), \(M(SD)=3.47(.95)\); reference: \(M(SD)=4.2(0.77)\)), neuroticism (\(M(SD)=2.45(0.94)\), \(M(SD)=2.48(0.9)\)), and openness to experience (\(M(SD)=3.45(1.21)\); reference: \(M(SD)=3.70(0.89)\)). We therefore assume that the obtained results are not falsified through a non-representative sample (e.g., a sample with exceptionally high scores in neuroticism could have an impact on the perceived security). ## 4 Results For the quantitative results, descriptive and inferential statistics were calculated using JASP 0.16 [81] and jamovi 2.2.5 [82]. The audio-recorded post-session interviews were transcribed verbatim and analyzed applying qualitative content analysis [83; 84] with MAXQDA [85]. Session notes and anecdotal evidence during the study complemented the data collection. ### Dependent Variables In the following, we report on descriptive and inferential statistics for a comparison of the study conditions in terms of our dependent variables, as well as for an assessment of the simulated setup by having a look at participants' presence perception. We computed repeated measures analyses of variance (RM-ANOVA) to explore differences in the study conditions with the RM factors 'time of day' (day, night) and 'information on fellow passengers' (without, with) as well as the between-subjects factor 'gender' (women, men); a generic sketch of such an analysis pipeline is shown below. One woman (P21) only completed three of the four rides due to simulator sickness symptoms.
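As a minimal sketch of what such an analysis could look like in code (this is not the authors' actual pipeline, which used JASP and jamovi as noted above; the long-format data layout, file name, and column names are assumptions), a two-way repeated-measures ANOVA with Holm-corrected pairwise follow-ups might be run with the pingouin library roughly as follows. For brevity the sketch omits the between-subjects gender factor and assumes a complete table.

```python
import pandas as pd
import pingouin as pg

# Assumed long format: one row per participant x ride condition (hypothetical file)
df = pd.read_csv("ratings_long.csv")

# Two-way repeated-measures ANOVA: time of day x co-passenger information
aov = pg.rm_anova(data=df, dv="attractiveness",
                  within=["time_of_day", "information"], subject="participant")
print(aov)

# Holm-adjusted pairwise comparisons with Cohen's d as effect size
# (the function is named pairwise_ttests in older pingouin versions)
posthoc = pg.pairwise_tests(data=df, dv="attractiveness",
                            within=["time_of_day", "information"], subject="participant",
                            padjust="holm", effsize="cohen")
print(posthoc)
```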
The missing data of P21 was imputed with maximum likelihood estimates (e.g., [86]) for the respective scales. When a RM-ANOVA returned significant (\(\alpha=.05\)) for a certain scale, post-hoc tests in the form of Holm-adjusted pairwise comparisons for all conditions were calculated. Effect sizes were interpreted according to Cohen [87]. #### 4.1.1 User Experience With reference to the UEQ-s benchmarks [88; 70], the tested SAMoD system received excellent ratings for both pragmatic and hedonic UX quality throughout study conditions (Fig. 4). While we did not find meaningful differences in terms of pragmatic quality, hedonic quality, and usefulness, a RM-ANOVA revealed significant differences for the UEQ's attractiveness scale with regard to time of day (\(F(1,22)=6.820,p=.016,\eta^{2}{}_{\rm G}=0.026\)) and an interaction effect of passenger information and gender (\(F(1,22)=5.059,p=.035,\eta^{2}{}_{\rm G}=0.021\)). Post-hoc tests show that participants' overall impression was significantly more positive (\(t=2.612,p_{\rm holm}=.016\)) during daytime than during nighttime (Fig. 4), with a mean difference of \(M(SE)=0.3(0.1)\) and a medium effect of \(Cohen^{\prime}s\ d=0.533\). Despite the significant results of the RM-ANOVA, an interaction effect of information and gender was not confirmed by subsequent pairwise comparisons. #### 4.1.2 Acceptance A between-subjects effect of gender returned significant in the RM-ANOVA for both used scales of Chen's TAM [75]: Perceived Usefulness (\(F(1,22)=7.586,p=.012,\eta^{2}{}_{\rm G}=0.194\)) and Intention to Use (\(F(1,22)=6.490,p=.018,\eta^{2}{}_{\rm G}=0.159\)). Post hoc comparisons confirm that women perceive the tested SAMoD system to be more useful than men do (Fig. 5; \(t=2.754,p_{\rm holm}=.012\)) with a mean difference of \(M(SE)=0.5(0.2)\) and a medium-sized effect of \(Cohen^{\prime}s\ d\ =\ 0.562\). Similarly, women show a higher Intention to Use the SAMoD system compared to men (Fig. 5) with a mean difference of \(M(SE)=0.5(0.2)\) and a medium-sized effect (\(t=2.547,p_{\mathrm{holm}}=.018\), \(Cohen^{\prime}s\)\(d=0.520\)). Apart from the between-subjects effect and the generally medium-high to high acceptance ratings of the SAMoD system, no meaningful within-subjects effects of time of day and passenger information on Perceived Usefulness and Intention to Use were revealed. #### 4.1.3 Security, Trust, and Perceived Risk Participants' trust in the automated system was medium-high among all conditions (Fig. 5). With regards to the medium-rated security concerns (Fig. 5), participants seem to have some, but no severe concerns on their security during their ride. No meaningful difference induced by time of day or passenger information was detected. A significant difference was found in terms of perceived risks (\(F(1,22)=7.321,p=.013,\eta^{2}{\mathrm{G}}=0.013\)). AMoD rides without information were perceived as significantly more risky than rides with information about fellow passengers (Fig. 5) with a mean difference of \(M(SE)=0.2(0.1)\) and a medium-sized effect (\(t=2.706,p_{\mathrm{holm}}=.013\), \(Cohen^{\prime}s\)\(d=0.552\)). #### 4.1.4 Emotion Judging from visual inspection of the affect grids and emotion curves (Fig. 6), participants found rides during daytime and without information to be most pleasant. Rides without information seem to receive Figure 4: Boxplots of UEQ-s scales (pragmatic UX and hedonic UX), usefulness, and attractiveness (-3 = low; 3 = high) for the four study conditions and the between-subjects factor gender. 
Figure 5: Boxplots of acceptance scales (perceived usefulness, intention to use), trust in automation, security concerns, and perceived risk (1 = low; 5 = high) for the four study conditions and the between-subjects factor gender. more positive assessments whereas the UI variants with information show higher dispersion in the affect grids. Generally, rides during daytime seem to be perceived more pleasant than night rides. In accordance with that, the statistical analysis of the quantified (\(min=1,max=10\)) uni-dimensional subscales of the affect grid (pleasure, arousal) revealed no meaningful effect in terms of arousal but significant differences in the pleasure ratings with regards to the time of day (\(F(1,43)=12.386,p=.001,\eta^{2}{}_{\mathrm{G}}=0.032\)). Rides during daytime (\(M(SD)=7.6(2.2)\)) received higher pleasure ratings than rides during nighttime (\(M(SD)=6.8(2.2)\)) with a mean difference of \(M(SE)=0.8(0.2)\) and a medium-sized effect (\(t=3.519,p_{\mathrm{holm}}=.001\), \(Cohen^{\prime}s\)\(d=0.525\)). ### Qualitative Content Analysis For the qualitative content analysis [83; 84], interview transcripts were initially explored line-by-line. In a second step, we highlighted text passages, searched for keywords, and added notes. Subsequently, the transcripts were scrutinized again and codes were derived from the text by applying inductive coding to refine themes and codes in an iterative process until the final expressions were identified. In the following, we present our main findings (e.g., statements expressed during the post-session interviews) with their number of mentions (n) and the number of women and men in our study mentioning them. First, we present the perceptions of the rides in general. Then, we cluster them according to three main topics: information preferences, day vs. night, and the type of information that participants were requesting. #### 4.2.1 Presence Perception and Experience of the Rides In general, participants described the four rides as positive and considered the ride in the simulator as short, entertaining, and pleasant. Moreover, participants emphasized how realistic the four trips felt to them: _"Yes, it was quite real and I didn't feel I am in the simulation room and it was so real. It was quite good, yeah."_ [P15], which is also reflected in the medium to high ratings for the four IPQ scales (Realism (\(M(SD)\ =\ 4.0(1.1)\)), Involvement (\(M(SD)\ =\ 3.3(1.1)\)), Spatial Presence (\(M(SD)\ =\ 4.1(0.9)\)), and General (\(M(SD)\ =\ 4.9(0.8)\)). Participants' immersion in the simulated SAMoD can be judged to be quite high. Participants compared the simulated AMoD journey to using public transportation systems such as buses or metros today (16; 6 women, 6 men). #### 4.2.2 Information Preferences Overall, the qualitative data obtained in the study show that participants favored to have information about their co-passengers (15; 9 women, 6 men) over having no information (8; 2 women, 6 men). The most important reason for preferring the UI version with co-passenger information was security (22; 12 women, 9 men): _"I would have felt more secure with the display with the information and picture."_ [P05], _"I felt so much more secure compared to the other version."_ [P22]. Participants considered the information as more pleasant (9; 5 women, 4 men) in terms of being connected to others: _"when the person comes in and you have a little info about them, I thought that was pleasant. 
You could also - in case something happens - address them by name or, yes, it is more pleasant than the anonymous [version]."_ [P04]. Other advantages of having knowledge about fellow passengers were that participants considered it to be more interesting (4; 1 woman, 2 men) and humane (3; 1 woman, 1 man). Participants who preferred having information about other passengers were also willing to share this information about themselves. In line with this finding, the most important reason for preferring the UI version without information was privacy (17; 4 women, 6 men): _"My first thought was 'Oh no, people will know my name'. I don't like that at all."_ [P12]. Other participants regarded this information as not important (4; 1 woman, 3 men). Displaying passengers' details was even seen as insecure (3; 2 men) or untrustworthy (2; 1 man) as these details could potentially harm people. One participant expressed worries about the security of our young faux passenger ("Anna") as he elaborated: _"Well, at night you're just a bit more insecure, for example, when drunk, young people hop on. So [my worries] were also related to Anna, because people might think 'Oh, here comes Anna now, maybe we can hit on her or something.' That would be quite insecure for her then."_ [P05]. Although 15 of our 24 participants expressed they would prefer the UI version with information, we would like to point out that this was not a clear decision every time. One participant even was unable to decide which version they preferred. Figure 6: Stacked Emotion Curves (left; Opacity: 0.1; Normalized at 'Departure') and Affect Grids (right) for the Four Study Conditions. Most participants found pros, as well as cons, for both versions and weighed these against each other until finally making a decision. While this reflects how security and privacy are antagonists, the appropriateness of the variants was considered to be highly context-dependent, as outlined in the following paragraph. #### 4.2.3 Day vs. Night Generally speaking, our qualitative data confirms the difference the time of the day makes for sharing rides in SAVs with strangers, as the number of mentions emphasizing this importance is higher (35; 9 women, 7 men) than the number of statements that do not (9; 2 women, 7 men). In this context, participants stated time-related concerns like _"during the night one is generally more careful and feels vulnerable"_ [P09]. Several women (17; 9 women) expressed concerns when sharing rides with unknown men and said they would favor sharing a vehicle with other women at night over mixed vehicles. For instance, [P03] explains: _"well, especially in the dark. During the day is not that tragic, but in the dark, I don't want to share a ride with a man or get off the vehicle with him."_ Interestingly, some of the men in our study conveyed similar feelings towards sharing rides with other men (8; 7 men) - particularly at night: _"Because it was Brigitte who got on at the first stop and then at the second stop it was a gentleman. That indeed made a difference to me."_ [P11]. As a reason, they stated that they would feel more secure, as a statement like _"men tend to be more aggressive"_ [P05] indicates. Participants also made clear that they would not need the displayed information on co-passengers during the day, but would prefer to have the information during the night: _"Especially at night it was more pleasant for me and more important. [...]. The fact that I was registered, for example, the [man/woman], as well.
Yes, that was much more important for me at night than during the day."_[P22]. #### 4.2.4 Type of Information We asked participants which type of information they considered to be the most important one/s. The fellow passenger's profile picture was regarded to make all the difference (23; 7 women, 8 men) since it gives _"an impression of the person that is going to get on the vehicle at a glance"_[P16]. In this regard the photo seem to give participants a feeling of control over the situation while the other information provided was rather a _"nice-to-have"_[P23]. Knowing beforehand who would enter the vehicle also conveys security: _"Yes, I mean, I saw the picture and it looks nice and I actually had less fear."_[P03]. The co-passenger's gender was essential, as well (13; 5 women, 2 men), followed by their age (10; 6 women, 2 men), the name (8; 3 women, 2 men), and the respective destination of the co-passengers (7; 2 women, 2 men). Most of the participants in our study stated that the information the system was offering was sufficient and emphasized how helpful it was to see the vehicle's route and its arrival time on the display. Some participants provided improvement suggestions such as getting information in case people with special needs, big luggage, or strollers would enter the vehicle, or whether seat belt use was compulsory. ## 5 Discussion Overall, the results underline people's openness towards SAMoD, which is in line with previous work [11; 89]. Participants considered SAMoD to be useful and reported relatively high trust in the technology, intention to use, and positive experiences of the (simulated) SAMoD rides. However, participants also expressed concerns regarding security - especially with regard to night rides. In the following, we discuss our findings in detail and situate them among previous work. ### Night Trips Require Higher Levels of Information In general, the SAMoD rides during the day were evaluated more positively than night rides. Participants consider the overall attractiveness of the SAMoD system higher and report more pleasant rides during the day. Rides without information about co-passengers were perceived as more pleasant than rides with information. We hypothesize two reasons as sources of this findings: 1) participants are used to receiving no information about others when sharing a ride (as is the case in public transportation), and 2) people generally prefer rides during the daytime. This interpretation is comprehensively supported by our qualitative data and is in line with existing data from research in public transportation [23; 90; 91]. In contrast, rides with information provided by the in-vehicle UI were experienced to be significantly less risky compared to rides without information about co-passengers. Again, this is reflected in our qualitative data, with 21 participants underlining increased perceived security through the information. This can be taken as a general preference for information about co-passengers -- particularly during the night and is in line with [92], who found that people are willing to provide information such as their gender, age, etc. to visually impaired persons in public spaces, if higher security assurances can be made. While during the day, information about fellow passengers seems to have rather adverse effects (e.g., in terms of emotion), this changes during the night, where it has, on the contrary, positive effects. Prior work underlines the importance of privacy particularly in public transportation [93]. 
Security and privacy are often antagonists in today's public systems and this dynamic has implications for resilience from a human factors perspective. This became evident in our study as participants mentioned privacy concerns when displaying personal information about other passengers, or themselves. During the interviews, participants weighed the pros and cons of having (no) information. Despite a preference for information during the night, this was not a clear outcome, which is also apparent in a higher dispersion of the Affect Grid assessments for the rides with information. While the information on co-passengers positively influenced security for some participants, there were also concerns that this information could have a negative effect exactly on security as strangers would know one's name and destination. To overcome the conflict between security and privacy, it needs to be investigated which information people feel comfortable sharing in order to increase perceived security. From a human factors perspective, it is crucial to design systems that allow for individual differences and preferences, considering passengers' diverse needs and concerns. Resilience can be fostered by providing customizable options for displaying personal information, allowing passengers to make informed choices that align with their comfort levels. ### Both Men and Women Prefer Sharing Rides with Women While both men and women generally considered SAMoD systems useful, women rated them significantly more so and uttered a higher intention to use such services. We assume that finding is related to their (security) concerns in today's public transportation systems, especially considering night rides [11; 13; 23; 20]. In combination with the qualitative data and the discovered interaction effect of passenger information and gender, this finding provides evidence that women seem to consider SAMoD systems as more secure than 'classic' public transportation. Women and men alike explained in the interviews that they prefer sharing rides with women. This is in line with the findings of [12], who found people have higher refusal rates towards men as co-passengers. On the other hand, Polydoropoulou et. al [29] found different preferences of passengers for sharing with women/men between countries and cultures and that the number of fellow travelers further influences those preferences. In our study, we focused on rides with only one co-passenger as we expected this constellation would have the biggest effect on security. However, our results and the results from previous work [12; 29] underline once more the complexity of the topic. ### Balancing Security and Privacy as a Design Challenge In terms of overall SAMoD system design, considering resilience from a human factors perspective is crucial. However, there is most likely no 'one-fits-all' solution [94]. As, e.g., passengers' security needs are higher during the night, our data points toward flexible solutions for different times of the day. Based on our results, we propose that UIs for ride-sharing should provide general information on the route, arrival time, subsequent stops, and further information and functionalities to increase passengers' (feeling of) security for night rides. Providing information on fellow travelers can serve as a suitable option to do so. 
In our study, having a photo of fellow travelers was considered the most important information unit and was beneficial for passengers' feeling of security, while information on age, name, and destination played a subordinate role. In the study, we chose portraits with neutral facial expressions. However, other expressions might induce different - positive or adverse - feelings, e.g., feelings of insecurity. Given that photos seem to provide passengers with (at least some feeling of) control over the situation, they might be used in booking apps or in-vehicle displays. Passengers could then look for an alternative vehicle, or leave the vehicle at the next stop if someone's photo would make them feel uncomfortable. The feeling of control has been shown to have a positive effect on psychological security in the context of public transport [95] and, based on our results, we hypothesize that displaying a photo fosters this control, aligning with the principles of resilience.. However, given the disagreement among our participants and the aforementioned privacy issues, we suggest 1) not exposing sensible data about co-passengers during the ride and 2) considering alternative approaches. In terms of (1), it might be beneficial to relocate the information retrieval about fellow passengers to another time and place, e.g., the booking phase. For instance, [12] compared private and shared options on a mobile booking app and found that people tend to rather opt for shared rides when having detailed information on their fellow travelers prior to booking. This could also serve as a means to increase (perceived) security. In terms of (2), Schuss et al. [96] propose a "buddy system" to address women's security needs (during the night) that takes advantage of the fact that other passengers can also provide security. Instead of seeing them as potentially harmful, their approach focuses instead on the fact of not being alone and feeling secure instead of the feeling of controlling the situation through information. The concept of "social passengering" [97] among passengers inside the same or different vehicles points to a similar direction and might be beneficial for the perceived security. By acknowledging the trade-off between security and privacy and offering flexibility in information disclosure, SAMoD systems can adapt to individual needs, enhancing passengers' overall experience and resilience within the system. ### Limitations SAMoD is still a relatively 'theoretical' subject [98] with real-life applications remaining missing. Therefore, we let our participants experience a SAMoD system in a simulated environment. While participants report high presence perception and immersion, external validity is impaired due to the lab-based setup. As we were weighing off the negative side effects that come with lab studies, we opted for the simulated environment over conducting, e.g., a WoOz study in real traffic conditions, to compare the study conditions while ensuring high internal validity and high controllability. As mentioned in section 3, we decided to simulate the presence of other passengers in a shared ride only virtually with sounds and display visualizations. While this was in line with the recommendation of [46] and facilitated the study's conformity with applicable hygiene regulations during the Covid-19 pandemic, the representation of a shared ride's social contextual is limited. 
On the other, considering our study design with multiple measurements during a test ride, the physical presence of another person might have affected participants' assessment of the information and consequently the study's reliability. Furthermore, we did not intent to focus on the inherent social factors or mutual relationships (that definitely play an essential role in the context of shared mobility), but focused on the provided information. Nevertheless, this should be considered when conducting further studies on SAMoD. Taking into account the large and diverse population of future SAMoD users, our study has been conducted with a small sample and, although having placed value on a broad spectrum of people (gender-balanced, different age groups, different cultural backgrounds, different education levels), it covers only a part of the variety of potential users. According to the STAI inventory, our participants had relatively low levels of trait anxiety. Since this trait likely has an effect of risk and security evaluation, generalizability is limited. Furthermore, the study was conducted during the COVID-19 pandemic. We applied precautions like distancing and hygiene measures and followed the regulations of local and national authorities. While we consider the pandemic's effect on the study conducted to be minor, it might have affected the sample composition as, e.g., only people with medium fear and anxiety have signed up for the study. It would be interesting to repeat this study with people that show higher levels of trait anxiety as this trait influences the evaluation of risk and security of situations, and we hypothesize that these people could have evaluated the presented prototype in a more positive way. The selection of the displayed information on co-passengers covers only a part of the potential variety and might have fostered stereotypes. We derived the solution with information about co-passengers based on existing research findings [11; 12] and aimed to evaluate whether the availability positively influences security, UX, trust, and acceptance of SAMoD passengers. By no means we intended to manifest potential stereotypes or the exclusion of people through our selection. However, we want to point out that the selection likely affects the results (e.g., people might refuse rides with others due to their "look"). We are aware that the gender and age of other passengers is a limited view. Other factors, such as race, appearance, or the supposedly associated social statuses definitely play a role in people's assumptions about other people. However, we did not include more personal characteristics to 1) not confound too many different independent variables in the display variants and 2) we wanted to draw a clear line between evaluating the information about other passengers and participants' potential biases about, e.g., other cultures, as we aimed for the former. ### Future Work Passengers' information demands in SAMoD systems are a highly complex and context-dependent issue requiring more research, especially on how to overcome the conflict between security and privacy by design. Based on our results, we suggest extending the conduct of context-based empirical studies investigating factors like daytime and fellow passengers in SAMoD systems along the whole travel journey. Since, e.g., security issues are relevant for the booking, the ride itself, and on-/off-boarding [11]. 
While our study focused on the ride itself, further (empirical) studies should also consider the booking phase and the off-boarding when investigating the effect of co-travelers and time of day on passengers' need for information and controls. Here, additional information and safety measures (e.g., emergency/support button) might support passengers' feeling of control and security. To yield results with high external validity, future studies might include more contextual factors such as the (physical) presence of various and multiple other people in SAMoD rides during different situations. E.g., actors could be used to mimic specific situations [46]. It would also be interesting to repeat this study in different cultural contexts, as we conducted our study in Germany, where security in public transportation offers high levels of security [95]. However, we assume that conducting similar studies in countries, such as India or Latin American countries, where public transportation is more difficult to access - especially for women [90] - might yield different results. The applied simulation environment presents a context-based prototyping approach that can be used, e.g., to replicate this or similar studies in other countries and investigate potential cultural differences regarding passengers' (information) requirements. Future work might also consider the potential impact of culture and race as an independent variable in the information display. This could result in an exploration of people's explicit and implicit biases based on given prior knowledge. We used the front of the vehicle as the output location of the information, as these are common modalities [40]. Future concepts might also investigate whether (the combination with) other modalities, such as tactile, influence the perception of the presented information and the feeling of security. ## 6 Conclusion In this paper, we report on a simulator user study (\(N=24\)) investigating the effects of time of day and provided information on fellow travelers on SAMoD passengers' UX, acceptance, feeling of security, and emotions in shared automated rides. While the evaluated SAMoD system received excellent assessments of hedonic and pragmatic UX, trust, and acceptance, participants emphasized security concerns - mainly when using SAMoD at night. Furthermore, both women and men preferred sharing rides with women over sharing rides with men as co-passengers during the night, whereas, during the day, this information negatively affected participants' evaluation of the SAMoD system. Associated risks were experienced lower when participants were provided with information about their co-passengers. Most participants generally preferred having information on co-passengers, with photos of fellow travelers considered the most important information element. However, our results yield ambiguities since providing personal information also triggered privacy concerns among participants. This can be taken as an illustration of the complexity of psychological security and its context dependency. Building upon these findings, providing UIs with information on fellow passengers can support SAMoD passengers' feeling of security in shared rides and potentially improve UX, user acceptance, and overall system resilience. However, due to privacy concerns and associated risks, the timing and placement of the information need to be questioned. It might be beneficial to provide this information during the booking phase but not within the vehicle. 
Future work should consider the whole travel journey of SAMoD, foster the inclusion of contextual factors, and investigate how the provision of additional information and safety measures (e.g., emergency and support features) can increase passengers' feeling of control and security. ## 7 Acknowledgements This research was partly funded by the German Federal Ministry for Economic Affairs and Climate Action (BMWK) under grant number 19A21047I (SUE) and by the German Federal Ministry of Transport and Digital Infrastructure (BMVI) under grant number 16AVF2134G (APEROL). We want to thank Claus Pfeilschifter for the technical support and Tatjana Rohr for the support in the study conduct and data analysis. Furthermore, we want to thank the study participants for their participation and the anonymous reviewers for their time and helpful feedback.
2307.14829
On the nature of long period radio pulsar GPM J1839$-$10: death line and pulse width
Recently, another long period radio pulsar, GPM J1839$-$10, was reported, similar to GLEAM-X J162759.5$-$523504.3. Previously, the energy budget and rotational evolution of long period radio pulsars had been considered. This time, the death line and pulse width for neutron star and white dwarf pulsars are investigated. The pulse width is included as the second criterion for neutron star and white dwarf pulsars. It is found that: (1) PSR J0250+5854 and PSR J0901$-$4046 etc. should be normal radio pulsars. They have narrow pulse widths and they lie near the radio emission death line. (2) The two long period radio pulsars GLEAM-X J162759.5$-$523504.3 and GPM J1839$-$10 are unlikely to be normal radio pulsars. Their possible pulse widths are relatively large, and they lie far below the fiducial death line on the $P-\dot{P}$ diagram. (3) GLEAM-X J162759.5$-$523504.3 and GPM J1839$-$10 may be magnetars or white dwarf radio pulsars. At present, there are many parameters and uncertainties in both of these two possibilities.
H. Tong
2023-07-27T13:07:21Z
http://arxiv.org/abs/2307.14829v2
# On the nature of long period radio pulsar GPM J1839\(-\)10: death line and pulse width ###### Abstract Recently, another long period radio pulsar, GPM J1839\(-\)10, was reported, similar to GLEAM-X J162759.5\(-\)523504.3. Previously, the energy budget and rotational evolution of long period radio pulsars had been considered. This time, the death line and pulse width for neutron star and white dwarf pulsars are investigated. The pulse width is included as the second criterion for neutron star and white dwarf pulsars. It is found that: (1) PSR J0250+5854 and PSR J0901\(-\)4046 etc. should be normal radio pulsars. They have narrow pulse widths and they lie near the radio emission death line. (2) The two long period radio pulsars GLEAM-X J162759.5\(-\)523504.3 and GPM J1839\(-\)10 are unlikely to be normal radio pulsars. Their possible pulse widths are relatively large, and they lie far below the fiducial death line on the \(P-\dot{P}\) diagram. (3) GLEAM-X J162759.5\(-\)523504.3 and GPM J1839\(-\)10 may be magnetars or white dwarf radio pulsars. At present, there are many parameters and uncertainties in both of these two possibilities. stars: magnetar - pulsars: general - pulsars: individual (GPM J1839\(-\)10)
[...] applied to it as that for GLEAM-X J1627 (Katz, 2022; Loeb & Maoz, 2022; Ronchi et al., 2022; Tong, 2023). Since their periods are similar (21 minutes versus 18 minutes), the same conclusion may also be applied to GPM J1839\(-\)10, i.e., magnetar+fallback disks or white dwarf pulsars. We will not repeat the calculations here. This time, we focus on (1) the death line for neutron star and white dwarf pulsars, (2) the pulse width of LPRPs. From these two aspects, we want to discuss the nature of GPM J1839\(-\)10, which is the most recent example of LPRPs. ## 2 Model Calculations ### Death line for neutron star and white dwarf pulsars The death line for radio emission of normal pulsars and magnetars has been discussed in Section 2.5 in Tong (2023). The potential drop at a specific angle across the polar cap is just a classical electrodynamics exercise (Eq. 9 in Tong 2023). For normal radio pulsars with a dipole magnetic field, the maximum acceleration potential across the polar cap is (Eq. 7 in Tong 2023, Ruderman & Sutherland 1975; Zhou et al. 2017): \[\Phi_{\rm max}=\frac{B_{p}R^{3}\Omega^{2}}{2c^{2}}\equiv 10^{12}\ {\rm V}, \tag{1}\] where \(B_{p}\) is the surface magnetic field at the pole region (which is two times the equatorial value, Lyne & Graham-Smith 2012), \(R\) is the stellar radius, \(\Omega\) is the star's angular velocity. When the maximum acceleration potential equals \(10^{12}\) V, it is defined as the radio emission death line. Below the death line, the star is not expected to have radio emission. The value of \(10^{12}\) V is just a fiducial value. A rough estimation of the physics involved is that (Ruderman & Sutherland, 1975): an electron accelerated in such a potential attains a Lorentz factor about \(\gamma\sim 10^{6}\). This electron may emit curvature photons of energy \[h\nu=h\frac{3\gamma^{3}c}{4\pi\rho}\geq 1\ {\rm MeV}. \tag{2}\] These curvature photons may be converted to electron-positron pairs in strong magnetic fields. If the electron Lorentz factor (i.e., acceleration potential) is lower, the curvature photons may have energy less than 1 MeV. The subsequent pair production process cannot continue, and this may result in the cessation of the radio emission (i.e., the radio emission death line). For a magnetar+fallback disk system, the magnetosphere may be modified by (1) the fallback disk if the disk is still active, (2) the magnetar's twisted magnetic fields. If the death line is modified by the fallback disk, the corotation radius will replace the light cylinder radius as the maximum radial extent of the field lines. The corresponding maximum acceleration potential and death line are presented in Eq. (10) in Tong (2023): \[\Phi_{\rm max,disk}=\frac{B_{p}R^{3}\Omega^{2}}{2c^{2}}\frac{R_{lc}}{R_{co}} \equiv 10^{12}\ {\rm V}, \tag{3}\] where \(R_{lc}=Pc/(2\pi)\) is the neutron star's light cylinder radius, \(R_{co}=(GM/4\pi^{2})^{1/3}P^{2/3}\) is the corotation radius.
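As a quick numerical illustration of Eq. (1), the polar-cap potential can be evaluated for a Crab-like rotator and for a rotator with GPM J1839\(-\)10's 1318 s period. This is only a sketch: the polar field strengths below are assumed example values (they are not measurements), and the cgs-to-SI conversion is ours.

```python
import numpy as np

R = 1.0e6              # fiducial neutron-star radius [cm]
c = 3.0e10             # speed of light [cm/s]
STATVOLT_IN_V = 299.79  # 1 statvolt in volts

def phi_max_volts(B_p, P):
    """Polar-cap potential of Eq. (1), in volts, for polar field B_p [G] and period P [s]."""
    Omega = 2.0 * np.pi / P
    return B_p * R**3 * Omega**2 / (2.0 * c**2) * STATVOLT_IN_V

# Crab-like rotator (assumed B_p ~ 8e12 G, P = 33 ms): far above the 1e12 V death line
print(f"{phi_max_volts(8e12, 0.033):.1e} V")    # ~5e16 V

# P = 1318 s, even with an assumed magnetar-strength polar field B_p = 2e14 G
print(f"{phi_max_volts(2e14, 1318.0):.1e} V")   # ~8e8 V, far below 1e12 V
```

Under these assumptions, a 1318 s rotator stays orders of magnitude below the fiducial \(10^{12}\) V threshold even for magnetar-strength fields, which is the quantitative content of the statement later in the paper that GPM J1839\(-\)10 lies far below the fiducial death line unless the magnetosphere is modified or the star is not a neutron star.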
If the death line is modified by the twist of the field lines, a twisted field line will result in a larger polar cap and a larger potential drop. The maximum acceleration potential and death line for a twisted magnetic field are presented in Eq. (12) in Tong (2023): \[\Phi_{\rm max,twist}=\frac{B_{p}R^{3}\Omega^{2}}{2c^{2}}\left(\frac{R_{lc}}{R}\right)^{1-n}\equiv 10^{12}\ {\rm V}, \tag{4}\] where \(n\) is a parameter characterizing the twist of the field lines. \(n=1\) corresponds to the dipole case (for \(n=1\), the equation returns to the dipole case Eq.(1)). \(0<n<1\) corresponds to the twisted dipole case. For white dwarf pulsars, it is possible that they have similar magnetospheric processes to those of neutron star pulsars (Zhang & Gil, 2005; Katz et al., 2022). However, there are several changes in the definition of the death line (Eq.1) for white dwarf pulsars. 1. For a typical neutron star, the radius is usually set to be 10 km. A typical white dwarf (figure 5.17 in Camenzind 2007) has a radius of one percent of the solar radius, 0.01 \(R_{\odot}\) (the corresponding white dwarf mass is about 0.8 \(M_{\odot}\)). 2. The torque of a rotating magnetized object can be approximated as magnetic dipole braking (Xu & Qiao, 2001). The star's magnetic field can be obtained from the period and period-derivative measurement, which is a crude estimate of the star's true magnetic field. Assuming a perpendicular rotator, the magnetic field is \[B=\sqrt{\frac{3Ic^{3}}{8\pi^{2}R^{6}}P\dot{P}}, \tag{5}\] where \(B\) is the equatorial magnetic field at the surface (it is two times smaller than the polar magnetic field), \(I\) is the star's moment of inertia. For typical neutron stars, with \(I\approx 10^{45}\) g cm\({}^{2}\), \(R\approx 10^{6}\) cm, it is the commonly cited formula for radio pulsars: \[B=3.2\times 10^{19}\sqrt{P\dot{P}}\ {\rm G}. \tag{6}\] For white dwarfs, assuming a typical mass of 0.8 \(M_{\odot}\) and radius of 0.01 \(R_{\odot}\), the white dwarf's moment of inertia is about \(3\times 10^{50}\) g cm\({}^{2}\). Therefore, the characteristic magnetic field for a white dwarf pulsar is: \[B=5.2\times 10^{13}\sqrt{P\dot{P}}\ \mathrm{G}. \tag{7}\] This formula will be employed when drawing the death line on the \(P-\dot{P}\) diagram. 3. From Eq. (2), a minimum Lorentz factor (i.e. acceleration potential) is required to generate curvature photons with energy higher than 1 MeV. In the case of white dwarfs, the stellar radius is larger. The curvature radius of the magnetic field line is also larger, which is of order \(\sqrt{rR_{lc}}\) (\(r\) is the emission height, Xu & Qiao 2001). Therefore, a higher acceleration potential may be required, e.g. as high as \(10^{13}\) V in Eq. (1). The exact value depends on the detailed modeling of the white dwarf's magnetosphere (as in the case of neutron star pulsars). The death line for normal radio pulsars, magnetars and magnetar+fallback disk systems is shown in figure 1, along with GPM J1839\(-\)10 and other LPRPs. Figure 1 is updated from figure 2 in Tong (2023). The death line for white dwarf pulsars is so different from the neutron star case that it is shown separately in figure 2. \(\Phi_{\mathrm{max}}=10^{12}\) V and \(\Phi_{\mathrm{max}}=10^{13}\) V are shown respectively. The characteristic magnetic field for white dwarf pulsars is also shown, for \(B=10^{8}\) G and \(B=10^{9}\) G. Most of the presently observed pulsating white dwarfs have magnetic fields smaller than \(10^{9}\) G (Zhang & Gil 2005; Katz 2022; Marsh et al. 2016; Pelisoli et al. 2023).
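The numerical coefficients in Eqs. (6) and (7) can be reproduced with a few lines of Python. This is only a sanity-check sketch: the uniform-sphere moment of inertia \(I=(2/5)MR^{2}\) for the white dwarf and the exact solar values used below are our assumptions, not statements from the paper.

```python
import numpy as np

c = 3.0e10                         # speed of light [cm/s]
Msun, Rsun = 1.989e33, 6.96e10     # solar mass [g] and radius [cm]

def b_coefficient(I, R):
    """Coefficient k in B = k * sqrt(P * Pdot) G, from Eq. (5)."""
    return np.sqrt(3.0 * I * c**3 / (8.0 * np.pi**2 * R**6))

# Neutron star: I ~ 1e45 g cm^2, R ~ 1e6 cm  ->  ~3.2e19 (Eq. 6)
print(f"NS coefficient: {b_coefficient(1.0e45, 1.0e6):.2e}")

# White dwarf: M = 0.8 Msun, R = 0.01 Rsun, uniform-sphere I = (2/5) M R^2 (assumption)
M_wd, R_wd = 0.8 * Msun, 0.01 * Rsun
I_wd = 0.4 * M_wd * R_wd**2
print(f"WD moment of inertia: {I_wd:.1e} g cm^2")          # ~3e50
print(f"WD coefficient: {b_coefficient(I_wd, R_wd):.2e}")  # ~5.2e13 (Eq. 7)
```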
### Pulse width GPM J1839\(-\)10 has a period of 1318 s. The single pulse vary in a pulse window 400 s (Hurley-Walker et al. 2023). Similar things also happen in normal radio pulsars and radio emitting magnetars (Levin et al. 2012; Yan et al. 2015; Huang et al. 2021). At present, the integrated pulse profile of GMP J1839\(-\)10 is not available. From previous experiences in pulsars and magnetars, the pulse window may be an estimate of the integrated pulse width. Then the pulse width of GPM J1839\(-\)10 is \(PW\approx 400/1318=30\%\) of the pulse phase (in this case, the pulse width may also be called the duty cycle, Tan et al. 2018). If this is the pulse width of GPM J1839\(-\)10, it can also constrain the nature of the source. Figure 1: Definition of death line and distribution of long period radio pulsars (red circles) on the \(P\)-\(\dot{P}\) diagram. The fiducial pulsar death line, the death line for a twisted magnetic field (\(n=0.8\)), and the death line modified by the fallback disk are also shown. Updated from figure 2 in Tong (2023). Figure 2: Definition of death line and contour of constant magnetic field for white dwarf pulsars. Two death lines are shown, with \(\Phi_{\mathrm{max}}=10^{13}\) V (upper one) and \(\Phi_{\mathrm{max}}=10^{12}\) V (lower one) respectively. Two contours of constant magnetic field are shown, with \(B=10^{9}\) G and \(B=10^{8}\) G respectively. The limiting Keplerian period for a typical white dwarf pulsar is also shown. For normal radio pulsars, the colatitude of the last open field line at emission height \(r\) is: \[\theta_{\rm open}=\sin^{-1}\left(\frac{r}{R_{lc}}\right)^{1/2}. \tag{8}\] The emission beam radius is 3/2 times the angle \(\theta_{\rm open}\) (Eq.(17.9) in Lyne & Graham-Smith 2012): \[\rho_{\rm beam}=\frac{3}{2}\sin^{-1}\left(\frac{r}{R_{lc}}\right)^{1/2}. \tag{9}\] The emission height of normal radio pulsars is always less than 100 times the neutron star radius (Johnston et al. 2023). Therefore, the emission height in Eq.(9) can be set as \(100R\), where \(R\) is the stellar radius. The observed pulse width depends also on the inclination angle \(\alpha\) (angle between rotation axis and line of sight), and impact angle \(\beta\) (closest approach between the magnetic axis and the line of sight). The impact angle may always be a small quantity, in this case the observed pulse width is related to the emission beam radius as (Eq.(15.2) in Lyne & Graham-Smith 2012): \[W=2\rho_{\rm beam}\frac{1}{\sin\alpha}. \tag{10}\] In terms of pulse phase, the observed pulse width (or duty cycle) is: \[PW=\frac{W}{2\pi}=\frac{3}{2\pi}\sin^{-1}\left(\frac{r}{R_{lc}}\right)^{1/2} \frac{1}{\sin\alpha}. \tag{11}\] A small inclination angle \(\alpha\) can result in a large pulse width. For a magnetar+fallback disk system, the calculation is similar to the calculations for death line. If the magnetosphere is regulated by the fallback disk, the corotation radius replaces the role of light cylinder radius in Eq. (11). For a twisted magnetic field, the colatitude of the last open field line will be larger (Eq. 11 in Tong 2023). The pulse width in units of pulse phase (or duty cycle) is: \[PW=\frac{3}{2\pi}\sin^{-1}\left(\frac{r}{R_{lc}}\right)^{n/2}\frac{1}{\sin \alpha}. \tag{12}\] For \(n=1\), the above equation returns to the dipole case (Eq.(11)). For a white dwarf pulsar, the expression for the pulse width is the same as Eq. (11), except that the stellar radius should be the white dwarf radius. 
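To see what Eq. (11) implies quantitatively, here is a minimal Python sketch (our own illustration, not a calculation from the paper): it evaluates the duty cycle at the 1318 s period, assuming an orthogonal rotator (\(\sin\alpha=1\)), the maximum emission height of \(100R\) quoted for neutron stars, and, purely as an assumption, the same multiple of the stellar radius for a 0.01 \(R_{\odot}\) white dwarf.

```python
import numpy as np

c = 3.0e10  # speed of light [cm/s]

def duty_cycle(P, r, alpha=np.pi / 2):
    """Eq. (11): PW = (3 / 2pi) * arcsin(sqrt(r / R_lc)) / sin(alpha)."""
    R_lc = P * c / (2.0 * np.pi)   # light-cylinder radius [cm]
    return 1.5 / np.pi * np.arcsin(np.sqrt(r / R_lc)) / np.sin(alpha)

P = 1318.0  # period of GPM J1839-10 [s]
print(f"Neutron star, r = 100 R = 1e8 cm: PW ~ {duty_cycle(P, 1.0e8):.2%}")        # ~0.2%
print(f"White dwarf,  r = 100 R_wd:       PW ~ {duty_cycle(P, 100 * 6.96e8):.2%}")  # ~5%
print(f"7% * P**-0.5 upper bound:         {0.07 / np.sqrt(P):.2%}")
print("Estimated pulse window of GPM J1839-10: ~30%")
```

With these assumptions a normal radio pulsar at this period stays at the ~0.2% level (matching the \(7\%P^{-1/2}\) bound quoted in the Discussion), far from the ~30% window estimated above, while a white dwarf, or a small inclination angle, moves the number up considerably, which is the point made in the following paragraphs.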
The theoretical pulse width for normal pulsars, magnetar+fallback disk systems, and white dwarf pulsars is shown in figure 3. Magnetars and white dwarf pulsars have wider pulse widths compared with that of normal radio pulsars. This is especially true for LPRPs. ### On the nature of GPM J1839\(-\)10 From the death line on the \(P-\dot{P}\) diagram, the LPRP GPM J1839\(-\)10 cannot be a normal radio pulsar. A twisted magnetic field or the presence of a fallback disk can help to lower the position of the death line on the \(P-\dot{P}\) diagram. The quantitative result depends on the parameters involved, e.g. the twist parameter \(n\). Figure 1 shows the death line for a twisted magnetic field with \(n=0.8\). For a more twisted magnetic field (i.e. \(n=0.5\)), the position of the death line is lower on the \(P-\dot{P}\) diagram. However, the typical untwisting timescale may then be shorter, which may make it difficult to explain why GPM J1839\(-\)10 can have radio emission lasting more than 30 years. A magnetar+fallback disk system may be consistent with the position of GPM J1839\(-\)10 on the \(P-\dot{P}\) diagram. However, the generation of radio emission in the presence of an active fallback disk is uncertain, although there are such possibilities (see Section 2.5 in Tong 2023 for discussions). The death line for white dwarf pulsars is also consistent with the position of GPM J1839\(-\)10 on the \(P-\dot{P}\) diagram. However, there may be three constraints for white dwarf pulsars: (1) The maximum acceleration potential for white dwarf pulsars may be higher, e.g., as high as \(10^{13}\) V. (2) White dwarf pulsars generally have magnetic fields lower than \(10^{9}\) G (Zhang & Gil 2005; Katz 2022; Marsh et al. 2016; Pelisoli et al. 2023). (3) For a white dwarf with mass \(0.8M_{\odot}\) and radius \(0.01R_{\odot}\), the limiting Keplerian period is: \[P_{K}=2\pi\sqrt{\frac{R^{3}}{GM}}\approx 11\ \mathrm{s}. \tag{13}\] These three constraints will limit the existence of white dwarf pulsars to a small triangle on the \(P-\dot{P}\) diagram, see Figure 2. Such a triangle of parameter space may explain why there are so few white dwarf radio pulsars (LPRPs are just candidates). Figure 3: Theoretical pulse width as a function of period. The pulse width is in units of pulse phase. From bottom to top are: normal radio pulsars (black), magnetars with twisted magnetic field (dashed blue, for \(n=0.5\)), magnetar+fallback disk systems (solid blue), and white dwarf pulsars (green). Since the inclination angle is unknown, the plotted pulse width is actually \(PW\times\sin\alpha\). For a small \(\alpha\), the actual pulse width can be larger. The pulse width of GPM J1839\(-\)10 is unknown at present. If its pulse window represents its pulse width, then it will have a pulse width of 30%. From the theoretical pulse width, normal radio pulsars may have difficulties in explaining the pulse width of GPM J1839\(-\)10. The solution in the normal radio pulsar case is that they should have an extremely large emission height. For a twisted magnetic field or magnetar+fallback disk system, the pulse width is larger, typically several percent. A small inclination angle or a slightly higher emission height may explain the possible observed pulse width of GPM J1839\(-\)10. White dwarf pulsars can naturally have larger pulse widths. By combining the death line and pulse width requirements, it is unlikely that GPM J1839\(-\)10 is a normal radio pulsar.
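A short numerical check of the white-dwarf numbers used above (a sketch only; the gravitational constant, the solar values, and the choice of a polar field \(B_{p}=2\times10^{9}\) G, i.e. an equatorial field at the \(10^{9}\) G bound, are our assumptions):

```python
import numpy as np

G, c = 6.674e-8, 3.0e10            # cgs units
Msun, Rsun = 1.989e33, 6.96e10
M_wd, R_wd = 0.8 * Msun, 0.01 * Rsun

# Eq. (13): limiting Keplerian period of the white dwarf
P_K = 2.0 * np.pi * np.sqrt(R_wd**3 / (G * M_wd))
print(f"P_K ~ {P_K:.1f} s")        # ~11 s

# Polar-cap potential (Eq. 1 with the white-dwarf radius) at P = 1318 s,
# for an assumed polar field B_p = 2e9 G (equatorial B = 1e9 G)
P = 1318.0
Omega = 2.0 * np.pi / P
phi_volts = 2.0e9 * R_wd**3 * Omega**2 / (2.0 * c**2) * 299.79
print(f"Phi_max ~ {phi_volts:.1e} V")   # a few 1e12 V: above 1e12 V but below 1e13 V
```

This illustrates the "small triangle" argument: at the period of GPM J1839\(-\)10, a white dwarf near the \(10^{9}\) G bound sits above the \(10^{12}\) V death line but below the more conservative \(10^{13}\) V one.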
It is possible that GPM J1839\(-\)10 is a magnetar (including a magnetar+fallback disk system) or a white dwarf pulsar. For these two possibilities, there are many parameters at present. This conclusion is consistent with previous calculations for GLEAM-X J1627, based on energy budget and rotational evolution (Katz, 2022; Loeb & Maoz, 2022; Ronchi et al., 2022; Tong, 2023). Population synthesis of neutron star and white dwarf pulsars also gives similar conclusions (Rea et al., 2023). ## 3 Discussion and Conclusion Comparison with other LPRPs. All three LPRPs PSR J2144-3933 (Young et al., 1999), PSR J0250+5854 (Tan et al., 2018) and PSR J0901\(-\)4046 (Caleb et al., 2022) have a very narrow pulse width, typically less than 1%. Assuming a maximum emission height of \(100R\) (Johnston et al., 2023), the theoretical upper limit on the pulse width is (Eq.(11)): \[PW<7\%P^{-1/2}\frac{1}{\sin\alpha}. \tag{14}\] Therefore, PSR J0250+5854 (with a period of 23.5 s) and PSR J0901\(-\)4046 (with a period of 76 s) etc. are consistent with a normal radio pulsar origin, from the pulse width point of view. However, both GLEAM-X J1627 and GPM J1839\(-\)10 (Hurley-Walker et al., 2022, 2023) showed possible signatures of a large pulse width. Therefore, we propose that the pulse width of LPRPs may be taken as the second criterion in identifying their nature, in addition to their relative position to the death line on the \(P-\dot{P}\) diagram. If a future LPRP has a narrow pulse width consistent with that of normal radio pulsars, it may be viewed as an extreme radio pulsar. All we have to do is to consider the corresponding magnetospheric physics involved, e.g. the definition of the death line etc. If a LPRP has a large pulse width, then we must consider the possibility of magnetars or white dwarf pulsars. Comparison with other radio emitting magnetars. Radio emitting magnetars generally have a large pulse width compared with normal radio pulsars (Levin et al., 2012; Yan et al., 2015; Huang et al., 2021). In particular, the single pulses of magnetars are generally very narrow; they may vary randomly within the pulse window, therefore resulting in a wide integrated pulse width (Levin et al., 2012; Yan et al., 2015). We think this may also be the case for GPM J1839\(-\)10: narrow single pulses and a wide pulse window. From the pulse width point of view, the two LPRPs GLEAM-X J1627 and GPM J1839\(-\)10 are similar to radio-emitting magnetars. Maximum period of LPRPs. From Figures 1 and 2 (or Figure 1 in Rea et al., 2023), the existence of a maximum magnetic field in combination with the death line implies there is a maximum period for radio-emitting neutron stars and white dwarfs. For neutron stars, the maximum magnetic field may be about \(10^{16}\) G. The definition of the death line is rather uncertain (which will result in a death valley). However, the possible maximum period for radio emission may be around \(10^{4}\) s or so. In this respect, the magnetar inside RCW 103 (with a period of 6.6 hours) is not expected to have radio emission. For white dwarfs, assuming a maximum magnetic field of \(10^{9}\) G, the expected maximum period may be about several thousand seconds. Future LPRPs with longer periods may help to unveil their nature (i.e., neutron stars or white dwarfs). In conclusion, by considering the death line and pulse width together, it is found that: (1) PSR J0250+5854 (Tan et al., 2018) and PSR J0901\(-\)4046 (Caleb et al., 2022) should be normal radio pulsars.
They have narrow pulse widths and they lie near the radio emission death line. Further investigation of their magnetospheric physics is required. (2) The two LPRPs GLEAM-X J1627 and GPM J1839\(-\)10 (Hurley et al., 2022, 2023) are unlikely to be normal radio pulsars. Their possible pulse widths are relatively large, and they lie far below the fiducial death line on the \(P-\dot{P}\) diagram. (3) GLEAM-X J1627 and GPM J1839\(-\)10 may be (a) magnetars with twisted magnetic field or magnetar+fallback disk systems, or (b) white dwarf radio pulsars. At present, there are many uncertainties in both of these possibilities. More multiwavelength observations are required in order to tell whether they are magnetars or white dwarfs.

## Acknowledgments

H. Tong would like to thank Dr. Huang Zhi-Peng and Yan Zhen for discussions on pulse width and emission height. This work is supported by National SKA Program of China (No. 2020SKA0120300) and NSFC (12133004).
2303.14306
Conceptual diagrams in Quantum Mechanics
Quantum Mechanics (QM) stands alone as a (very) successful physical theory, but the meaning of its variables and the status of many quantities in the mathematical formalism is obscure. This unique situation prompted the need for attributing a physical meaning to the latter, a procedure known as interpretation. On the other hand, the study of QM is usually presented, even to future scientists, within the only framework developed by Bohr and the Copenhagen researchers, known as the Copenhagen interpretation. As a contribution to the understanding and teaching of Quantum Mechanics, aimed at a broader and deeper appreciation of its fundamentals, including contemplating alternative and updated interpretations for physicists and philosophers interested in the study of the exact sciences (through Ontology, Epistemology, Logic or the Theory of Knowledge), we present a set of Conceptual Diagrams elaborated and designed to expose and facilitate the visualization of the elements intervening in any interpretation of Quantum Mechanics and apply them to several well-developed cases of the latter.
Jorge E. Horvath, Rodrigo Rosas Fernandes
2023-03-25T00:15:53Z
http://arxiv.org/abs/2303.14306v1
# Conceptual Diagrams in Quantum Mechanics ###### Abstract Quantum Mechanics (QM) stands alone as a (very) successful physical theory, but the meaning of its variables and the status of many quantities in the mathematical formalism is obscure. This unique situation prompted the need of an attribution of a physical meaning to the latter, a procedure known as _interpretation_. On the other hand, the study of QM is usually presented, even to future scientists, within the only framework developed by Bohr and the Copenhagen researchers, known as the Copenhagen interpretation. As a contribution to the understanding and teaching of Quantum Mechanics, aimed to a broader and deeper appreciation of its fundamentals, including contemplating alternatives and updated interpretations for physicists and philosophers interested in the study of exact sciences (through Ontology, Epistemology, Logic or the Theory of Knowledge), we present a set of Conceptual Diagrams elaborated and designed to expose and facilitate the visualization of elements intervening in any interpretation of Quantum Mechanics, and apply them to several well-developed cases of the latter. Keywords: Quantum Mechanics, Diagrams, Philosophy of Science Introduction "Hard" sciences are commonly associated to Mathematics and formal schemes, but they also comprise many other cognitive elements in their fundamental constitution. For instance, the use of some sort of diagrams is not a novelty in Science. In fact, there are many types of graphic elements employed to visualize and understand scientific issues, widely employed for teaching/learning in many disciplines, but which are sometimes a constituent element of a discipline. About this statement, in an article entitled _Multiplying Meaning - Visual and Verbal Semiotics in Scientific Text_, J.L.Lemke [1] forcefully argues for the existence and need of non-verbal resources in scientific matters as follows: "When scientists think, talk, work, and teach (cf. [1,2]) they do not just use words; they gesture and move in imaginary visual spaces defined by graphical representations and simulations, which in turn have mathematical expressions that can also be integrated into speech. When scientists communicate in print they do not produce linear verbal text; they do not even limit their visual forms to the typographical. They do not present and organize information only verbally; they do not construct logical arguments in purely verbal form. They combine, interconnect, and integrate verbal text with mathematical expressions, quantitative graphs, information tables, **abstract diagrams**, maps, drawings, photographs, and a host of unique specialized visual genres seen nowhere else." (our bold) As it stands, Lemke's statement gives a very important status to the graphic elements in Science. Examples of the type of elements which have become commonplace in modern science are abundant. One of the most outstanding cases is Venn's diagrams in set theory [3], developed during the 19th century when the definition of the present division of disciplines mostly took place. Venn's diagrams are now a part of the "disciplinary matrix" discussed by Kuhn [4] and Set Theory would be unthinkable today without them. Another paradigmatic case, this time of a different type, is the use of Feynman's diagrams in Quantum Field Theory. 
Initially devised as a tracking tool for the elementary terms of the S-matrix, their meaning is believed to be much wider, and their use is so widespread that contemporary practitioners first proceed to draw the diagrams for a given problem, and only later formalize their mathematical expressions. In the words of Veltman and t'Hoft [5]"...diagrams form the basis from which everything must be derived". This deep symbiosis illustrates colorfully Lemke's thoughts: Feynman diagrams may be said to have become part of the _logos_. The issue of the graphic representations has been suggested to play even a bigger role, in the very definition of Science. Latour [6] has argued for the uniqueness of Science to be related to the use of graphical elements, which facilitate the inscriptions and give at once a mobile, immutable and yet changeable character (in Latour's own definitions and words), quickly emerging since the Scientific Revolution. Even without subscribing this interesting thesis, there are a number of contemporary studies that contain an insight on graphs as semiotic resources (for example, Airey [7]) and agree on their central, key role. When dealing with a wide and controversial subject, these resources can prove to be particularly important. The subject of our attention, Quantum Mechanics (QM), is about to complete a century of existence and currently enjoys a very special status: on the one hand it is recognized as one of the greatest creations of humanity in its repeated attempts to understand the Universe in which we live, and all its predictions have been confirmed through experiments; but on the other hand, its theoretical conceptions differ so much from the preceding traditions that QM led to a controversy that is far from being resolved. In other words, however different their theoretical conceptions and interpretations may be, the predictions of QM have always been confirmed when applied in experimentally, sometimes in flagrant contrast with intuitive classical expectations (see below). In the foundations of these controversies, we can identify several elements that contradict the usual way that physics dealt with the objects of the world so far. In fact, the nature of microphysical reality and the way in which we apprehend the microphysical world were much questioned and led to alternative formulations which kept almost all of the original formalism, but gave a whole different meaning to the mathematical and physical elements, hence they are known as QM interpretations. The existence of different interpretations on the most varied issues is quite peculiar in contemporary physics. Classical physics has a unique and exclusive interpretation of its own, since its formalism is unambiguous to physical reality. It should be added that an objective realism is also assumed, in the sense of admitting that physical objects exist independently of the observer. However, this is not so with QM, even within the widely accepted Copenhagen interpretation and other works seeking for a consistent meaning of the quantum formalism (see, for instance, de la Pena [8]). Because of the deep meaning of the elements discussed in them, sometimes these interpretations are perceived as too broad and difficult, leading both scientists and teachers/students to misleading and many conceptually blocked paths. According to the multirepresentation view [7, 9], an attempt to improve QM understanding must go beyond the grammatical language, mathematics and Aristotelian logic. 
We believe that the use of abstract diagrams can be converted into an effective tool to understand its interpretations and clarify the interconnection of the elements constituting the whole theory. Therefore, the goal of this article is to propose a discussion bringing to light the main interpretations of QM, through conceptual diagrams that facilitate the understanding for future physicists and philosophers dedicated to the study of the quantum world.This article is organized and presented as follows: In Section 2, the conception of classical physics is presented and contrasted with Bohr's Quantum Mechanics and the fully developed conception of QM as proposed by the Copenhagen researchers, the current "orthodox" version, highlighting some of its most important points.Section 3 is dedicated to a discussion of the constituent elements of QM, followed in the next Section 4 by the introduction of Conceptual Diagrams designed explicitly to facilitate the understanding of each interpretation of QM. The Conclusions and recommendations for the possible use of diagrams in the study and teaching in the classroom are given in Section 5. ## 2 Classical Physics and Quantum Mechanics ### Classical Physics vs. QM It is true that Classical Physics, starting with Mechanics as initially proposed by Sir Isaac Newton, has in modern times a unique and exclusive interpretation, attributed to its mathematical formalism. As stated above, Classical Physics also implicitly contemplates objective realism by admitting that physical objects exist independently of the observer and that all experiments will obtain the same results as long as the same conditions are observed, that is, whenever the conditions of the experiment are compatible and analogous, however, the same does not happen with QM. It is true that the state-of-the-art of Classical Physics is the result of a long history of debates on the nature of space, the character of "forces" and other issues, but these have been sorted out and there is little or no trouble today. Unlike Classical Physics, the conception of QM as a physical theory led from the very beginning to a series of issues that are quite deep and unsolved. In fact we can identify several elements in it that contradict the usual way that Classical Physics deals with the objects of the world, and even the nature of QM's own formulation seems different. This led to a variety of ways of dealing with the nature of Reality in the microphysics realm, and also the way in which we apprehend the microphysical world have been and still are much questioned. While retaining almost all of the original formalism, these formulations give different meanings to the mathematical and physical elements of QM, which is why they are known as "QM interpretations". Consequently, a presentation of the foundations of QM, as initially conceived and developed by Niels Bohr and the Copenhagen School, as well as their subsequent interpretations are extremely relevant for the training and updating of the future professional physicists and future philosophers as well. ### Quantum Mechanics and its Postulates (brief overview) In the early 20th century, the pioneers of QM faced the challenge of building a physical theory for the micro world that presented major conceptual and formal problems. Indeed, the notions deriving from classical physics were not sufficient for this task. Grammatical language, mathematics and Aristotelian logic were also suggested to be insufficient for understanding QM. 
"Our words do not fit", a famous quotation expressed by Heisenberg [10] about their use in QM illustrates this very keenly. QM was initially developed by Niels Bohr and the so-called Copenhagen School and it must be considered that with its later definitive reformulation, QM was given a probabilistic nature, resulting in debatable and controversial issues (notably by Einstein, Schrodinger and others) and prompting a continuous search for a more adequate interpretation. Because of the well-known excellent sources on QM (i.e. Ismael [11]) our presentation of the subject will be very brief and essential. This entire formulation is usually taught, generation after generation, without touching on the numerous problems that arise as a result of its further interpretations. Indeed, as a initial problem, we know that in any physical theory, the experimenter measures some quantity and compares it with the prediction. In QM, however, it is said that the experiment will measure predefined values (eigenvalues). In fact, in QM we are obliged to accept statements such as: "If the system is in an eigenstate of its observable \(A\), corresponding to the eigenvalues, an observer measuring \(A\) will certainly obtain the value a". Objectively seen, the prescriptive dogmatic character of this type of framework is overwhelming, but it is presented as "natural" and inherent to QM without further discussion about it (at least within the orthodox Copenhagen interpretation), leaving much to be desired for those who need or want to go deeper into the issue [12]. The type of information desired for the quantum description required the formulation of the n-dimensional space of states (also called _Hilbert space_). It is postulated that all possible system states are contained in the state vector \(\mid\Psi\rangle\) representing the system. For each measurable quantity there is an Hermitian operator \(\hat{Q}\), the mathematical character of these operators that act on the states guarantees the consistency of the results (for example, they eliminate imaginary probabilities). The results of the physical measurements correspond to the \(q\) eigenvalues of the operator \(\hat{Q}\) with probabilities computed using the inner product in the Hilbert space. The dynamical evolution of the wavefunction/state vector \(\mid\Psi\rangle\) in the so-called Schrodinger picture is \[\hat{H}\mid\Psi\rangle=i\hbar\frac{\partial}{\partial t}\mid\Psi\rangle \tag{1}\] Once the solutions of eq.(1) are found,\(\mid\Psi\rangle\) can be decomposed in a complete mathematical basis, formed by the solutions, and its temporal evolution is just \[\mid\Psi\rangle=\sum_{n}a_{n}\mid\Psi_{n}\rangle\exp\left(-iE_{n}t/\hbar\right) \tag{2}\] where \(E_{n}\) are the eigenvalues of the Hamiltonian operator. According to the (postulated) structure pointed out above, a single measurement can only give as a result one of the eigenvalues of the system. With the probabilistic character of the description, quantum phenomena are the true objects of description of the theory, independently of the existence of a quantum Reality (see below). In fact, a central foundation of Copenhagen QM version is that, if a measurement is taken, the wavefunction \(|\;\Psi\rangle\)_collapses_ as a result of system-device interaction. There is a quantum "leap" that is not accessible to human understanding, and is not described by the formalism. It is further postulated that the measurement results are expressible only in classical terms. 
Niels Bohr insisted a lot on this last point, namely that the measurement results must be expressed in classical terms (he gave it the name of _correspondence_); for him, the measurement results would not make sense without the existence of Classical Physics. Another fundamental difference between QM and other theories results from the fact that operators do not always commute, and although much has been discussed about the experimental situation related to statistical dispersion, this property stems directly from the mathematical structure of the Hilbert space. The _commutator_ of two conjugate operators \(\hat{q}\) and \(\hat{p}\) is proportional to the reduced quantum of action, \[[\hat{q},\hat{p}]=i\hbar. \tag{3}\] That is, conjugate quantities simultaneously measured in QM cannot have definite values whose errors \(\to 0\) under any circumstances, as they emerge from a set of probabilities of eigenvalues of operators with an irreducible dispersion due to their quantum nature. The measurement process is considered the source of irreversibility, but it never enters into the calculation. Neither the observer "itself" nor the measuring apparatus appears anywhere in the formalism. These features, among others, were never accepted by Einstein and other physicists, who supported the incomplete character of QM, that is, that physical quantities would be determined with infinite precision once QM is incorporated into a more comprehensive and complete theory. Criticisms of QM by Einstein, Schrödinger, and others referred to these obscure aspects of its formalism and also to the very idea of the quantum object. The famous case of Schrödinger's cat, for example, was enunciated through a _reductio ad absurdum_ in which QM allows the display of a half-alive and half-dead cat simultaneously inside a closed box in which a poison bottle is activated through a quantum process. Schrödinger considered the quantum description of this situation as absurd. Over time these criticisms pushed Bohr's conception to further extreme positions: Reality came to be considered ultimately "metaphysical", in the sense that it does not have a demonstrable existence, and it was even stated that it is dangerous to think about it. The Copenhagen group formulated a kind of "no-go" interpretation of QM, completely differentiating it from all the preceding theories. Thus, the most obvious path resulted in the evolution of plain Idealism (subjectivism) into an Operationalist position (see below). Late QM, in turn, has been reduced by some to the idea of a set of calculation rules, and, without denying that there is a Reality, does not refer to it. Nothing could upset Einstein more than this _ab initio_ resignation, since he strongly believed that Physics needs to address the behavior of real objects in an objective world.

## 3 Elements constituting QM interpretations

We will show below that through abstract diagrams it is possible to distinguish all the elements that enter an interpretation, and thus point out that in QM not only the nature and existence (Ontology) of a Quantum Reality is questioned, but also the ideas of "observer" and "phenomenon", which may be different in each case. As a first example of this problem, one must consider the very separation between the subject (observer \(S\)) who experiences the world and a Reality object (\(R\)) to be apprehended by the subject through the analysis and observation of phenomena, a separation which is central to QM (and in fact to all physical theories).
For Western science, in the classical realm, this separation is so evident that it is not even discussed or mentioned anywhere. However, for other ways of thinking (essentially in Eastern philosophies, but also many native Americans), this separation between subject \(S\) and the Reality object \(R\) being studied becomes impossible. We will see that some of the interpretations of QM identify this separation as a source of fundamental discrepancy between experiments and "reasonable" expectations. The so-called interface (Epistemology) with quantum phenomena is many times pointed out as a possible source of problems in QM. In general, any physical theory needs at least three fundamental elements to deal with the object of study (system). These are: (1) a _Logic_ based on syllogisms or other tools. As with the concept of "observer-subject" and supposedly separable from the system, there is the implicit hypothesis that the Logic of the world is Boolean (Aristotelian). Von Neumann was one of those who insisted on the possible non-human Logic of QM, in parallel with the case of non-Euclidean geometries. For example, in the class of statements like "if \(p\), then \(p\) or _q_" quantum version "if Schrodinger's equation is valid, then the system evolves according to it and a measurement will give one of the eigenvalues" is constantly formulated without its listeners noticing its inconsistency within a Boolean logic. (2) a consistent _Algebra_ to manipulate basic objects (\(\mid\Psi\rangle\), etc.) and obtain quantitative predictions. This is of course different from the underlying Logic in general, and is fundamental when obtaining concrete results (for example, the inner product in Hilbert space allows to calculate the probabilities of each eigenvalue in a single measurement). (3) Finally, a _Language_ (formal or informal), which brings as a corollary a semantics-semiology, not always properly scrutinized. Besides the remarkable quotation of Heisenberg (in Heisenberg [10]) "our words do not fit", this issue within a wider context (in the sense of the adequacy of human languages to formulate scientific statements) has been explored by Wittgenstein [13, 14] and others, and remains a major issue in the philosophy of Science, taking a dramatic turn for the interpretations of QM. Finally, and at the risk of sounding absurd, the very idea of a physical object to be studied is not guaranteed in QM. We have pointed out that in a late Copenhagen interpretation, Bohr even stated that QM is not about Reality, but about what can be said about phenomena [15], plainly stating that the late Copenhagen version of QM was an epistemological theory. Furthermore, in various versions of QM one can find a certain philosophical Idealism, namely that the physical world is actually a product of the mind. Thus, the question of the empirical content of QM takes dramatic dimensions, a question that should not be ignored. ## 4 Diagrams and QM Interpretations For a complete visualization of each interpretation of QM, a series of illustrative conceptual diagrams that explain the basic elements and their role in them are here created and presented. In the diagrams, the Subject (observer) is represented in the diagrams with a triangle with the letter \(S\), and it would be ultimately important to distinguish whether it has a "consciousness" or otherwise, for example, being a simple measuring device (although we will not attempt to develop this issue, well beyond the scope of this article). 
_Epistemology_ (horizontal thick arrows) is the set of empirical (experiences) and formal (Algebra and Language) tools for an assumed Logic, with which it is intended to apprehend the quantum phenomena QP (represented by an asterisk), which in turn are supposed to be produced by a QR object(s), if existing (we stress again that some representations deny the very existence of a deeper quantum reality). The human physiological/linguistic limitation discussed by Wittgenstein [13, 14] and other authors gives rise to a buffer we called \(W\)\(filter\), always explicitly indicated, that shapes and limits the subject's \(S\) perception and understanding. We will call the procedures of this epistemological connection generically as measurements. A _phenomenon_ is marked with an asterisk in all cases. Finally, the _Ontology_ is represented by an ellipse that includes the existing objects according to each interpretation. This can be classical (having defined values of physical quantities at all times) or quantum (without defined values for any time) or displaying a non-classical feature (such as entanglement of phases), in which case we have written "classical" within quotation marks. Finally, quantum objects may be non-existing within the interpretation, or at least sometimes not having an explicit characterization. ### The Classical Physics case The Classical Physics case is a benchmark to grasp what a conceptual diagram can deliver. Few objections against the classical picture have been raised, and this is why there are no current discussion on the "interpretations" of Classical Physics (except for some specific issues). In terms of our definitions above, the diagram describing the classical situation (Fig. 1) can presented as follows \(\bullet\) The "Reality" (Classical Reality, or CR here), the objects and their properties, the measured phenomena (asterisk) and the Subject \(S\) itself are causally separated, when measured, the properties are well defined for any time and are local (they do not depend on the distant environment, and they exist even if not measured by hypothesis). The phenomena are manifestations of existing CR object(s) and the task of Physics is to know the latter through the measurements and formulation of compelling theories. This complies with the Realism of A. Einstein and most physicists, although a group of putative Idealists may challenge the statements even within the classical realm. \(\bullet\) As stated, there is no need for any "interpretation" because classical theory defines objects with definite values of their physical quantities unambiguously for all times (hence the circle in "CR", which will be replaced by a "cloud" in "QR." Wittgenstein's filter \(W\) has here a secondary role, at least not as central as the one it will play in Quantum Mechanics. Figure 1.The Classical Physics diagram, in which the elements are well-defined with little or no dispute. ## 5 Conceptual diagrams for QM interpretations ### The Copenhagen interpretation Figure 2. The Copenhagen interpretation. The existence of a QR is denied (or at least not deemed necessary), although quantum phenomena exist and are the subject of QM. Their study, according to Bohr, would be impossible without the subject S, the measurement apparatus and the calculation rules belonging to the classical realm (Principle of Complementarity), all of them inside the large white ellipse. 
As discussed above, the late statements by Bohr (see Petersen [15]) about the Copenhagen interpretation are truly remarkable: it is said that there is no concrete Quantum Reality, and objects and their properties do not have defined values (they do not exist!) before being measured. The act of measuring is what "creates the Reality", that is, it defines the type of phenomenon measured. The subject, measuring devices and results are expressible only in classical terms (Bohr insisted that this complementarity is crucial to be able to say something about the quantum phenomena). It is granted by the Copenhagen interpretation that the Logic of QM is Aristotelian (Boolean), but the results show the probabilistic nature of the phenomenon, because this is all we can get for them as a result: a set of probabilities. QM is therefore a theory that says how much we can say about objects (epistemological) and does not describe any true Quantum Reality (these points were absurd to Einstein, who claimed that they were enough to think that QM is incomplete and would be surpassed by a better approach to a realistic physical world, as advocated by him). These unusual features have been widely debated and rejected by many physicists and philosophers, and praised by the majority of the scientific community. Among the former, Bunge [16] was emphatic in declaring the Copenhagen interpretation plainly false, and advocated its substitution by a form based on (non-classical) realism. In any case, and even in its earlier form in which a Reality was not denied, it is clear that the latter was never a main object of concern for its creators, and this justifies the question mark on the gray zone in Fig. 2.

### The non-local Reality (de Broglie-Bohm et al.)

Figure 3. The non-local interpretation of Bohm and de Broglie, supported by many experimental results. The main ingredient is the so-called phase entanglement of the wavefunctions, a feature that does not exist in Classical Physics, although ultimately the nature of the quantum objects is not that different from the classical ones.

In this version of the theory, it is postulated that the "Quantum Reality", objects and their properties, measured phenomena and the subject itself (\(S\)) cannot be separated. When measured, objects keep memory of their space and time history (a feature described as _phase entanglement_, Fig. 3). The entire Universe is an indivisible Whole (i.e., extremely non-local). Particles ride the wave functions but remain hidden (hidden variables) without influencing them (it is not clear if they are ultimately superfluous). We have employed quotation marks for the word "classical" because entanglement is not a classical feature strictly speaking. Around 50 years ago, a battery of experimental tests was devised and performed to verify the non-locality of quantum objects (see a panoramic account in Herbert [17]). The initial results already showed agreement with the predictions of QM, and a rejection of the locality of the QR was obtained. Since then, every experiment has confirmed the non-locality and the idea that, to some extent at least, the notion of the Whole applies, and that locality is just an approximation valid within certain limits. It is impressive that this conclusion is not as widely known and discussed as it should be, and attempts to save the local character have not come up with a proper "solution". However, a claim that the de Broglie-Bohm interpretation of QM is fully confirmed by these experiments is premature.
### The interpretation of the many-worlds (or _The Garden of the Forking Paths_, by J.L.Borges) Figure 4: Everett ’s many-worlds interpretation, in which each measurement splits the future history. This interpretation developed by H. Everett (see Pinto-Neto [18] for a discussion) holds that Quantum Reality is a set of systems disjoint in time. Quantum objects bifurcate their histories for each possible result of a measurement, and each version continue to exist in their own parallel Universes (which are real’, not a Gibbs ensemble of mental copies). Therefore, each measurement corresponds to one of the possibilities, and therefore there is no wavefunction collapse, but rather a probability of selecting one of the many components. The diagram of Fig. 4depicts symbolically this hypothesis and connects all the elements discussed in Section 4 to it. It is obvious that the scenario leads to an amazing conception of the whole physical reality, at least from the philosophical point of view. A literature piece based on this idea (but independent of Everett's scientific work which is several years older) was written in the form of a short story by J.L. Borges [19], but without any explicit mention to QM, which was not a subject of the literary piece [20], nor appears anywhere in Borges' writings. Everett declared that he did not know Borges story either. ### 5.4 Quantum Logic (Birkhoff-von Neumann) Figure 5. The Quantum Logic of Birkhoff and von Neumann, in which the non-Boolean logic operating inside the W-filter is held responsible for the problems encountered in QM. The Logic of humans is not the Logic of Nature. In the Birkhoff-von Neumann interpretation the main point is the examination of the subject \(S\). According to them, observers are conditioned by the reasoning structure (Language), represented by the Boolean logic. Their postulate is simply that QM does not follow the latter, and a different type of logic is needed. Therefore, in this interpretation, the problems originate and restricted to what we have called the Wittgenstein filter that connects the measurement results with the subject-observer. This does not mean that a QR cannot exist, only that its apprehension is complicated by the inherent Logic which applies to the theory. Metaphorically we may state that QM talks to human observers in a foreign language, unknown to us, and we are compelled to discover and learn that language to understand the physical world (Fig.5). ### Consciousness creates Reality (von Neumann-Wigner-Stapp) Figure 6. Consciousness is the ultimate entity beyond QR, according to the Idealistic interpretation of von Neumann- Wigner- Stapp. The interpretation of von Neumann-Wigner-Stapp is perhaps one of the most outrageous in all Physics, and raises controversies about the nature of reality and the existence of intelligence as well. It may be labeled in its final form as "pure Berkeley Idealism", because it suggests that consciousness creates Reality. It has a strong resonance in some oriental philosophies suggesting that the world is a kind of dream of a superior mind [21]. The interpretation is very polemic in itself, and was suggested by two outstanding contributors of Quantum Theory. 
If adopted as true, it also breaks the remaining objectivity of orthodox QM in the sense that, for the latter, the measuring device could be an apparatus or a consciousness, whereas in the von Neumann-Wigner-Stapp version only a conscious observer stands, who not only measures, but also creates the observed phenomenon with its intervention. Quantum behavior is in this sense an attribute of Consciousness (Fig. 6).

### Heisenberg's _potentia_

Figure 7. Heisenberg's proposal for the understanding of where the QP come from in the absence of a QR: the _potentia_ concept. Note that we have not located a QR, from which the _potentia_ P would extract the values, in the diagram.

This interpretation is due to one of the founding fathers of QM, Werner Heisenberg, and was suggested in a later development many years after his own initial work and the discussion prompted by the Copenhagen interpretation of QM. Heisenberg suggested that there is a kind of limbo (called _potentia_ by him) in which objects of Quantum Reality exist, somewhat "halfway" between the QR and the observed quantum phenomena. The act of measuring defines the type of phenomenon, and the result stems from the world of _potentia_ P, not from the Quantum Reality itself. Fig. 7 attempts to depict Heisenberg's interpretation, which introduces the _potentia_ as a buffer of probabilities implied by the QM formalism.

### 5.7 Statistical interpretation (Born-Einstein)

Figure 8. The Born-Einstein statistical interpretation, in which the probabilities of a measurement result stem from a quantum state constituted by a kind of statistical _ensemble_.

After a vigorous discussion in the specialized literature, which also reached the public domain, Einstein and Max Born came to the conclusion that the quantum description is not applicable to an individual object, but to an ensemble of objects (not to be confused with Everett's interpretation). In this sense, the wavefunction contains information regarding the probabilities of measuring the allowable values of the physical properties of the objects in the ensemble [11] (Fig. 8). In this way, the probabilistic nature of the quantum predictions can be understood. The interpretation is considered minimal as far as its assumptions are concerned, and constitutes a zero-order framework for solving the known problems of QM. In other words, it is suggested that QM is a kind of statistical theory, and there is room for a more pointed description of the quantum Reality in the future.

### Instrumentalism

Figure 9. The Instrumentalism interpretation, in which no QR is of concern.

In this interpretation the quantum description is a set of calculation rules, without any ontological pretension; that is, it explicitly renounces the logos of Quantum Reality and focuses just on the language that connects the Subject \(S\) with the results of experiments. Its domain is the Wittgenstein filter set (now strongly focused on the mathematical formalism). In some sense, this interpretation is a natural evolution of the early Copenhagen interpretation, as noted above (Fig. 9).

### The Quanton interpretation

Figure 10. The Quanton interpretation assumes that quantum objects are very different from classical ones from the start, and that the classical limit applies to the macroworld.

This interpretation builds on the proposal of a _quanton_, a term coined by Bunge [22]. The nature of objects is supposed to be different from the classical picture: they do not have definite values (but they do exist!) for any instant of time.
The quantum formalism allows one to calculate the expected values for a measurement (Fig. 10). But there is nothing idealistic or mystical about this; it is just the correction of the extrapolation made since the early days from the classical to the quantum world, carried out on the ontological level. In other words, since we only know classical objects in a direct unambiguous fashion, we have attributed to quantum objects similar properties which turn out to be misleading: it makes no sense, for example, to talk about a wave-particle duality, because these are classical views not to be applied to the quantum world objects, which do not possess such qualities as we know them. Electrons are not waves, nor small balls, but quite _sui generis_ entities.

## 6 Conclusions

We have argued in this work for a more open and multiple representation of one of the most difficult issues of 20th century Physics, Quantum Mechanics, a revolutionary theory that still hides its true meaning from practitioners and educators/students. The issue of interpreting QM is a long-term one and is not likely to be settled soon. The use of diagrams and graphs in Science is not new, as we pointed out in the Introduction. However, in the form presented above, the diagrams are close to schemes or tools for identifying the various possible interpretations of QM and, consequently, the specific barriers to its effective knowledge and meaning in each case. Therefore, as a useful tool, we constructed a set of Conceptual Diagrams for QM Interpretations, flexible enough to be applied to any interpretation of Quantum Mechanics, and quite related in their essence to the Venn diagrams of set theory. These have been shown explicitly for a (quite trivial) Classical Physics case and nine popular interpretations of QM, although there are many more proposals available which have not been addressed here [23]. Although the diagrams do not immediately solve any problem, their merging with the rest of the cognitive tools may prove important in the long run. In many senses they could be helpful in the way the axiomatization of QM is (see, for example, [24]), a tool to understand the inconsistencies and fundamentals of the theory in relation to its many interpretations put forward to make physical sense of it. We are aware that this initial proposal must be examined in depth, refined and extended. Since understanding relies on diagrams as elements entangled with mathematics, verbal language, graphics, and other tools, we believe the issue of QM and its difficulties can be better grasped with the aid of the presented Conceptual Diagrams. In the long run, it is hoped that they may be merged with other aspects of QM exposition and study, following the path of previous examples. Besides helping philosophers and physicists, we believe that this tool would be particularly useful for education and teaching.

## Acknowledgments

This work was performed under the auspices of a Research Fellowship granted by the _CNPq Agency_, Brazil and the _FAPESP Agency_, São Paulo State, through the grant 2020/08518-2.
2309.01036
SEPAL: Spatial Gene Expression Prediction from Local Graphs
Spatial transcriptomics is an emerging technology that aligns histopathology images with spatially resolved gene expression profiling. It holds the potential for understanding many diseases but faces significant bottlenecks such as specialized equipment and domain expertise. In this work, we present SEPAL, a new model for predicting genetic profiles from visual tissue appearance. Our method exploits the biological biases of the problem by directly supervising relative differences with respect to mean expression, and leverages local visual context at every coordinate to make predictions using a graph neural network. This approach closes the gap between complete locality and complete globality in current methods. In addition, we propose a novel benchmark that aims to better define the task by following current best practices in transcriptomics and restricting the prediction variables to only those with clear spatial patterns. Our extensive evaluation in two different human breast cancer datasets indicates that SEPAL outperforms previous state-of-the-art methods and other mechanisms of including spatial context.
Gabriel Mejia, Paula Cárdenas, Daniela Ruiz, Angela Castillo, Pablo Arbeláez
2023-09-02T23:24:02Z
http://arxiv.org/abs/2309.01036v3
# SEPAL: Spatial Gene Expression Prediction from Local Graphs ###### Abstract Spatial transcriptomics is an emerging technology that aligns histopathology images with spatially resolved gene expression profiling. It holds the potential for understanding many diseases but faces significant bottlenecks such as specialized equipment and domain expertise. In this work, we present SEPAL, a new model for predicting genetic profiles from visual tissue appearance. Our method exploits the biological biases of the problem by directly supervising relative differences with respect to mean expression, and leverages local visual context at every coordinate to make predictions using a graph neural network. This approach closes the gap between complete locality and complete globality in current methods. In addition, we propose a novel benchmark that aims to better define the task by following current best practices in transcriptomics and restricting the prediction variables to only those with clear spatial patterns. Our extensive evaluation in two different human breast cancer datasets indicates that SEPAL outperforms previous state-of-the-art methods and other mechanisms of including spatial context. ## 1 Introduction Histopathology is the study of diseases in tissues through microscopic sample examination. Among the different staining methods, Hematoxylin and Eosin (H&E) is the most common one and is currently considered the gold standard for diagnosing a wide range of diseases [31, 28, 24]. More recently, this approach has been complemented with molecular biomarkers, such as mRNA expression profiling, offering high specificity and the ability to directly predict prognosis and determine treatments [29, 5]. Interestingly, these two data types prove complementary: while H&E imaging lacks the specificity of transcriptomics, gene profiling lacks the physiological insights derived from morphology. By aligning dense spatial mRNA profiling with H&E histopathological images, Spatial Transcriptomics technologies (ST) provide comprehensive insights into the spatial organization of gene expression within tissues [3]. The advent of direct gene expression assessment on tissue harbors the potential for an unprecedented understanding of the mechanistic causes behind many diseases. However, obtaining these datasets in real clinical practice encounters major bottlenecks, primarily stemming from the need for specialized equipment, domain expertise, and considerable time requirements [36]. To overcome these burdens and leverage the fact that H&E images are ubiquitous in medical settings, the computer vision community has recently delved into predicting gene expression from tissue images. Although various works demonstrate promising results [23, 21, 2, 12, 34, 35], existing methods are still far from clinical deployment. Upon closer examination of the problem, it becomes evident that changes in gene expression are typically associated with alterations in tissue appearance. However, it is important to note that this correlation does not necessarily apply to all genes. For example, constitutive genes that exhibit constant expression within the spatial context [7] are unsuitable for prediction based solely on visual information. Hence, methods should focus on genes with a verifiable dependence on tissue appearance for the task to be well defined. Another challenge lies in the scarcity of data. 
The current publicly available datasets encompass \(2-70\) Whole Slide Images (WSI) with \(5,000-15,000\) genes for a set of \(300-3500\) coordinates, depending on the technology [3]. Consequently, generating such high-dimensional predictions with such limited samples is intrinsically difficult. Finally, as the technology is still in development, ground-truth data is sparse and noisy; specifically, pepper noise is observed in the expression maps [36]. Current approaches present a dichotomy between complete globality, which uses the WSI to jointly predict an expression map for all the coordinates at once (WSI-based methods [23, 21, 2], Fig.1.A), and complete locality, which only uses visual information available at each coordinate to predict gene expression (patch-based methods [12, 34, 35], Fig.1.B). While complete globality leverages spatial information and long-range interactions, it suffers from severe data scarcity, making models prone to overfitting. In contrast, complete locality benefits from abundant data for deep learning training but disregards spatial relations, resulting in suboptimal performance. To overcome these challenges, we propose a new problem formulation, benchmark, and state-of-the-art method for **S**patial **E**xpression **P**rediction by **A**nalysis of **L**ocal graphs (**SEPAL**). Our problem formulation strategically exploits the biological nature of the problem; our benchmark uses a robust bioinformatic pipeline to overcome acquisition issues; and our model bridges the gap between locality and globality by performing local spatial analysis. In terms of problem formulation, we leverage a domain-specific advantage: the expression of a gene is expected to be within a specific range of values, and the variations inside that range are the ones with physiological significance. Rather than solely focusing on the absolute value of gene expression, we exploit this knowledge by bounding the prediction space within a defined box and using its center as an inductive bias. By estimating this bias from the training data, we can focus on learning relative differences instead of absolute values. Specifically, we supervise expression changes with respect to the mean expression of each gene in the training dataset. This novel approach differs from previous works since they directly predict the absolute gene expression. We build our benchmark by first incorporating standard bioinformatic processing normalizations (TPM [1]), which were previously lacking. Then, we apply a modified version of an adaptive median filter [6] to manage the pepper noise. Finally, to ensure the selection of relevant prediction genes, we filter by Moran's I [20] value, a statistic designed to identify significant spatial patterns over a graph. By leveraging Moran's I, we ensure our focus remains on genes that depend on tissue appearance. Lastly, we introduce a novel approach that harnesses the power of local spatial analysis. Our strategy starts with a completely local learning stage and then integrates information from local neighborhoods surrounding each patch with the help of a graph neural network (GNN, see Fig.1.C). Our key hypothesis is that gene expression is predominantly influenced by nearby visual characteristics rather than long-range interactions. SEPAL benefits from the advantages of local-based and global-based training (spatial relations and enough data) without succumbing to their respective limitations. We conduct extensive experimentation on two different human breast cancer datasets obtained with different technologies and report favorable results relative to existing techniques.

Figure 1: Different approaches for predicting gene expression from tissue images. The inputs of each model type are enclosed by a black frame. (A) _Global methods_ analyze a whole slide image and make a prediction about the tissue expression in every spot at once. (B) _Local methods_ process the image by patches and predict the expression of each individual patch, one at a time. (C) SEPAL uses graphs that contain information from multiple patches to represent spatial information and predict gene expression for the central node of each graph.

Our contributions can be summarized as follows: * We propose a paradigm shift to supervise gene expression changes relative to the mean rather than absolute
We conduct extensive experimentation on two different human breast cancer datasets obtained with different technologies and report favorable results relative to existing techniques. Our contributions can be summarized as follows: * We propose a paradigm shift to supervise gene expression changes relative to the mean rather than absolute Figure 1: Different approaches for predicting gene expression from tissue images. The inputs of each model type are enclosed by a black frame. (A) _Global methods_ analyze a whole slide image and make a prediction about the tissue expression in every spot at once. (B) _Local methods_ process the image by patches and predict the expression of each individual patch, one at a time. (C) SEPAL uses graphs that contain information from multiple patches to represent spatial information and predict gene expression for the central node of each graph. values. * We propose a benchmark that follows current best practices in transcriptomics, deals with pepper noise, and restricts prediction genes to only those with clear spatial patterns. * We develop a new state-of-the-art method that applies local spatial analysis via graph neural networks. To promote further research on ST, our project's benchmark and source code is publicly available at [https://github.com/BCV-Uniandes/SEPAL](https://github.com/BCV-Uniandes/SEPAL). ## 2 Related Work Multiple approaches have been proposed to tackle the gene expression prediction task, with works focusing on different aspects of the visual data. State-of-the-art methods can be divided into two paradigms: global (WSI-based) and local (patch-based) focused. ### Global Methods Global methods predict the gene expression of all the spots of a WSI at once, meaning that their input corresponds to the complete data from a high-resolution histopathology image. The most notable work of this family of methods is HisToGene [21], which receives a WSI and divides it into patches that are represented through image and positional embeddings fed to a Vision Transformer (ViT) architecture [8]. The mechanism in HisToGene enables the model to consider spatial associations between spots [21]. Nonetheless, this method demands many WSIs, posing a challenge as WSIs are often scarce in most datasets. Additionally, processing the entire WSI incurs in a high computational cost. Therefore, we propose a more efficient spatial analysis at a smaller scale. Instead of using an entire sample as a single data element, we adopt a patch-based strategy, enabling us to execute predictions one patch at a time. This granular approach not only conserves computational resources but also mitigates the overfitting risks associated with using large WSIs. ### Local Methods Unlike global methods, local methods estimate the gene expression one spot at a time by dividing the WSI into individual patches. Some examples of this approach include STNet [12], EGN [34], and EGGN [35]. The focal point of local methods is the visual information in the patch of interest, and they do not take into consideration characteristics such as the vicinity of the patch or long-range interactions. For instance, STNet [12], which is one of the most popular methods, formulates the task as a multivariate regression problem, and its architecture consists of a finetuned CNN (DenseNet-121 [13]) whose final layer is replaced with a linear layer that predicts the expression of 250 genes. 
A characteristic strategy of STNet is that during inference, it predicts the gene expression for 8 different symmetries of that image (4 rotation angles and their respective reflections) and returns the mean result as the final estimation. This model generalizes well across datasets and has high performance when predicting the spatial variation in the expression of well-known cancer biomarkers [12]. Other examples of this approach include EGN [34] and its upgraded version, EGGN [35]. The core of these methods is exemplar guidance learning [34], a tool that they apply to base their predictions on the expressions of the patches that are most visually similar to the patch of interest. These reference patches are known as the exemplars and correspond to the nearest neighbors of a given patch in the latent space of an image encoder. The difference between these two models lies in the main processing of the input, where EGN uses the exemplars to guide a ViT [8], while EGGN uses the exemplars to build visual similarity graphs that are fed to a GraphSAGE-based backbone [10]. The key hypothesis of EGN and EGGN is that similar images have similar gene expression patterns, no matter their location within a tissue. Nevertheless, depending on the scale of the patches, this assumption could neglect their local context. For instance, if each patch contains a single cell, several similar patches with different physiological contexts might differ in their transcriptomic profile. Thus, for our approach we choose to guide our model with spatially close patches rather than with visually similar patches. When we consider the surroundings of a specific patch, we take into account its location within the tissue and the possible differences in its biological profile. As a result, we tackle the potential limitation that the scale of the input could impose. ## 3 Sepal ### Problem Formulation Given an input image patch \(X\in\mathbb{R}^{[H,W,3]}\), and \(k\) spatial neighbors \(Z\in\mathbb{R}^{[k,H,W,3]}\), we want to train an estimator \(F_{\theta}(\cdot)\) that predicts the difference between the gene expression \(y\) of patch \(X\) and the mean expressions in the training set \(\bar{y}_{\text{train}}\). Consequently, we aim to optimize a set of parameters \(\theta^{*}\) such that: \[F_{\theta^{*}}(X,Z)\approx\Delta y=y-\bar{y}_{\text{train}} \tag{1}\] Where, \(\Delta y\in\mathbb{R}^{[n_{g},1]}\) is the difference between \(\bar{y}_{\text{train}}\in\mathbb{R}^{[n_{g},1]}\) and the real gene expression \(y\in\mathbb{R}^{[n_{g},1]}\) of the patch. This paradigm shift of predicting \(\Delta y\) instead of \(y\), has the purpose of allowing our method to focus directly on the nuances in the data since we are centering the dynamic range of the prediction space around zero. ### Architecture Overview SEPAL is comprised of two stages: local learning and spatial learning, which are shown in Fig.2. While local learning follows the classic approach of finetuning and image encoder, the spatial learning of SEPAL relies on representing the input patch and its neighbors as a graph, where the central node corresponds to the image for which we want to predict the gene expression. With this representation, our model has access to the visual features in the current location and in its surroundings. 
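As a minimal sketch of this local learning stage (Eqs. (1)-(5)), the snippet below pairs a stand-in encoder with a linear head and supervises the relative expression \(\Delta y=y-\bar{y}_{\text{train}}\); the backbone, dimensions, and dummy tensors are placeholders rather than the actual SEPAL configuration.

```python
import torch
import torch.nn as nn

n_genes, d_emb = 256, 512

# Placeholder image encoder I(.); the actual SEPAL backbone may differ.
encoder = nn.Sequential(
    nn.Conv2d(3, 16, 7, stride=4), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(16, d_emb),
)
head = nn.Linear(d_emb, n_genes)          # L(.): predicts delta_y_i

# y_mean_train plays the role of \bar{y}_train, estimated once from the training split.
y_mean_train = torch.zeros(n_genes)       # stand-in value for illustration

def local_forward(x):
    emb = encoder(x)                      # I_emb, Eq. (2)
    delta_y_i = head(emb)                 # Eq. (3)
    return emb, delta_y_i

# Training supervises the *relative* expression, Eq. (1)
x = torch.randn(8, 3, 224, 224)           # dummy batch of patches
y = torch.randn(8, n_genes)               # dummy ground-truth expression
_, delta_pred = local_forward(x)
loss = nn.functional.mse_loss(delta_pred, y - y_mean_train)

# At inference, the bias is added back to recover the absolute expression, Eq. (5)
y_hat = delta_pred + y_mean_train
```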
Prior to the construction of the graphs, in the first stage of our proposal (Fig.2.A), we train a feature extractor \(I(\cdot)\) to process an input image patch \(X\) and return a low-dimensional embedding \(I_{\text{emb}}\in\mathbb{R}^{[d_{\text{emb}},1]}\). Besides, this module also outputs a local prediction \(\Delta\hat{y}_{i}\in\mathbb{R}^{[n_{g},1]}\) obtained by applying a linear layer \(L(\cdot)\) to \(I_{\text{emb}}\) as follows: \[I(X)=I_{\text{emb}} \tag{2}\] \[L(I_{\text{emb}})=\Delta\hat{y}_{i}\approx y-\bar{y}_{\text{train}} \tag{3}\] Consequently, the preliminary prediction \(\Delta\hat{y}_{i}\) is completely based on \(X\) and is later refined in the spatial learning stage. After training \(I(\cdot)\), we fix it and use it to obtain the visual features of all the patches in the dataset. We integrate these embeddings, together with a transformer-like positional encoding, to construct a local neighborhood graph \(\mathcal{G}(X)\) for each patch (Fig.2.B). Lastly, in the spatial learning stage (Fig.2.C), input graphs are processed by a GNN Module to obtain a spatial correction vector \(\hat{s}\in\mathbb{R}^{[n_{g},1]}\) which is then added to \(\Delta\hat{y}_{i}\) to obtain \(\Delta\hat{y}\). This spatially aware prediction is summed with the bias \(\bar{y}_{\text{train}}\) to present the final gene expression estimation \(\hat{y}\) for the input patch: \[\Delta\hat{y}=\hat{s}+\Delta\hat{y}_{i} \tag{4}\] \[\hat{y}=\Delta\hat{y}+\bar{y}_{\text{train}} \tag{5}\] ### Graph construction The process of building the graphs is shown in Fig.2.B and aims to follow the spatial connectivity of the WSI. Therefore, for a patch of interest \(X\), we first select the \(k\) neighbors within an \(m-\)hop vicinity of \(X\). For example, in Fig.2B \(m=1\) and \(k=6\) because of the hexagonal coordinate geometry. We join the patch and its neighbors in a single set \(P=\{X,Z\}\in\mathbb{R}^{[k+1,H,W,3]}\) and compute the visual embedding matrix \(M_{i}\in\mathbb{R}^{[d_{\text{emb}},k+1]}\) using our frozen image encoder \(I(\cdot)\). Additionally, to enrich the spatial information beyond the topology of our graphs, we calculate a positional embedding \(E_{\text{pos}}\in\mathbb{R}^{[d_{\text{emb}},1]}\) for each patch in \(P\) using the 2D transformer-like positional encoder from [33]. The inputs of that encoder are the relative coordinates of each neighbor w.r.t. the center patch. This computation gives us a positional matrix \(M_{p}\in\mathbb{R}^{[d_{\text{emb}},k+1]}\) that is added with \(M_{i}\) to give the final graph features \(M\). Summarizing, we define graphs as: \[G(X) =\mathcal{G}(P,E,M) \tag{6}\] \[M =M_{i}+M_{p} \tag{7}\] Where \(E\) is a binary and undirected set of edges defined by dataset geometry. ### Spatial Learning Module Once a graph \(\mathcal{G}(X)\) is fed to the spatial learning module, it is passed through a series of \(h\) Graph Convolutional Operators (GNN\({}_{i}(\cdot)\)) with a sequence \(C=\{d_{\text{emb}},c_{1},c_{2},\dots,c_{h-1},n_{g}\}\) of hidden channels following the recursive expression: \[g_{0} =\mathcal{G}(X) \tag{8}\] \[g_{i+1} =\sigma\left(\text{GNN}_{i}(g_{i})\right)\] (9) \[\hat{s} =\text{Pooling}(g_{h}) \tag{10}\] Where \(g_{i}\) is the representation of \(\mathcal{G}(X)\) at layer \(i\in\{0,1,2,\dots,h\}\), \(\sigma(\cdot)\) is an activation function, and the Pooling\((\cdot)\) operator represents a global graph pooling operator. The correction vector \(\hat{s}\) represents the contribution of local spatial information to the final prediction. 
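The following sketch illustrates the spatial learning stage (Eqs. (6)-(10)) with PyTorch Geometric; the graph convolution operator, the star-shaped edge set, and the mean pooling used here are illustrative assumptions, since the actual edge set \(E\) follows the dataset geometry and SEPAL's choice of operators and pooling may differ.

```python
import torch
import torch.nn as nn
from torch_geometric.data import Data
from torch_geometric.nn import GraphConv, global_mean_pool

n_genes, d_emb, k = 256, 512, 6

# One local neighborhood graph: the center patch plus its k spatial neighbors.
# M would be I_emb + positional encoding (Eq. 7); random values stand in here.
M = torch.randn(k + 1, d_emb)
# Star-like edges between the center node (index 0) and its neighbors; the real
# edge set also reflects the hexagonal/grid geometry of the coordinates.
src = torch.arange(1, k + 1)
edge_index = torch.cat(
    [torch.stack([src, torch.zeros_like(src)]),
     torch.stack([torch.zeros_like(src), src])], dim=1)
graph = Data(x=M, edge_index=edge_index)

# A small stack of graph convolutions ending in n_genes channels (Eqs. 8-10).
convs = nn.ModuleList([GraphConv(d_emb, 256), GraphConv(256, n_genes)])

def spatial_correction(g):
    h = g.x
    for conv in convs:
        h = torch.relu(conv(h, g.edge_index))   # sigma(GNN_i(g_i)), Eq. (9)
    batch = torch.zeros(h.size(0), dtype=torch.long)  # a single graph in the batch
    return global_mean_pool(h, batch).squeeze(0)      # \hat{s}, Eq. (10)

s_hat = spatial_correction(graph)   # added to delta_y_i to refine the prediction, Eq. (4)
```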
## 4 Experiments ### Datasets We evaluate our performance on two breast cancer datasets produced with different technologies: (1) the 10x Genomics breast cancer spatial transcriptomic dataset [Section 1, Section 2] (referred to as _Visium_ because of the experimental protocol), and (2) the human breast cancer in situ capturing transcriptomics dataset [27, 26] (referred to as _STNet dataset_ because of the first deep learning method that used this data). The Visium dataset contains two slide images from a breast tissue sample with invasive ductal carcinoma from one patient, each with \(3798\) and \(3987\) spots of \(\approx 55\mu\)m detected under the tissue. On the other hand, the STNet dataset consists of 68 slide images of H&E-stained tissue from 23 patients with breast cancer and their corresponding spatial transcriptomics data. Specifically, the number of spots of size \(\approx 150\mu\)m varies between \(256\) and \(712\) in each replicate, so the complete dataset contains \(30,612\) gene expression data points with their respective spatially associated image patch. For both datasets, we reshape the patches to dimension \([224,224,3]\) as input for SEPAL.

Figure 2: (A) First stage of our proposal. Pretraining of the Image Encoder \(I(\cdot)\) and a linear layer \(L(\cdot)\) to output the Image Embedding (\(I_{\text{emb}}\)) of a patch \(X\), along with a preliminary prediction \(\Delta\hat{y}_{i}\) of the difference between the expression in the patch and the mean expression in the train dataset. (B) The Graph Construction process begins with an image patch of interest and its spatial neighbors to build the graph representation based on the patch embeddings returned by the frozen \(I(\cdot)\) and the positional encoding of each neighbor. (C) Architecture of the spatial learning module, which receives as input a Spatial Graph of the patch neighborhood and applies a GNN to predict the spatial correction \(\hat{s}\) that further improves the \(\Delta\hat{y}_{i}\) to get the \(\Delta\hat{y}\) associated to the center patch of the graph and obtain the final gene expression prediction \(\hat{y}\).

### Benchmark To design a robust benchmark, we focus on three main characteristics: (1) a bioinformatic pipeline on par with current best practices in transcriptomic analysis, (2) a pepper noise filter to improve data quality and allow better model training, and (3) a selection strategy to ensure that all genes have spatial patterns. In terms of the processing pipeline, we first filter out both genes and samples with total counts outside a defined range (see Supplementary Table 1 for the detailed values in each dataset). Then, we discard genes based on their sparsity. Here, we ensure that the remaining variables are expressed in at least \(\varepsilon_{T}\) percent of the total dataset and \(\varepsilon_{WSI}\) percent of each WSI. Following the filtering, we perform TPM [32] gene normalization and a \(\log_{2}(x+1)\) transformation. To address pepper noise, we apply a modified version of the adaptive median filter [14]. In short, for each zero value in a gene map, we replace it with the median of a growing circular region around the patch of interest, up to the \(7^{th}\) unique radial distance. If no value is obtained at the end of this process, we assign the median of the nonzero entries of the WSI. The results of this procedure can be appreciated for a particularly noisy gene map in Figure 3.
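To illustrate the spirit of this imputation step, the snippet below is a much simplified NumPy sketch that replaces the zero entries of a single gene map with the median of nonzero values found within a growing radius around each spot; the ring limit, distance computation, and fallback behaviour are placeholders and do not reproduce the exact adaptive median filter of [14].

```python
import numpy as np

def impute_pepper_noise(values, coords, max_rings=7):
    """values: (n_spots,) expression of one gene; coords: (n_spots, 2) spot positions."""
    out = values.copy()
    dists = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    nonzero_median = np.median(values[values > 0]) if (values > 0).any() else 0.0
    for i in np.flatnonzero(values == 0):
        radii = np.unique(dists[i])               # unique radial distances, radii[0] == 0
        imputed = None
        for r in radii[1:max_rings + 1]:          # grow the circular region ring by ring
            neighborhood = values[(dists[i] <= r) & (values > 0)]
            if neighborhood.size:
                imputed = np.median(neighborhood)
                break
        # Fallback: median of the nonzero entries of the whole slide.
        out[i] = nonzero_median if imputed is None else imputed
    return out

# Typical use: applied gene by gene after TPM normalization and the log2(x + 1) transform.
```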
It is worth noting that the percentage of imputed values is \(5.3\%\) and \(26.0\%\) for the Visium and STNet datasets, respectively, as we have already filtered genes based on their sparsity (\(\varepsilon_{T},\varepsilon_{WSI}\)). Once the bioinformatic pipeline and the denoising procedure are complete, we select the final prediction variables with the help of Moran's I [20]. This statistic is a spatial autocorrelation measure and can detect if a given gene has a pattern over spatial graphs. The closer its value to one, the more autocorrelated the variable is. For our benchmark, we compute Moran's I for every gene and WSI and average across the slide dimension. We select the top \(n_{g}=256\) genes with the highest general Moran's I value as our final prediction variables (See supplementary Figures 4-7). Finally, if batch effects are observed in UMAP [19] embeddings (Supplementary Figures 1-3) of the data (only seen in the STNet dataset), they are corrected with ComBat [15]. Summarizing, the processed Visium and STNet datasets have a total of 7,777 and 29,820 samples, respectively, along with a set of 256 prediction genes. As the Visium dataset only contains two WSIs, we use one for training (3795 samples) and the other one as the validation/test set (3982 samples). For the STNet dataset, from the 23 patients, we randomly choose 15 for training (20,734 samples), 4 for validation (3,397 samples), and 4 for testing (5,689 samples). ### Evaluation Metrics We use three standard metrics in multivariate regression problems: global standard errors (MSE, MAE), Pearson Correlation Coefficients (PCC-Gene, PCC-Patch), and linear regression determination coefficients (R2-Gene, R2-Patch). Both PCC and R2 have gene and patch variants since they address two aspects of the problem. The gene type metrics aim to quantify how good expression maps are in general, while the patch type metrics evaluate how good multiple gene predictions are for a specific patch. For instance, to compute PCC-Gene, we obtain PCC values for each one of the \(n_{g}\) gene maps and then average over the gene dimension. Conversely, computing PCC-Patch involves calculating PCC values for each patch and the average over that dimension. Importantly, the imputed values are ignored for metric computation, and consequently, performance measurements are based exclusively on real data. ### State-of-the-art Methods We compare SEPAL to four of the most popular methods in this task, including three local options (STNet, EGN, EGGN), as well as one global method (HisToGene). For a fair comparison, we choose the best performance between 50 different training protocols. If the method allows batch size as a hyperparameter (STNet, EGN), we test combinations with an empirical Bayes approach by selecting learning rates in the logarithmic range \([10^{-2},10^{-6}]\) and batch sizes from the list \([32,64,128,256,320]\). If the method only accepts the learning rate, we perform a logarithmic grid search within the range \([10^{-2},10^{-6}]\). Both the best epoch during training and the best model of the sweep are selected based on the validation MSE. The only exception to this protocol (due to computational cost) is the STNet method in the STNet dataset, for which we report the best between the original hyperparameters and the best Visium hyperparameters. ### Architecture Optimization We extensively experiment with our spatial module, aiming to select the most effective architecture to integrate local information. 
For this purpose, we: (1) optionally introduce pre-processing and post-processing stages via multi-layer perceptrons of varying sizes, (2) allow the positional encoding to be added or concatenated during the graph construction, (3) change the number of hops \(m\) from one to three, (4) try six different convolutional operators, and (5) vary the hidden dimensions \(h\) of our graph convolutional network going from one to four layers. Furthermore, we train all architecture variations with 12 different settings of learning rate and batch size. For a detailed explanation of every tuned hyperparameter, we refer the reader to the Supplementary Material (Sec. 2). ### Implementation Details After comprehensive experimentation (Supplementary Material Sec. 3), we choose ViT-B-16 [8] as our image encoder. We use ELU as our activation function, and SAGPooling [18] as our pooling function. We train on the denoised version of the dataset but only use real data for metric computation during inference. We implement SEPAL using PyTorch [22] and PyTorch Geometric [9] for the graph operators. All experiments run on a single NVIDIA GPU.

Figure 3: Example of the pepper denoising for a specific gene map in the Visium dataset.

## 5 Results Table 1 presents the final hyperparameter configurations of SEPAL for the Visium and the STNet datasets. ### Main Results Table 2 depicts the performance of local and global state-of-the-art methods against SEPAL on the Visium and STNet datasets. Our method consistently outperforms these methods on all but one evaluation metric. In particular, we attend primarily to the standard error metrics and find that SEPAL presents significant improvements on the Visium dataset and performance on par with the state of the art for the STNet dataset. Likewise, the R2 metric calculated on the genes increases for both datasets when using SEPAL. Furthermore, we find that calculating the PCC and the R2 metrics in a gene-wise fashion results in remarkably poorer performance compared to the patch-wise evaluation. This means that predicting the distribution of the expression of a single gene in a WSI is a significantly more difficult task than aiming to obtain the expression of all the genes in one single spot. Nevertheless, despite this different trend for gene- or patch-focused evaluations, our method consistently achieves the best results. HisToGene has poorer performance on Visium than on STNet, and overall it shows the worst results on the Visium dataset. These differences within the results of HisToGene support the observation that the data scarcity of small datasets like Visium leads to deficient results in global methods. Conversely, we demonstrate that our method is able to retrieve important information from the input despite the difference in the data acquisition technologies and number of samples, since it achieves high performance on both datasets. Finally, Fig.5 shows a histogram of the PCC between the ground-truth and the predictions of each gene on Visium. None of the genes has a negative correlation, and the per-gene PCC ranges from a minimum of 0.052 up to 0.639. Overall, our model has a satisfactory performance for the evaluation of the genes selected. We observe that the PCC has an approximately normal distribution, with no evident outliers. As a side note, we also validated our data imputation protocol by obtaining the main results for the Visium dataset when training on noisy data. The metrics show a consistent drop in performance regardless of the method (Sec.
4 Supplementary Material), which supports the need for our denoising approach and sets a best practice for future works. ### Control Experiments Table 3 shows the results for the ablation experiments. Comparing the results between predicting the absolute expression (ViT) and predicting the expression variations of the genes (ViT\(+\Delta\)), we notice that the latter option has a better performance in every metric. For instance, when predicting delta variations, the MSE is 0.035 points below that of the absolute expression prediction. The PCC-Gene also increased 0.065 points with our problem formulation. These results reflect the suitability of the paradigm shift that we propose by learning the difference between \(y\) and \(\bar{y}_{\text{train}}\) instead of directly predicting \(y\). We evaluate the benefit of using a larger neighborhood to determine how raw spatial information affects gene pre \begin{table} \begin{tabular}{c c c} \hline \hline **Hyperparameters** & **STNet Dataset** & **Visium** \\ \hline Number of hops & 1 & 3 \\ Embeddings aggregation & Sum & Concat \\ Graph operator & GraphConv[16] & GCNConv[17] \\ Preprocessing Stage & - & \(d_{\text{emb}}\), 512 \\ Graph hidden channels & \(d_{\text{emb}}\), 256 & 512, 256, 128 \\ Postprocessing Stage & - & 128, 256 \\ Learning rate & \(10^{-4}\) & \(10^{-5}\) \\ Batch size & 256 & 256 \\ \hline \hline \end{tabular} \end{table} Table 1: Hyperparameters with the best performance for both datasets. \begin{table} \begin{tabular}{c|c c c|c c|c c} \hline \hline \multicolumn{2}{c}{} & \multicolumn{2}{c}{**Local**} & \multicolumn{2}{c}{**Global**} & \multicolumn{2}{c}{**Hybrid**} \\ \hline \multirow{2}{*}{**Method**} & **STNet**[12] & **EGN**[34] & **EGGN**[35] & **HisToGene**[21] & **SEPIAL** & **SEPIAL** & **SEPIAL\({}^{*}\)** \\ \cline{2-9} & MAE & 0.654 & 0.659 & 0.645 & 0.665 & **0.630** & 0.636 \\ & MSE & 0.762 & 0.772 & 0.736 & 0.784 & **0.708** & 0.712 \\ & PC-Gene & 0.300 & 0.314 & 0.313 & 0.199 & **0.383** & 0.3453 \\ & R2-Gene & 0.053 & 0.038 & 0.070 & 0.024 & **0.106** & 0.091 \\ & PC-Patch & 0.924 & 0.927 & 0.926 & 0.921 & **0.935** & 0.927 \\ & R2-Patch & 0.843 & 0.841 & 0.846 & 0.839 & **0.853** & 0.851 \\ \hline \multirow{2}{*}{**Method**} & MAE & 0.560 & 0.520 & 0.550 & 0.529 & **0.519** & 0.527 \\ & SSI & 0.537 & 0.480 & 0.549 & 0.493 & **0.478** & 0.499 \\ & PC-Gene & 0.030 & **0.064** & 0.011 & -0.007 & -0.004 & 0.002 \\ & R2-Gene & 0.165 & -0.037 & 0.228 & 0.066 & **0.028** & -0.052 \\ & PC-Patch & 0.910 & **0.911** & 0.908 & **0.911** & **0.911** \\ & R2-Patch & 0.779 & 0.805 & 0.780 & 0.799 & **0.809** & 0.802 \\ \hline \hline \end{tabular} \end{table} Table 2: Quantitative comparison with state-of-the-art methods on Visium and STNet datasets. The best performance is written in **bold**, and the second best result is underlined for each metric. *: SEPAL architecture with optimal parameters from the other dataset \begin{table} \begin{tabular}{l c c c c} \hline \hline **Method** & **ViT** & **ViT\(+\Delta\)** & **ViT\(+\Delta\)\(+\)S7** & **SEPIAL** \\ \hline MAE & 0.655 & 0.638 & 0.648 & **0.630** \\ MSE & 0.760 & 0.725 & 0.737 & **0.708** \\ PCC-Gene & 0.282 & 0.347 & 0.339 & **0.383** \\ R2-Gene & 0.053 & 0.086 & 0.065 & **0.106** \\ PCC-Patch & 0.924 & 0.927 & 0.925 & **0.928** \\ R2-Patch & 0.843 & 0.849 & 0.847 & **0.853** \\ \hline \hline \end{tabular} \end{table} Table 3: Control experiments on the Visium validation/test set. 
diction (ViT+\(\Delta\) against ViT+\(\Delta\)+S7); here, \(\Delta\) denotes predicting differences with respect to the mean expression \(\bar{y}_{\text{train}}\), and S7 denotes an input patch seven times bigger than the original one. Table 3 compares the behavior of the exact same image encoder while solely altering the scale of the patches (with seven times more visual context). For all metrics, keeping a scale of 1.0 remains the best option among the ViT architectures tested. Our findings suggest that increasing the visual coverage of an image encoder does not yield significant improvements in gene prediction. In addition, the results from SEPAL show an improvement over all metrics with respect to ViT+\(\Delta\)+S7. Notably, both SEPAL and ViT+\(\Delta\)+S7 have access to the same visual context in the WSI and are differentiated only by how spatial information is represented. This compelling outcome underscores the importance of incorporating spatial features in the description of each patch and constructing graphs to glean highly relevant information for accurate expression prediction. The performance of SEPAL shows that the predictions from the spatial module do further improve the preliminary predictions obtained during the local learning stage. Our results validate the efficacy of our novel approach, emphasizing the value of spatial interactions in gene expression prediction. ### Qualitative Results Figure 4 shows the heatmaps for the real and the predicted expression distribution of the genes with the best and worst performances. Focusing on the genes with the highest PCC, we see that for the second-best gene, the expressions both on the ground-truth and on the prediction are highly associated with the tissue color. Note that the regions with darker tissue obtain lower expression predictions, and regions with lighter tissue obtain higher expression predictions. These results suggest that our model might be basing its estimates solely on the color of the patches rather than looking for specific morphology patterns. Nevertheless, for the best gene, the predicted expressions are not uniformly the same for all dark or light tissue sections, conveying that our model does not rely only on the tone of the images and is actually learning from the spatial context of the patches and the tissue morphology.

Figure 4: Visualization of the two genes with the highest (left) and lowest (right) Pearson Correlation Coefficient. At the top is the Ground-Truth of the expression and at the bottom is the qualitative prediction of our method with its respective PCC.

Figure 5: Histogram of the Pearson correlation between the ground-truth and the prediction of each gene. The X-axis displays the values of the Pearson correlation coefficient, while the Y-axis shows the number of genes that have that particular correlation.

The predicted expressions show a lower intensity than the ground-truth for both genes, indicating that the dynamic range of SEPAL predictions may not match that of the real expression levels. Notably, for the two genes with the highest PCC, the output of our method appears over-smoothed compared to the ground-truth. An evident distinction arises when comparing the real expression, which exhibits adjacent spots with drastically different expression levels, to the predictions, where no regions display sudden changes in expression tendencies. While our model's consistent predictions showcase its strength, this attribute may also be considered a drawback when seeking to detect gene expression deviations with high spatial resolution.
Regarding the hardest genes, Fig.4 shows that the predictions tend to correspond to the mean expression value of each gene and are practically constant throughout the entire WSI. For these cases, SEPAL fails to capture the spatial dependencies, even though clear patterns are present in the ground-truths. We hypothesize that these shortcomings are due to the joint prediction of 256 genes. ## 6 Conclusions In this work, we develop a novel framework to approach the spatial gene expression prediction task by integrating local context and exploiting inductive biases inherent to the biological nature of the problem. Our proposed SEPAL consistently outperforms state-of-the-art models and closes the gap between completely global and completely local analysis. Furthermore, aligning with biological expectations, it is capable of recognizing patterns in histological data that go beyond simple color intensities. Consequently, our approach represents a significant step forward in spatial expression prediction, enhancing the applicability of deep learning methods in the context of disease analysis and precision medicine. ## 7 Acknowledgements Gabriel Mejia acknowledges the support of a UniAndes-DeepMind Scholarship 2022. This work was supported by Azure sponsorship credits granted by Microsoft's AI for Good Research Lab. We thank Andres Hernandez for his valuable help during the conception of the project.
2310.12897
Critical exponential tiltings for size-conditioned multitype Bienaymé--Galton--Watson trees
We consider here multitype Bienaym\'e--Galton--Watson trees, under the conditioning that the numbers of vertices of given type satisfy some linear relations. We prove that, under some smoothness conditions on the offspring distribution $\mathbf{\zeta}$, there exists a critical offspring distribution $\tilde{\mathbf{\zeta}}$ such that the trees with offspring distribution $\mathbf{\zeta}$ and $\tilde{\mathbf{\zeta}}$ have the same law under our conditioning. This allows us in a second time to characterize the local limit of such trees, as their size goes to infinity. Our main tool is a notion of exponential tilting for multitype Bienaym\'e--Galton--Watson trees.
Paul Thévenin
2023-10-19T16:48:15Z
http://arxiv.org/abs/2310.12897v1
# Critical exponential tiltings for size-conditioned multitype Bienayme-Galton-Watson trees. ###### Abstract We consider here multitype Bienayme-Galton-Watson trees, under the conditioning that the numbers of vertices of given type satisfy some linear relations. We prove that, under some smoothness conditions on the offspring distribution \(\boldsymbol{\zeta}\), there exists a critical offspring distribution \(\tilde{\boldsymbol{\zeta}}\) such that the trees with offspring distribution \(\boldsymbol{\zeta}\) and \(\tilde{\boldsymbol{\zeta}}\) have the same law under our conditioning. This allows us in a second time to characterize the local limit of such trees, as their size goes to infinity. Our main tool is a notion of exponential tilting for multitype Bienayme-Galton-Watson trees. ## 1 Introduction The main purpose of this paper is to study the asymptotic behaviour of multitype Bienayme-Galton-Watson trees (or BGW trees), which are a famous model of random trees used initially to describe the evolution of a population. Roughly speaking, vertices of the tree are individuals who have children independently according to a given distribution. In addition, each individual is given a type (which is for us an integer). We consider here only the case where the number \(K\) of possible types is finite. A question of interest is to take such a BGW tree \(\mathcal{T}_{n}\) conditioned to have size \(n\) (where the size of a tree is a parameter that needs to be defined), and investigate the asymptotic properties of the tree \(\mathcal{T}_{n}\) when \(n\to\infty\). In the monotype case (i.e. when \(K=1\)), the natural notion of size is the total number of vertices and the question has been extensively investigated. The first results on the structure of \(\mathcal{T}_{n}\) for \(n\) large date back to Kesten [7] (see also Janson [6]), who proves the local convergence of \(\mathcal{T}_{n}\). In words, balls of fixed radius around the root of the tree converge in distribution. The limiting object, the so-called Kesten tree, is made of an infinite spine on which i.i.d. subtrees are grafted. In another direction, still in the monotype case, Aldous [2, 3, 4] shows the convergence of the tree \(\mathcal{T}_{n}\) seen as a metric space, where edges of the tree are rescaled to have length \(n^{-1/2}\), to a limiting random metric space called Aldous' Brownian Continuum Random Tree. Analogous results exist when \(2\leq K<\infty\). In this multitype setting, different definitions of the size of a tree are possible: total number of vertices, number of vertices of type \(1\), among others. Under diverse assumptions, Penisson [11], Abraham-Delmas-Guo [1] or Stephenson [12] characterize the local limit of multitype BGW trees. On the other hand, Miermont [9] and more recently Haas and Stephenson [5] prove the convergence of multitype BGW trees, under an assumption of finite covariance, towards Aldous' Continuum Random tree. In all these results, an important assumption made on the tree is that the distribution of the offspring of a vertex is critical (in the case \(K=1\), this corresponds to the fact that the average number of children of an individual is \(1\)). Again, in the monotype case, such results are known. 
Janson [6] shows that, when \(K=1\) and an offspring distribution \(\mu\) is given, it is possible to characterize offspring distributions \(\tilde{\mu}\) with the following property: for all \(n\), let \(\mathcal{T}_{n}\) be a \(\mu\)-BGW tree and \(\tilde{\mathcal{T}}_{n}\) a \(\tilde{\mu}\)-BGW tree, conditioned to have \(n\) vertices. Then, \(\mathcal{T}_{n}\) and \(\tilde{\mathcal{T}}_{n}\) have the same distribution. These distributions \(\tilde{\mu}\) are obtained from \(\mu\) by performing an operation called exponential tilting. If there exists such a \(\tilde{\mu}\) which is critical, then local and scaling limit results that hold for \(\tilde{\mathcal{T}}_{n}\) also hold for \(\mathcal{T}_{n}\). Janson [6] covers also cases where such a \(\tilde{\mu}\) does not exist, and where condensation phenomena may appear. Our aim here is to extend the scope of these results, by generalizing the notion of exponential tilting to the multitype case. AcknowledgementsThe author would like to thank Svante Janson and Stephan Wagner for insightful discussions, comments and corrections. The author acknowledges the support of the Austrian Science Fund (FWF) under grant P33083. General notationIn the whole paper, we let \(\mathbb{N}:=\{0,1,2\ldots\}\) be the set of nonnegative integers and \(\mathbb{N}^{*}:=\{1,2,\ldots\}\) be the set of positive integers. For \(K\in\mathbb{N}^{*}\), we set \([K]=\{1,\ldots,K\}\). Furthermore, we will write \(\mathbf{0}\) for \((0,\ldots,0)\in\mathbb{R}^{d}\) for a given \(d\) (the value of \(d\) will always be made clear by the context). ## 2 Background on trees We start by recalling some definitions and useful well-known results concerning BGW trees. ### Plane trees. We first define plane trees using Neveu's formalism [10]. We let \(\mathcal{U}:=\bigcup_{k\geq 0}(\mathbb{N}^{*})^{k}\) be the set of finite sequences of positive integers, with the convention that \((\mathbb{N}^{*})^{0}=\{\varnothing\}\). By a slight abuse of notation, for \(k\in\mathbb{N}\), we write an element \(u\) of \((\mathbb{N}^{*})^{k}\) by \(u=u_{1}\cdots u_{k}\), with \(u_{1},\ldots,u_{k}\in\mathbb{N}^{*}\). For \(k\in\mathbb{N}\), \(u=u_{1}\cdots u_{k}\in(\mathbb{N}^{*})^{k}\) and \(i\in\mathbb{N}\), we denote by \(ui\) the element \(u_{1}\cdots u_{k}i\in(\mathbb{N}^{*})^{k+1}\) and by \(iu\) the element \(iu_{1}\cdots u_{k}\in(\mathbb{N}^{*})^{k+1}\). A plane tree \(t\) is a subset of \(\mathcal{U}\) satisfying the following three conditions: (i) \(\varnothing\in t\) (the tree has a root); (ii) if \(u=u_{1}\cdots u_{n}\in t\), then, for all \(k\leq n\), \(u_{1}\cdots u_{k}\in t\) (these elements are called ancestors of \(u\)); (iii) for any \(u\in t\), there exists a nonnegative integer \(k_{u}(t)\) such that, for every \(i\in\mathbb{N}^{*}\), \(ui\in t\) if and only if \(1\leq i\leq k_{u}(t)\) (\(k_{u}(t)\) will be called the number of children of \(u\), or the outdegree of \(u\), and an element of the form \(ui\) is called a child of \(u\)). The elements of \(t\) are called vertices, and we denote by \(|t|\) the total number of vertices of \(t\). Finally, we denote by \(\mathbb{T}\) the set of plane trees. Multitype plane treesFix \(K\in\mathbb{N}^{*}\) and let \([K]:=\{1,\ldots,K\}\) be the set of types. A \(K\)-type plane tree is a pair \(T:=(t,\mathbf{e}_{t})\) where \(t\in\mathbb{T}\) is a plane tree and \(\mathbf{e}_{t}:t\mapsto[K]\) is a map associating a type with each vertex of \(t\). For \(u\in t\), \(\mathbf{e}_{t}(u)\) is called the type of the vertex \(u\). 
For all \(\in[K]\), we also denote by \(N_{i}(T)\) the number of vertices \(u\) of the tree \(t\) such that \(\mathbf{e}_{t}(u)=i\). We let \(\mathbb{T}^{(K)}\) be the set of \(K\)-type plane trees and, for \(i\in[K]\), we denote by \(\mathbb{T}^{(K;i)}\) the subset of \(\mathbb{T}^{(K)}\) of trees whose root has label \(\mathbf{e}_{t}(\varnothing)=i\). ### Multitype BGW trees We now define our main model of random trees, which we call \(K\)-type BGW trees. For \(K\in\mathbb{N}^{*}\), set \(\mathcal{W}_{K}:=\bigcup_{n\geq 0}[K]^{n}\). Let \(\boldsymbol{\zeta}:=(\zeta^{(i)})_{i\in[K]}\) be a family of probability distributions on \(\mathcal{W}_{K}\). Let \((X^{i}_{u},u\in\mathcal{U},i\in[K])\) be a family of independent variables with values in \(\mathcal{W}_{K}\) such that, for all \((u,i)\in\mathcal{U}\times[K]\), \(X^{i}_{u}\) is distributed according to \(\zeta^{(i)}\). We also denote by \(|X^{i}_{u}|\) the size of the vector \(X^{i}_{u}\). Now fix \(i\in[K]\). We recursively construct a (random) \(K\)-type tree \(\mathcal{T}^{(i)}:=(t,\mathbf{e}_{t})\) with values in \(\mathbb{T}^{(K;i)}\), as follows: * \(\varnothing\in t,\mathbf{e}_{t}(\varnothing)=i\); * if \(u\in t\) and \(\mathbf{e}_{t}(u)=j\), then, for \(k\in\mathbb{N}^{*}\), \(uk\in t\) if and only if \(1\leq k\leq|X^{j}_{u}|\) and in this case \(\mathbf{e}_{t}(uk)=X^{j}_{u}(k)\). In other words, the root of \(\mathcal{T}^{(i)}\) has type \(i\) and vertices of type \(j\) in \(\mathcal{T}^{(i)}\) have children independently according to \(\zeta^{(j)}\). We call \(\mathcal{T}^{(i)}\) a \(\boldsymbol{\zeta}\)-BGW tree. Note that \(\mathcal{T}^{(i)}\) may be finite or infinite. It is useful in our context to define the _projection_ of the family \(\boldsymbol{\zeta}\). First, for any \(w\in\mathcal{W}_{K}\) and \(j\in[K]\), let \(w^{(j)}\) be the number of \(j\)'s in \(w\). Define the projection of \(w\) as the element \(p(w)=(w^{(1)},\ldots,w^{(K)})\in\mathbb{N}^{K}\). For \(i\in[K]\), denote by \(\mu^{(i)}\) the probability distribution on \(\mathbb{N}^{K}\) defined by: for all \((k_{1},\ldots,k_{K})\in\mathbb{N}^{K}\), \[\mu^{(i)}(k_{1},\ldots,k_{K})=\sum_{\begin{subarray}{c}w\in\mathcal{W}_{K}\\ p(w)=(k_{1},\ldots,k_{K})\end{subarray}}\zeta^{(i)}(w).\] It turns out that numerous asymptotic structural properties of \(\mathcal{T}^{(i)}\) only depend on the projection \(\boldsymbol{\mu}:=(\mu^{(i)},i\in[K])\). In this paper, we will only consider nondegenerate \(\boldsymbol{\zeta}\), that is, such that its projection \(\boldsymbol{\mu}\) satisfies: \[\exists i\in[K],\mu^{(i)}\left(\left\{\mathbf{z},\sum_{j\in[K]}z_{j}\neq 1 \right\}\right)>0.\] We define the mean matrix of \(\boldsymbol{\zeta}\), \(M:=(m_{i,j})_{i,j\in[K]}\) as the \(K\times K\) matrix such that \[m_{i,j}=\sum_{\mathbf{z}\in\mathbb{N}^{K}}z_{j}\mu^{(i)}(\mathbf{z}).\] In other words, \(m_{i,j}\) is the expected number of children of type \(j\) of a vertex of type \(i\). We say that \(\boldsymbol{\mu}\) is entire if, for all \(i\), the generating function \(\phi^{(i)}\) of \(\mu^{(i)}\) is entire, and we say that \(\boldsymbol{\zeta}\) is entire if its projection \(\boldsymbol{\mu}\) is entire. We say that \(\boldsymbol{\zeta}\) is critical (by convention, we will also say that its projection \(\boldsymbol{\mu}\) is critical) if the spectral radius \(\rho(M)\) of \(M\) is equal to \(1\). 
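As a concrete illustration of the projection and of the mean matrix (a toy example of ours, not taken from the literature), take \(K=2\) and suppose that vertices of type \(1\) have, with probability \(1/2\) each, either no child or one child of each type, while vertices of type \(2\) have, with probability \(1/2\) each, either no child or two children of type \(1\). Then
\[
\mu^{(1)}(0,0)=\mu^{(1)}(1,1)=\tfrac{1}{2},\qquad\mu^{(2)}(0,0)=\mu^{(2)}(2,0)=\tfrac{1}{2},\qquad M=\begin{pmatrix}\tfrac{1}{2}&\tfrac{1}{2}\\ 1&0\end{pmatrix}.
\]
The eigenvalues of \(M\) are the roots of \(\lambda^{2}-\tfrac{1}{2}\lambda-\tfrac{1}{2}=0\), namely \(1\) and \(-\tfrac{1}{2}\), so \(\rho(M)=1\) and this offspring distribution is critical.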
We say that \(\boldsymbol{\zeta}\) is irreducible (again, we also say that \(\boldsymbol{\mu}\) is irreducible) if, for all \(i,j\in[K]\), there exists \(p\in\mathbb{N}^{*}\) such that \(M^{p}_{i,j}>0\). In particular, all these properties of \(\boldsymbol{\zeta}\) only depend on its projection \(\boldsymbol{\mu}\). ### Conditioning a \(K\)-type tree History and resultsThe asymptotic structure of large multitype BGW trees has been a topic of interest in the past few years. People have in particular studied the so-called scaling limit of such trees: seeing a tree as a metric space, does \(\mathcal{T}^{(i)}\), conditioned to have a large size, converge after renormalization as a metric space? In the monotype case, the notion of size is usually the number of vertices in the tree. Under mild conditions, Aldous [2] shows that a \(\zeta\)-BGW tree conditioned to have \(n\) vertices converges, after rescaling distances by \(\sqrt{n}\), to a limiting object called Aldous' Brownian Continuum Random Tree (or CRT). In the multitype case, there are many possible notions of size, and thus many possible conditionings: by the total number of vertices, by the number of vertices of a given type or by the numbers of vertices of each type for example. Haas-Stephenson [5] (see also Miermont [9] for a slightly weaker result) proves that, under a finite covariance assumption, a \(K\)-type BGW tree \(\mathcal{T}_{n}\) conditioned to have \(n\) vertices of type \(1\) converges after renormalization towards the Brownian CRT. One of their crucial hypotheses is that the offspring distribution \(\boldsymbol{\zeta}\) that they consider must be critical. On the other hand, a lot of attention has been given to the so-called local limit of multitype BGW trees. We say that the tree \(\mathcal{T}_{*}\) is the local limit of a sequence \((\mathcal{T}_{n})\) of trees if, for any fixed \(r\geq 0\), the ball of radius \(r\) centered at the root of \(\mathcal{T}_{n}\), seen as a random rooted plane tree, converges in distribution towards the ball of radius \(r\) centered at the root of \(\mathcal{T}_{*}\). In the monotype case, Kesten [7] first introduced a discrete infinite tree, Kesten's tree, which is the local limit of size-conditioned \(\mu\)-BGW trees, where \(\mu\) is any critical offspring distribution (see Janson [6] for a proof). Recently, some multitype generalizations have been proven, under different conditionings. A vector \((a_{1},\ldots,a_{K})\in[0,1]^{K}\) of sum \(1\) being given, Penisson [11] (see also Abraham-Delmas-Guo [1]) proves under some smoothness condition the local convergence of the \(K\)-type tree \(\mathcal{T}_{\mathbf{k}(n)}\) conditioned to have \(k_{i}(n)\) vertices of type \(i\), where \(\mathbf{k}(n):=(k_{i}(n),i\in[K])\) is a sequence of vectors such that, for all \(i\), \[\lim_{n\to\infty}\frac{k_{i}(n)}{\sum_{j=1}^{K}k_{j}(n)}=a_{i}.\] Stephenson [12] shows, under an assumption of exponential moments, the local convergence of a critical multitype BGW tree conditioned on a linear combination of its type population. Again, their main assumption is that the offspring distribution \(\boldsymbol{\zeta}\) is critical. See Section 5 for more details. The main goal of this paper is to obtain such results without the criticality assumption: a non-critical distribution \(\boldsymbol{\zeta}\) being given, does a \(\boldsymbol{\zeta}\)-BGW tree (under some conditioning) admit a scaling limit or a local limit? 
In the monotype case, it turns out (see Janson [6, Section 4]) that, under very mild conditions on a distribution \(\boldsymbol{\zeta}\), a size-conditioned \(\boldsymbol{\zeta}\)-BGW tree converges after renormalization to the Brownian CRT, and locally to Kesten's tree. In particular, it is the case when \(\boldsymbol{\zeta}\) is entire, or when \(\boldsymbol{\zeta}\) is supercritical. In the multitype case, Penisson [11, Lemma 5.3] shows that, under a smoothness condition, a \(\boldsymbol{\zeta}\)-BGW tree conditioned to have \(k_{i}\) vertices of type \(i\) for all \(i\in[K]\) is distributed as a \(\tilde{\boldsymbol{\zeta}}\)-BGW tree with the same conditioning, for some critical offspring distribution \(\tilde{\boldsymbol{\zeta}}\). Our goal is to extend this result, which would allow to prove limit results for possibly non-critical multitype BGW trees. General conditioningsWe consider a fairly large class of conditionings. Fix \(L\geq 1\) and let \(\Gamma\in\mathcal{M}_{L,K}(\mathbb{R})\). Fix \(i\in[K]\), a \(L\)-tuple \(\mathbf{g}:=(g_{1},\ldots,g_{L})\in\mathbb{R}^{L}\). We consider the tree \(\mathcal{T}_{\Gamma,\mathbf{g}}^{(i)}\), which is the \(\boldsymbol{\zeta}\)-BGW tree \(\mathcal{T}^{(i)}\) under the conditioning \[\Gamma\begin{pmatrix}N_{1}(\mathcal{T}^{(i)})\\ \vdots\\ N_{K}(\mathcal{T}^{(i)})\end{pmatrix}=\begin{pmatrix}g_{1}\\ \vdots\\ g_{L}\end{pmatrix}. \tag{1}\] Observe that 1. if \(\Gamma=\begin{pmatrix}\gamma_{1}&\cdots&\gamma_{K}\end{pmatrix}\in\mathcal{M} _{1,K}(\mathbb{N})\), we are in Stephenson's case; 2. if \(\Gamma\) is the identity matrix \(\in\mathcal{M}_{K,K}(\mathbb{Z})\), we are in Penisson and Abraham-Delmas-Guo's case; 3. if \(\Gamma=\begin{pmatrix}1&0&\cdots&0\end{pmatrix}\in\mathcal{M}_{1,K}(\mathbb{Z})\), we are in Miermont and Haas-Stephenson's case. **Definition 1**.: _Fix \(L\geq 1\) and \(\Gamma\in\mathcal{M}_{L,K}(\mathbb{R})\). We say that two families \(\boldsymbol{\zeta},\tilde{\boldsymbol{\zeta}}\) are \(\Gamma\)-equivalent if the following two conditions hold:_ 1. _for all_ \(\mathbf{x}\in\mathcal{W}_{K}\)_, all_ \(i\in[K]\)_,_ \(\zeta^{(i)}(\mathbf{x})=0\) _if and only if_ \(\tilde{\zeta}^{(i)}(\mathbf{x})=0\)_;_ 2. _for all_ \(i\in[K]\)_, all_ \(\mathbf{g}\) _such that (_1_) holds for a_ \(\boldsymbol{\zeta}\)_-BGW with positive probability, we have in distribution_ \[\mathcal{T}_{\Gamma,\mathbf{g}}^{(i)}\stackrel{{(d)}}{{=}}\tilde {\mathcal{T}}_{\Gamma,\mathbf{g}}^{(i)},\] _where_ \(\tilde{\mathcal{T}}_{\Gamma,\mathbf{g}}^{(i)}\) _is a_ \(\tilde{\boldsymbol{\zeta}}\)_-BGW tree with root label_ \(i\) _conditioned on (_1_)._ Observe that, under these assumptions, irreducibility of \(\boldsymbol{\zeta}\) implies irreducibility of \(\tilde{\boldsymbol{\zeta}}\). It is clearly an equivalence relation. **Remark 2**.: _In the monotype case, it turns out that we can obtain (i) as a consequence of (ii), and thus we only need Assumption (ii). However, in the multitype case, it may happen that (ii) does not imply (i), and thus that assuming only (ii) does not define an equivalence relation. 
For example, consider the case \(K=2\), the matrix \(\Gamma:=I_{2}\) and the two distributions \(\mathbf{\zeta},\tilde{\mathbf{\zeta}}\) defined as follows:_ * \(\zeta^{(1)}(\varnothing)=\zeta^{(1)}(1,2)=1/2,\zeta^{(2)}(\varnothing)=\zeta^{( 2)}(1,2)=1/2\)_;_ * \(\tilde{\zeta}^{(1)}(\varnothing)=\tilde{\zeta}^{(1)}(1,2)=\tilde{\zeta}^{(1 )}(1,1,1,2)=1/3,\tilde{\zeta}^{(2)}(\varnothing)=\tilde{\zeta}^{(2)}(1,2)= \tilde{\zeta}^{(2)}(1,1,1,1,2)=1/3\)_._ _In this case, (ii) holds for \(\mathbf{\zeta}\) but not for \(\tilde{\mathbf{\zeta}}\). Indeed, for any \(n\geq 1\), we have that_ \[\mathbb{P}\left(N_{2}(\mathcal{T}^{(1)})=n-1|N_{1}(\mathcal{T}^{(1)})=n \right)=1,\] _while_ \[\mathbb{P}\left(N_{2}(\tilde{\mathcal{T}}^{(1)})=n-1|N_{1}(\tilde{\mathcal{T }}^{(1)})=n\right)\in(0,1).\] _We conjecture however that, if \(\Gamma\in\mathcal{M}_{1,K}(\mathbb{Z}_{+})\), then (ii) implies (i)._ We can now state our main theorem. To this end, the technical conditions that we will assume on the offspring distribution \(\mathbf{\zeta}\) are the following: 1. \(\mathbf{\zeta}\) is entire. 2. For all \(j\in[K]\), \(\zeta^{(j)}(\varnothing)>0\). 3. for all \(i\in[K]\), for \(b_{i}\) large enough, uniformly in \((b_{j})_{j\neq i}\in\mathbb{R}_{+}^{K-1}\), we have \(\frac{\partial\phi^{(i)}(b_{1},\dots,b_{K})}{\partial b_{i}}\geq\phi^{(i)}(b_ {1},\dots,b_{K})/b_{i}\). Here, recall that \(\phi^{(i)}\) denotes the generating function of \(\mu^{(i)}\), where \(\mathbf{\mu}:=(\mu^{(1)},\dots,\mu^{(K)})\) is the projection of \(\mathbf{\zeta}\). We will also only consider matrices \(\Gamma\) satisfying the following condition: 1. There exists \(\mathbf{\gamma}\in(Ker\Gamma)^{\perp}\) such that \(\mathbf{\gamma}\in(\mathbb{N}^{*})^{K}\) and \(\gamma_{i}=1\) for some \(i\in[K]\). **Remark 3**.: _Observe that (**A.1**)-(**A.3**) are technical smoothness conditions on the distribution \(\mathbf{\zeta}\), while (**B**) is only a condition on the matrix \(\Gamma\). It is not clear that these conditions can be easily lifted, see Section 6._ **Examples**.: _An interesting example is when there exist \(f_{1},\dots,f_{K}\) entire functions with nonnegative coefficients such that, for all \(i\in[K]\), \(\phi^{(i)}=e^{f_{i}-f_{i}(1,\dots,1)}\) and \(f_{i}(0,\dots,0,b_{i},0,\dots,0)\to\infty\) as \(b_{i}\to\infty\). In this case, it is clear that Assumptions (**A.1**)-(**A.3**) are satisfied. This includes, for example, exponentials of polynomials._ We can now expose our main theorem, which states the existence of critical \(\Gamma\)-equivalent distributions under Assumptions (**A.1**)-(**A.3**) and (**B**). **Theorem 4**.: _Let \(\mathbf{\zeta}\) be a probability distribution satisfying (**A.1**)-(**A.3**) and a matrix \(\Gamma\) such that (**B**) holds. Then, there exists a critical distribution \(\tilde{\mathbf{\zeta}}\) that is \(\Gamma\)-equivalent to \(\mathbf{\zeta}\). Furthermore, if \(rk(\Gamma)=1\) and \(\mathbf{\zeta}\) is irreducible, then this critical distribution is unique._ It is worth noticing that, in the monotype case, only Assumption (**A.1**) is needed, as (**A.3**), (**A.2**) and (**B**) come for free. However, the proof (see [6]) makes use of a continuity argument which is not valid anymore with two or more types. The main idea to prove Theorem 4 is to introduce a family of multitype exponential tiltings, generalizing the results of [6]. 
The assumptions made on \(\mathbf{\zeta}\) ensure the existence of a critical exponential tilting of \(\mathbf{\zeta}\) with is \(\Gamma\)-equivalent to \(\mathbf{\zeta}\). In particular, the following holds. **Corollary 5**.: _Let \(\mathbf{\zeta}\) be an irreducible distribution satisfying (**A.1**)-(**A.3**) and \(\Gamma\) satisfying (**B**). Fix \(j\in[K]\), and let \((k_{n},n\geq 1)\) be a sequence of positive integers such that \(k_{n}\to\infty\) and, for all \(n\),_ \[\mathbb{P}\left(\sum_{i=1}^{K}\gamma_{i}N_{i}(\mathcal{T}^{(j)})=k_{n}\right)>0.\] _Then, there exists a discrete infinite \(K\)-type tree \(\mathcal{T}_{*}\) such that_ \[\mathcal{T}^{(j)}_{\Gamma,k_{n}}\overset{(loc)}{\to}\mathcal{T}_{*},\] _where \(\mathcal{T}^{(j)}_{\Gamma,k_{n}}\) is the tree \(\mathcal{T}^{(j)}\) conditioned on \(\sum_{i=1}^{K}\gamma_{i}N_{i}(\mathcal{T}^{(j)})=k_{n}\)._ Overview of the paperWe start by defining the notion of multitype exponential tiltings and describe a class of \(\Gamma\)-equivalent distributions in Section 3. Then, Section 4 is devoted to the proof of our main result, Theorem 4, and Section 5 to the proof of Corollary 5, concerning local limits of non-critical multitype trees. In the last section, Section 6, we list a few open questions, mainly on the possibility of lifting our different assumptions (**A.1**)-(**A.3**) and (**B**). ## 3 Exponential tiltings In this section, we provide a sufficient criterion for two distributions to be \(\Gamma\)-equivalent. Observe that, the same way as we define equivalent families of distributions on \(\mathcal{W}_{K}\), we can define equivalent families of distributions on \(\mathbb{N}^{K}\) as follows. We denote in what follows \(\Gamma\in\mathcal{M}^{(K)}(\mathbb{R}):=\bigcup_{L\geq 1}\mathcal{M}_{L,K}( \mathbb{R})\). **Definition 6**.: _Let \(\Gamma\in\mathcal{M}^{(K)}(\mathbb{R})\). We say that two families \(\mathbf{\mu}\), \(\tilde{\mathbf{\mu}}\) odf distributions on \(\mathbb{N}^{K}\) are \(\Gamma\)-equivalent if there exist two families \(\mathbf{\zeta},\tilde{\mathbf{\zeta}}\) of distributions on \(\mathcal{W}_{K}\) such that \(\mathbf{\mu}\) is the projection of \(\mathbf{\zeta}\), \(\tilde{\mathbf{\mu}}\) is the projection of \(\tilde{\mathbf{\zeta}}\), and \(\mathbf{\zeta}\) and \(\mathbf{\zeta}\) are \(\Gamma\)-equivalent._ **Proposition 7**.: _The \(\Gamma\)-equivalence on distributions on \(\mathbb{N}^{K}\) is an equivalence relation._ Proof.: It is clear that any distribution on \(\mathbb{N}^{K}\) is the projection of a distribution on \(\mathcal{W}_{K}\). More precisely, a distribution \(\mathbf{\mu}\) on \(\mathbb{N}^{K}\) being given, we can characterize the distributions \(\mathbf{\zeta}\) on \(\mathcal{W}_{K}\) whose projection is \(\mathbf{\mu}\): \(\mathbf{\zeta}\) has projection \(\mathbf{\mu}\) if and only if there exist probability measures \((\nu_{i,e},i\in[K],e\in\mathbb{N}^{K})\) on \(\mathcal{W}_{K}\) indexed by \([K]\times\mathbb{N}^{K}\), such that for all \(e:=(e_{1},\dots,e_{K})\), \(\nu_{i,e}\) takes its values in \(\mathcal{W}_{K}^{(e)}:=\{w\in\mathcal{W}_{K},(w^{(1)},\dots,w^{(K)})=(e_{1}, \dots,e_{K})\}\), and, for any \(i\in[K]\), for any \(w\) in the set \(\mathcal{W}_{K}^{(e)}\), \(\zeta^{(i)}(w)=\mu^{(i)}(e)\times\nu_{i,e}(w)\). In other words, \(\mathbf{\zeta},\tilde{\mathbf{\zeta}}\) are obtained from \(\mathbf{\mu},\tilde{\mathbf{\mu}}\) by specifying the same ordering of the children of each vertex of the tree. 
Using this characterization, it becomes clear that the \(\Gamma\)-equivalence is an equivalence relation. The following is then immediate. **Proposition 8**.: _Let \(\Gamma\in\mathcal{M}^{(K)}(\mathbb{R})\), and let \(\mathbf{\zeta}\) be a family of distributions on \(\mathcal{W}_{K}\). Let \(\mathbf{\mu}\) be its projection. Then there exists a critical family \(\tilde{\mathbf{\zeta}}\) that is \(\Gamma\)-equivalent to \(\mathbf{\zeta}\) if and only if there exists a critical family \(\tilde{\mathbf{\mu}}\) that is \(\Gamma\)-equivalent to \(\mathbf{\mu}\)._ ### Good exponential tiltings We exhibit here a sufficient criterion for two projections \(\mathbf{\mu},\tilde{\mathbf{\mu}}\) to be \(\Gamma\)-equivalent, similar to [6, Section 4] in the monotype case. By Proposition 8, finding a critical projection \(\tilde{\mathbf{\mu}}\) that is \(\Gamma\)-equivalent to a given projection \(\mathbf{\mu}\) is the same as finding a critical distribution \(\tilde{\mathbf{\zeta}}\) that is \(\Gamma\)-equivalent to a given distribution \(\mathbf{\zeta}\) whose projection is \(\mathbf{\mu}\). We emphasize that the criterion that we will use is only a sufficient condition for two projections to be \(\Gamma\)-equivalent, and does not fully characterize the \(\Gamma\)-equivalence, contrary to the monotype case. Therefore, our main result only provides a partial answer to the question of whether there exists a critical distribution that is \(\Gamma\)-equivalent to a given one. The main concept of this section is the notion of _exponential tiltings_ for projections. **Definition 9**.: _Let \(\boldsymbol{\mu}:=(\mu^{(1)},\ldots,\mu^{(K)})\), \(\tilde{\boldsymbol{\mu}}:=(\tilde{\mu}^{(1)},\ldots,\tilde{\mu}^{(K)})\) be two families of projections on \(\mathbb{N}^{K}\). We say that \(\tilde{\boldsymbol{\mu}}\) is an exponential tilting of \(\boldsymbol{\mu}\) if there exist \(2K\) constants \(a_{1},\ldots,a_{K},b_{1},\ldots,b_{K}>0\) such that, for any \(\mathbf{k}:=(k_{1},\ldots,k_{K})\in\mathbb{N}^{K}\), any \(i\in[K]\):_ \[\tilde{\mu}^{(i)}(\mathbf{k}):=a_{i}\prod_{j=1}^{K}b_{j}^{k_{j}}\mu^{(i)}( \mathbf{k}).\] _Equivalently, for all \(i\in[K]\), all \(s_{1},\ldots,s_{K}\in[0,1]^{K}\):_ \[\tilde{\phi}^{(i)}(s_{1},\ldots,s_{K})=a_{i}\phi^{(i)}(b_{1}s_{1},\ldots,b_{K }s_{K}),\] _where we denote by \(\tilde{\phi}^{(i)}\) the generating function of \(\tilde{\mu}^{(i)}\) for \(i\in[K]\)._ It is clear that, if \(\tilde{\boldsymbol{\mu}}\) is an exponential tilting of \(\boldsymbol{\mu}\), then \(\boldsymbol{\mu}\) is an exponential tilting of \(\tilde{\boldsymbol{\mu}}\), and that \(\boldsymbol{\mu}\) is entire (resp. irreducible) if and only if \(\tilde{\boldsymbol{\mu}}\) is entire (resp. irreducible). Furthermore, the fact that \(\tilde{\mu}^{(i)}\) is a probability distribution for all \(i\in[K]\) implies that \(a_{1},\ldots,a_{K},b_{1},\ldots,b_{K}\) shall satisfy \[\begin{cases}\tilde{\phi}^{(1)}(1,\ldots,1)=1\\ \tilde{\phi}^{(2)}(1,\ldots,1)=1\\ \vdots\\ \tilde{\phi}^{(k)}(1,\ldots,1)=1,\end{cases} \tag{2}\] which is equivalent to \(a_{i}^{-1}=\phi^{(i)}(b_{1},\ldots,b_{K})\) for all \(i\in[K]\). In other words, specifying the \((b_{i},i\in[K])\) forces the values of the \((a_{i},i\in[K])\). Our first result characterizes a family of exponential tiltings that preserve the distribution of the conditioned multitype trees. **Definition 10**.: _Let \(\Gamma\in\mathcal{M}^{(K)}(\mathbb{R})\). 
We say that \((a_{i},b_{i})_{i\in[K]}\) satisfying (2) is a good exponential tilting if \(\boldsymbol{\mu}\) and \(\tilde{\boldsymbol{\mu}}\) are \(\Gamma\)-equivalent, where \(\tilde{\boldsymbol{\mu}}\) is the exponential tilting of \(\boldsymbol{\mu}\) obtained from \((a_{i},b_{i})_{i\in[K]}\)._ The interest of this definition lies in the following result. **Proposition 11**.: _Let \(\Gamma\in\mathcal{M}^{(K)}(\mathbb{R})\) and \(\{(a_{i},b_{i}),i\in[K]\}\in\big{(}(\mathbb{R}_{+}^{*})^{2}\big{)}^{K}\) satisfying (2). Define, for all \(i\in[K]\), \(c_{i}:=\log(a_{i}b_{i})\). Then, if_ \[\mathbf{c}:=(c_{i},i\in[K])\in(Ker\,\Gamma)^{\perp},\] _we have that \(\{(a_{i},b_{i}),i\in[K]\}\) is a good exponential tilting._ **Remark 12**.: _Observe that it is not an equivalence, as there may be good exponential tiltings that do not satisfy \(\mathbf{c}\in(Ker\,\Gamma)^{\perp}\)._ Proposition 11 makes clear the dependency in \(\Gamma\) of the notion of good exponential tilting: different matrices \(\Gamma\) clearly provide different notions of good exponential tiltings. As a corollary of Proposition 11, we have \(rk(\Gamma)\) degrees of freedom in the choice of a good tilting. In particular this is minimum when \(rk(\Gamma)=1\), in which case we can restrict ourselves to conditionings of the form \[\sum_{i=1}^{K}\gamma_{i}N_{i}(\mathcal{T})=Q\] for some constant \(Q\in\mathbb{R}\), that is (by (**B**)), \(\Gamma\in\mathcal{M}_{1,K}(\mathbb{N}^{*})\). As an example, Penisson [11] and Abraham-Delmas-Guo [1] consider the case \(\Gamma=Id_{K}\). The existence of a critical projection \(\Gamma\)-equivalent to \(\boldsymbol{\mu}\) can (under our assumptions) be deduced from the same result for \(\Gamma=(10\ldots 0)\in\mathcal{M}_{1,K}\). We now prove Proposition 11. Proof of Proposition 11.: Fix \(L\geq 1\) and \(\Gamma\in\mathcal{M}_{L,K}(\mathbb{R})\). Let \(j\in[K]\) and \(\mathbf{g}=(g_{1},\ldots,g_{L})\) such that \(\mathbb{P}(\mathcal{T}^{(j)}\) satisfies (1)) \(>0\). Let \(\mathbb{T}^{(j)}_{\Gamma,\mathbf{g}}\) be the set of trees \(T\) with root label \(j\) satisfying \[\Gamma\begin{pmatrix}N_{1}(T)\\ \vdots\\ N_{K}(T)\end{pmatrix}=\begin{pmatrix}g_{1}\\ \vdots\\ g_{L}\end{pmatrix}. \tag{3}\] For a tree \(T\), a vertex \(v\in T\) and \(i\in[K]\), let \(k^{(i)}_{v}(T)\) be the number of children of \(v\) in \(T\) with label \(i\). For all \(T\in\mathbb{T}^{(j)}_{\Gamma,\mathbf{g}}\), we have that \[\mathbb{P}\left(\mathcal{T}^{(j)}_{\Gamma,\mathbf{g}}=T\right)=\frac{w(T)}{Z_ {\Gamma,\mathbf{g}}},\] where \[w(T)=\prod_{i\in[K]}\prod_{v\in T,\ell(v)=i}\mu^{(i)}\left(k^{(1)}_{v}(T), \ldots,k^{(K)}_{v}(T)\right)\] and \[Z_{\Gamma,\mathbf{g}}=\sum_{U\in\mathbb{T}^{(j)}_{\Gamma,\mathbf{g}}}w(U).\] On the other hand, we have \[\mathbb{P}\left(\tilde{\mathcal{T}}^{(j)}_{\Gamma,\mathbf{g}}=T\right)=\frac {\tilde{w}(T)}{\tilde{Z}_{\Gamma,\mathbf{g}}},\] where \[\tilde{w}(T) =\prod_{i\in[K]}\prod_{v\in T,\ell(v)=i}\!\!\!\!\!\!\!\!\!\!\!a_{ i}\mu^{(i)}\left(k^{(1)}_{v}(T),\ldots,k^{(K)}_{v}(T)\right)\prod_{r=1}^{K}b^{k ^{(r)}_{v}(T)}_{r}\] \[=\prod_{i\in[K]}a^{N_{i}(T)}_{i}b^{N_{i}(T)}_{i}b^{-1}_{j}w(T)\] (the factor \(b^{-1}_{j}\) corresponds to the root label), and \[\tilde{Z}_{\Gamma,\mathbf{g}}=\sum_{U\in\mathbb{T}^{(j)}_{\Gamma,\mathbf{g}}} \tilde{w}(U).\] Hence, \(\mathcal{T}^{(j)}_{\Gamma,\mathbf{g}}\stackrel{{(d)}}{{=}}\hat{ \mathcal{T}}^{(j)}_{\Gamma,\mathbf{g}}\) if and only if \(\prod_{i\in[K]}a_{i}^{N_{i}(T)}b_{i}^{N_{i}(T)}\) is constant on \(\mathbb{T}^{(j)}_{\Gamma,\mathbf{g}}\). 
This is equivalent to \[\langle\mathbf{c},\mathbf{N}(T)\rangle\text{ is constant},\] where \(\langle\cdot,\cdot\rangle\) is the usual scalar product on \(\mathbb{R}^{K}\), \(\mathbf{c}=(c_{1},\ldots,c_{K})\) and \(\mathbf{N}(T)=(N_{1}(T),\ldots,N_{K}(T))\). In particular, \(\{(a_{i},b_{i}),i\in[K]\}\) is a good exponential tilting if \(\mathbf{c}\in S_{\Gamma}\), where \[S_{\Gamma}:=\left\{\mathbf{c}\in\mathbb{R}^{K},\forall\mathbf{x},\mathbf{y} \in\mathbb{N}^{K},\Gamma\mathbf{x}=\Gamma\mathbf{y}\Rightarrow\langle\mathbf{c },\mathbf{x}\rangle=\langle\mathbf{c},\mathbf{y}\rangle\right\}.\] Since \(\mathbb{N}-\mathbb{N}=\mathbb{Z}\), it is clear that \[S_{\Gamma} =\left\{\mathbf{c}\in\mathbb{R}^{K},\forall\mathbf{x}\in \mathbb{Q}^{K},\Gamma\mathbf{x}=0\Rightarrow\langle\mathbf{c},\mathbf{x} \rangle=0\right\}\] \[=\left\{\mathbf{c}\in\mathbb{R}^{K},Ker\,\Gamma\cap\mathbb{Q}^{K} \subseteq KerF_{\mathbf{c}}\right\},\] where \(F_{\mathbf{c}}\in(\mathbb{R}^{K})^{*}:\mathbf{x}\mapsto\langle\mathbf{c}, \mathbf{x}\rangle\) is the linear form associated to \(\mathbf{c}\). Using the fact that \(dim_{\mathbb{Q}}(Ker\,\Gamma)=dim_{\mathbb{R}}(Ker\,\Gamma)\), we get that \[S_{\Gamma} =\left\{\mathbf{c}\in\mathbb{R}^{K},Ker\,\Gamma\subseteq KerF_{ \mathbf{c}}\right\}\] \[=(Ker\,\Gamma)^{\perp}.\] In particular, \(S_{\Gamma}\) is a vector space of dimension \(dim\,S_{\Gamma}=rk(\Gamma)\). ## 4 Existence of a critical exponential tilting We prove here the first part of our main theorem, Theorem 4, stating the existence of a critical exponential tilting of any offspring entire distribution, under assumptions (**A.1**)-(**A.3**) and (**B**). To this end, by Proposition 11, we can restrict ourselves to the case where \((Ker\,\Gamma)^{\perp}=\mathbb{R}\boldsymbol{\gamma}\) for some \(\boldsymbol{\gamma}\) satisfying (**B**). Without loss of generality, we can assume that \(\gamma_{1}=1=\min\{\gamma_{i},i\in[K]\}\). In other words, without loss of generality, \(\Gamma\) is of the form \[\Gamma=(1\,\gamma_{2}\,\ldots\gamma_{K})\in(\mathbb{N}^{*})^{K}.\] ### The setting We fix the type of the root of our trees (say, \(j\in[K]\)) and condition our trees on their total weighted number of vertices \(N_{1}(\mathcal{T}^{(j)})+\sum_{i=2}^{K}\boldsymbol{\gamma}_{i}N_{i}(\mathcal{ T}^{(j)})\). Hence, we have, with the notation of Section 3: \[S_{\Gamma}=\mathbb{R}\begin{pmatrix}1\\ \gamma_{2}\\ \vdots\\ \gamma_{K}\end{pmatrix}.\] Take \(\mathbf{c}\in S_{\Gamma}\), and set \(\beta=\exp(c_{1})\). By assumption, \(c_{i}/\gamma_{i}=\log(\beta)\) for all \(i\in[K]\). The system (2) becomes \[\begin{cases}\beta\frac{\phi^{(1)}(b_{1},\ldots,b_{K})}{b_{1}}=1\\ \beta\Big{(}\frac{\phi^{(2)}(b_{1},\ldots,b_{K})}{b_{2}}\Big{)}^{1/\gamma_{2}} =1\\ \vdots\\ \beta\Big{(}\frac{\phi^{(K)}(b_{1},\ldots,b_{K})}{b_{K}}\Big{)}^{1/\gamma_{K}} =1.\end{cases} \tag{4}\] ### The tilted mean matrix. We start by connecting the spectral radius of the tilted mean matrix to the original one. For any \(\mathbf{b}\coloneqq(b_{1},\ldots,b_{K})\in(0,+\infty)^{K}\), we denote by \(\tilde{\rho}(\mathbf{b})\) the spectral radius of the mean matrix \(\tilde{M}\) of the tilted projection associated to \(\{(a_{i},b_{i}),i\in[K]\}\) (recall that, by definition, the \(a_{i}\)'s are uniquely defined by the \(b_{i}\)'s). 
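Before entering the general argument, it is instructive (this is merely our own restatement of the classical monotype computation of [6]) to record what the system (4) says when \(K=1\) and \(\gamma_{1}=1\). In that case, (4) reduces to \(\beta\phi(b)/b=1\), the tilted law is \(\tilde{\mu}(k)=b^{k}\mu(k)/\phi(b)\), and its mean, which is also \(\tilde{\rho}(b)\), equals
\[
\sum_{k\geq 0}k\,\tilde{\mu}(k)=\frac{\sum_{k\geq 0}k\,b^{k}\mu(k)}{\phi(b)}=\frac{b\,\phi^{\prime}(b)}{\phi(b)},
\]
so that finding a critical good tilting amounts to solving \(b\,\phi^{\prime}(b)=\phi(b)\). Assumption (**A.3**) guarantees precisely that the left-hand side dominates the right-hand side for \(b\) large, which is the mechanism behind the existence argument below.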
**Lemma 13**.: _For any \(\mathbf{b}\coloneqq(b_{1},\ldots,b_{k})\) satisfying (4), the spectral radius \(\tilde{\rho}(\mathbf{b})\) of \(\tilde{M}\) satisfies_ \[\tilde{\rho}(\mathbf{b})=\rho(M^{\prime}),\] _where \(M^{\prime}=\left(\beta^{\gamma_{i}}\frac{\partial\tilde{\phi}^{(i)}}{ \partial x_{j}}\left(\mathbf{b}\right)\right)_{1\leq i,j\leq K}\) and \(\rho(M)\) stands for the spectral radius of a matrix \(M\)._ Proof of Lemma 13.: It is clear that, for all \(1\leq i,j\leq k\): \[\tilde{M}_{i,j} =\frac{\partial\tilde{\phi}^{(i)}}{\partial x_{j}}(1,\ldots,1)\] \[=\frac{\beta^{\gamma_{i}}}{b_{i}}b_{j}\frac{\partial\phi^{(i)}}{ \partial x_{j}}(\mathbf{b})=\frac{b_{j}}{b_{i}}M^{\prime}_{i,j}.\] In particular, we have \[\tilde{M}=P^{-1}M^{\prime}P,\] where \(P=diag(b_{1},\ldots,b_{K})\) is the diagonal matrix with \(P_{i,i}=b_{i}\) for all \(i\in[K]\). Since \(\tilde{M}\) and \(M^{\prime}\) are similar, they have the same eigenvalues, and thus the same spectral radius. ### Proof of the main result. We now turn to the proof of the first part of Theorem 4. Let us explain the strategy of the proof. We consider the set \(A_{+}:=\{\mathbf{0}\}\cup\{\mathbf{b}\in(0,+\infty)^{K},(4)\}\) holds for some \(\beta>0\}\). We prove that, in a neighbourgood of \(\mathbf{0}\) in \((\mathbb{R}+)^{K}\), there exists a nontrivial continuous simple curve \(\mathcal{C}\) containing \(\mathbf{0}\) such that \(\mathcal{C}\subseteq A_{+}\). Furthermore, \(\tilde{\rho}(\mathbf{b})\) goes to \(0\) as \(\mathbf{b}\in A_{+}\setminus\{\mathbf{0}\}\) goes to \(\mathbf{0}\). Then, the idea is roughly speaking to follow this curve \(\mathcal{C}\) starting from \(\mathbf{0}\) and prove that it contains a point \(\mathbf{b}\in\mathbb{R}_{+}^{K}\) at which \(\tilde{\rho}(\mathbf{b})=1\). #### 4.3.1 Around the origin Our first goal is to study the set \(A_{+}\) in a neighbourhood of \(\mathbf{0}\). To this end, we introduce for all \(1\leq i,j\leq K\): \[G_{i,j}:(b_{1},\ldots,b_{K})\mapsto b_{j}^{\gamma_{i}}\left(\phi^{(i)}(b_{1}, \ldots,b_{K})\right)^{\gamma_{j}}-b_{i}^{\gamma_{j}}\left(\phi^{(j)}(b_{1}, \ldots,b_{K})\right)^{\gamma_{i}}.\] In particular, \(G_{i,j}=-G_{j,i}\) for all \(i,j\in[K]\). Since we have assumed that the \(\phi^{(i)}\)'s are all entire (Assumption (**A.1**)), for all \(i,j\in[K]\), \(G_{i,j}\) can be extended on all \(\mathbb{R}^{K}\). Clearly all these functions are holomorphic (since \(\gamma_{i}\in\mathbb{N}^{*}\) for all \(i\in[K]\) by (**B**)), and \[A_{+}=\left(\{\mathbf{0}\}\cup(0,\infty)^{K}\right)\cap\bigcap_{1\leq i<j\leq K }\left\{G_{i,j}^{-1}(0)\right\}.\] Our first result is the existence of the curve \(\mathcal{C}\) mentioned above. In other words, close to \(\mathbf{0}\), \(A_{+}\) is the graph of a function. 
It is useful to define the extension of \(A_{+}\) to \(\mathbb{R}^{K}\), and set \[A:=\mathbb{R}^{K}\cap\bigcap_{i,j\in[K]}G_{i,j}^{-1}(\{0\}).\] **Theorem 14**.: _There exists a function \(\psi:\mathbb{R}\to\mathbb{R}^{K-1}\) defined on an open neighbourhood \(V\) of \(\mathbf{0}\) in \(\mathbb{R}^{K}\) such that, for \((b_{1},\ldots,b_{K})\in V\),_ \[(b_{1},\ldots,b_{K})\in A\Leftrightarrow(b_{2},\ldots,b_{K})=\psi(b_{1}).\] _Furthermore, we have for all \(2\leq j\leq K\): \(\psi_{j}^{[s]}(\mathbf{0})=0\) for \(s\in\{0,\ldots,\gamma_{j}-1\}\) and_ \[\psi_{j}^{[\gamma_{j}]}(\mathbf{0})=\gamma_{j}!\left(\phi^{(1)}(\mathbf{0}) \right)^{-\gamma_{j}}\phi^{(j)}(\mathbf{0})\] _where \(f^{[s]}\) denotes the \(s\)-th derivative of \(f\) and \(\psi_{j}\) is the \((j-1)\)-st coordinate of \(\psi\). In particular, \(\psi_{j}^{[\gamma_{j}]}>0\)._ This ensures that the connected component of \(A\) containing \(\mathbf{0}\) is a simple curve around \(\mathbf{0}\) and that, in a neighbourhood \(V\) of \(\mathbf{0}\) in \(\mathbb{R}^{K}\), for any \((b_{1},\ldots,b_{K})\in A\cap V\), we have \((b_{1},\ldots,b_{K})=\mathbf{0}\) or all \(b_{i}\)'s have the same sign. Proof of Theorem 14.: We apply the implicit function theorem to the function \[G:(b_{1},\ldots,b_{K})\in\mathbb{R}^{K}\mapsto(G_{1,2}(b_{1},\ldots,b_{K}), \ldots,G_{1,K}(b_{1},\ldots,b_{K}))\in\mathbb{R}^{K-1}.\] This function is clearly \(C^{\infty}\) on \(\mathbb{R}^{K}\). Observe that \(G(\mathbf{0})=\mathbf{0}\). Furthermore, we have for all \(2\leq j,j^{\prime}\leq K\): \[\frac{\partial G_{1,j}}{\partial b_{j^{\prime}}}(\mathbf{0})=\left\{\begin{array} []{ll}\left(\phi^{(1)}(\mathbf{0})\right)^{\gamma_{j}}&\text{if $j=j^{\prime}$}\\ 0&\text{otherwise.}\end{array}\right.\] In particular, by (**A.2**), \(\phi^{(1)}(\mathbf{0})>0\) and the Jacobian matrix at \(\mathbf{0}\) is diagonal and invertible. The first part of the result follows by the implicit function theorem. The second part follows directly from the chain rule and the computation of \(\frac{\partial^{\sigma}G_{1,j}}{\partial b_{1}^{2}}\) for \(1\leq s\leq\gamma_{j}\). Observe that this proof is based on Assumption (**B**) and the fact that \(\gamma_{1}=1\). From now on, we denote by \(\mathcal{C}\subseteq\mathbb{R}^{K}_{+}\) the connected component of \(A_{+}\) containing \(\mathbf{0}\). By Theorem 14, \(\mathcal{C}\neq\{\mathbf{0}\}\). It is interesting to notice that, in general, the set \(A_{+}\) is not connected. We now prove that, for \((b_{1},\ldots,b_{K})\in\mathcal{C}\) close enough to \(\mathbf{0}\), the associated tilted spectral radius is at most \(1\). **Lemma 15**.: _There exists \(r>0\) such that, for all \(\mathbf{b}:=(b_{1},\ldots,b_{K})\in A_{+}\setminus\{\mathbf{0}\}\) such that \(\sup_{i\in[K]}b_{i}\leq r\), \(\tilde{\rho}(\mathbf{b})<1\)._ Proof of Lemma 15.: Define \(\rho:=\tilde{\rho}(1,\ldots,1)\), the spectral radius of the original mean matrix. Observe that, by Assumption (**A.2**) and (4), for \(\mathbf{b}\in A_{+}\setminus\{\mathbf{0}\}\) close enough to \(\mathbf{0}\), we have \(\sup_{i\in[K]}\beta^{\gamma_{i}}<\frac{1}{2}\rho^{-1}\). Furthermore, if \(b_{i}\leq 1\) for all \(i\in[K]\), then \[\rho\left(\left(\frac{\partial\phi^{(i)}}{\partial b_{j}}(b_{1},\ldots,b_{K}) \right)_{i,j\in[K]}\right)\leq\rho,\] since the spectral radius is a nondecreasing function of each coordinate (provided that they are all nonnegative). The result follows by Lemma 13. 
The interest of this lemma is the following: since \(\tilde{\rho}\) is continuous on \(\mathcal{C}\), we only need to show that there exists \(\mathbf{b}\in\mathcal{C}\) such that \(\tilde{\rho}(\mathbf{b})\geq 1\) (or directly \(\mathbf{b}\in A_{+}\) such that \(\tilde{\rho}(\mathbf{b})=1\)). To this end, we consider different cases, depending on whether or not \(\overline{\mathcal{C}}\) is compact in \(\mathbb{R}^{K}_{+}\). A first result of importance is the fact that \(\mathcal{C}\) cannot escape the cone \((0,+\infty)^{K}\), in the following sense. Recall that \(A:=\bigcap_{i,j\in[K]}G_{i,j}^{-1}(\{0\})\). **Lemma 16**.: _Let \(\mathbf{b}\in[0,\infty)^{K}\cap A\) be such that there exists \(i\in[K]\) for which \(b_{i}=0\). Then, \(b_{i}=0\) for all \(i\in[K]\)._ Proof.: Assume without loss of generality that \(b_{1}=0\). For all \(2\leq j\leq K\), since \(G_{1,j}(\mathbf{b})=0\) and \(\phi^{(1)}(b_{1},\ldots,b_{K})\neq 0\) (by Assumption (**A.2**)), we have \(b_{j}=0\). #### 4.3.2 If \(\overline{\mathcal{C}}\) is not compact Let us first consider the case where \(\overline{\mathcal{C}}\) is not compact. **Theorem 17**.: _Assume that \(\overline{\mathcal{C}}\) is not compact. Then \(\mathcal{C}\) contains a good critical exponential tilting._ Proof.: It is clear that \(\tilde{\rho}\) is continuous on \(\mathcal{C}\). Furthermore, by Lemma 15, for \((b_{1},\ldots,b_{K})\in A_{+}\smallsetminus\{\mathbf{0}\}\) close enough to \(\mathbf{0}\), we have \(\tilde{\rho}(b_{1},\ldots,b_{K})<1\). Hence, it suffices to prove that, for \((b_{1},\ldots,b_{K})\in\mathcal{C}\) far enough from \(\mathbf{0}\), we have \(\tilde{\rho}(b_{1},\ldots,b_{K})\geq 1\). By Lemma 16, we only need to consider points in the cone \(\mathbb{R}_{+}^{K}\). To this end, assume without loss of generality that \(b_{1}\to+\infty\) on \(\mathcal{C}\) (possibly along a subsequence). By Assumption (**A.3**), we have, for \(b_{1}\) large enough, uniformly in \(b_{2},\ldots,b_{K}\in\mathbb{R}_{+}\), \[\frac{\partial\phi^{(1)}(b_{1},\ldots,b_{K})}{\partial b_{1}}\geq\frac{\phi^{ (1)}(b_{1},b_{2},\ldots,b_{K})}{b_{1}}.\] Now observe that the spectral radius of a matrix is nondecreasing in all coefficients, and the spectral radius of the matrix \[\left(\frac{\partial\phi^{(1)}(b_{1},b_{2}\ldots,b_{K})}{\partial b_{1}} \mathbbm{1}_{i=j=1}\right)_{1\leq i,j\leq K}\] is \(\frac{\partial\phi^{(1)}(b_{1},b_{2}\ldots,b_{K})}{\partial b_{1}}\). We therefore get that \[\tilde{\rho}(b_{1},\ldots,b_{K})\geq\beta^{\gamma_{1}}\frac{\partial\phi^{(1) }(b_{1},b_{2}\ldots,b_{K})}{\partial b_{1}}\geq\beta^{\gamma_{1}}\frac{\phi^{ (1)}(b_{1},b_{2},\ldots,b_{K})}{b_{1}}=1.\] The result follows. #### 4.3.3 The set of degenerate points Assume now that \(\overline{\mathcal{C}}\) is compact. The rest of the proof is based on the study of the set of degenerate points, that is, points around which \(\mathcal{C}\) is not the graph of a function of one of the \(b_{i}\)'s. Let us first introduce some functions, slightly different from the \(G_{i,j}\)'s. 
For all \(i,j\in[K]\), define \[H_{i,j}(b_{1},\ldots,b_{K})=b_{j}^{1/\gamma_{j}}\left(\phi^{(i)}(b_{1},\ldots,b_{K})\right)^{1/\gamma_{i}}-b_{i}^{1/\gamma_{i}}\left(\phi^{(j)}(b_{1}, \ldots,b_{K})\right)^{1/\gamma_{j}},\] and the associated Jacobian-type matrices \(I^{(i)}\in\mathcal{M}_{K-1,K-1}(\mathbb{R})\), for \(i\in[K]\), defined as \[I^{(i)}(b_{1},\ldots,b_{K})=\left(\frac{\partial H_{i,j}}{\partial b_{j^{ \prime}}}(b_{1},\ldots,b_{K})\right)_{j,j^{\prime}\neq i}.\] Observe in particular that \(A_{+}=\{\mathbf{0}\}\cup\bigcap_{i,j\in[K]}H_{i,j}^{-1}(\{0\})\). For convenience, we still label the rows and columns of \(I^{(i)}\) by \([K]\backslash\{i\}\) and not by \([K-1]\). We also define the set of degenerate points as follows: \[E:=\left\{\mathbf{b}\in(0,\infty)^{K}:\,\forall i\in[K],\,\det I^{(i)}( \mathbf{b})=0\right\}.\] Our proof is divided into several parts, which we informally describe. First, we show that the value of \(\tilde{\rho}\) at any degenerate point in \(A_{+}\) is \(\geq 1\). Second, we show that, if \(\overline{\mathcal{C}}\) is compact, then \(\overline{\mathcal{C}}\) necessarily contains a degenerate point \(x\). Studying separately the cases \(x\in\mathcal{C}\) and \(x\notin\mathcal{C}\), we complete the proof of the existence of a good critical exponential tilting. **Theorem 18**.: _Let \(\mathbf{b}\in E\cap A_{+}\). Then, \(\tilde{\rho}(\mathbf{b})\geq 1\)._ As a corollary, we obtain the following: **Corollary 19**.: _Assume that \(\mathcal{C}\cap E\neq\emptyset\). Then, there exists a good critical exponential tilting._ Proof of Corollary 19.: This is a simple consequence of Theorem 18, the continuity of \(\tilde{\rho}\) on \(\mathcal{C}\), and Lemmas 15 and 16. The idea of the proof of Theorem 18 is to exhibit an eigenvector of \(\tilde{M}\) whose associated eigenvalue is \(1\). Proof of Theorem 18.: Let \(\mathbf{b}\in E\cap A_{+}\), and recall that the matrix \(\tilde{M}\) is defined as \[\tilde{M}_{i,j}=\frac{\partial\tilde{\phi}^{(i)}}{\partial b_{j}}(1,\ldots,1).\] Recall that, since \(\mathbf{b}\in A_{+}\setminus\{\mathbf{0}\}\), there exists \(\beta>0\) such that, for all \(i\in[K]\), \[\beta\left(\frac{\phi^{(i)}(\mathbf{b})}{b_{i}}\right)^{1/\gamma_{i}}=1\text{ (see (4)).}\] Our claim is the following: if \(\mathbf{b}\in E\), then there exists a vector \(Z\) satisfying \[\tilde{M}Z=Z. \tag{5}\] In particular, \(1\) is in the spectrum of \(\tilde{M}\) and necessarily \(\tilde{\rho}(\mathbf{b})\geq 1\). We first compute the matrix \(I^{(i)}(\mathbf{b})\) at a point of \(E\). Set for convenience \(\delta_{i}=1/\gamma_{i}\) for \(i\in[K]\). In what follows, since it is clear from the context, all functions are taken at the point \(\mathbf{b}\). By definition, for any \(j,j^{\prime}\neq i\), we have \[I^{(i)}(\mathbf{b})_{j,j^{\prime}} =\frac{\partial H_{i,j}}{\partial b_{j^{\prime}}}\] \[=b_{j}^{\delta_{j}}\frac{\partial\left[\phi^{(i)}\right]^{\delta_ {i}}}{\partial b_{j^{\prime}}}-b_{i}^{\delta_{i}}\frac{\partial\left[\phi^{(j )}\right]^{\delta_{j}}}{\partial b_{j^{\prime}}}+\mathbbm{1}_{j=j^{\prime}} \delta_{j}b_{j}^{\delta_{j}-1}\left(\phi^{(i)}\right)^{\delta_{i}}.\] We now choose, for each \(i\in[K]\), a nonzero vector \(z^{(i)}:=(z_{j}^{(i)},\,j\neq i)\in\ker I^{(i)}\setminus\{0\}\). This vector exists by assumption, since \((b_{1},\ldots,b_{K})\in E\). Again, we label its coordinates by \([K]\backslash\{i\}\) for convenience. We will construct a \(1\)-eigenvector \(Z\) of \(\tilde{M}\) as a linear combination of the \(z^{(i)}\)'s. 
To this end, let \((d_{1},\ldots,d_{K})\in\mathbb{R}^{K}\backslash\{\mathbf{0}\}\) such that \[\sum_{j=1}^{K}\frac{d_{j}}{b_{j}^{\delta_{j}}}\sum_{\begin{subarray}{c}i=1\\ i\neq j\end{subarray}}^{K}\frac{\partial\left[\phi^{(j)}\right]^{\delta_{j}}}{ \partial b_{i}}z_{i}^{(j)}=0. \tag{6}\] Define the vector \(Y\) whose coordinates satisfy \[Y_{i}=\sum_{\begin{subarray}{c}j=1\\ j\neq i\end{subarray}}^{K}d_{j}z_{i}^{(j)}.\] **Lemma 20**.: _The vector \(Z:=P^{-1}Y\) is a \(1\)-eigenvector of the matrix \(\tilde{M}\), where \(P:=\mathrm{diag}(b_{1},\ldots,b_{K})\)._ In particular, this immediately implies Theorem 18. It is quite clear that we can choose \((d_{1},\ldots,d_{K})\) so that \(Y\) is not the \(0\) vector. Indeed, the following holds: * if, for all \(i\neq j\), \(z_{i}^{(j)}=0\), then any \((d_{1},\ldots,d_{K})\in\mathbb{R}^{K}\) satisfies (6). In particular \((1,0,\ldots,0)\) works, and there exists \(i\in[K]\) such that \(Y_{i}:=z_{i}^{(1)}\neq 0\) (because \(z^{(1)}\) is an eigenvector); * otherwise, let \(i\neq j\) such that \(z_{i}^{(j)}\neq 0\), and assume without loss of generality that \(j=1\). * If \(d_{1}=0\) for all \((d_{1},\ldots,d_{K})\) satisfying (6), then it means that the set of \((d_{1},\ldots,d_{K})\) satisfying (6) is \(\{0\}\times\mathbb{R}^{K-1}\). Then, let \(\ell\in[K]\) such that \(z_{\ell}^{(2)}\neq 0\) and choose \((d_{1},\ldots,d_{K})=(0,1,0,\ldots,0)\). It satisfies (6) and \(Y_{\ell}=z_{\ell}^{(2)}\neq 0\). * otherwise, let \((d_{1},\ldots,d_{K})\) satisfying (6) with \(d_{1}\neq 0\) and \(d_{\ell}=0\) for \(\ell\notin\{1,i\}\) (such a solution exists since the space of solutions has dimension \(\geq K-1\)). We have in particular \(Y_{i}=d_{1}z_{i}^{(1)}\neq 0\). Proof of Lemma 20.: For all \(i\neq j\in[K]\), by definition of \(z^{(j)}\), we have \[0 =\sum_{i^{\prime}=1,i^{\prime}\neq j}^{K}I_{i,i^{\prime}}^{(j)}z_ {i^{\prime}}^{(j)}\] \[=\sum_{i^{\prime}=1,i^{\prime}\neq j}^{K}\left(b_{i}^{\delta_{i} }\frac{\partial\left[\phi^{(j)}\right]^{\delta_{j}}}{\partial b_{i^{\prime}}}- b_{j}^{\delta_{j}}\frac{\partial\left[\phi^{(i)}\right]^{\delta_{i}}}{ \partial b_{i^{\prime}}}\right)z_{i^{\prime}}^{(j)}+\delta_{i}b_{i}^{\delta_{i }-1}\left(\phi^{(j)}\right)^{\delta_{j}}z_{i}^{(j)}\] \[=\sum_{i^{\prime}=1,i^{\prime}\neq j}^{K}\left(b_{i}^{\delta_{i} }\frac{\partial\left[\phi^{(j)}\right]^{\delta_{j}}}{\partial b_{i^{\prime}} }-b_{j}^{\delta_{j}}\frac{\partial\left[\phi^{(i)}\right]^{\delta_{i}}}{ \partial b_{i^{\prime}}}\right)z_{i^{\prime}}^{(j)}+\beta^{-1}\delta_{i}b_{i}^ {\delta_{i}-1}b_{j}^{\delta_{j}}z_{i}^{(j)}. \tag{7}\] For any \(i\in[K]\), we have: \[\sum_{i^{\prime}=1}^{K}\frac{\partial\left[\phi^{(i)}\right]^{ \delta_{i}}}{\partial b_{i^{\prime}}}Y_{i^{\prime}} =\sum_{i^{\prime}=1}^{K}\frac{\partial\left[\phi^{(i)}\right]^{ \delta_{i}}}{\partial b_{i^{\prime}}}\sum_{j=1,j\neq i^{\prime}}^{K}d_{j}z_{i^{ \prime}}^{(j)}\] \[=\sum_{j=1}^{K}d_{j}\sum_{i^{\prime}=1,i^{\prime}\neq j}^{K} \frac{\partial\left[\phi^{(i)}\right]^{\delta_{i}}}{\partial b_{i^{\prime}}}z _{i^{\prime}}^{(j)}\] \[=\sum_{j=1}^{K}\frac{d_{j}}{b_{j}^{\delta_{j}}}\left(\sum_{i^{ \prime}=1,i^{\prime}\neq j}^{K}b_{i}^{\delta_{i}}\frac{\partial\left[\phi^{(j) }\right]^{\delta_{j}}}{\partial b_{i^{\prime}}}z_{i^{\prime}}^{(j)}\right)+ \sum_{j=1}^{K}\frac{d_{j}}{b_{j}^{\delta_{j}}}\beta^{-1}\delta_{i}b_{i}^{ \delta_{i}-1}b_{j}^{\delta_{j}}z_{i}^{(j)},\] by (7). 
Now observe that \[\sum_{j=1}^{K}\frac{d_{j}}{b_{j}^{\delta_{j}}}\left(\sum_{i^{ \prime}=1,i^{\prime}\neq j}^{K}b_{i}^{\delta_{i}}\frac{\partial\left[\phi^{(j) }\right]^{\delta_{j}}}{\partial b_{i^{\prime}}}z_{i^{\prime}}^{(j)}\right)=b_{ i}^{\delta_{i}}\sum_{j=1}^{K}\frac{d_{j}}{b_{j}^{\delta_{j}}}\sum_{i^{ \prime}=1,i^{\prime}\neq j}^{K}\frac{\partial\left[\phi^{(j)}\right]^{\delta_ {j}}}{\partial b_{i^{\prime}}}z_{i^{\prime}}^{(j)}=0,\] by definition of \((d_{1},\ldots,d_{K})\). We are thus left with \[\sum_{i^{\prime}=1}^{K}\frac{\partial\left[\phi^{(i)}\right]^{ \delta_{i}}}{\partial b_{i^{\prime}}}Y_{i^{\prime}} =\beta^{-1}\sum_{j=1}^{K}d_{j}\delta_{i}b_{i}^{\delta_{i}-1}z_{i} ^{(j)}\] \[=\beta^{-1}\delta_{i}b_{i}^{\delta_{i}-1}Y_{i},\] which can be rewritten as \[\sum_{i^{\prime}=1}^{K}\frac{\partial\phi^{(i)}}{\partial b_{i^{ \prime}}}Y_{i^{\prime}} =\beta^{-\frac{1}{\delta_{i}}}Y_{i}.\] This implies that \(M^{\prime}Y=Y\), where \(M^{\prime}\) is the matrix defined in Lemma 13. By Lemma 13 again, this is equivalent to saying that \(\tilde{M}P^{-1}Y=P^{-1}Y\). #### 4.3.4 If \(\mathcal{C}\cap E=\varnothing\). The last case to consider is the case where \(\mathcal{C}\) does not contain any element of \(E\). By the implicit function theorem, \(\mathcal{C}\) is locally, around each of the points of \(\mathcal{C}\backslash\{\mathbf{0}\}\), the graph of a function of \(b_{i}\) for some \(i\in[K]\). **Proposition 21**.: _Assume that \(\overline{\mathcal{C}}\) is compact and that \(\mathcal{C}\cap E=\varnothing\). Then, there exists a good exponential tilting in \(A_{+}\)._ Proof.: Since \(\mathcal{C}\cap E=\varnothing\), for any \(\mathbf{b}\in\overline{\mathcal{C}}\cap(\mathbb{R}_{+}^{*})^{K}\), there exists \(i\in[K]\) such that \(\det I^{(i)}(\mathbf{b})\neq 0\). Then, \(\mathcal{C}\) is a \(1\)-dimensional connected manifold with boundary \(\partial\mathcal{C}\subset\mathbb{R}_{+}^{K}\backslash(\mathbb{R}_{+}^{*})^{K}\). It is known that it is then homeomorphic to either \(\mathbb{R}\), \(\mathbb{R}_{+}\), the circle \(\mathbb{S}^{1}\) or the interval \([0,1]\). Since its boundary contains \(\mathbf{0}\), \(\partial\mathcal{C}\) is nonempty and \(\mathcal{C}\) is homeomorphic to either \([0,1]\) or \(\mathbb{R}_{+}\). If there exists a homeomorphism \(f:\mathcal{C}\rightarrow[0,1]\), then one can assume without loss of generality that \(f(\mathbf{0})=0\). In this case, let \(x:=f^{-1}(1)\). Necessarily, by Lemma 16, \(x\in(\mathbb{R}_{+}^{*})^{K}\) and \(\det I^{(i)}(x)=0\) for all \(i\in[K]\). Hence, \(x\in\mathcal{C}\cap E\), which contradicts our assumption. Therefore, there exists a homeomorphism \(f:\mathcal{C}\rightarrow\mathbb{R}_{+}\). Clearly, \(f(\mathbf{0})=0\). Consider the sequence \((x_{n})_{n\geq 1}:=(f^{-1}(n))_{n\geq 1}\). Since \(\overline{\mathcal{C}}\) is compact, \((x_{n})_{n\geq 1}\) has an accumulation point in \(\mathbb{R}_{+}^{K}\), say \(x_{\infty}\). Furthermore, \(x_{\infty}\in\overline{\mathcal{C}}\cap(\mathbb{R}_{+}^{*})^{K}\) by Lemma 16. Indeed, by Theorem 14, \(x_{\infty}\neq\mathbf{0}\). In addition, since \(\mathcal{C}\) is a manifold with boundary \(\partial\mathcal{C}=\{\mathbf{0}\}\), necessarily \(x_{\infty}\notin\mathcal{C}\). In particular, \(\det I^{(i)}(x_{\infty})=0\) for all \(i\in[K]\) and \(x_{\infty}\in E\). Observe now that, since \(A\) is closed, we have that \(x_{\infty}\in A\). Thus, if \(\tilde{\rho}(x_{\infty})=1\), then \(x_{\infty}\) corresponds to a good exponential tilting. 
If \(\tilde{\rho}(x_{\infty})\neq 1\), then by Theorem 18 we have \(\tilde{\rho}(x_{\infty})>1\). Since \(x_{\infty}\) is an accumulation point of \((x_{n})_{n\geq 1}\) and \(\tilde{\rho}\) is continuous, there exists \(n\geq 1\) such that \(\tilde{\rho}(x_{n})>1\). We conclude by continuity of \(\tilde{\rho}\) on \(\mathcal{C}\) and Lemmas 15 and 16. We can finally prove our main theorem. Proof of Theorem 4.: It is a consequence of Theorem 17, Corollary 19 and Proposition 21. ## 5 Convergence of conditioned BGW trees In this final section, we prove Corollary 5 as a consequence of Theorem 4, and deduce from it the second part of Theorem 4. Let \(\boldsymbol{\zeta}\) be an offspring distribution satisfying (**A.1**)-(**A.3**), and let \(\Gamma:=(\gamma_{1},\dots,\gamma_{K})\) satisfying (**B**). The main idea is that, by Theorem 4, there exists a critical distribution equivalent to \(\boldsymbol{\zeta}\). We then invoke [12, Theorem 3.1] to conclude the proof. ### Kesten-like trees We construct here the infinite discrete trees that appear as local limits of critical multitype BGW trees. It turns out that they all share a common structure: a unique end (infinite spine), on which are grafted independent multitype trees that are identically distributed conditionally on their root label. In view of Kesten's seminal work [7], we will call these trees Kesten-like trees. This multitype construction was first introduced in [8], see also [12, Proposition 3.1] for a proof in the broader case of multitype forests. Let \(\boldsymbol{\zeta}\) be an irreducible \(K\)-type critical distribution. The Perron-Frobenius theorem ensures that, under this irreducibility assumption, \(M\) has a real eigenvalue \(\rho>0\) of maximal modulus which is simple, and every \(\rho\)-eigenvector of \(M\) has only non-zero coordinates, all of the same sign. Denote by \(\mathbf{r}:=(r_{1},\dots,r_{K})\) the renormalized right \(1\)-eigenvector of the mean matrix \(M\). Denote by \(\hat{\boldsymbol{\zeta}}:=(\hat{\zeta}^{(1)},\dots,\hat{\zeta}^{(K)})\) the biased family of distributions defined as: \[\forall j\in[K],\forall\mathbf{x}\in\mathcal{W}_{K},\hat{\zeta}^{(j)}( \mathbf{x})=\frac{1}{r_{j}}\sum_{\ell=1}^{|\mathbf{x}|}r_{x_{\ell}}\zeta^{(j)}( \mathbf{x}),\] where \(|\mathbf{x}|\) denotes the length of \(\mathbf{x}\). In particular, \(\hat{\zeta}^{(j)}(\varnothing)=0\). **Definition 22**.: _Let \(\mathbf{\zeta}\) be a \(K\)-type critical distribution. Given a type \(i\in[K]\), we define the tree \(\mathcal{T}_{*}^{(i)}\) as follows: it is made of a spine, which is an infinite branch starting from the root, which has label \(i\). On this infinite branch, vertices have offspring distribution \(\hat{\mathbf{\zeta}}\). Given an element \(v\) of the spine, denote by \(\mathbf{w}_{v}\) its ordered list of offspring types. Then, the probability that the child of \(v\) belonging to the infinite spine is \(vj\) (that is, the \(j\)-th of its children) is proportional to \(r_{\ell(vj)}\), that is, equal to_ \[\frac{r_{\ell(vj)}}{\sum_{i=1}^{|\mathbf{w}_{v}|}r_{\ell(vi)}}.\] _Finally, on any offspring of type \(j\) of a vertex of the spine that is not itself on the spine, we graft a tree \(\mathcal{T}^{(j)}\) that is independent of the rest of the tree._ In the monotype case, the child of a vertex on the spine that will be itself on the spine is just chosen uniformly at random. Observe also that, since \(\hat{\zeta}^{(j)}(\varnothing)=0\) for all \(j\in[K]\), the spine is indeed infinite. We mention the following local limit result concerning multitype trees. 
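Before stating it, here is a minimal sampling sketch of the spine mechanism of Definition 22, assuming a hypothetical critical two-type offspring distribution of our own (it is not an example from the paper); the names `hat_zeta` and `spine_step` are ours and purely illustrative. Each `zeta[j]` maps an ordered word of children types to its probability, and the Perron eigenvector \(\mathbf{r}\) is computed from the mean matrix.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical critical 2-type offspring distribution (not from the paper).
zeta = {
    1: {(): 0.5, (2,): 0.25, (1, 2): 0.25},
    2: {(): 0.25, (1, 1): 0.75},
}

# Mean matrix M_{j,k} = expected number of type-k children of a type-j vertex.
M = np.array([[sum(p * w.count(k) for w, p in zeta[j].items()) for k in (1, 2)]
              for j in (1, 2)])
eigvals, eigvecs = np.linalg.eig(M)
i0 = np.argmax(eigvals.real)
print("Perron root:", eigvals[i0].real)          # ~1: the law is critical
r = np.abs(eigvecs[:, i0].real)                  # right Perron eigenvector (r_1, r_2)

def hat_zeta(j):
    # Size-biased law: hat_zeta^{(j)}(w) = (1/r_j) * sum_l r_{w_l} * zeta^{(j)}(w).
    return {w: sum(r[x - 1] for x in w) * p / r[j - 1] for w, p in zeta[j].items()}

def spine_step(j):
    # One step of the spine of Definition 22: draw the children word of the
    # current spine vertex from hat_zeta, then pick the spine child with
    # probability proportional to r_{type}.
    words, probs = zip(*hat_zeta(j).items())
    w = words[rng.choice(len(words), p=np.array(probs) / sum(probs))]
    weights = np.array([r[x - 1] for x in w])
    return w, w[rng.choice(len(w), p=weights / weights.sum())]

j = 1
for _ in range(5):
    w, j = spine_step(j)
    print("children types:", w, "-> next spine type:", j)
```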
**Theorem 23** (Stephenson [12], Theorem 3.1).: _Assume that \(\mathbf{\zeta}\) is nondegenerate, critical and irreducible, and that \(\mathbf{\zeta}\) has small exponential moments, in the sense that_ \[\exists z>1,\forall i\in[K],\sum_{w\in\mathcal{W}_{K}}\zeta^{(i)}(w)z^{\sum w _{i}}<\infty.\] _Fix in addition \(\Gamma:=(\gamma_{1},\ldots,\gamma_{K})\in\mathcal{M}_{1,K}(\mathbb{N})\), such that at least one of the \(\gamma_{i}\)'s is nonzero. Fix \(j\in[K]\), and let \((k_{n})_{n\geq 1}\) be a sequence of positive integers going to \(+\infty\), such that, for all \(n\),_ \[\mathbb{P}\left(\sum_{i=1}^{K}\gamma_{i}N_{i}\left(\mathcal{T}^{(j)}\right)=k _{n}\right)>0.\] _Then, we have_ \[\mathcal{T}_{\Gamma,k_{n}}^{(j)}\underset{n\rightarrow\infty}{\overset{(d)}{ \rightarrow}}\mathcal{T}_{*}^{(j)},\] _where \(\mathcal{T}_{*}^{(j)}\) is the multitype Kesten tree associated to \(\boldsymbol{\zeta}\)._ Corollary 5 is now just a consequence of Theorems 4 and 23. Proof of Corollary 5.: Let us consider a critical distribution \(\tilde{\mathbf{\zeta}}\) which is \(\Gamma\)-equivalent to \(\mathbf{\zeta}\). Such a distribution exists by Theorem 4. It is clear that, since \(\mathbf{\zeta}\) is entire, nondegenerate and irreducible, the same holds for \(\tilde{\mathbf{\zeta}}\). In particular, since it is entire, it has small exponential moments. The result follows. We finally prove the second part of Theorem 4. End of the proof of Theorem 4.: Observe that, for any \(j\in[K]\), the distribution of the tree \(\mathcal{T}_{*}^{(j)}\) of Theorem 23 uniquely determines \(\tilde{\mathbf{\zeta}}\). The uniqueness of a critical distribution that is \(\Gamma\)-equivalent to \(\mathbf{\zeta}\) then follows directly. ## 6 Open questions Here are some related open questions, mainly about the assumptions that we make on \(\boldsymbol{\zeta}\). **(Q.1)**: Is it possible to loosen (**B**), to allow matrices \(\Gamma\) with coefficients equal to \(0\)? This would allow us to use the results of Miermont [9], Haas-Stephenson [5] and Stephenson [12] to obtain, for free, new limiting results for noncritical trees. Furthermore, (**B**) is only used at one point in the proof, to prove Theorem 14. **(Q.2)**: To our knowledge, no scaling limit result exists when \(\Gamma\) is not \(\left(1,0,\ldots,0\right)\). Such results for critical trees would imply, by Theorem 4, the same result for a larger class of trees. **(Q.3)**: As in the monotype case, it is possible to loosen Assumption (**A.1**). However, this would lead to new technical difficulties that we prefer not to tackle in this paper. **(Q.4)**: It would be interesting to loosen (**A.2**), and allow some types to always have children. This assumption is not relevant in the critical case, and should not be in our case either. However, it is central in the proof of Theorem 4, and it does not seem clear how to get rid of it. **(Q.5)**: The strongest assumption made in this paper is Assumption (**A.3**), about the behaviour of \(\tilde{\rho}\) on \(\mathcal{C}\) far from the origin. Although such an estimate seems to be mandatory in our case, the result of Theorem 4 seems to hold even without this assumption. Proving it would however require a different argument. **(Q.6)**: Following Remark 2, in the definition of \(\Gamma\)-equivalence for distributions, does (ii) imply (i) if \(\Gamma\in\mathcal{M}_{1,K}(\mathbb{Z}_{+})\)? **(Q.7)**: We can study the set \(A_{+}\) directly in the case \(rk(\Gamma)\geq 2\). For instance, if \(rk(\Gamma)=K\), we have \(A=\mathbb{R}^{K}\). 
Is \(A_{+}\) a \(rk(\Gamma)\)-dimensional manifold with boundary? What can be said about it? **(Q.8)**: Is Corollary 5 still true for \(rk(\Gamma)\geq 2\)?
2307.11789
Corrected thermodynamics of nonlinear magnetic-charged black hole surrounded by perfect fluid dark matter
In this paper, we investigate the influence of perfect fluid dark matter and quantum corrections on the thermodynamics of the non-linear magnetic-charged black hole. We consider the metric of the static non-linear magnetic-charged black hole in the background of perfect fluid dark matter. Using the event horizon property we find the black hole mass and, based on the surface gravity definition, the Hawking temperature; the first law then yields the uncorrected entropy. Using the definition of the corrected entropy due to thermal fluctuations, we find and plot the entropy of the black hole. We find that the entropy is strongly affected for smaller non-linear magnetic-charged black holes. Afterwards, we study the thermodynamic stability of the black hole by computing and plotting the evolution of the heat capacity. The results show that a second-order phase transition occurs, which appears later as the dark matter parameter decreases, and which leads the black hole to move from the stable phase to the unstable phase. Furthermore, we show that the heat capacity of smaller black holes is also affected, since it is no longer a simply increasing function.
Ragil Brand Tsafack Ndongmo, Saleh Mahamat, Thomas Bouetou, Conrad Bertrand Tabi, Timoléon Crépin Kofané
2023-07-20T18:41:13Z
http://arxiv.org/abs/2307.11789v3
Corrected thermodynamics of non-linear magnetic-charged black hole surrounded by perfect fluid dark matter ###### Abstract In this paper, we investigate the influence of perfect fluid dark matter and quantum corrections on the thermodynamics of non-linear magnetic-charged black hole. We consider the metric of the static non-linear magnetic-charged black hole in the background of perfect fluid dark matter. Starting with the black hole temperature and the corrected entropy, we use the event horizon propriety in order to find the temperature, and based on the surface gravity definition, we find the uncorrected entropy. However, using the definition of the corrected entropy due to thermal fluctuation, we find and plot the entropy of the black hole. We find that the entropy is highly affected for smaller non-linear magnetic-charged black holes. Afterwards, we study the thermodynamic stability of the black hole by computing and plotting the evolution of heat capacity. The results show that second-order phase transition occurs, which appears more later as the dark matter parameter decreases, and leads the black hole to move from the stable phase to the unstable phase. Furthermore, we show that the heat capacity for smaller black holes are also affected, since it appears not being only an increasing function. ## 1 Introduction Black holes represent one of the most fascinating objects studied in astrophysics and cosmology. The mathematical framework necessary to study them is the General theory of relativity, established by Einstein [1]. At the center of the spacetime deformed by a black hole there is a singularity, at which both curvature and density becomes infinite, and physical laws are broken down [2]. Also, it is believed that a spacetime with singularity would appear with a well constructed quantum gravity theory. To solve this singularity problem, several models have been constructed, and they are called regular black holes. One of them, the Bardeen black hole, has an event horizon which satisfies the weak energy condition [3, 4]. The Bardeen solution has been reobtained by introducing an energy-momentum tensor, considered as the gravitational field of some sort of a non-linear magnetic monopole charge \(Q\)[5]. This kind of solution can also be called non-linear magnetic-charged black hole. Hereby, this is why this alternative has received considerable attention [6, 7, 8, 9, 10]. Since the establishment of the black holes mechanical laws and their analogy between the thermodynamic laws, it has been suggested that black holes can be studied as thermodynamic objects, having a temperature and an entropy, and moreover a volume, a heat capacity, and so on [11, 12, 13, 14]. This is why the black hole thermodynamic has been topic of study in many works [15, 16, 17, 18, 19, 20, 21]. Another interesting feature of black holes when studying the thermodynamic is the phase transition. Indeed, it has been shown that black holes undergo a phase transition ; in the AdS/CFT correspondence, through the black hole heat capacity [15]. Hence, it has been studied in several works(see [4, 15, 22, 23, 24, 25, 26]), in order to see how the black hole behaves after a phase transition. Furthermore, through the Ehrenfest classification, the black hole can undergo a first or second-order phase transition, as a discontinuity appears on the plot of the first or second derivative of the free enthalpy. 
For example, one of the second derivatives of the free enthalpy is the heat capacity, which is necessary to study the thermodynamic stability of the black hole. Indeed, if the heat capacity is negative (or positive), then the black hole is unstable (or stable). Black holes are nowadays also considered at very small scales, especially those formed right after the Big Bang, called primordial black holes [27]. Given their sizes, it is necessary to take a quantum theory of gravity into account. The logarithmic approach is one of the predicted models considered as a result of quantum corrections [28, 29, 30]. Indeed, it has been introduced to investigate what the leading-order corrections are when the size of the black hole is reduced. Therefore, this correction has been widely studied for many black holes. For example, Upadhyay et _al._ studied the effect of the correction parameter on the thermodynamic behavior of a static black hole in \(f(R)\) gravity [31]. Other studies have taken thermal fluctuations into account for charged rotating black holes [32], regular black holes [33] and Horava-Lifshitz black holes [34]. This motivates us to study how quantum corrections affect the thermodynamic behavior of the non-linear magnetic-charged black hole. According to the standard model of cosmology, the Universe is filled with a strange form of matter called dark matter, which constitutes about 23% of the total mass-energy of the universe [35]. Its effects are seen in galaxies, where it makes the outer parts of galaxies rotate faster than expected from their starlight. As theoretical candidates for dark matter, we have Cold Dark Matter (CDM) [36], Warm Dark Matter [37, 38] and Scalar Field Dark Matter [39, 40]. Another solution, perfect fluid dark matter, is also widely used, because it has been shown that perfect fluid dark matter can explain the asymptotically flat rotation curves of spiral galaxies [41]. This has encouraged many works to consider perfect fluid dark matter in the study of black holes [42, 43, 44, 45]. In this paper, we work out the effects of perfect fluid dark matter and thermal fluctuations on the thermodynamic behaviour of the non-linear magnetic-charged black hole. The paper is organized as follows. First, through the horizon property and the surface gravity definition, we determine the black hole mass and the Hawking temperature. Secondly, we use them to find the corrected entropy due to quantum fluctuations, and then analyze the effects of perfect fluid dark matter and of the correction parameter. Then, we analyze the thermodynamic stability of the black hole through the evolution of the specific heat and see what role the correction parameter plays. ## 2 The Hawking Temperature and the corrected entropy The metric of the static spherically symmetric solution of the Einstein equations describing the non-linear magnetic-charged black hole in the background of perfect fluid dark matter is expressed as [46, 45] \[ds^{2}=-f(r)dt^{2}+\frac{1}{f(r)}dr^{2}+r^{2}(d\theta^{2}+\sin^{2}\theta d \phi^{2}), \tag{1}\] with \(f(r)=1-\frac{2Mr^{2}}{r^{3}+Q^{3}}+\frac{\alpha}{r}\ln\frac{r}{|\alpha|}\). Here, \(M\) is the black hole mass, \(Q\) the magnetic charge and \(\alpha\) the dark matter parameter. Using the horizon property [47, 25], and solving the following equation at the horizon \[f(r_{h})=0, \tag{2}\] leads to \[M=\frac{(r_{h}^{3}+Q^{3})}{2r_{h}^{2}}\left(1+\frac{\alpha}{r_{h}}\ln\frac{r _{h}}{|\alpha|}\right). \tag{3}\] 
Eq. (3) gives the relation between the black hole mass and its horizon radius. Here, we will focus on the event horizon to perform the thermodynamic analysis. Through the definition of the surface gravity at the horizon [47], the Hawking temperature \(T_{h}\) is given by \[T_{h}=\frac{\kappa}{2\pi}=\frac{f^{\prime}(r_{h})}{4\pi}. \tag{4}\] Taking into account Eq. (3) and the expression of the metric function \(f(r)\), we obtain the following Hawking temperature \[T_{h}=\frac{1}{4\pi(r_{h}^{3}+Q^{3})}\left[\frac{r_{h}^{3}-2Q^{3}}{r_{h}}+\frac {\alpha}{r_{h}^{2}}\left(r_{h}^{3}+Q^{3}-3Q^{3}\ln\left(\frac{r_{h}}{|\alpha|} \right)\right)\right]. \tag{5}\] In figure (1), we plot the temperature of the non-linear magnetic-charged black hole surrounded by perfect fluid dark matter. On this plot, we can see that the black hole temperature increases and reaches a maximum before decreasing. Furthermore, this figure shows that this maximum increases for higher values of the dark matter parameter \(\alpha\). Let us note that the case \(\alpha=0\) corresponds to the temperature of the black hole without dark matter. For the black hole studied here, we need to write the first law of black hole thermodynamics and then find the entropy before computing the corrected entropy. The first law is expressed as [47] \[dM=T_{h}dS_{0}+\Phi_{h}dQ+\beta_{h}d\alpha, \tag{6}\] where \(S_{0}\) represents the entropy at equilibrium without thermal fluctuations, i.e. the uncorrected entropy. Let us note that \(S_{0}\), the magnetic charge \(Q\) and the dark matter parameter \(\alpha\) form a complete set of extensive variables. \(T_{h}\) is the Hawking temperature at the horizon and \(\Phi_{h}\) is the potential. \(\beta_{h}\) is the quantity conjugate to the dark matter parameter \(\alpha\). Now, from the first term of (6), we can find the formula of the uncorrected entropy \(S_{0}\), given by \[S_{0}=\int\frac{1}{T_{h}}dM=\int\frac{1}{T_{h}}\frac{\partial M}{\partial r_{ h}}dr_{h}. \tag{7}\] To compute this, we have to find \(\frac{\partial M}{\partial r_{h}}\) from Eq. (3). The obtained expression is \[\frac{\partial M}{\partial r_{h}}=\frac{1}{2r_{h}^{2}}\left\{\frac{r_{h}^{3}- 2Q^{3}}{r_{h}}+\frac{\alpha}{r_{h}^{2}}\left[r_{h}^{3}+Q^{3}-3Q^{3}\ln\left( \frac{r_{h}}{|\alpha|}\right)\right]\right\}. \tag{8}\]
Figure 1: Change of the black hole temperature \(T_{h}\).
Introducing Eqs. (8) and (5) into Eq. (7), we get the black hole entropy at equilibrium, \[S_{0}=2\pi\int\left(\frac{r_{h}^{3}+Q^{3}}{r_{h}^{2}}\right)dr_{h}=\pi r_{h}^{2}\left( 1-\frac{2Q^{3}}{r_{h}^{3}}\right). \tag{9}\] Here, we can notice that this result is the same as the one obtained in the presence of quintessence dark energy, found by Nam [47]. Hence, we can say that perfect fluid dark matter does not affect the evolution of the entropy of the non-linear magnetic-charged black hole. Now, we will compute the corrected entropy at equilibrium \(S\), whose general formula is expressed as [29] \[S=S_{0}-\beta\ln(S_{0}T_{h}). \tag{10}\] Here, \(\beta\) is called the correction parameter, and takes only two values: if \(\beta=0\), Eq. (10) describes the uncorrected entropy, and for \(\beta=\frac{1}{2}\), Eq. (10) describes the corrected entropy due to thermal fluctuations. 
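As a quick numerical illustration of Eqs. (2), (3) and (5), the following sketch (in Python, with hypothetical parameter values \(Q=1\) and \(\alpha=0.5\); these choices are ours and are not taken from the paper's figures) recovers the horizon radius from \(f(r_{h})=0\) and evaluates the Hawking temperature.

```python
import numpy as np
from scipy.optimize import brentq

Q, alpha = 1.0, 0.5          # hypothetical magnetic charge and dark matter parameter

def f(r, M):
    # Metric function of Eq. (1)
    return 1 - 2 * M * r**2 / (r**3 + Q**3) + (alpha / r) * np.log(r / abs(alpha))

def mass_from_horizon(rh):
    # Eq. (3): mass as a function of the event horizon radius
    return (rh**3 + Q**3) / (2 * rh**2) * (1 + (alpha / rh) * np.log(rh / abs(alpha)))

def hawking_temperature(rh):
    # Eq. (5)
    return (1 / (4 * np.pi * (rh**3 + Q**3))) * (
        (rh**3 - 2 * Q**3) / rh
        + (alpha / rh**2) * (rh**3 + Q**3 - 3 * Q**3 * np.log(rh / abs(alpha)))
    )

# Consistency check: the mass of Eq. (3) puts the horizon back at f(r_h) = 0.
rh = 2.0
M = mass_from_horizon(rh)
print("f(r_h) =", f(rh, M))                    # ~0
print("T_h(r_h) =", hawking_temperature(rh))

# Conversely, recover the (outer) horizon radius from f(r, M) = 0 for this M.
print("recovered r_h =", brentq(lambda r: f(r, M), 1.5, 100.0))

# The temperature first increases, reaches a maximum and then decreases.
radii = np.linspace(1.2, 20.0, 200)
temps = hawking_temperature(radii)
print("maximum of T_h over the sampled range at r_h ≈", radii[np.argmax(temps)])
```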
Substituting Eqs. (5) and (9) into Eq. (10), we obtain \[S=\pi r_{h}^{2}\left(1-\frac{2Q^{3}}{r_{h}^{3}}\right)-\beta\ln\left\{\frac{ \left(1-\frac{2Q^{3}}{r_{h}^{3}}\right)}{16\pi(r_{h}^{3}+Q^{3})^{2}}\left[ \frac{r_{h}^{3}-2Q^{3}}{r_{h}}+\frac{\alpha}{r_{h}^{2}}\left(r_{h}^{3}+Q^{3}- 3Q^{3}\ln\left(\frac{r_{h}}{|\alpha|}\right)\right)\right]^{2}\right\}. \tag{11}\] In order to better appreciate the impact of thermal fluctuations on the black hole entropy, we plot it in figure (2). Analyzing this plot, we remark that for higher values of the horizon radius, the entropy increases linearly, just as if the system were in equilibrium. However, at lower values of the horizon radius, while the equilibrium entropy increases (\(\beta=0\)), the corrected entropy (\(\beta=0.5\)) shows a phase of decrease, with a messy behavior of the entropy for smaller values of the dark matter parameter. This result means that the thermal fluctuations violate the second law of thermodynamics, as also shown in [29, 30, 32, 48, 49]. Furthermore, we notice that the effect of thermal fluctuations can be neglected for larger black holes.
Figure 2: Variation of the corrected entropy \(S\) for different values of \(\alpha\) and \(\beta\).
## 3 Heat capacity and the thermodynamic stability
Here, the thermodynamic stability of the black hole will be studied by computing and plotting the corresponding heat capacity, which is expressed as \[C=T_{h}\left(\frac{\partial S}{\partial T_{h}}\right)_{Q,\alpha}=T_{h}\left( \frac{\partial S}{\partial r_{h}}\frac{\partial r_{h}}{\partial T_{h}}\right)_{Q,\alpha}. \tag{12}\] After computing it, we get the following expression \[\begin{array}{rcl}C&=&\frac{A}{B},\\ \text{with }A&=&2\left(r_{h}^{4}-2r_{h}Q^{3}+\alpha r_{h}^{3}+\alpha Q^{3}\left(1- \ln\left(\frac{r_{h}}{|\alpha|}\right)\right)\right)\big(6\pi Q^{12}+9\pi Q^ {9}r_{h}^{3}-3\pi Q^{3}r_{h}^{6}+15Q^{9}\beta r_{h}+30Q^{6}\beta r_{h}^{4}\\ &-&12Q^{3}\beta r_{h}^{7}\alpha\ln\left(\frac{r_{h}}{|\alpha|}\right)-2\pi Q^{12} \alpha+4\pi Q^{12}r_{h}-5\pi Q^{9}\alpha r_{h}^{3}+4\pi Q^{9}r_{h}^{4}-3\pi Q ^{6}\alpha r_{h}^{7}+\pi Q^{3}\alpha r_{h}^{9}-2\pi Q^{3}r_{h}^{10}\\ &+&\pi\alpha r_{h}^{12}+\pi r_{h}^{13}-11Q^{9}\alpha\beta r_{h}+6Q^{9}\beta r_{ h}^{2}-12Q^{6}\alpha r_{h}^{4}+21Q^{6}\beta r_{h}^{5}-12Q^{3}\beta r_{h}^{8}+ \alpha\beta r_{h}^{10}\big),\\ \text{and }B&=&r_{h}(2Q^{3}-r_{h}^{3})(3Q^{3}\alpha\ln\left(\frac{r_{h}}{| \alpha|}\right)-\alpha Q^{3}+2Q^{3}r_{h}-\alpha r_{h}^{3}-r_{h}^{4})(6Q^{6} \alpha\ln\left(\frac{r_{h}}{|\alpha|}\right)+15Q^{3}r_{h}^{3}\alpha\ln\left( \frac{r_{h}}{|\alpha|}\right)\\ &-&5Q^{6}\alpha+2Q^{6}r_{h}-7Q^{3}\alpha r_{h}^{3}+10Q^{3}r_{h}^{4}-2\alpha r_ {h}^{6}-r_{h}^{7}).\end{array} \tag{13}\] In figure (3), we plot the heat capacity \(C\) of the non-linear magnetic-charged black hole in the background of perfect fluid dark matter. Analysing its plot, especially subfigure (3)(a), which corresponds to the absence of quantum corrections, we can see the presence of a discontinuity for each dark matter parameter \(\alpha\). Physically, this means that the black hole undergoes a second-order phase transition. This phase transition leads the black hole to move from the stable phase (\(C>0\)) to the unstable phase (\(C<0\)). Moreover, through subfigure (3)(b), we see that thermal fluctuation does not modify the region of occurrence of the second-order phase transition. 
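Rather than manipulating the closed form (13), the qualitative behaviour of the heat capacity can also be checked numerically from Eqs. (5), (9), (10) and (12). The sketch below (Python, with the same hypothetical values \(Q=1\), \(\alpha=0.5\) as in the previous sketch and \(\beta=1/2\)) differentiates \(S\) and \(T_{h}\) with respect to \(r_{h}\) by finite differences and locates the sign change of \(\partial T_{h}/\partial r_{h}\), where \(C\) diverges.

```python
import numpy as np

Q, alpha, beta = 1.0, 0.5, 0.5      # hypothetical parameters, as in the previous sketch

def T(rh):
    # Eq. (5)
    return (1 / (4 * np.pi * (rh**3 + Q**3))) * (
        (rh**3 - 2 * Q**3) / rh
        + (alpha / rh**2) * (rh**3 + Q**3 - 3 * Q**3 * np.log(rh / abs(alpha)))
    )

def S(rh):
    # Eq. (10) with S_0 from Eq. (9): S = S_0 - beta * ln(S_0 * T_h)
    S0 = np.pi * rh**2 * (1 - 2 * Q**3 / rh**3)
    return S0 - beta * np.log(S0 * T(rh))

def heat_capacity(rh, h=1e-6):
    # Eq. (12): C = T_h * (dS/dr_h) / (dT_h/dr_h), by central finite differences
    dS = (S(rh + h) - S(rh - h)) / (2 * h)
    dT = (T(rh + h) - T(rh - h)) / (2 * h)
    return T(rh) * dS / dT

radii = np.linspace(1.5, 30.0, 2000)
dT = np.gradient(T(radii), radii)
sign_change = np.where(np.diff(np.sign(dT)) != 0)[0]
print("C diverges (second-order transition) near r_h ≈", radii[sign_change])
print("C just below / just above:",
      heat_capacity(radii[sign_change[0]] - 0.1),
      heat_capacity(radii[sign_change[0]] + 0.1))
```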
Taking quantum corrections into account, however, and looking at subfigures (3)(b) and (3)(c), we see that the heat capacity is no longer simply an increasing function, but evolves with two peaks. This implies that thermal fluctuations affect smaller black holes.
Figure 3: Change of the heat capacity \(C\) of the non-linear magnetic-charged black hole in the background of perfect fluid dark matter with and without quantum corrections for \(Q=1\).
## 4 Conclusion
In summary, we studied the effects of perfect fluid dark matter and quantum corrections on the thermodynamics of the non-linear magnetic-charged black hole. First of all, we used the horizon property to find the black hole mass, and the surface gravity to find the temperature. From the plot of the temperature, we showed that it undergoes a phase of decrease after having increased and reached a maximum. Furthermore, we showed that perfect fluid dark matter increases the maximum of the temperature. Secondly, we found the corrected entropy due to thermal fluctuations. Analyzing its behavior revealed that thermal fluctuations impact small-size black holes, since the corrected entropy violates the second law of thermodynamics, leading to a non-linear evolution and a decrease of the entropy. However, we showed that thermal fluctuations do not have a great effect on the entropy of larger black holes. Thirdly, in order to study the effects of dark matter and thermal fluctuations on the stability of the black hole, we plotted the heat capacity. Hence, we showed that the black hole undergoes a second-order phase transition. Although the phase transition appears at the same place with or without quantum corrections, we showed that the heat capacity is affected by thermal fluctuations for smaller black holes.
2305.07792
Wigner and friends, a map is not the territory! Contextuality in multi-agent paradoxes
Multi-agent scenarios, like Wigner's friend and Frauchiger-Renner scenarios, can show contradictory results when a non-classical formalism must deal with the knowledge between agents. Such paradoxes are described with multi-modal logic as violations of the structure in classical logic. Even if knowledge is treated in a relational way with the concept of trust, contradictory results can still be found in multi-agent scenarios. Contextuality deals with global inconsistencies in empirical models defined on measurement scenarios even when there is local consistency. In the present work, we take a step further to treat the scenarios in full relational language by using knowledge operators, thus showing that trust is equivalent to the Truth Axiom in these cases. A translation of measurement scenarios into multi-agent scenarios by using the topological semantics of multi-modal logic is constructed, demonstrating that logical contextuality can be understood as the violation of soundness by supposing mutual knowledge. To address the contradictions, assuming distributed knowledge is considered, which eliminates such violations but at the cost of lambda-dependence. We conclude by translating the main examples of multi-agent scenarios to their empirical model representation, contextuality is identified as the cause of their contradictory results.
Sidiney B. Montanhano
2023-05-12T22:51:13Z
http://arxiv.org/abs/2305.07792v4
# Contextuality in multi-agent paradoxes ###### Abstract Multi-agent scenarios show contradictory results when a non-classical formalism must deal with the knowledge between agents. An interesting way to observe such paradoxes is by describing these scenarios with multi-modal logic, where the paradoxes reveal violations of the structure in classical logic. Even if knowledge is treated in a relational way with the concept of trust, contradictory results can still be found. Here, I take a step further to treat the scenarios in full relational language by using knowledge operators. I show that trust is equivalent to the Truth Axiom and by rewriting the non-contextuality conditions in logical form, I also demonstrate that logical contextuality can be understood as the violation of soundness by supposing mutual knowledge. Loosening this condition by assuming distributed knowledge eliminates such violations but at the cost of lambda dependence. Finally, the main examples of multi-agent scenarios are translated to their empirical model representation, and contextuality is identified as the cause of their contradictory results. ## I Introduction Multi-agent paradoxes [1; 2; 3] are violations of agreement among agents about some global information. Famous examples are generalizations of Wigner's friend paradox, which is itself a generalization of Schrodinger's famous thought experiment with his cat. Its formal construction uses the language of modal logic to show how a certain situation, found both in quantum theory and in other non-classical theories beyond quantum theory, presents a violation in the structure of classical logic. On the other hand, contextuality in its standard definition [4] deals with global inconsistencies in measurements even when there is local consistency, that is, even if the model is non-disturbing, generalizing the famous phenomena of non-locality and the condition of non-signaling between observers. Its formal description uses the categorical language of sheaves and presheaves [5], allowing the construction of bundle diagrams for each model. Equivalently, a model being contextual implies the inability to describe it classically even with hidden variables. In this paper, we use the topological semantics of multi-modal logic (specifically the \(S4\) system) to first explore the use of trust [3; 6] when the knowledge operators are explicitly used, in the spirit that knowledge is a relational concept: something is not just known, it must be known for someone. Trust can be understood as a relational way to define the truth axiom; in fact, they are equivalent when seen by the topology induced by distributed knowledge of the agents, as shown in section II. We can thus use the knowledge operators and trust to create a translation between multi-agent scenarios and empirical models up to restrictions. The violation of soundness described in [6] that appears as the failure of classical logic to deal with quantum theory is identified as the hidden imposition of mutual knowledge on the agents, which implies the conclusion that modal logic fails to deal with multi-agent paradoxes. But such a problem disappears when distributed knowledge is imposed, which also happens with contextuality by translating the contextuality conditions to multi-modal language with the cost of lambda-dependence. In section III, we explore these points further. 
Next, in section IV, we work out the three examples of multi-agent paradoxes: the Wigner's friend scenario, the Frauchiger-Renner scenario, and the Vilasini-Nurgalieva-del Rio scenario, in topological semantics and translate them to their sheaf representation. We then identify contextuality as the origin of their paradoxes when they appear. In section V, we provide some commentaries about the limitations of empirical models to deal with generic multi-agent scenarios. The appendices serve to fix the notation used and give the basics of modal logic in appendix A, and the sheaf approach to contextuality in appendix B. ## II Knowledge and Trust We will work with system **S4**, in particular its topological semantics, but without assuming the true axiom **T** directly. Instead, we will use trust between agents, as suggested by [6]. For an introduction to the logical content of what follows, see Appendix A. ### Trust Knowledge, mutual knowledge, and distributed knowledge operators are important to write formulas when one imposes the following principle: there is no knowledge without an agent. Such principle can be understood as the embodiment of the obvious idea that fundamental truth1 is a philosophical position, rather than an empirical fact. Footnote 1: Here, in the sense of being absolute to all agents. One can only assume something is true for all agents, but not test such a thing. In a sense, it is the logical formalization of Alfred Korzybski’s statement “A map is not the territory”[7]. Therefore, any formula must be valued through a knowledge operator. The axioms **K** and **4** show no problem once the operator is present. But \(\mathbf{T}\) uses the notion of fundamental truth, thus some philosophical complications appear. Let's ignore them by allowing beliefs to be on the same level as knowledge, and any further mechanism beyond the scope of this paper to distinguish them, which we assume here for the sake of simplicity. Once there is no absolute notion of knowledge, we must find knowledge by trust between agents. **Axiom 1** (Trust).: _The trust relation \(\leadsto\) between agents \(i\) and \(j\) is given by_ \[(j\leadsto i)\leftrightarrow(K_{i}K_{j}\phi\to K_{i}\phi)\forall\phi, \tag{1}\] _meaning "\(i\) trusts \(j\)"._ Note that an agent \(i\) could not trust a set of agents \(G\) separately but only when seen as an entity. In this sense, "\(i\) trusts \(G\)" is defined as \[(G\leadsto i)\leftrightarrow(K_{i}D_{G}\phi\to K_{i}\phi)\forall\phi, \tag{2}\] \(i\) trusts \(G\) if and only if, for all propositions, the knowledge of \(i\) that the distributed knowledge of \(G\) implies the knowledge of \(i\). See that this is the weakest way to describe such a relation, where all agents in \(G\) could not know \(\phi\) individually. ### Topology of knowledge The topology semantics is deeply related to knowledge. The definition of the knowledge operator \(K\) in Kripke semantics can be rewritten as: \[(M,w\models K\phi)\leftrightarrow(M,U^{w}\models\phi) \tag{3}\] In other words, in the world \(w\), an agent knows something if and only if for all worlds in the element \(U^{w}\) of the topological basis of the Alexandrov topology, that something is true. Here again, we have the problem of fundamental truth, with the important property by \(\mathbf{T}\) that \(w\in U^{w}\), which allows one to interpret \(U^{w}\) as the natural neighborhood of \(w\). In this sense, an agent knows something in a world if it is true in a neighborhood of such a world. 
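To make the correspondence in Eq. (3) concrete, here is a minimal sketch (in Python, with a toy four-world **S4** frame of our own; the names `U`, `K` and `phi` are illustrative and not objects from the paper) that computes a knowledge operator from an accessibility relation, i.e. evaluates \(K\phi\) at \(w\) by checking \(\phi\) on the basic open set \(U^{w}\) of the Alexandrov topology.

```python
# Toy S4 (reflexive and transitive) Kripke frame with four worlds.
worlds = {0, 1, 2, 3}
R = {(0, 0), (1, 1), (2, 2), (3, 3),          # reflexivity (axiom T)
     (0, 1), (0, 2), (0, 3), (1, 3), (2, 3)}  # already transitively closed

def U(w):
    """Basic open set of the Alexandrov topology: all worlds accessible from w."""
    return {v for v in worlds if (w, v) in R}

def K(prop):
    """Knowledge operator: K(prop) holds at w iff prop holds on all of U(w)."""
    return {w for w in worlds if U(w) <= prop}

phi = {1, 3}        # a proposition, identified with the set of worlds where it is true

print("[[K phi]] =", K(phi))                          # worlds where the agent knows phi
print("T:  K phi -> phi ?", K(phi) <= phi)            # axiom T (truth)
print("4:  K phi -> K K phi ?", K(phi) <= K(K(phi)))  # axiom 4 (positive introspection)
```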
Epistemic logic with more than one agent defines an Alexandrov topology for each accessibility relation, which can be interpreted as different ways the agents see the worlds. We have the relationship \[(K_{i}\phi\to K_{j}\phi)\leftrightarrow(R_{j}\subseteq R_{i})\leftrightarrow( \tau_{j}\subseteq\tau_{i}) \tag{4}\] between the knowledge operators, the induced relation, and the topology, respectively, in the Kripke and topological semantics. In particular, one can show that the relationship \[(K_{i}\phi\to D_{I}\phi)\leftrightarrow(R_{D_{I}}\subseteq R_{i}) \leftrightarrow(\tau_{D_{I}}\subseteq\tau_{i}) \tag{5}\] holds. Also, one can show that a fundamental property of distributed knowledge is \(D_{I}\phi\rightarrow\phi\), the distributed knowledge of something implies the truth of it, which follows from the true axiom \(\mathbf{T}\). Once the finest topology generated by all agents is \(\tau_{D_{I}}\), we get the following. **Proposition 1**.: _Trust is equivalent to \(\mathbf{T}\) by defining \(\tau_{D}\) induced by \(D_{I}\) as the topology of fundamental truth, i.e. \(\phi\to D_{I}\phi\)._ By \(\phi\leftrightarrow D_{I}\phi\), the fundamental truth is defined as the distributed knowledge. With it, one can recover \(\mathbf{T}\) from trust, once \(K_{i}D_{I}\phi\to K_{i}\phi\) for all \(\phi\), all agents trust the distributed knowledge. See that there is no way to access any information deeper than the one given by \(\tau_{D_{I}}\). Therefore, one can say there is a limit of knowledge a set of agents can access, and there is no way to distinguish such a limit from a fundamental limit of reality2. Footnote 2: In this sense, it is not surprising that an isolated population that becomes in contact with another one can suffer a huge impact on their culture. If they survive, their fundamental truth usually loses its fundamentally. ## III Contextuality ### An equivalence: measurements and contexts Let's construct a way to think about an empirical model as a multi-agent scenario. For simplicity, we will deal with finite objects. A natural identification of an agent in a measurement scenario is the restriction of an agent for each measurement, as in the known multi-agent scenarios. This imposition differs from agents that have more measurements, as in the standard Bell scenario. By distributed knowledge we have \[K_{i}(K_{i}\phi)\to D_{I}(K_{i}\phi)\to K_{i}\phi, \tag{6}\] thus an agent trust itself, which makes no sense when an agent can choose between incompatible measurements3. Therefore, our first identification is Footnote 3: The agents here cannot choose their measurement. If they could, each measurement will define different agents that cannot trust each other once their measurements are incompatible. * Agents are the minimal measurements of a scenario. Here we need to define what makes someone trustworthy and what constitutes secrets. **Definition 2**.: _An agent \(j\) is trustworthy to the agent \(i\) if \(K_{i}K_{j}\phi\to K_{j}\phi\), i.e. if any information that agent \(i\) knows from agent \(j\) must also be known by agent \(j\)._ **Definition 3**.: _There are no secrets of agent \(j\) to the agent \(i\) if it holds that_ \[(K_{j}\phi\wedge(K_{i}K_{j}\phi\to K_{i}\phi))\leftrightarrow K_{i}\phi, \tag{7}\] _i.e. any information that agent \(j\) knows is also known by agent \(i\) given that \(j\leadsto i\)._ One can show that axioms \(\mathbf{T}\), \(\mathbf{K}\) and \(\mathbf{4}\) imply that no secrets and trustworthy are equivalent concepts. 
With no secrets and defining \(\phi=K_{i}\psi\) we get from \(\mathbf{T}\) and \(\mathbf{K}\) that \((K_{j}K_{i}\psi\wedge(K_{i}K_{j}\psi\to K_{i}\psi))\leftrightarrow K_{i}K_{i}\psi \to K_{i}\psi\), and from \(\mathbf{4}\) we have \(K_{i}\psi\to K_{i}K_{i}\psi\). These conditions are important to forbid any hidden information; thus, trust implies that the topology of the one that trusts is finer than the trustworthy part. With trust, an agent can reconstruct all the information its trustworthy part gives, which is all their information. An agent with terminal trust connection can, under these conditions, reconstruct the information of all the agents, obtaining a global vision of the knowledge. On the other hand, the definition of a context allows the construction of stochastic maps between subcontexts, and such maps can define the probabilities of the context given the marginals. It is thus natural to identify a context as a set of subcontexts that trust one another, where the trust relation is exactly the possibility of constructing such stochastic maps [8]. Therefore, our second identification is: * Trust is the existence of stochastic maps between subcontexts. With these two identifications, we can rewrite the contexts as a set of agents identified as fundamental measurements and trust relation being defined between contexts: if two contexts \(G^{\prime}\) and \(G^{\prime\prime}\) are subcontexts of a context \(G\), then they trust each other once both can reconstruct from \(G\) the information of each other by marginalization since they are trustworthy and there are no secrets4. Footnote 4: A condition called flasque beneath the cover in the literature of the sheaf approach to contextuality. ### An equivalence: events To rewrite an empirical model as a multi-agent scenario, we need to identify the events. Naturally, they must be identified with the possible worlds, but some issues appear. Worlds are defined globally, but events are not, and such distinction is related to the topology we will deal with. The strategy is to use pointless topology, following the fact that one cannot know the fundamental possible worlds, even if they exist, but only the propositions it can access. In other words, the worlds are defined by the propositions and not the other way around, thus the possible worlds must be defined by the topology an agent has access, as the elements of a basis to such topology. In a connected measurement scenario, any agent is a terminal one in the trust between sets of agents. In logical terms, \(K_{i}\phi\to E_{i}\phi\), if an agent knows, then any agent knows too, implying \(\tau_{i}=\tau_{E_{I}}\) for all \(i\in I\). Let's call \(\mathcal{B}_{E_{I}}\) the basis of \(\tau_{E_{I}}\). We define the possible worlds \(\Sigma=\mathcal{B}_{E_{I}}\) and \(R_{i}\) of each agent given by \(\tau_{i}=\tau_{E_{I}}\). One can readily see that the elements of \(\mathcal{B}_{E_{I}}\) are global and atomic objects, such as global events. * Global events define a basis topology of the mutual knowledge. Therefore, any global description of an empirical model is given by the possible worlds \(\Sigma=\mathcal{B}_{E_{I}}\) induced by the mutual knowledge. Thus, by the Fine-Abramsky-Brandenburger Theorem 16, mutual knowledge is the knowledge that explains non-contextual models. In an analogous way, we can identify local sections as the elements of the basis of the topology induced by the mutual knowledge of their respective context. 
Since in a context \(G\) every subcontext trusts the others, we have \(D_{G}=E_{G}\): all distributed knowledge is described by mutual knowledge between the agents, and each of them has the information of all of \(G\). In particular, for an agent \(i\) we have \(D_{i}=E_{i}=K_{i}\), as expected. ### Possibilistic contextuality as violation of soundness Previously, we saw that if an agent is terminal in the trust relationship, it has access to all the information of the other agents and sets of agents. Therefore, it can reconstruct the global view of the multi-agent scenario, and every other terminal agent will also agree with this description. What happens if the agents cannot agree on their global description? Well, one can argue that trust between agents and the sharing of information are not enough to access all the information of a scenario. In this case, \(D_{I}\neq E_{I}\). In other words, the fundamental truth cannot be accessed by any agent individually. In an empirical model with Boolean valuation, the equation that represents non-contextuality is as follows \[\mu_{R}^{\mathcal{O}U}(A)=\sum_{\lambda\in\Lambda}p\left(\lambda\right)\prod_{x\in U}\mu_{R}^{\mathcal{O}}(\rho^{\prime}(U,x)(A)). \tag{8}\] This equation has every function as a Boolean function, thus outcome-determinism is satisfied. It also evaluates a formula \(\phi\) by asking if, given all the possible worlds, one can semantically evaluate \(\phi\) from them. Translating it into topological semantics, non-contextuality means \[\bigvee_{\lambda\in\Lambda}(\lambda\wedge(\lambda\to\phi))\models\phi, \tag{9}\] where it is clear, since we are in an \(\mathbf{S4}\) system, that \[\bigvee_{\lambda\in\Lambda}(\lambda\wedge(\lambda\to\phi))\vdash\phi. \tag{10}\] Therefore, it is the violation of soundness which gives contextuality in logical form. However, it does not mean that modal logic is inadequate, as it is sound and complete in the topological semantics. The problem is that we are supposing that all the global descriptions must agree, which is untrue. In other words, the worlds we are constructing in our scenario are too simple; they are given by \(E_{I}\), thereby ignoring any information outside the mutual knowledge, thus implying the wrong topological semantics. To correct this description, let us rewrite the logical equations by replacing what we are supposing. The real set of worlds is \(\Sigma=\mathcal{B}_{D_{I}}\), but we are supposing that \(\mathcal{B}_{E_{I}}\), a coarse-graining of the fundamental truth, is enough, implying \[\bigvee_{E_{I}\lambda\in\mathcal{B}_{E_{I}}}(E_{I}\lambda\wedge(E_{I}\lambda \to K_{i}\phi))\vDash K_{i}\phi. \tag{11}\] This semantic equation does not always hold, even if \[\bigvee_{E_{I}\lambda\in\mathcal{B}_{E_{I}}}(E_{I}\lambda\wedge(E_{I}\lambda \to K_{i}\phi))\vdash K_{i}\phi \tag{12}\] holds syntactically. This last one says that if one can describe \(\phi\) with elements of \(\mathcal{B}_{E_{I}}\) that are true, then the agents know it, which differs from the semantic equation, where all \(\phi\) must be described by it. Therefore, contextual models show a difference between \(E_{I}\) and \(D_{I}\) by depending on more information. A way to codify this information is to describe every detail of the agents, including their trust relation, in the possible worlds. This is the case where the model presents \(\lambda\)-dependence: the worlds depend on the contexts. The elements of the basis are the sets of local events together with their respective contexts. 
The topology generated by these context-tagged events is the finest one the agents can construct, thus one can identify it as \(\tau_{D_{I}}\), and by soundness and completeness one readily obtains that5 Footnote 5: This answers affirmatively a claim in [6] that the inclusion of the contexts as data of the propositions avoids logical contradictions in the Frauchiger-Renner scenario. \[\bigvee_{D_{I}\lambda\in\mathcal{B}_{D_{I}}}(D_{I}\lambda\wedge(D_{I}\lambda \to K_{i}\phi))\vDash K_{i}\phi \tag{13}\] if and only if \[\bigvee_{D_{I}\lambda\in\mathcal{B}_{D_{I}}}(D_{I}\lambda\wedge(D_{I}\lambda \to K_{i}\phi))\vdash K_{i}\phi. \tag{14}\] Therefore, modal logic is adequate to deal with the apparent violations if we do not restrict the knowledge to mutual knowledge, which we usually implicitly do. ## IV Multi-agent scenarios The known examples of multi-agent scenarios satisfy some properties:
* the "information-preserving memory update", which implies the same data being accessed by all the agents, allowing the application of the trustworthy and no-secrets conditions;
* the data in the states of the system plus the friends before and after the measurement are isomorphic, allowing us to ignore the system and deal only with agents accessing the same data;
* the trust relation is defined only for agents and not for sets of agents, which means that the relation can be represented by a directed graph;
* the trust relation is symmetric, allowing the definition of contexts with only two agents.
Therefore, one can construct an empirical model from them using the previous equivalence and ask for paradoxes only by inquiring about contextuality in the sheaf approach. In particular, they are 1-contextuality scenarios [8]. The measurement scenario in this case allows only contexts with two fundamental measurements, so we can describe the cover of contexts as a graph, where the measurements are identified with the vertices and the maximal contexts with the edges6. Footnote 6: Contextuality here will only appear when dealing with loops in the graph [9], hence the term 1-contextuality. The natural generalization is \(n\)-contextual scenarios, as seen in [8]. ### Wigner's Friend scenario The standard Wigner's Friend scenario is defined with Alice \(A\) performing a measurement on the system \(R\), and with Wigner \(W\) describing \(R\) and \(A\) in an entangled state due to her previous measurement. It asks about the differing points of view of Alice and Wigner on the fundamental nature of the probabilities involved. The scenario deals with an initial state \(\ket{\phi}=\alpha\ket{0}+\beta\ket{1}\), with Alice's measurement in the basis \(\{\ket{0},\ket{1}\}\). The problem here is where to put Heisenberg's cut, before or after Alice. From Alice's point of view, after her measurement, the state is in a classical probability distribution \(p_{R}(0)=\alpha^{2}\) and \(p_{R}(1)=\beta^{2}\), and if she has already observed the result, it is certain to be one given eigenvalue. However, from Wigner's point of view, \(R\) and \(A\) define a system \(R\otimes A\) in a superposition described by \(\ket{\phi}\), thus the system and therefore Alice are described by a quantum superposition of states. There is no empirical contradiction here, as the classical probability distribution and the quantum state will give the same probabilities, and no disagreement appears between Alice and Wigner. The problem that the Wigner's Friend scenario brings up is of an ontological nature: what is really happening with Alice? 
Let Wigner do a measurement in the system given by \(R\otimes A\). We identify Alice and Wigner as the agents and we ignore the system \(R\). There are two possible scenarios. The first scenario deals with Wigner's measurement being compatible with Alice's one, thus both trusting each other, defining a context \[A\rightsquigarrow W. \tag{15}\] This scenario allows an analysis dealing only with the measurement scenario. Since there is only one context, it must be non-contextual. This translates the fact cited before that there is no empirical contradiction. The second scenario changes the basis in which Wigner performs his measurement to an incompatible one, for example \(\ket{+}=\sqrt{\frac{1}{2}}\left(\ket{0}+\ket{1}\right)\) and \(\ket{-}=\sqrt{\frac{1}{2}}\left(\ket{0}-\ket{1}\right)\). To Wigner, Alice's measurement is represented as a unitary transformation on \(R\otimes A\) that changes Alice's state to a superposition. To him, the probabilities will be \(p_{R\otimes A}(+)=\frac{(\alpha+\beta)^{2}}{2}\) and \(p_{R\otimes A}(-)=\frac{(\alpha-\beta)^{2}}{2}\). To Alice, there is no probability at all if she already saw the measurement result, on a similar footing with the previous scenario, and Wigner's measurement will just project the reduced state onto his new basis. The problem here is that she knows her result and Wigner erased it with his measurement; no contradiction arises, since the measurement erased Alice's memory as well7. Again, this allows an analysis dealing only with the measurement scenario, the difference being that there are two non-connected contexts, which once isolated must form a non-contextual empirical model. Again, there is no empirical contradiction. Footnote 7: There is the problem of how to do it with a macroscopic entity, but this is not the point here. ### Frauchiger-Renner scenario The Frauchiger-Renner scenario [1] starts with an entangled state \[\ket{\phi}=\sqrt{\frac{1}{3}}\ket{0}\otimes\ket{0}+\sqrt{\frac{2}{3}}\ket{1} \otimes\sqrt{\frac{1}{2}}(\ket{0}+\ket{1})\,. \tag{16}\] between two systems, \(R\) and \(S\), each measured in the basis \(\{\ket{0},\ket{1}\}\) by a respective friend, Alice \(A\) or Bob \(B\). The systems \(R\otimes A\) and \(S\otimes B\) are then measured by Ursula \(U\) and Wigner \(W\), respectively, in the basis \(\{\ket{+},\ket{-}\}\), with \(\ket{+}=\sqrt{\frac{1}{2}}\left(\ket{0}+\ket{1}\right)\) and \(\ket{-}=\sqrt{\frac{1}{2}}\left(\ket{0}-\ket{1}\right)\). First, the trust relation. A locality argument can be used to describe who trusts whom. As we can ignore \(R\) and \(S\), the agents are Alice, Bob, Ursula, and Wigner. Trust is symmetric, and Alice's (Bob's) measurement is incompatible with Ursula's (Wigner's) measurement. Thus we get \(A\rightsquigarrow W\), \(U\rightsquigarrow B\), \(A\rightsquigarrow B\), and \(U\rightsquigarrow W\). Once we are given the outcomes of the measurements, we can define the possible worlds using the knowledge operators of each agent. The topology induced by the mutual knowledge \(E_{I}\) is generated by the elements of the basis, which consist of all \(2^{4}\) combinations of the outcomes from the four agents. The outcome of a single agent is represented by the union of all the elements of this basis that contain it. The valuation is given by the initial state, taking into account the "information-preserving memory update", but it can only be calculated for sets of agents that mutually trust. \(\ket{\phi}_{A\rightsquigarrow B}\) can be written exactly like Eq. 
16, while the state that will be measured by \(U\rightsquigarrow W\) will be \[\ket{\phi}_{U\rightsquigarrow W}= \sqrt{\frac{1}{12}}(\ket{+}+\ket{-})\otimes(\ket{+}+\ket{-}) \tag{17}\] \[+\sqrt{\frac{1}{3}}\left(\ket{+}-\ket{-}\right)\otimes\ket{+},\] and for \(U\rightsquigarrow B\) \[\ket{\phi}_{U\rightsquigarrow B}= \sqrt{\frac{1}{6}}\left(\ket{+}+\ket{-}\right)\otimes\ket{0} \tag{18}\] \[+\sqrt{\frac{1}{6}}\left(\ket{+}-\ket{-}\right)\otimes\left(\ket{0}+\ket{1}\right),\] and finally for \(A\rightsquigarrow W\) \[\ket{\phi}_{A\rightsquigarrow W}=\sqrt{\frac{1}{6}}\ket{0}\otimes(\ket{+}+\ket{-})+\sqrt{\frac{2}{3}}\ket{1}\otimes\ket{+}. \tag{19}\] Labelling the outcomes \(+\) and \(-\) of Ursula and Wigner as 0 and 1, respectively, we can construct the table of probabilities shown in Table 1. The assumptions in [1] are as follows: * (Q) All agents use quantum theory. * (C) Agents can use the results from another agent. * (S) A measurement by an agent has an output defined for that agent. Let's follow the sequence of trust presented in [6]: \[A\rightsquigarrow B\rightsquigarrow U\rightsquigarrow W\rightsquigarrow A. \tag{20}\] If Ursula measures \(\ket{-}\), then Bob must measure \(\ket{1}\) since \(p(10\,|\,U\rightsquigarrow B)=0\). Consequently, Alice must measure \(\ket{1}\) since \(p(01\,|\,A\rightsquigarrow B)=0\), and Wigner must measure \(\ket{+}\) since \(p(11\,|\,A\rightsquigarrow W)=0\). However, as shown in Table 1, Wigner can measure \(\ket{-}\) since \(p(11\,|\,U\rightsquigarrow W)=\frac{1}{12}\), contradicting Ursula's conclusion of \(p(11\,|\,U\rightsquigarrow W)=0\). This is the violation presented in [1]. The empirical model can be constructed directly from Table 1, in agreement with the equivalence presented previously. The possible worlds are defined as the basis of the topology generated by the mutual knowledge \(E_{I}\) and identified as the global events. The empirical model that results from the valuation is non-disturbing, as one can directly verify, and contextual. Using the non-contextual fraction, one finds \(NCF=\frac{5}{12}\). The possibilistic bundle diagram of Table 1 is given by Figure 1. Note that the local section 11 of the context \(U\rightsquigarrow W\) is not contained in any possibilistic global event. Herein lies the similarity with Hardy's model; both show possibilistic but not strong contextuality. Imposing Ursula's conclusion, \(p(11\,|\,U\rightsquigarrow W)=0\), the induced possibilistic empirical model becomes non-contextual, showing that this event is the cause of the possibilistic contextuality, thus the cause of the multi-agent paradox in the Frauchiger-Renner scenario. \begin{table} \begin{tabular}{|l|c|c|c|c|} \hline & 00 & 01 & 10 & 11 \\ \hline \(A\rightsquigarrow B\) & \(\frac{1}{3}\) & 0 & \(\frac{1}{3}\) & \(\frac{1}{3}\) \\ \hline \(A\rightsquigarrow W\) & \(\frac{1}{6}\) & \(\frac{1}{6}\) & \(\frac{2}{3}\) & 0 \\ \hline \(U\rightsquigarrow W\) & \(\frac{3}{4}\) & \(\frac{1}{12}\) & \(\frac{1}{12}\) & \(\frac{1}{12}\) \\ \hline \(U\rightsquigarrow B\) & \(\frac{2}{3}\) & \(\frac{1}{6}\) & 0 & \(\frac{1}{6}\) \\ \hline \end{tabular} \end{table} Table 1: Probabilities of the Frauchiger-Renner scenario. ### Vilasini-Nurgalieva-del Rio scenario Another example is the Vilasini-Nurgalieva-del Rio scenario [3]. It generalizes the conditions for multi-agent paradoxes to generalized probability theories with the use of modal logic and explicitly constructs a paradox for the box world. 
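As a quick numerical cross-check of Table 1, the four context distributions can be recomputed directly from the initial state; the following sketch is only illustrative (the use of Hadamard rotations to encode the \(\pm\) bases and all variable names are ours, not from [1]).

```python
import numpy as np

# Frauchiger-Renner initial state in the computational basis |ab>,
# ordered as (|00>, |01>, |10>, |11>).
psi = np.array([np.sqrt(1/3), 0.0, np.sqrt(1/3), np.sqrt(1/3)])

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # maps {|0>,|1>} to {|+>,|->}
I2 = np.eye(2)

# Each context fixes which side is read in the {|+>,|->} basis:
# Alice/Bob keep the computational basis, Ursula/Wigner use the +/- basis.
contexts = {
    "A~B": np.kron(I2, I2),
    "A~W": np.kron(I2, H),
    "U~W": np.kron(H, H),
    "U~B": np.kron(H, I2),
}

for name, U in contexts.items():
    probs = np.abs(U @ psi) ** 2               # outcomes ordered 00, 01, 10, 11
    print(name, np.round(probs, 4))
# Expected (up to rounding): A~B -> [1/3, 0, 1/3, 1/3], A~W -> [1/6, 1/6, 2/3, 0],
# U~W -> [3/4, 1/12, 1/12, 1/12], U~B -> [2/3, 1/6, 0, 1/6].
```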
The construction of the agents, trust relation, and the possible worlds is identical to the one presented in the Frauchiger-Renner scenario. The valuation follows from the initial state: \(R\) and \(S\) share PR-boxes, thus satisfying \(X_{i}X_{j}=x_{i}\oplus_{mod2}x_{j}\), with \(X_{i}\) the measurements and \(x_{i}\) the outcomes. The authors of [3] show that all pairs of agents trusting each other can be understood as being correlated by PR-boxes. By using the "information-preserving memory update" and fixing the conditions \(X_{U}=X_{A}\oplus_{mod2}1\), \(X_{W}=X_{B}\oplus_{mod2}1\), the measurements \(X_{A}=X_{B}=0\) and the outcomes \(x_{i}\in\{0,1\}\), we can obtain the possibilistic values presented in Table 2. Every agent finds a contradiction, for any chosen sequence of agents, presenting a stronger violation than the Frauchiger-Renner scenario. The identification with an empirical model follows a construction analogous to the one for the Frauchiger-Renner scenario, but now we are dealing with possibilistic values, thus allowing a faithful representation as a bundle diagram in Figure 2. It defines the well-known PR-box empirical model, showing the Liar Cycle paradox with four agents. It is strongly contextual, since all local sections show violations, thus making it stronger than the previous example. ## V Commentaries For these examples, the valuation shows that there is more knowledge than the mutual one. The important point here is that we cannot stipulate in advance the worlds our logic will work with; rather, it is through the knowledge we can explore that we refine the worlds we have access to. The distributed knowledge is the finest way to understand what is going on, as it codifies all the data in the propositions, saving modal logic. It also shows that we have more data than the classical mutual knowledge, more worlds, and, as we can see today with quantum technology, more resources to explore. To achieve the conditions to be analyzed with the sheaf approach to contextuality, one needs to restrict the set of possible multi-agent scenarios. First, the agents must have only one measurement each, and such measurements must satisfy outcome-determinism, i.e., in quantum theory, they must be projection-valued measures8. Also, the trust relation defined between agents, and more generally between elements of the power set of the set of agents, must satisfy the structure of the category of contexts; in particular, it must be symmetric. Once these conditions are satisfied, the measurement scenario is well-defined. Footnote 8: One can generalize the sheaf approach to deal with outcome-indeterminism [10], but that is outside the scope of this article since the examples satisfy outcome-determinism. To get an empirical model, the events must satisfy the sheaf conditions, while the valuation must satisfy the non-disturbance condition. Once these conditions hold, the equivalence is possible, and one can explore any multi-agent paradox as contextuality with the tools of the sheaf approach. Hence, one can also construct multi-agent scenarios that cannot be represented as an empirical model. Examples of multi-agent paradoxes that do not satisfy the sheaf-approach equivalence presented in this article will be the objective of future work. The results of the present article allow the construction of new multi-agent paradoxes and the use of the mathematical tools of the sheaf approach and related formalisms of contextuality. 
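The strong contextuality of Table 2 can be verified by brute force over all global assignments; the short sketch below is only illustrative and not part of [3].

```python
from itertools import product

# Possibilistic PR-box table of the Vilasini-Nurgalieva-del Rio scenario (Table 2):
# for each context (pair of trusting agents), the set of possible joint outcomes.
contexts = {
    ("A", "B"): {(0, 0), (1, 1)},
    ("A", "W"): {(0, 0), (1, 1)},
    ("U", "W"): {(0, 1), (1, 0)},
    ("U", "B"): {(0, 0), (1, 1)},
}
agents = ["A", "B", "U", "W"]

# Strong contextuality: no global assignment of outcomes restricts to a possible
# event in every context.
consistent = []
for bits in product((0, 1), repeat=len(agents)):
    g = dict(zip(agents, bits))
    if all((g[x], g[y]) in ok for (x, y), ok in contexts.items()):
        consistent.append(g)

print("global assignments consistent with all contexts:", consistent)   # -> []
```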
They also allow the translation of contextual empirical models into multi-agent scenarios with probabilistic paradoxes, and of possibilistic contextual empirical models into logical multi-agent paradoxes. Finally, the main result is that modal logic is adequate in quantum and other non-classical settings once one explicitly uses the knowledge operators and takes a more empiricist path to defining the possible worlds. \begin{table} \begin{tabular}{|l|c|c|c|c|} \hline & 00 & 01 & 10 & 11 \\ \hline \hline \(A\rightsquigarrow B\) & 1 & 0 & 0 & 1 \\ \hline \(A\rightsquigarrow W\) & 1 & 0 & 0 & 1 \\ \hline \(U\rightsquigarrow W\) & 0 & 1 & 1 & 0 \\ \hline \(U\rightsquigarrow B\) & 1 & 0 & 0 & 1 \\ \hline \end{tabular} \end{table} Table 2: Possibilities of the Vilasini-Nurgalieva-del Rio scenario. Figure 1: Possibilistic bundle of the Frauchiger-Renner scenario. Figure 2: Possibilistic bundle of the Vilasini-Nurgalieva-del Rio scenario. ###### Acknowledgements. The author thanks the MathFoundQ - UNICAMP - Mathematical Foundations of Quantum Theory, in particular Prof. Dr. Marcelo Terra Cunha, for the conversations in the preparation of this manuscript, and Vinicius Pretti Rossi for conversations about Wigner's Friend scenarios. This study was financed in part by the Coordenacao de Aperfeicoamento de Pessoal de Nivel Superior - Brasil (CAPES) - Finance Code 001. ## Appendix A Modal Logic A modal logic is defined with a set \(\Omega\) of propositional variables and the usual set of connectives \(\neg\) ("not"), \(\wedge\) ("and"), \(\vee\) ("or"), \(\leftrightarrow\) ("if and only if"), \(\rightarrow\) ("if \(\ldots\) then"), besides the use of parentheses. In addition to the usual connectives, a modal logic has a modal operator called "possibility" \(\lozenge\). When combined with \(\neg\) one can define the modal operator "necessity" \(\square\) as \(\neg\lozenge\neg\).9 Footnote 9: One can also start with \(\square\) and define \(\lozenge\) as \(\neg\square\neg\), so they are dual, but certain care must be taken when defining the modal operators in this way [11]. When dealing with a set of agents indexed by a finite set \(I\ni i\), one can define \(\lozenge_{i}\) (and consequently \(\square_{i}\)) as the possibility (respectively, necessity) modal operator from the point of view of agent \(i\). This defines a multi-modal logic, with one modal logic for each agent, but all of them agreeing on the usual propositional logic structure. Once the set of propositional variables \(\Omega\) and symbols are defined, one can define the formulas as follows: * All the propositional variables are formulas. * If \(A\) is a formula, then \(\neg A\), \(\lozenge A\), and \(\square A\) are formulas. * If \(A\) and \(B\) are formulas, then \((A\wedge B)\), \((A\lor B)\), \((A\leftrightarrow B)\), and \((A\to B)\) are also formulas. * There are no other formulas. The collection of propositions \(\Phi\) is defined by the possible formulas. ### Kripke Semantics A Kripke frame \(\langle\Sigma,R\rangle\) is a pair consisting of a non-empty set of states or worlds \(\Sigma\) and a binary relation \(R\) on \(\Sigma\), called the accessibility relation, such that \(aRb\) means "\(b\) is possible given \(a\)" or "\(b\) is accessible by \(a\)". A relational structure \(\langle\Sigma,\{R_{i}\}_{i\in I}\rangle\) is a finite set of Kripke frames with the same \(\Sigma\), where each \(R_{i}\) is given by an agent \(i\). 
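The inductive definition of formulas above can be mirrored by a small recursive data type; the following Python sketch is only illustrative (the class names and the helper encoding \(\lozenge=\neg\square\neg\) are ours, not from the text, and only a subset of the connectives is shown).

```python
from dataclasses import dataclass
from typing import Optional, Union

# Minimal abstract syntax for the modal language: propositional variables,
# negation, conjunction, and Box (optionally indexed by an agent i).

@dataclass(frozen=True)
class Var:
    name: str

@dataclass(frozen=True)
class Not:
    sub: "Formula"

@dataclass(frozen=True)
class And:
    left: "Formula"
    right: "Formula"

@dataclass(frozen=True)
class Box:                      # "necessity"; with an agent label it plays the role of K_i
    sub: "Formula"
    agent: Optional[str] = None

Formula = Union[Var, Not, And, Box]

def diamond(f, agent=None):     # possibility defined as "not Box not", as in the text
    return Not(Box(Not(f), agent))

# Example formula: K_a p  /\  not K_b p
phi = And(Box(Var("p"), agent="a"), Not(Box(Var("p"), agent="b")))
print(phi)
print(diamond(Var("p")))
```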
In other words, \(aR_{i}b\) is understood as "\(b\) is possible given \(a\) from the point of view of agent \(i\)" or "\(b\) is accessible by \(a\) from the point of view of agent \(i\)". A Kripke structure \(\langle\Sigma,\{R_{i}\}_{i\in I},v\rangle\) is a relational structure \(\langle\Sigma,\{R_{i}\}_{i\in I}\rangle\) equipped with a Boolean valuation \(\nu:\Omega\rightarrow\mathcal{P}(\Sigma)\) that indicates the worlds where a propositional variable is true. As one can see, each world defines a valuation. The valuation of a generic proposition in \(\Phi\) follows from the ordinary rules of propositional logic for each world, plus rules for the modal operators, as we will see. ### Rules, soundness and completeness The symbol \(Q\vDash\phi\), where \(Q\subset\Phi\) and \(\phi\in\Phi\), can be read as "\(Q\) semantically entails \(\phi\)", meaning that \(\phi\) is true in every structure in which \(Q\) is true: \(Q\) models \(\phi\). With it, for \(M=\langle\Sigma,\{R_{i}\}_{i\in I},v\rangle\) one can write \(M,w\vDash\phi\), which means the proposition \(\phi\) is true in the world \(w\in\Sigma\). The symbol \(Q\vdash\phi\) can be read as "\(Q\) syntactically entails \(\phi\)", meaning \(Q\) proves \(\phi\). The ordinary rules of propositional logic hold here for each world, and one adds rules for the modal operators in Kripke semantics: * \((M,w\vDash\square\phi)\leftrightarrow\forall u\,(wRu\rightarrow(M,u\vDash\phi))\). * \((M,w\vDash\lozenge\phi)\leftrightarrow\exists u\,(wRu\wedge(M,u\vDash\phi))\). A system satisfies completeness (also called semantic completeness) if \(Q\vDash\phi\) implies \(Q\vdash\phi\), and a system satisfies soundness if \(Q\vdash\phi\) implies \(Q\vDash\phi\). ### Knowledge The valuation \(\nu\), being unique for all agents, reflects the philosophical statement that truth is independent of any agent; it is absolute. This can be understood as a strong axiom to determine the distinction between knowledge and belief, with the former being a direct consequence of truth and the latter not needing any relation to it10. However, as one can readily see, different agents have different knowledge, which is a coarse-graining of the fundamental truth. Therefore, for multi-agent scenarios, we must use the knowledge of each agent to evaluate propositions. Footnote 10: Knowledge, as Plato’s “justified belief,” is a weaker notion since it does not impose any fundamental truth, only the justification based on the obviously incomplete data the agent has access to. One could use a Bayesian vision to justify the existence of absolute truth through an induction argument, which holds in a classical description of reality, but it must be limited by Kant’s epistemology. One can define, for an agent, the basic modal operator of epistemic logic \(K\), which means "it is known that". Let \(R(w)=\{u|wRu\}\), and for \(A\subseteq\Sigma\) denote by \(M,A\vDash\phi\) that \(M,u\vDash\phi\) for all \(u\in A\). Then, in Kripke semantics, one adds a new rule to define knowledge: * \((M,w\vDash K\phi)\leftrightarrow(M,R(w)\vDash\phi)\). In the case of multiple agents indexed by a set \(I\), one can define an operator \(K_{i}\) for each agent \(i\), where \(K_{i}\phi\) can be read as "agent \(i\) knows that \(\phi\)". We need to add a new item to the list of formulas: * If \(A\) is a formula, then \(K_{i}A\) for all \(i\in I\) is a formula. 
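The Kripke rules above are easy to animate on a finite frame; the toy worlds, relation and valuation in the following sketch are made up for illustration (the same `box` clause, applied to a relation \(R_{i}\), gives the knowledge operator \(K_{i}\)).

```python
# Minimal sketch of the Kripke rules on a finite frame (invented toy data).
# The relation R is reflexive and transitive, so it is S4-compatible.
worlds = {"w1", "w2", "w3"}
R = {("w1", "w1"), ("w1", "w2"), ("w2", "w2"), ("w3", "w3"), ("w3", "w2")}

def acc(w):                      # R(w) = {u | w R u}
    return {u for (v, u) in R if v == w}

val = {"p": {"w1", "w2"}}        # nu(p): worlds where the variable p is true

def box(phi_worlds):             # (M,w |= []phi)  iff  every u with w R u satisfies phi
    return {w for w in worlds if acc(w) <= phi_worlds}

def diamond(phi_worlds):         # (M,w |= <>phi)  iff  some u with w R u satisfies phi
    return {w for w in worlds if acc(w) & phi_worlds}

print(box(val["p"]))             # {'w1', 'w2'}  (w3 can reach itself, where p fails)
print(diamond(val["p"]))         # {'w1', 'w2', 'w3'}
```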
To preserve the truth by the knowledge operators, one imposes the Knowledge Generalization Rule, also known as \(\mathbf{N}\) or the Necessitation Rule, which says that for a Kripke structure \(M\) and any \(\phi\in\Phi\) we have \[(M,w\vDash\phi)\,\forall w\rightarrow(M,w\vDash K_{i}\phi)\,\forall w,\forall i. \tag{1}\] This rule can be written as well for modal operators, \[(M,w\vDash\phi)\,\forall w\rightarrow(M,w\vDash\square\phi)\,\forall w. \tag{2}\] There are two more modal operators, dealing with the knowledge of a subset of agents \(G\subset I\), that are interesting to us. Mutual knowledge \(E_{G}\) means "every agent in \(G\) knows". Formally, for all \(\phi\), we define the mutual knowledge operator as follows: \[E_{G}\phi=\bigwedge_{i\in G}K_{i}\phi, \tag{3}\] which defines a relation \[R_{E_{G}}=\bigcup_{i\in G}R_{i} \tag{4}\] that allows the addition of the following rule in the Kripke semantics: * \((M,w\vDash E_{G}\phi)\leftrightarrow(M,R_{E_{G}}(w)\vDash\phi)\). Distributed knowledge \(D_{G}\) means "it is distributed knowledge to the whole of \(G\)", not just describing the knowledge of individual agents but all the knowledge of \(G\) combined, as an entity itself. Formally, for all \(\phi\), we define the distributed knowledge operator as follows: \[D_{G}\phi=\bigvee_{i\in G}K_{i}\phi, \tag{5}\] which defines a relation \[R_{D_{G}}=\bigcap_{i\in G}R_{i} \tag{6}\] that allows the addition of the following rule in the Kripke semantics: * \((M,w\vDash D_{G}\phi)\leftrightarrow(M,R_{D_{G}}(w)\vDash\phi)\). ### Axioms Different axioms can be imposed on the accessibility relation of a frame (Frame Conditions) that equivalently11 result in properties of modal (Modal Axioms) and knowledge (Axioms of Knowledge) operators, thus defining different systems of modal logic [12; 13]. Footnote 11: They follow from the preservation of such properties on the accessible worlds of each world. **Axiom 2** (Distribution Axiom or \(\mathbf{K}\)).: _It holds true for any frame. For modal operators, we have that for any \(\psi,\phi\in\Phi\) it holds that_ \[(\Box(\psi\rightarrow\phi))\rightarrow(\Box\psi\rightarrow\Box\phi) \tag{7}\] _while for knowledge operators, for any \(\psi,\phi\in\Phi\), we have_ \[(K_{i}\phi\wedge K_{i}(\phi\rightarrow\psi))\to K_{i}\psi. \tag{8}\] System \(\mathbf{K}\) is the simplest kind of logic described by Kripke semantics and establishes modus ponens for each world. An equivalent way to write it as a Modal Axiom is \[\Box(\phi\wedge(\phi\rightarrow\psi))\rightarrow\Box\psi, \tag{9}\] in a similar format to the respective Axiom of Knowledge. A Normal Modal System is defined as a system \(\mathbf{K}\) satisfying Rule \(\mathbf{N}\). **Axiom 3** (Truth Axiom, or \(\mathbf{T}\), or \(\mathbf{M}\)).: _For any frame and \(\phi\in\Phi\):_ * _(Frame Condition) The accessibility relation is reflexive._ * _(Modal Axiom)_ \(\Box\phi\rightarrow\phi\)_._ * _(Axiom of Knowledge)_ \(K_{i}\phi\rightarrow\phi\)_._ As a result of this axiom, one can show that \(\phi\rightarrow\Diamond\phi\) holds. System \(\mathbf{T}\) (also known as System \(\mathbf{M}\)) is defined as a System \(\mathbf{K}\) satisfying the Truth Axiom. **Axiom 4** (Positive Introspection Axiom or \(\mathbf{4}\)).: _For any frame and \(\phi\in\Phi\):_ * _(Frame Condition) The accessibility relation is transitive._ * _(Modal Axiom)_ \(\Box\phi\rightarrow\Box\Box\phi\)_._ * _(Axiom of Knowledge)_ \(K_{i}\phi\to K_{i}K_{i}\phi\)_._ A result of this Axiom is that \(\Diamond\Diamond\phi\rightarrow\Diamond\phi\) holds. 
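Returning to the mutual and distributed knowledge operators just defined, a toy illustration of the relations \(R_{E_{G}}\) and \(R_{D_{G}}\) as union and intersection of the individual accessibilities (all data below are invented for the example):

```python
# Two agents a and b on three worlds; toy reflexive relations.
worlds = {"w1", "w2", "w3"}
R = {
    "a": {("w1", "w1"), ("w2", "w2"), ("w3", "w3"), ("w1", "w2")},
    "b": {("w1", "w1"), ("w2", "w2"), ("w3", "w3"), ("w1", "w3")},
}

R_E = R["a"] | R["b"]            # mutual knowledge: union of accessibilities (eq. 4)
R_D = R["a"] & R["b"]            # distributed knowledge: intersection (eq. 6)

def knows(rel, w, phi_worlds):   # (M,w |= K phi) iff rel(w) is contained in nu(phi)
    return {u for (v, u) in rel if v == w} <= phi_worlds

phi = {"w1", "w2"}               # nu(phi) for some proposition phi
print(knows(R["a"], "w1", phi))  # True : agent a cannot reach w3 from w1
print(knows(R["b"], "w1", phi))  # False: agent b still considers w3 possible
print(knows(R_E, "w1", phi))     # False: E_G phi fails at w1
print(knows(R_D, "w1", phi))     # True : D_G phi holds at w1 (pooled information)
```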
System \(\mathbf{S4}\) is defined as a System \(\mathbf{T}\) satisfying Axiom \(\mathbf{4}\).12 Footnote 12: Another important axiom, known as the Negative Introspection Axiom or \(\mathbf{5}\), is the imposition of symmetry of the accessibility relation, resulting, for any \(\phi\in\Phi\), in the validity of \(\neg K_{i}\phi\implies K_{i}\neg K_{i}\phi\) for knowledge operators and \(\Diamond\phi\rightarrow\Box\Diamond\phi\) for modal operators. System \(\mathbf{S5}\) is defined as a System \(\mathbf{S4}\) satisfying Axiom \(\mathbf{5}\), and is exactly the system where the accessibility relation is an equivalence relation. Usually one drops \(\mathbf{5}\), since when an agent does not know something, it is hard for that agent to judge its own lack of knowledge. ### Topological semantics A natural semantics for the system \(\mathbf{S4}\) is the topological one [14; 15; 16; 17]. **Definition 4**.: _A topological model is a pair \((T,\nu)\) where \(T=(X,\tau)\) is a topological space and \(\nu:\Phi\rightarrow\mathcal{P}(X)\) is a function, called interpretation, that satisfies for any \(\phi,\psi\in\Phi\)_ \[\nu(\phi\wedge\psi) =\nu(\phi)\cap\nu(\psi)\] \[\nu(\phi\vee\psi) =\nu(\phi)\cup\nu(\psi)\] \[\nu(\neg\phi) =\nu(\phi)^{\complement}\] \[\nu(\Box\phi) =\widehat{\nu(\phi)}\] \[\nu(\Diamond\phi) =\overline{\nu(\phi)},\] _with \(\hat{A}\), \(\overline{A}\) and \(A^{\complement}\) respectively the topological interior, closure and complement of \(A\in\mathcal{P}(X)\)._ The elements of \(X\) are the worlds, and \(\nu\) can be understood as the valuation in the topological semantics, giving the set of worlds where a formula is true: \(M,w\vDash\phi\) if and only if \(w\in\nu(\phi)\). Also, for any two formulas \(\phi,\psi\in\Phi\) one can prove \(\phi\vdash\psi\) if and only if \(\nu(\phi)\subseteq\nu(\psi)\). One can show that this semantics imposes the system \(\mathbf{S4}\) on the logic. In this sense, the system \(\mathbf{S4}\) is said to be the logic of topological spaces. A well-known result is the equivalence between Kripke and topological semantics for Alexandrov topological spaces [18], i.e., spaces where every point has a minimal neighborhood. Alexandrov topologies can also be defined as topological spaces where arbitrary intersections of open sets are open sets. In particular, any finite topology, i.e., one with only finitely many open sets, is an Alexandrov topology. **Theorem 5**.: _For each Kripke semantics \((\Sigma,R,\nu)\) satisfying \(\mathbf{S4}\) there is an Alexandrov topological space \((X,\tau)\) with equivalent topological semantics, and vice versa, i.e., they satisfy \((\Sigma,R,\nu)\vDash\phi\) if and only if \((X,\tau,\nu)\vDash\phi\) for any \(\phi\in\Phi\)._ This result [19] follows from the identification of the accessibility relation \(R\) with the specialization pre-order \(\leq\): \(x\leq y\) if and only if \(\forall U\in\tau\) we have \((x\in U)\rightarrow(y\in U)\), which turns \((X,\leq)\) into a pre-ordered set. Such a relation defines a topology generated by the basis of open sets \(U^{x}=\{y|y\leq x\}\), which is an equivalent definition of the Alexandrov topology13, and any Alexandrov topology has such a natural pre-order that defines the semantics satisfying \(\mathbf{S4}\). Also, the system \(\mathbf{S4}\) satisfies completeness and soundness in relation to the topological semantics of Alexandrov topological spaces. Footnote 13: This is the upper Alexandrov topology, and one can think of it as defining open sets as generated by the causal past cones of points. 
The lower Alexandrov topology, with the basis \(U_{y}=\{x|y\leq x\}\), is given by the future causal cones [20]. ## Appendix B Sheaf approach Contextuality is, informally, the property of a physical system that cannot be explained classically, where this classicality is thought of as an ontological reality that is coarse-grained to the system14. Footnote 14: See Ref. [21; 22] for a general revision of contextuality. The Sheaf Approach codifies the system, written by its measurements, as a category. The set of outcomes of each measurement and the measure on it are codified by functors. Contextuality will appear as a property of the functor over a given system. ### Joint measurability The measurements are organized as a covering through compatibility, or joint measurability, of a set of measurements. It imposes the existence of a "mother" measurement, such that our accessible measurements originate from it by classical post-processing15. **Definition 6**.: _Let \(\{A_{i}\}\) be a set of measurements; they are jointly measurable if there exists a measurement \(G\) satisfying_ \[A_{i}(k^{(i)})=\sum_{\{k^{(j)}\}_{j\neq i}}G(k^{(1)},\ldots,k^{(i)},\ldots,k^{(n)}) \tag{16}\] _for all \(i\)._ As shown in Ref. [24], in quantum theory commuting implies jointly measurable, and the converse holds if the measurements are sharp. ### Measurement scenario To the covering of measurements and the possible events, we give the name measurement scenario [25]. **Definition 7**.: _A measurement scenario \(\langle X,\mathcal{U},(O_{x})_{x\in X}\rangle\) is a hypergraph16 \(\langle X,\mathcal{U}\rangle\), where \(X\) is the set of measurements and \(\mathcal{U}\) a covering of contexts (a family of sets of compatible measurements), plus a set \(O_{x}\) for each \(x\in X\), called the outcome sets, whose elements are the possible events of each measurement._ Footnote 16: Usually one imposes that the hypergraph has some additional structure, usually enough to identify it as a simplicial complex. See Ref. [8] for a justified construction of the measurement scenario. For simplicity, let us suppose that the outcome sets are finite, and therefore one can define an outcome set \(O\) common to all the measurements \(x\)17. We will also work with a measurement scenario with a simplicial complex structure of contexts. ### Presheaves and sheaves **Definition 8**.: _A presheaf is a functor \(F:C^{op}\to\textbf{Set}\) from a category \(C\) to the category of sets. Let \(C\) be a site, a small category equipped with a coverage \(J\); in other words, any object \(U\in C\) admits a collection of families of morphisms \(\{f_{i}:U_{i}\to U\}_{i\in I}\) called covering families. 
A presheaf on \((C,J)\) is a sheaf if it satisfies the following axioms_ * _Gluing: if for all_ \(i\in I\) _we have_ \(s_{i}\in F(U_{i})\) _such that_ \(s_{i}|_{U_{i}\cap U_{j}}=s_{j}|_{U_{i}\cap U_{j}}\)_, then there is_ \(s\in F(U)\) _satisfying_ \(s_{i}=s|_{U_{i}}\)_;_ * _Locality: if_ \(s,t\in F(U)\) _such that_ \(s|_{U_{i}}=t|_{U_{i}}\) _for all_ \(U_{i}\)_, then_ \(s=t\)_._ **Definition 9**.: _Elements \(s\in F(U)\) of the image of a presheaf are called local sections if \(U\neq X\), and global sections if \(U=X\)._ **Definition 10**.: _A compatible family is a family of sections \(\{s_{i}\in F(U_{i})\}_{i\in I}\) such that for all \(j,k\in I\) it holds that \(s_{j}|_{U_{j}\cap U_{k}}=s_{k}|_{U_{j}\cap U_{k}}\) in \(F(U_{j}\cap U_{k})\)._ ### Sheaf of events The covering \(\mathcal{U}\) can be restricted to maximal contexts, which will also be denoted by \(\mathcal{U}\), so that \(\langle X,\mathcal{U}\rangle\) can be understood as a set \(X\) with a covering \(\mathcal{U}\) of maximal contexts \(U_{j}\), \(j\in I\), indexed by an ordered set \(I\)18. Since the intersection of contexts is a context, we can define the inclusion morphism \(\rho(jk,j):U_{j}\cap U_{k}\to U_{j}\), which turns the set of contexts and the inclusion morphisms into a small category19. Footnote 18: Given a covering, one can construct a locale, a pointless space, using unions and intersections. This means that the measurements are not the fundamental objects, but rather the minimal contexts become the effective measurements of the scenario, depending on how one chooses the covering. A physical example of refinement is spin degeneracy, where refinement occurs by applying a suitable magnetic field. Footnote 19: From Ref. [26], we can see that the category of contexts with the inclusions is a site. **Definition 11**.: _The outcome sets are defined by a functor \(\mathcal{E}:\langle X,\mathcal{U}\rangle^{op}\to\textbf{Set}\), with \(\mathcal{E}::U\mapsto O^{U}=\bigtimes_{x\in U}O_{x}\) and \(\mathcal{E}::\rho\mapsto\rho^{\prime}\), such that for each element \(U\in\mathcal{U}\) we have an outcome set \(O^{U}\) of the context and \(\rho^{\prime}\) is the restriction to the outcome sets, \(\rho^{\prime}(j,kj):O^{j}\to O^{kj}=\mathcal{E}(U_{j}\cap U_{k}):s_{j}\mapsto s_{j}|_{kj}\)._ **Proposition 12**.: _The functor \(\mathcal{E}\) is a sheaf on the site of measurements and contexts, called the sheaf of events of a given measurement scenario._ ### Empirical model \(R\)-empirical models are defined with a semiring \(R\), such as the Boolean semiring \(\mathbb{B}\), the reals \(\mathbb{R}\), or the probability semiring \(\mathbb{R}^{+}\). The choice of an \(R\) defines a way to probe the model. To define \(R\)-empirical models, we use another functor \(\mathcal{D}_{R}:\textbf{Set}\to\textbf{Set}::O^{U}\mapsto\left\{\mu_{R}^{O^{U}}\right\}\), taking a set of local events to the set of \(R\)-measures defined on it, \(\mu_{R}^{O^{U}}:\mathbb{P}\left(O^{U}\right)\to R\), satisfying \(\mu_{R}^{O^{U}}(O^{U})=1_{R}\), in analogy with a probability measure. We will denote by \(\mu_{R}::U\in\mathcal{U}\mapsto\mu_{R}^{O^{U}}\) a set of \(R\)-measures defined on each element of \(\mathcal{U}\), and call it a state. On the morphisms, \(\mathcal{D}_{R}::\rho^{\prime}(j,kj)\mapsto\rho^{\prime\prime}(j,kj)\), with \(\rho^{\prime\prime}(j,kj)::\mu_{R}^{O^{j}}\mapsto\mu_{R}^{O^{kj}}=\mu_{R}^{O^{j}}|_{kj}\) the marginalization of the \(R\)-measure of context \(j\) on the intersection \(kj\). 
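On finite outcome sets, the restriction maps \(\rho^{\prime}\) and the marginalizations \(\rho^{\prime\prime}\) have a very concrete reading; the following hedged Python sketch uses an invented two-context toy scenario (all names and numbers are ours, purely for illustration).

```python
from itertools import product

# Toy scenario: maximal contexts U_j = {a, b} and U_k = {b, c}, outcome set {0, 1}.
O = (0, 1)
U_j, U_k = ("a", "b"), ("b", "c")
inter = ("b",)                                    # U_j ∩ U_k

def events(ctx):                                  # E(U): product of the outcome sets
    return list(product(O, repeat=len(ctx)))

def restrict(ctx, sub, s):                        # rho': forget measurements not in sub
    return tuple(s[ctx.index(x)] for x in sub)

def marginalize(ctx, sub, mu):                    # rho'': push an R-measure down to sub
    out = {e: 0.0 for e in events(sub)}
    for s, p in mu.items():
        out[restrict(ctx, sub, s)] += p
    return out

mu_j = {e: 0.0 for e in events(U_j)}              # an R-measure on the context U_j
mu_j[(0, 0)] = 0.5
mu_j[(1, 1)] = 0.5
print(marginalize(U_j, inter, mu_j))              # {(0,): 0.5, (1,): 0.5}
```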
**Definition 13**.: _The tuple \((X,\mathcal{U},\mathcal{E},\mu_{R})=e_{R}\) is called an \(R\)-empirical model over the measurement scenario \(\langle X,\mathcal{U},(O_{x})_{x\in X}\rangle=(\langle X,\mathcal{U}\rangle,\mathcal{E})\) given by the state \(\mu_{R}\), defining a set of local sections \(\left\{\mu_{R}^{O^{U}}\in\mathcal{D}_{R}\mathcal{E}(U);U\in\mathcal{U}\right\}\)._ ### Non-disturbance The non-disturbance condition is a usual condition imposed on an \(R\)-empirical model, sometimes implicitly. It says that \(\mu_{R}^{O^{j}}|_{kj}=\mu_{R}^{O^{k}}|_{kj}\) for all \(k\) and \(j\), which means there is local agreement between contexts. This condition is equivalent to the existence of a compatible family for \(\mathcal{D}_{R}\mathcal{E}\), but it does not imply that \(\mathcal{D}_{R}\mathcal{E}\) is a sheaf. Since we can only have access to contexts, it is possible to define the functor \(\mathcal{D}_{R}\mathcal{E}\) through a state that cannot be extended to a measure on the global events. Non-disturbance is equivalent to the notion of parameter-independence, as explained in Ref. [27], a property that, if violated, means the existence of non-trivial data between contexts [8]. As stated in Ref. [28], where disturbance is called inconsistent connectedness: "Intuitively, inconsistent connectedness is a manifestation of direct causal action of experimental set-up upon the variables measured in it". We will work with non-disturbing models. ### Contextuality Contextuality is the impossibility of describing a given \(R\)-empirical model in classical terms, but one must first define which classical notion to use. We will call it \(R\)-contextuality to make explicit the chosen semiring. First, we know that any measure can be described as the marginalization of another one, \[\mu_{R}^{O^{U}}(A)=\sum_{\lambda\in\Lambda}k^{O^{U}}\left(\lambda,A\right), \tag{10}\] for all \(A\in\mathbb{P}(O^{U})\), where \(k^{O^{U}}:\Lambda\times\mathbb{P}(O^{U})\to R\) is an \(R\)-measure that satisfies \(\sum_{\lambda\in\Lambda}k^{O^{U}}\left(\lambda,O^{U}\right)=1_{R}\). In the literature of contextuality and non-locality, \(\Lambda\) is called the set of hidden variables, which is statistically taken into account but is empirically inaccessible. To impose a classical behavior, the hidden variables must be independent of the contexts, a property called \(\lambda\)-independence20. To reflect such independence, our model must show independence between measurements, in other words, be factorizable. Such independence allows us to write Footnote 20: \(\lambda\)-independence is related to the concept of free choice in non-locality [29; 30]. Its violation can be understood as a dependence of the hidden variables, sometimes called ontic variables, on the contexts. Such dependence can store contextuality, as free choice can be understood as storing non-locality [31]. For more details on the classification of hidden variables in the subject of non-locality, see Ref. [32]. \[\mu_{R}^{O^{U}}(A)=\sum_{\Lambda}p\left(\lambda,A\right)\prod_{x\in U}\mu_{R}^{O^{x}}(\rho^{\prime}(U,x)(A)), \tag{21}\] with the assistance of the set of hidden variables \(\Lambda\) being statistically taken into account by a measure \(p:\Lambda\times\mathbb{P}(O^{U})\to R\). Combining it with \(\lambda\)-independence implies \[\mu_{R}^{O^{U}}(A)=\sum_{\Lambda}p\left(\lambda\right)\prod_{x\in U}\mu_{R}^{O^{x}}(\rho^{\prime}(U,x)(A)), \tag{22}\] with \(p(\Lambda)=1_{R}\), closing the representation of an \(R\)-empirical model as a classical system. 
**Definition 14**.: _An \(R\)-empirical model is said to be \(R\)-non-contextual if there is an \(R\)-measure \(p\) and a set of hidden variables \(\Lambda\) such that Equation 22 holds for all \(U\in\mathcal{U}\)._ Another property we can impose is called outcome-determinism, which is the property of logically distinguishing between outcomes21. In combination with non-disturbance, we get the following result [5; 33]. Footnote 21: It can be defined as follows: for all \(\lambda\in\Lambda\) there is an outcome \(o\in O^{U}\) such that the measure \(k^{O^{U}}(\lambda,\cdot)\) is supported on \(\{o\}\). It can be translated by imposing \(\prod_{x\in U}\mu_{R}^{O^{x}}(\rho^{\prime}(U,x)(A))\in\{0,1\}\). **Proposition 15**.: _A non-disturbing \(R\)-empirical model that satisfies the outcome-determinism condition has as its hidden variables exactly its global events._ This result allows us to develop measures of contextuality through the use of linear programming [33; 27], once we know the set \(\Lambda\). With it one can also prove the Fine-Abramsky-Brandenburger Theorem [5], where \(R\)-contextuality can be understood as the non-extendability of a local section to a global section of \(\mathcal{D}_{R}\mathcal{E}\), or in other words, as the nonexistence of a global \(R\)-measure with marginalization to each context \(U\in\mathcal{U}\). **Theorem 16** (Fine-Abramsky-Brandenburger).: _For an empirical model, the following are equivalent:_ * _being described by deterministic hidden variables as in equation_ 22_;_ * _all local sections extending to global sections;_ * _the existence of a measure_ \(\mu^{O^{X}}\) _that marginalizes to_ \(\mu^{O^{U}}\) _for every_ \(U\in\mathcal{U}\)_._ We can graphically describe non-contextual behavior as the commutation of the diagram of Eq. (23), which relates the local events and \(R\)-measures to the global ones. The global events define a global \(R\)-measure \(\mu_{R}^{O^{X}}\), and the commutation implies the realization of the \(R\)-empirical model by it. Here, \(i^{\prime}\) is the inclusion of local events in global events. As explored in Ref. [34], the failure of commutativity can be seen in two independent ways: the first due to \(i^{\prime}\), which is linked to anti-realist interpretations, and the second due to \(\nu_{R}\), linked to realist interpretations.
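Proposition 15 and Theorem 16 suggest a direct computational test: a non-disturbing, outcome-deterministic model is non-contextual exactly when some distribution over the global events marginalizes to every context. The hedged Python sketch below (the encoding, variable names and use of scipy's linear-programming routine are ours) applies this test to the Frauchiger-Renner table of Section IV and finds it infeasible, i.e., contextual.

```python
import numpy as np
from itertools import product
from scipy.optimize import linprog

# Global events: all assignments of an outcome in {0,1} to the four agents.
agents = ["A", "B", "U", "W"]
globals_ = list(product((0, 1), repeat=4))            # 16 hidden variables

table = {                                             # context -> outcome probabilities
    ("A", "B"): {(0, 0): 1/3, (1, 0): 1/3, (1, 1): 1/3},
    ("A", "W"): {(0, 0): 1/6, (0, 1): 1/6, (1, 0): 2/3},
    ("U", "W"): {(0, 0): 3/4, (0, 1): 1/12, (1, 0): 1/12, (1, 1): 1/12},
    ("U", "B"): {(0, 0): 2/3, (0, 1): 1/6, (1, 1): 1/6},
}

# Equality constraints: the global distribution must marginalize to each context.
A_eq, b_eq = [], []
for (x, y), probs in table.items():
    ix, iy = agents.index(x), agents.index(y)
    for o in product((0, 1), repeat=2):
        A_eq.append([1.0 if (g[ix], g[iy]) == o else 0.0 for g in globals_])
        b_eq.append(probs.get(o, 0.0))

# Feasibility LP: any nonnegative solution would be a non-contextual global measure.
res = linprog(c=np.zeros(len(globals_)), A_eq=A_eq, b_eq=b_eq, bounds=(0, 1))
print("non-contextual global distribution exists:", res.success)     # False
```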
2304.08788
Intrinsic mono-chromatic emission of x and gamma-rays in symmetric electron-photon beam collisions
This paper explores the transition between Compton Scattering and Inverse Compton Scattering (ICS), which is characterized by an equal exchange of energy and momentum between the colliding particles (electrons and photons). This regime has been called Symmetric Compton Scattering (SCS) and has the unique property of cancelling the energy-angle correlation of scattered photons, and, when the electron recoil is large, transferring mono-chromaticity from one colliding beam to the other, resulting in back-scattered photon beams that are intrinsically monochromatic. The paper suggests that large-recoil SCS or quasi-SCS can be used to design compact intrinsic monochromatic gamma-ray sources based on compact linacs, thus avoiding the use of GeV-class electron beams together with powerful laser/optical systems as those typically required for ICS sources.
L. Serafini, V. Petrillo, A. Bacci, C. Curatolo, I. Drebot, M. Rossetti Conti, A. R. Rossi
2023-04-18T07:49:32Z
http://arxiv.org/abs/2304.08788v1
# Intrinsic mono-chromatic emission of X and gamma-rays in symmetric electron-photon beam collisions ###### Abstract This paper explores the transition between Compton Scattering and Inverse Compton Scattering (ICS), which is characterized by an equal exchange of energy and momentum between the colliding particles (electrons and photons). This regime has been called Symmetric Compton Scattering (SCS) and has the unique property of cancelling the energy-angle correlation of scattered photons, and, when the electron recoil is large, transferring mono-chromaticity from one colliding beam to the other, resulting in back-scattered photon beams that are intrinsically monochromatic. The paper suggests that large-recoil SCS or quasi-SCS can be used to design compact intrinsic monochromatic gamma-ray sources based on compact linacs, thus avoiding the use of GeV-class electron beams together with powerful laser/optical systems as those typically required for ICS sources. ## 1 Introduction The spectral red-shift occurring when an X-ray pulse interacts with a carbon target was observed by Arthur Compton in 1922 [1] and interpreted as an effect of the collision between the photons of the X-rays and the electrons of the solid, both assumed as point-like particles. The scattering of energetic photons by electrons at rest in the laboratory was called the Compton effect after him. Much later, the Inverse Compton Scattering (ICS) effect was studied [2] and experimentally demonstrated at particle accelerators [3], using highly relativistic electrons colliding with laser beams, within an inverse kinematics set-up where the electron loses energy and momentum in favor of the incident photon, which is back-scattered and up-shifted to much larger energies. While the Compton effect cannot be explained classically, the low recoil regime of ICS, in which the electron energy/momentum loss is negligible, has been described in the framework of classical electrodynamics and is known as the Thomson effect [4]. In this paper we analyze the transition between the Compton effect and ICS, occurring when the colliding particles exchange an equal amount of energy and momentum, and we call this regime Symmetric Compton Scattering (SCS). In this particular condition, the properties of the scattered photons are unique: unlike in all other radiations emitted with a Lorentz boost, the SCS scattered photon energy no longer depends on the scattering angle, so that the back-scattered radiation beam becomes intrinsically monochromatic. Extending the analyses of large electron recoil ICS carried out in Ref. [5] to this particular regime, we find that SCS is characterized by the transfer of mono-chromaticity from one colliding beam to the other, so that when a large bandwidth photon beam collides under SCS conditions with a mono-energetic electron beam, the back-scattered photon beam turns out to be mono-chromatized. SCS or quasi-SCS at large recoil could allow the design of compact sources of intrinsically mono-chromatic gamma-rays driven by low-energy MeV electron bunches, thus avoiding the use of GeV-class accelerators and powerful laser/optical systems, as actually needed by ICS sources. 
## 2 Compton interaction regimes Considering the Compton interaction between photon pulses and counter-propagating electron beams, we can derive the well-known equation for the photon energy (\(E^{\prime}_{\rm ph}=\hbar\omega^{\prime}\), with \(\omega^{\prime}\) being the associated photon angular frequency and \(\hbar\) the reduced Planck constant) scattered at an angle \(\theta\). Following the notation of eq. 3 in Ref. [6], we can write: \[E^{\prime}_{\rm ph}(\theta)=\frac{(1+\beta)\gamma^{2}}{\gamma^{2}(1-\beta\cos\theta)+\frac{X}{4}(1+\cos\theta)}E_{\rm ph}, \tag{1}\] where the incident photon energy is \(E_{\rm ph}=\hbar\omega\), \(\beta=v_{e}/c\) is the ratio between the electron velocity \(v_{e}\) and the light speed \(c\), \(\gamma=1/\sqrt{1-\beta^{2}}\) is the electron Lorentz factor and \(X\) is the electron recoil factor that introduces an important contribution at high energy of both incident photons and electrons. \(X\) has been defined in [5] (eq. 4) as: \[X=\frac{4E_{e}E_{\rm ph}}{(m_{0}c^{2})^{2}}=\frac{4\gamma E_{\rm ph}}{m_{0}c^{2}}=4\gamma^{2}\frac{E_{\rm ph}}{E_{e}}, \tag{2}\] with \(m_{0}\) the electron rest mass and \(E_{e}=\gamma m_{0}c^{2}\). We can distinguish the following different interaction regimes: Direct, Inverse and Symmetric Compton Scattering. ### Direct Compton The collision between high energy photons and electrons at rest (\(E_{\rm ph}\gg T_{e}\), where \(T_{e}=(\gamma-1)\,m_{0}c^{2}\)) is usually called Direct Compton (DC) scattering. In this process, the photon loses energy, being red-shifted, while the collided electron gains energy, being recoiled. The interaction studied in Arthur Compton's original experiment exploited X-rays incident on a fixed target (\(\beta=0\), \(\gamma=1\)). Eq. 1 therefore reduces to: \[E^{\prime}_{\rm ph}(\theta)=\frac{E_{\rm ph}}{1+\frac{X_{DC}}{4}(1+\cos\theta)}. \tag{3}\] In such a case, the electron recoil factor can be rewritten as a function of the well-known electron Compton wavelength \(\lambda_{C}=h/(m_{0}c)\) and the colliding photon wavelength \(\lambda\): \[X_{DC}=\frac{4E_{\rm ph}}{m_{0}c^{2}}=\frac{4\lambda_{C}}{\lambda}. \tag{4}\] where \(\lambda=hc/E_{\rm ph}\), leading directly to Compton's relationship for the scattered photon wavelength \(\lambda^{\prime}\): \[\lambda^{\prime}-\lambda=\lambda_{C}(1+\cos\theta). \tag{5}\] Here we start seeing a clear signature of radiation emission in collisions: the angular dependence of the scattered photon energy. Also, this is the fundamental expression for the photon wavelength red-shift that allowed Arthur Compton to invoke the quantum nature of the photon-electron collision in order to explain the experimental data. ### Inverse Compton Scattering The Inverse Compton Scattering (ICS) regime is instead characterized by collisions of highly energetic electrons and low energy photons (usually delivered by a laser) with \(E_{\rm ph}\ll T_{e}\). In the interaction the photon receives energy from the electron. ICS sources are characterized by high electron beam energy (from tens to hundreds of MeV), i.e. \(\beta\to 1\) and \(\gamma\gg 1\). In these conditions, and in the low recoil regime, the angular aperture of the cone containing half of the emitted radiation scales as \(\gamma^{-1}\). The boost effect contracts the angular coordinate of the photons, compressing such a cone. For small \(\theta\), \(\cos\theta\approx 1-\theta^{2}/2\) and eq. 1 can be approximated by: \[E^{\prime}_{\rm ph}(\theta)=\frac{4E_{\rm ph}\gamma^{2}}{1+X+\gamma^{2}\theta^{2}}. 
\tag{6}\] Equations 1 and 6 show the intrinsic dependence of the scattered photon energy on the scattering angle through the term \(\theta\gamma\) in the denominator. In the DC regime, \(\gamma\simeq 1\) and the angular dependence appears without any boost effect. The maximum photon energy is achieved by fully back-scattered photons at \(\theta=0\): \(E^{\prime}_{\rm ph}(\theta=0)=4E_{\rm ph}\gamma^{2}/(1+X)\). In case of negligible recoil, that is in Thomson regime, the maximum photon energy just reduces to \(E^{\prime}_{\rm ph}(\theta=0)=4E_{\rm ph}\gamma^{2}\). ### Symmetric Compton Scattering In the transition between the two regimes previously discussed, the energy/momentum transfer between photons and electrons is balanced. We call Symmetric Compton Scattering (SCS) this regime of transition between DC and ICS. SCS is uniquely characterized by the disappearance of the energy-angle correlation of the scattered radiation that becomes monochromatic. Regarding the angular distribution, the radiation fills the whole solid angle with a different probability set by the angular dependence of the cross-section. Referring to equation 1, the condition for eliminating the \(\theta\) dependence in \(E^{\prime}_{\rm ph}\) is \[\frac{X}{4}=\beta\gamma^{2}, \tag{7}\] a condition achieved when photon and electron energies satisfy the relation \(E_{\rm ph}=\beta E_{e}\), which is equivalent to a photon and an electron with equal momenta and opposite directions \(\overrightarrow{p}_{e}=-\overrightarrow{p}_{\rm ph}\). Moreover, we introduce here an asymmetry factor \[A=\beta\gamma^{2}-\frac{X}{4} \tag{8}\] that vanishes (\(A=0\)) in SCS regime and assumes large positive values (\(A\rightarrow\gamma^{2}\)) in ICS regime. It is worth noting that the condition \(A=0\) actually deletes the angular dependence shown in eq. 1. In the SCS regime, the center of mass of the collisions is at rest in the laboratory reference frame (see next chapter), therefore the produced radiation is not Lorentz transformed and its frequency is no more boosted. As a result, the energy of the scattered photons is: \[E^{\prime}_{\rm ph}=E_{\rm ph}\,\forall\theta. \tag{9}\] For convenience we call \(E^{\prime}_{0}\) the value that eq. 1 assumes for \(\theta=0\): \[E^{\prime}_{0}=E^{\prime}_{\rm ph}(\theta=0)=\frac{2\gamma^{2}\left(1+\beta \right)}{2\gamma^{2}\left(1-\beta\right)+X}E_{\rm ph}. \tag{10}\] Expressing eq. 1 in Taylor Expansion of \(\theta\) around \(\theta=0\) we find the following first two terms: \[E^{\prime}_{\rm ph}(\theta)=E^{\prime}_{0}-\frac{E^{\prime}_{0}}{\gamma^{2}} \frac{A\gamma^{2}\theta^{2}}{2\gamma^{2}\left(1-\beta\right)+X}+A\,O(\gamma^ {4}\theta^{4}), \tag{11}\] where the higher order terms confirm that the symmetry condition (\(A=0\)) cancels the angular correlation. Note that the asymmetry factor A is negative in DC regime, where \(\beta=0\) and \(\gamma=1\), and \(A=-\lambda_{C}/\lambda\). On the other side, in ICS regime the asymmetry factor A is positive and scales like \(\gamma^{2}\). Equation 11 is actually a generalization of a well known formula in ICS, that reads \(dE^{\prime}_{\rm ph}/E^{\prime}_{0}=\gamma^{2}\theta^{2}/(1+X)\). Figures 1 and 2 show the dependence of \(E^{\prime}_{0}\) vs. \(T_{e}\) and of the recoil factor \(X\) in different regimes (DC, SCS, ICS), that are associated to the sign of the asymmetry factor A. 
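A short numerical sketch of eq. (1) and of the SCS condition (7) is given below; the helper function, energy values and units (MeV) are illustrative choices of ours, not taken from the references.

```python
import numpy as np

# Eq. (1) for a head-on electron-photon collision; choosing E_ph = beta * E_e makes
# the asymmetry factor A of eq. (8) vanish and the scattered energy flat in theta.
MEC2 = 0.511                                   # electron rest energy m0 c^2 [MeV]

def scattered_energy(E_e, E_ph, theta):
    """E'_ph(theta) of eq. (1); E_e is the total electron energy [MeV]."""
    gamma = E_e / MEC2
    beta = np.sqrt(1.0 - 1.0 / gamma**2)
    X = 4.0 * E_e * E_ph / MEC2**2             # recoil factor, eq. (2)
    num = (1.0 + beta) * gamma**2 * E_ph
    den = gamma**2 * (1.0 - beta * np.cos(theta)) + 0.25 * X * (1.0 + np.cos(theta))
    return num / den

theta = np.linspace(0.0, np.pi, 5)

# ICS-like case: 50 MeV electron on a 1.2 eV laser photon -> strong angle dependence.
print(scattered_energy(50.0, 1.2e-6, theta))

# SCS case: E_ph = beta * E_e cancels the angle dependence (A = 0), cf. eq. (9).
E_e = 10.0
beta = np.sqrt(1.0 - (MEC2 / E_e) ** 2)
print(scattered_energy(E_e, beta * E_e, theta))   # constant, equal to E_ph for all theta
```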
## 3 A Four-vector description The four-momentum of a particle is defined as \(\textbf{p}=\left(\frac{E}{c},p_{x},p_{y},p_{z}\right)\), where \(E\) is the total energy of the particle, \(c\) is the speed of light in vacuum, and \(p_{x},p_{y},p_{z}\) are the components of the particle's momentum along the \(x\), \(y\), \(z\) axes respectively. Let us consider the case of a head-on collision between a photon and a counter-propagating electron along the z-axis. Before the collision, the electron and the photon have the following four-momenta: \[\begin{array}{l}\textbf{p}_{\textbf{e}}=\left(\gamma m_{0}c,0,0,\beta\gamma m_{0}c\right),\\ \textbf{p}_{\rm ph}=\left(\frac{E_{\rm ph}}{c},0,0,-\frac{E_{\rm ph}}{c}\right).\end{array} \tag{12}\] and the total four-momentum is: \[\mathbf{p_{tot}}=\left(\gamma m_{0}c+\frac{E_{\mathrm{ph}}}{c},0,0,\beta\gamma m_{0}c-\frac{E_{\mathrm{ph}}}{c}\right). \tag{13}\] The energy available in the center of mass \(E_{cm}\), in terms of the recoil factor introduced in eq. 2, is: \[E_{cm}=c\sqrt{\mathbf{p_{tot}}\cdot\mathbf{p_{tot}}}=m_{0}c^{2}\sqrt{(1+\beta)\frac{X}{2}+1}=m_{0}c^{2}\sqrt{(1+\beta)\frac{2E_{e}E_{\mathrm{ph}}}{(m_{0}c^{2})^{2}}+1}. \tag{14}\] The different regimes of Compton scattering can be analyzed in terms of their center of mass energy \(E_{cm}\). For the DC regime (\(\beta=0\), \(\gamma=1\)): \[E_{cm-DC}=m_{0}c^{2}\sqrt{\frac{2E_{\mathrm{ph}}}{m_{0}c^{2}}+1}. \tag{15}\] On the opposite side, in the ICS regime (\(\beta\simeq 1\)), we obtain: \[E_{cm-ICS}=m_{0}c^{2}\sqrt{X+1}=m_{0}c^{2}\sqrt{\frac{4\gamma E_{\mathrm{ph}}}{m_{0}c^{2}}+1}. \tag{16}\] Finally, for the SCS regime (\(E_{\mathrm{ph}}=\beta E_{e}=\beta\gamma m_{0}c^{2}\)): \[E_{cm-SCS}=(1+\beta)\gamma m_{0}c^{2}. \tag{17}\] In this peculiar situation \(E_{cm}\propto\gamma\), like in a collider. Being \(\gamma_{cm}\equiv E_{lab}/E_{cm}\) the Lorentz boost factor associated with the center of mass reference frame, in SCS we have \(\gamma_{cm}=1\) (because \(E_{lab-SCS}=E_{cm-SCS}\)), meaning that the center of mass of the system is at rest in the laboratory system, and the radiation produced here has the same angular and spectral distribution seen by a detector at rest in the lab. On the other hand, DC and ICS exhibit a dependence of the available energy \(E_{cm}\) typical of a fixed target collision, where \(E_{cm}\) scales like \(E_{cm}\propto\sqrt{T_{p}}\), where \(T_{p}\) is the projectile kinetic energy. The ICS regime is characterized by \(\gamma_{cm}\gg 1\) since the center of mass reference frame is almost traveling with the electron (as shown in ref. [5], \(\gamma_{cm}=\gamma/\sqrt{1+X}\)). Figure 1: Evolution of the recoil factor X value and of the scattered photon energy in SCS regime as a function of the electron kinetic energy \(T_{e}\). Colored areas identify the possible Compton Scattering regimes and the relative asymmetry factor A sign, DC in yellow (\(A<0\)), ICS in blue (\(A>0\)) and in green the SCS divide line (\(A=0\)). ## 4 Effects of deep recoil in Compton scattering The energy spread of the scattered photon beam (\(dE^{\prime}_{\rm ph}/E^{\prime}_{\rm ph}\), which is typically referred to as relative bandwidth) has a vanishing dependence on the energy spread of the incident photon beam (\(dE_{\rm ph}/E_{\rm ph}\)) whenever the recoil factor is very large. This effect is clearly illustrated in ref [6], eq. 
9 for the Compton scattering interaction: \[\frac{dE^{\prime}_{ph}}{E^{\prime}_{ph}}=\frac{2+X}{1+X}\frac{d\gamma}{\gamma}+\frac{1}{1+X}\frac{dE_{ph}}{E_{ph}} \tag{18}\] In this equation, the impact of high values of the recoil factor \(X\) can be seen as a damping of the dependence of the scattered photon beam energy spread (\(\frac{dE^{\prime}_{ph}}{E^{\prime}_{ph}}\)) on the energy spread of the incident photon beam (\(\frac{dE_{ph}}{E_{ph}}\)), with a resulting spread that tends to the energy spread of the incident electrons (\(\frac{d\gamma}{\gamma}\)). We derive the dependence of the outgoing photon energy spread on the incident photon energy spread to study the effect of deep recoil: \[\frac{dE^{\prime}_{0}}{E^{\prime}_{0}}=\frac{\frac{2}{1+\beta}}{2(1-\beta)\gamma^{2}+X}\frac{dE_{0}}{E_{0}}=\frac{1}{1+\frac{1+\beta}{2}X}\frac{dE_{0}}{E_{0}} \tag{19}\] For \(\beta\to 1\) this result reproduces the second term of the right hand side of eq. 18. We also derive the dependence of the outgoing photon energy spread on the energy spread of the incoming electron beam under the approximation \(\gamma\gg 1\), \(\beta\simeq 1-1/(2\gamma^{2})\): \[\frac{dE^{\prime}_{0}}{E^{\prime}_{0}}=\frac{1}{1+\frac{1}{4\gamma^{2}}}\frac{2X+X^{2}+X/\gamma}{X\left(1+X\right)}\frac{d\gamma}{\gamma} \tag{20}\] This result reproduces the first term of the right hand side of eq. 18 in the limit \(\gamma\gg 1\). Figure 2: 3D representation of the value of the recoil factor X as a function of the interacting electron kinetic energy (\(T_{e}\)) and of the incident photon energy. The line shows the recoil value in SCS conditions. ## 5 Symmetric Compton Scattering simulation This chapter is focused on the simulations of the Symmetric Compton Scattering using two different codes to quantify the phenomenon. Our simulation approach investigates energy transfer, scattering angles and bandwidth variation of the particles that interacted. ### Whizard We used the WHIZARD code [7], a universal parton-level Monte Carlo event generator, to perform simulations of SCS. An almost monochromatic (with an rms energy spread of the order of \(10^{-4}\)) 10 MeV electron beam (\(\beta\to 1\)) collided head-on with an incoming photon beam characterized by large bandwidth (20 percent rms spread). The recoil in this interaction is large, the computed value being \(X=1533\). The results of the interaction, shown in fig. 3, confirmed the theoretical predictions: the outgoing photons showed no correlation between energy and emission angle and featured a significant narrowing of the bandwidth (\(2\cdot 10^{-4}\) rms spread, i.e. a reduction of the energy spread by about 3 orders of magnitude from the incident photon beam to the scattered photon beam). On the other hand, the electron beam emerging from the interaction inherited a high energy spread (of the order of \(10^{-1}\)) from the original interacting photon beam, displaying an entropy exchange. WHIZARD was also used to perform an analysis of the SCS effect in the presence of angular divergence of the incident photon beam, shown in fig. 4, by mixing several runs with different scattering angles. The result confirms the SCS mono-chromatization also in interactions characterized by small incidence angles. Figure 3: Simulations of SCS are shown through 9 plots arranged in three rows representing the energy and angular distributions of the particles involved in the interaction. The rows depict the incident photons, outgoing photons, and outgoing electrons, respectively. 
In the first column, the energy distributions of the three particle species are displayed. In the second column, the angular distributions of the outgoing particles are shown. In the third column, the energy of the particles is shown as a function of their propagation angle. ### Monte Carlo code A home-made multitasking Monte Carlo code has been developed, validated for different types of collisions and applied to the Compton scattering process [8]. As an additional internal feature, the code allows one to consider the energy and angular (polar and azimuthal) spread of both incident beams. To confirm the occurrence of the effect, we conducted the same simulation of the deep-recoil SCS interaction (at \(X=1533\)). Our findings confirm the exchange of entropy, resulting in a reduction of the bandwidth of the emitted radiation and an enlargement of the electron's bandwidth (as summarized in fig. 5). Furthermore, we examined the transition from the SCS regime to the ICS regime, with particular focus on the angular distribution of the scattered radiation. To explore the transition regime, we started with the deep-recoil SCS interaction (\(X=1533\)) and slightly increased the energy of the incident electron bunch while reducing the energy of the photon bunch. We investigated three cases, specifically with electron-photon energies of (\(E_{e}\simeq E_{ph}=10\) MeV), (\(E_{e}=11\) MeV, \(E_{ph}=9.08\) MeV), and (\(E_{e}=12\) MeV, \(E_{ph}=8.33\) MeV). The results, depicted in fig. 6, show the distribution shifting from an uncorrelated energy-angle pattern to a more correlated one, resembling the typical "mustache" shaped curve observed in ICS experiments. Finally we investigated the SCS regime with low recoil factor (\(A=0\), \(X=3.13\)). In this peculiar situation (see fig. 7), a milder mono-chromatization of the incident photons occurs, compared with what happens in the deep recoil regime represented in fig. 5. Figure 4: Simulations of SCS with an incoming photon beam displaying a correlation between angle of propagation and photon energy. The results are shown through 9 plots arranged in three rows as in fig. 3. The angular correlation of the incoming photon beam is removed in the interaction thanks to the high recoil factor (\(X\sim 1500\)). Figure 5: This figure illustrates the simulations of the SCS regime for a recoil factor of \(X=1533\). The plot consists of six subplots organized into two rows, with the first row showing the photon behavior and the second row showing the electron behavior. In each row, the leftmost column displays the bandwidth of the incoming bunch, the central column shows the resulting particles after the interaction, and the rightmost column displays the angular distribution of the particles. This arrangement allows for a detailed analysis of the behavior of photons and electrons during the SCS regime. Figure 6: This figure demonstrates the regime transitions between SCS and ICS for three different sets of photon and electron energies. The upper row displays the produced photon energy distribution for each set, while the bottom row shows the angular distribution of the particles (i.e., energy as a function of emission angle) for each set. This arrangement allows for a comprehensive comparison of the behavior of the particles during the transition between SCS and ICS regimes. ## 6 Conclusions We explore the transition between Compton Scattering and Inverse Compton Scattering (ICS), a regime characterized by an equal exchange of energy and momentum between the colliding particles. 
## 6 Conclusions

We explore the transition between Compton Scattering and Inverse Compton Scattering (ICS), a regime characterized by an equal exchange of energy and momentum between the colliding particles. This regime has been called Symmetric Compton Scattering (SCS) and has the unique property of transferring mono-chromaticity from one beam to the other, resulting in back-scattered photons that are intrinsically monochromatic. The paper suggests that large-recoil SCS or quasi-SCS can be used to design compact, intrinsically monochromatic gamma-ray sources, thus avoiding the GeV-class electron beams and powerful laser/optical systems typically required for ICS sources. Indeed, the capability of the SCS regime to cancel the photon energy-angle correlation, combined with the beneficial effect of large recoil on the scattered photon energy spread shown in the previous chapters, makes it possible to conceive a mono-chromatic gamma-ray beam source based on the collision between a bremsstrahlung radiation beam (or a coherent bremsstrahlung beam from a channeling source [9]) and a mono-energetic electron beam of similar energy, say in the 2-10 MeV range. Such a source would be much more compact and sustainable than typical ICS sources for nuclear physics/photonics like ELI-NP-GBS [10], which all envisage the use of GeV-class linear accelerators. An Energy Recovery Linac with a 10-20 MeV electron beam energy would allow a much larger average current (in the range of tens of mA) than a room-temperature linac like the one of ELI-NP-GBS to be delivered to the collision point with the broad-band bremsstrahlung photon beam, compensating for the decrease of the total cross section \(\sigma\) with the recoil factor \(X\) typical of the Klein-Nishina formula, as shown below (see ref. [5]):

\[\left\{\begin{aligned} \lim_{X\to 0}\sigma&=\frac{8\pi r_{e}^{2}}{3}(1-X)=\sigma_{T}(1-X)\\ \lim_{X\to\infty}\sigma&=\frac{2\pi r_{e}^{2}}{X}\left(\log X+\frac{1}{2}\right)\end{aligned}\right. \tag{21}\]

A detailed study of an SCS \(\gamma\)-ray source will be the subject of a future work, which will have to balance the decrease of the cross section at large recoil against the positive effect of large recoil in reducing the photon bandwidth, as shown by equations 18, 19 and 20 (assuming the capability to bring into collision a good-quality electron beam with a small energy spread \(\frac{\Delta\gamma}{\gamma}\) below \(10^{-3}\)[11]). The guidelines of such a design study should be oriented to optimize the SCS \(\gamma\)-ray source in terms of maximum Spectral Density \(S_{d}\), as illustrated in [11] and typically requested by nuclear photonics and photo-nuclear physics applications. \(S_{d}\) is defined as \(S_{d}\equiv N_{\mathrm{ph}}(\Delta E^{\prime}_{ph}/E^{\prime}_{ph})^{-1}\), where \(N_{\mathrm{ph}}\) is the number of gamma-ray photons generated per second within the relative bandwidth \(\Delta E^{\prime}_{ph}/E^{\prime}_{ph}\) around the nominal average energy \(E^{\prime}_{ph}\). Since \(N_{\mathrm{ph}}\) scales with the product \(L\sigma\), where \(L\) is the collision luminosity and \(\sigma\) is given in eq.
21, we see that for large-recoil collisions \(S_{d}\) scales like \(L\log X/X\). On the other hand, \(\Delta E^{\prime}_{ph}\) becomes very small in the case of SCS at large recoil, as stated by equations 11, 19 and 20 and well illustrated in figures 3, 4 and 5, showing the potential of SCS to attain photon beams with relative bandwidths smaller than \(10^{-3}\).
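As a closing numerical illustration of this trade-off, the following sketch combines the large-recoil Klein-Nishina limit of eq. 21 with the bandwidth formula of eq. 18 to show how the relative spectral density behaves as the recoil grows, at fixed luminosity. The input spreads are our own assumptions (the same orders of magnitude as in the simulations above) and absolute normalisations are dropped; this is a rough illustration, not a design calculation.

```python
# Trade-off between cross section and bandwidth at large recoil:
#   N_ph    ~ L * sigma(X),  sigma(X) ~ (2*pi*r_e^2/X) * (ln X + 1/2)   (eq. 21, X >> 1)
#   dE'/E'  ~ (2+X)/(1+X) * dgamma/gamma + dE/E / (1+X)                  (eq. 18, beta -> 1)
#   S_d     ~ N_ph / (dE'/E')
# Only the X-dependence is kept; luminosity and r_e are dropped.
import math

d_gamma = 1e-4   # assumed electron relative energy spread
d_Eph = 0.20     # assumed incident photon relative energy spread

def sigma_rel(X: float) -> float:
    """Large-recoil Klein-Nishina limit of eq. 21, up to a constant factor."""
    return (math.log(X) + 0.5) / X

def bandwidth(X: float) -> float:
    """Scattered photon relative bandwidth from eq. 18 in the beta -> 1 limit."""
    return (2.0 + X) / (1.0 + X) * d_gamma + d_Eph / (1.0 + X)

for X in (100.0, 500.0, 1533.0, 5000.0):
    Sd_rel = sigma_rel(X) / bandwidth(X)  # relative spectral density at fixed luminosity
    print(f"X = {X:6.0f}  sigma ~ {sigma_rel(X):.2e}  dE'/E' ~ {bandwidth(X):.2e}  S_d (rel.) ~ {Sd_rel:.2e}")
```

With these assumed input spreads the figure of merit varies only mildly between recoil values of a few hundred and a few thousand, which is exactly the kind of balance the future design study will have to quantify with the exact cross section and realistic beam parameters.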